SED
Sound Event Detection and Time-Frequency Segmentation from Weakly Labelled Data (PyTorch implementation)
Urban SED
URBAN-SED
Welcome to the companion site for the URBAN-SED dataset. Here you will find information and download links for the dataset presented in: Scaper: A Library for Soundscape Synthesis and Augmentation...
urbansed.weebly.com
https://github.com/justinsalamon/scaper
justinsalamon/scaper
A library for soundscape synthesis and augmentation.
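Scaper is the library used to synthesize URBAN-SED: foreground events are mixed over background recordings, and a JAMS file with strong (onset/offset) annotations is written alongside each soundscape. Below is a rough sketch of that workflow based on Scaper's documented API; the folder paths, class choices, and distribution tuples are placeholder assumptions, not the URBAN-SED recipe.

```python
import scaper

# Placeholder folders: Scaper expects fg_path/bg_path to contain one
# subfolder per sound class (e.g. "siren/", "dog_bark/", ...).
sc = scaper.Scaper(duration=10.0, fg_path='audio/foreground', bg_path='audio/background')
sc.ref_db = -50  # loudness reference for the background

# One background track, chosen at random from the available classes/files.
sc.add_background(label=('choose', []),
                  source_file=('choose', []),
                  source_time=('const', 0))

# One foreground event with a randomized onset and SNR.
sc.add_event(label=('choose', []),
             source_file=('choose', []),
             source_time=('const', 0),
             event_time=('uniform', 0, 8),
             event_duration=('const', 2),
             snr=('uniform', 6, 20),
             pitch_shift=None,
             time_stretch=None)

# Renders the audio and writes the strong annotations to the JAMS file.
sc.generate('soundscape.wav', 'soundscape.jams')
```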
https://github.com/marl/urbanorchestra
marl/urbanorchestra
Making music from urban environments (HAMR 2018 Hack).
https://github.com/mashrin/UrbanSound-Spectrogram
mashrin/UrbanSound-Spectrogram
Spectrograms for the UrbanSound8K audio dataset.
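A typical front end for UrbanSound8K models is a log-scaled mel spectrogram. The snippet below is a minimal librosa sketch, not code from the linked repo; the audio path is a placeholder and the STFT/mel parameters are common choices rather than anything prescribed by the dataset.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Placeholder path to one UrbanSound8K clip.
y, sr = librosa.load('UrbanSound8K/audio/fold1/example.wav', sr=22050)

# Mel spectrogram in dB; 64 mel bands and a ~23 ms hop are common choices.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=64)
S_db = librosa.power_to_db(S, ref=np.max)

librosa.display.specshow(S_db, sr=sr, hop_length=512, x_axis='time', y_axis='mel')
plt.colorbar(format='%+2.0f dB')
plt.title('Log-mel spectrogram')
plt.tight_layout()
plt.savefig('spectrogram.png')
```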
https://github.com/linusng/sonyc-ust-challenge-2019
linusng/sonyc-ust-challenge-2019
DCASE 2019 Challenge, Task 5: Urban Sound Tagging (3rd place, fine-level).
Sound Event Detection from Weakly Labelled Data
https://arxiv.org/abs/1804.04715
Sound Event Detection and Time-Frequency Segmentation from Weakly Labelled Data
Sound event detection (SED) aims to detect when and recognize what sound events happen in an audio clip. Many supervised SED algorithms rely on strongly labelled data which contains the onset and offset annotations of sound events. However, many audio tagging datasets are only weakly labelled...
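The core trick in weakly supervised SED is to predict frame-level event probabilities and pool them into a clip-level prediction, so training needs only clip-level tags. The sketch below is a generic attention-pooling variant of that idea in PyTorch, not a reproduction of the paper's exact architecture; the CNN front end, layer sizes, and class count are illustrative.

```python
import torch
import torch.nn as nn

class WeakSEDModel(nn.Module):
    """Illustrative CRNN-style tagger: frame-wise probabilities are pooled
    with attention into a clip-level prediction, so the model can be
    trained from weak (clip-level) labels only."""
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # pool frequency only, keep time resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        feat_dim = 64 * (n_mels // 4)
        self.frame_prob = nn.Linear(feat_dim, n_classes)  # frame-wise event probabilities
        self.frame_att = nn.Linear(feat_dim, n_classes)   # attention weights per frame

    def forward(self, x):                  # x: (batch, time, n_mels)
        h = self.conv(x.unsqueeze(1))      # (batch, ch, time, mels')
        h = h.permute(0, 2, 1, 3).flatten(2)            # (batch, time, feat)
        prob = torch.sigmoid(self.frame_prob(h))        # frame-level detections
        att = torch.softmax(self.frame_att(h), dim=1)   # attention over time
        clip = (prob * att).sum(dim=1)                   # weighted pooling -> clip tags
        return clip, prob

# Training uses only clip-level (weak) labels.
model = WeakSEDModel()
x = torch.randn(4, 500, 64)                # batch of log-mel spectrograms
weak_labels = torch.randint(0, 2, (4, 10)).float()
clip_pred, frame_pred = model(x)
loss = nn.functional.binary_cross_entropy(clip_pred, weak_labels)
loss.backward()
```

At inference time, `frame_pred` gives the frame-level activations that can be thresholded for detection, even though no strong labels were used in training.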
https://arxiv.org/abs/2002.05033
Active Learning for Sound Event Detection
This paper proposes an active learning system for sound event detection (SED). It aims at maximizing the accuracy of a learned SED model with limited annotation effort. The proposed system analyzes an initially unlabeled audio dataset, from which it selects...
Active learning for sound event classification by clustering unlabeled data | Semantic Scholar
This paper proposes a novel active learning method to save annotation effort when preparing material to train sound event classifiers. K-medoids clustering is performed on unlabeled sound segments, and medoids of clusters are presented to annotators for labelling.
www.semanticscholar.org
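The selection step can be sketched in a few lines: embed the unlabeled segments, cluster them, and send one exemplar per cluster to the annotator. The code below approximates the medoid idea with scikit-learn's KMeans, choosing the sample nearest each centroid as the exemplar, since plain scikit-learn ships no k-medoids implementation; the embeddings and cluster count are stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def select_exemplars(features, n_clusters=10, random_state=0):
    """Cluster unlabeled sound-segment embeddings and return the index of
    the sample closest to each cluster centre. These exemplars are sent to
    human annotators; their labels can then be propagated to the rest of
    their cluster (the core idea behind medoid-based active learning)."""
    km = KMeans(n_clusters=n_clusters, random_state=random_state, n_init=10)
    km.fit(features)
    exemplar_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, features)
    return exemplar_idx, km.labels_

# Example with random stand-in embeddings (e.g. averaged log-mel frames).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))            # 500 unlabeled segments
to_annotate, cluster_ids = select_exemplars(embeddings, n_clusters=10)
print("Segments to send to annotators:", to_annotate)
```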
https://paperswithcode.com/task/sound-event-detection/
Papers With Code: Sound Event Detection
See leaderboards and papers with code for Sound Event Detection
https://paperswithcode.com/task/sound-event-detection/codeless
Papers With Code: Sound Event Detection (codeless)
Papers without available code for Sound Event Detection
https://paperswithcode.com/paper/end-to-end-polyphonic-sound-event-detection
Papers With Code: End-to-End Polyphonic Sound Event Detection Using Convolutional Recurrent Neural Networks with Learned Time-Frequency...
No code available yet.