
Voice activity detection in the wild via weakly supervised sound event detection

This repository contains the evaluation script as well as the pretrained models from our Interspeech 2020 paper Voice activity detection in the wild via weakly supervised sound event detection.

The aim of our approach is to provide general-purpose VAD models (GPVAD) that are noise-robust in real-world scenarios, not only in synthetic-noise scenarios.

Edit 2021.05.11: Not satisfied with the performance of GPV? Check out our follow-up work, Data-driven GPVAD, where we also provide the training scripts for the models.

Framework

Results

Results (from the paper)

Data   Model  F1-macro  F1-micro  AUC    FER    Event-F1
Clean  VAD-C  96.55     97.43     99.78  2.57   78.9
Clean  GPV-B  86.24     88.41     96.55  11.59  21.00
Clean  GPV-F  95.58     95.96     99.07  4.01   73.70
Noisy  VAD-C  85.96     90.28     97.07  9.71   47.5
Noisy  GPV-B  73.90     75.75     89.99  24.25  8.0
Noisy  GPV-F  81.99     84.26     94.63  15.74  35.4
Real   VAD-C  77.93     78.08     87.87  21.92  34.4
Real   GPV-B  77.95     75.75     89.12  19.65  24.3
Real   GPV-F  83.50     84.53     91.80  15.47  44.8
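All values are percentages; FER is the frame error rate. For orientation, the frame-level metrics can be reproduced with scikit-learn, while Event-F1 is event-based and computed via sed_eval. A minimal sketch, assuming binary frame labels and per-frame speech probabilities; the variable names are illustrative, not taken from evaluation.py:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1])            # ground-truth speech frames
y_prob = np.array([0.2, 0.9, 0.7, 0.4, 0.6])  # predicted speech probabilities
y_pred = (y_prob > 0.5).astype(int)           # simple binarization

f1_macro = 100 * f1_score(y_true, y_pred, average="macro")
f1_micro = 100 * f1_score(y_true, y_pred, average="micro")
auc = 100 * roc_auc_score(y_true, y_prob)
fer = 100 * (1.0 - accuracy_score(y_true, y_pred))  # frame error rate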

The pretrained models from the paper can be found in pretrained/; since they are all rather small (2.7 M), they can also be deployed for other datasets. The evaluation script is only here for reference, since the evaluation data is missing. If one aims to reproduce the evaluation, please modify TRAINDATA = {} in evaluation.py and add two files (a sketch of both files follows the list):

  1. wavlist, a plain-text list containing one (preferably absolute) audio clip path per line.
  2. label, a tsv file containing DCASE-style labels. The header needs to be filename onset offset event_label, and each following line should be an event label for a given filename with its onset and offset.
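A minimal sketch of both files; all paths, timings and file names here are hypothetical:

# Hypothetical example of the two files evaluation.py expects.
# wavlist.txt: one (preferably absolute) audio path per line.
with open("wavlist.txt", "w") as f:
    f.write("/data/eval/clip_0001.wav\n")
    f.write("/data/eval/clip_0002.wav\n")

# labels.tsv: DCASE-style, tab-separated, one event per line after the header.
with open("labels.tsv", "w") as f:
    f.write("filename\tonset\toffset\tevent_label\n")
    f.write("/data/eval/clip_0001.wav\t0.00\t2.35\tSpeech\n")
    f.write("/data/eval/clip_0002.wav\t1.10\t4.80\tSpeech\n")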

What does this repo contain?

  1. Three models: vad-c, gpv-b and gpv-f. All three share the same CRNN backbone but differ in their training scheme (refer to the paper).
  2. The evaluation script for our paper, evaluation.py, even though it is of little use without access to the evaluation data.
  3. A simple prediction script, forward.py, which produces speech predictions with time-stamps for a given input clip/utterance.

Usage

Since the utilized data (DCASE18, Aurora4) is not directly available for either training or evaluation purposes, we only provide the evaluation script as well as the three pretrained models in this repository.

Furthermore, if one wishes to simply run inference, please use the forward.py script.

The requirements are:

torch==1.5.0
numba==0.48
loguru==0.4.0
pandas==1.0.3
sed_eval==0.2.1
numpy==1.18.2
six==1.14.0
PySoundFile==0.9.0.post1
scipy==1.4.1
librosa==0.7.1
tqdm==4.43.0
PyYAML==5.3.1
scikit_learn==0.22.2.post1
soundfile==0.10.3.post1

If you just want to test the predictions of our best model, gpvf, run:

git clone https://github.com/RicherMans/GPV
cd GPV
pip3 install -r requirements.txt
python3 forward.py -w YOURAUDIOFILE.mp3

Advanced Usage

Two possible input types can be used for the forward.py script.

  1. If one aims to evaluate batch-wise, the script supports a filelist input, such as: python3 forward.py -l wavlist.txt. A filelist has no specific format; it should simply contain one input audio path per line. A simple wavlist.txt generator would be find . -name '*.wav' -type f > wavlist.txt or find . -name '*.mp3' -type f > mp3list.txt (a Python alternative is sketched after this list).
  2. A single audio-read-compatible input clip, such as myfile.wav or myaudio.mp3. Then one can just run python3 forward.py -w myaudio.mp3.
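As an alternative to find, a short Python sketch that writes one absolute .wav path per line (the output file name is just an example):

# Recursively collect .wav files and write one absolute path per line,
# mirroring: find . -name '*.wav' -type f > wavlist.txt
from pathlib import Path

with open("wavlist.txt", "w") as f:
    for wav in sorted(Path(".").rglob("*.wav")):
        f.write(str(wav.resolve()) + "\n")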

Other options include:

  1. -model: The model can be selected via the -model option. Three models are available: gpvf, gpvb and vadc.
  2. -th: One can pass two thresholds via the -th option (then double thresholding is used); if only a single value is given, common binarization is utilized. Our paper results solely utilized -th 0.5 0.1. Note that double thresholding only affects gpvf, due to its large number of output events (527); a short sketch of the technique follows this list.
  3. -o: Outputs the predictions to the given directory, e.g., python3 forward.py -w myaudio.mp3 -o myaudio_predictions
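For intuition, double thresholding is a hysteresis-style binarization: frames above the high threshold seed a segment, which is then extended in both directions while the probability stays above the low threshold. A minimal sketch of the idea, not the exact implementation used in forward.py:

import numpy as np

def double_threshold(prob, high=0.5, low=0.1):
    # prob: per-frame speech probabilities in [0, 1]
    hard = prob > high                      # seed frames
    soft = prob > low                       # extension candidates
    out = np.zeros_like(prob, dtype=bool)
    for seed in np.flatnonzero(hard):
        if out[seed]:                       # already inside a segment
            continue
        left, right = seed, seed
        while left > 0 and soft[left - 1]:
            left -= 1
        while right < len(prob) - 1 and soft[right + 1]:
            right += 1
        out[left:right + 1] = True
    return out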

Citation

If you use this repo in your work (or compare to other VAD methods), please cite:

@inproceedings{Dinkel2020,
  author={Heinrich Dinkel and Yefei Chen and Mengyue Wu and Kai Yu},
  title={{Voice Activity Detection in the Wild via Weakly Supervised Sound Event Detection}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3665--3669},
  doi={10.21437/Interspeech.2020-0995},
  url={http://dx.doi.org/10.21437/Interspeech.2020-0995}
}

@article{Dinkel2021,
  author={Dinkel, Heinrich and Wang, Shuai and Xu, Xuenan and Wu, Mengyue and Yu, Kai},
  title={{Voice Activity Detection in the Wild: A Data-Driven Approach Using Teacher-Student Training}},
  year=2021,
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  volume={29},
  pages={1542--1555},
  issn={2329-9290},
  doi={10.1109/TASLP.2021.3073596},
  url={https://ieeexplore.ieee.org/document/9405474/}
}


Issues

Improving Performance on Shorter Audio Clips

Using your GPVAD/VADC, I wish to process smaller chunks (i.e., ~200 ms chunks) of audio files. However, at such short durations the performance of the VAD is poor. What can I do to improve the performance? I assume this must be addressed on the training side. Would you recommend downloading the datasets, splicing them into these smaller chunks, and retraining from scratch?

Curious to hear your thoughts. Thank you!

Train

Hello, do you have any plans to share the training part?

ModuleNotFoundError: No module named 'numba.decorators'

Hello,
I am trying to run GPV on my Linux.
I use venv. Here is my pip list

(project_env) ds@ds-Standard-PC-i440FX-PIIX-1996:~/GPV$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
aiocontextvars (0.2.2)
audioread (2.1.8)
certifi (2020.6.20)
cffi (1.14.1)
chardet (3.0.4)
contextvars (2.4)
cycler (0.10.0)
dcase-util (0.2.16)
decorator (4.4.2)
future (0.18.2)
idna (2.10)
immutables (0.14)
joblib (0.16.0)
kiwisolver (1.2.0)
librosa (0.7.1)
llvmlite (0.33.0)
loguru (0.4.0)
matplotlib (3.3.0)
numba (0.50.1)
numpy (1.18.2)
pandas (1.0.3)
Pillow (7.2.0)
pip (9.0.1)
pkg-resources (0.0.0)
pycparser (2.20)
pydot-ng (2.0.0)
pyparsing (2.4.7)
PySoundFile (0.9.0.post1)
python-dateutil (2.8.1)
python-magic (0.4.18)
pytz (2020.1)
PyYAML (5.3.1)
requests (2.24.0)
resampy (0.2.2)
scikit-learn (0.22.2.post1)
scipy (1.4.1)
sed-eval (0.2.1)
setuptools (39.0.1)
six (1.14.0)
SoundFile (0.10.3.post1)
torch (1.5.0)
tqdm (4.43.0)
urllib3 (1.25.10)
validators (0.17.1)

This error I've got when launched the program

(project_env) ds@ds-Standard-PC-i440FX-PIIX-1996:~/GPV$ python3 forward.py -w sound.mp3
Traceback (most recent call last):
  File "forward.py", line 10, in <module>
    import librosa
  File "/home/ds/GPV/project_env/lib/python3.6/site-packages/librosa/__init__.py", line 12, in <module>
    from . import core
  File "/home/ds/GPV/project_env/lib/python3.6/site-packages/librosa/core/__init__.py", line 123, in <module>
    from .time_frequency import *  # pylint: disable=wildcard-import
  File "/home/ds/GPV/project_env/lib/python3.6/site-packages/librosa/core/time_frequency.py", line 11, in <module>
    from ..util.exceptions import ParameterError
  File "/home/ds/GPV/project_env/lib/python3.6/site-packages/librosa/util/__init__.py", line 77, in <module>
    from .utils import *  # pylint: disable=wildcard-import
  File "/home/ds/GPV/project_env/lib/python3.6/site-packages/librosa/util/utils.py", line 15, in <module>
    from .decorators import deprecated
  File "/home/ds/GPV/project_env/lib/python3.6/site-packages/librosa/util/decorators.py", line 9, in <module>
    from numba.decorators import jit as optional_jit
ModuleNotFoundError: No module named 'numba.decorators'

I found that Numba removed the shim for numba.decorators.jit in 0.50 (see here), so I changed numba in the requirements to numba==0.48 and also changed torch to torch==1.5.0, because 1.4.1 was not found. After this, everything seems to work.

bug in feature extraction

Hi

When extracting mel spectrogram features (line 38 in forward.py), you compute the mel spectrogram of the resampled signal with the original sampling rate (sample_rate) instead of SAMPLE_RATE.
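In other words, once the clip has been resampled to a fixed target rate, the melspectrogram call should also receive that target rate. A sketch of the reported fix; the constant value here is illustrative, forward.py defines its own:

import librosa

SAMPLE_RATE = 22050  # illustrative target rate

y, sr = librosa.load("sound.mp3", sr=None)                  # keep the original rate
y = librosa.resample(y, orig_sr=sr, target_sr=SAMPLE_RATE)  # resample to target
# The bug: passing the original sr below; it should be SAMPLE_RATE.
mel = librosa.feature.melspectrogram(y=y, sr=SAMPLE_RATE)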

process the audioset

Thanks for your hard work! I wrote a train.py to train the network, but I can't get the same results as your pretrained models. I wonder if something is wrong with my AudioSet processing. Could you share your AudioSet processing code or your train.py?

Thoughts on streaming the forward pass?

This is some really good work!
I have a question: Have you tried using your algorithm to process an audio stream? How would performance be affected? And how feasible would real-time processing be?

Thanks!
