VOCOBOX

Voice Controller for Digital Instruments

Description

Vocobox intends to provide singers with software that turns the voice into a musical controller. Voice features (pitch, volume, ...) are used to control external software or hardware that produces music.

We want to build a voice-to-instrument application rather than an audio-to-MIDI application. For this reason we found it sufficient to control a synthesizer in terms of frequency and amplitude, without explicitly defining note on/off events. This makes the mapping easier, and the result is good enough.

If you want to go straight to example output, go here.

VOCOBOX 1.0 (01/01/2015)

At this stage we are mainly evaluating pitch detection algorithms using the Human Voice Dataset, a dataset we built to gather examples of singers' voices (e.g. all notes in the voice range). We define scores such as pitch detection latency and precision and compare them graphically.

We also evaluate pitch detection in real time, by recording the voice with a microphone as input and generating a synthesizer sound as output.

See the Components section of this document to learn more about the algorithms used in this project.

FUTURE VERSIONS

To get notified of future versions, simply follow Vocobox on Twitter or here on GitHub.

We are currently having fun with sequence detection.

Collaborators are most welcome! See the end of this page.

Applications

Controlling Synthesizers with CSV files

Our first attempt to analyze the voice signal was written in R, using Seewave and Aubio via an R binding written for the experiment.

To control the JSyn synthesizer, we export frequency and amplitude change commands to two CSV files. Each file contains two columns: the first is the elapsed time since the song started, the second is the value change (frequency changes for pitch.csv, amplitude changes for envelope.csv). Note that frequency and amplitude can change independently.

Having the original WAV file available allows playing the audio source in the background while executing the command events.

To run synthesizer control based on CSV files, see VocoboxControllerCsv.
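
For illustration, here is a minimal sketch of how such CSV command events could be replayed on a basic JSyn oscillator. It is not the actual VocoboxControllerCsv: the pitch.csv name and the two-column layout come from the description above, while the replay loop and class name are assumptions made for this example.

```java
// Minimal sketch (not the actual VocoboxControllerCsv): replay the
// "elapsed time, frequency" rows of pitch.csv on a basic JSyn oscillator.
// Assumes the CSV has no header row.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.unitgen.LineOut;
import com.jsyn.unitgen.SineOscillator;

public class CsvPitchReplaySketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        Synthesizer synth = JSyn.createSynthesizer();
        SineOscillator osc = new SineOscillator();
        LineOut lineOut = new LineOut();
        synth.add(osc);
        synth.add(lineOut);
        osc.output.connect(0, lineOut.input, 0); // left channel
        osc.output.connect(0, lineOut.input, 1); // right channel
        synth.start();
        lineOut.start();

        long startMs = System.currentTimeMillis();
        try (BufferedReader reader = new BufferedReader(new FileReader("pitch.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(",");
                double elapsedSec = Double.parseDouble(cols[0]);  // column 1: time since song start
                double frequencyHz = Double.parseDouble(cols[1]); // column 2: new frequency value
                long waitMs = (long) (elapsedSec * 1000) - (System.currentTimeMillis() - startMs);
                if (waitMs > 0) Thread.sleep(waitMs); // wait until the event's timestamp
                osc.frequency.set(frequencyHz);       // apply the frequency change
            }
        }
        synth.stop();
    }
}
```

The envelope.csv file could be replayed the same way, driving osc.amplitude instead of osc.frequency.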

Controlling Synthesizers with WAV files

The pitch and amplitude change events extracted from a WAV file are sent to a synthesizer via its sendFrequency() / sendAmplitude() methods. In these demonstrations, we use JSyn-based synthesizers. As directly controlling the oscillator's amplitude from the input file is good enough to mimic notes, we do not need additional computation to define note on and note off events.
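
As a rough illustration of this control surface, here is a sketch of a JSyn-backed synthesizer exposing only sendFrequency() and sendAmplitude(). The method names come from the text above; the class itself is hypothetical and much simpler than the project's synthesizers.

```java
// Sketch of the control surface described above: a JSyn-backed synthesizer
// driven only through sendFrequency() / sendAmplitude(), with no explicit
// note on/off events. Class and field names are illustrative only.
import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.unitgen.LineOut;
import com.jsyn.unitgen.SineOscillator;

public class MonoOscilloSynthSketch {
    private final Synthesizer synth = JSyn.createSynthesizer();
    private final SineOscillator osc = new SineOscillator();
    private final LineOut lineOut = new LineOut();

    public MonoOscilloSynthSketch() {
        synth.add(osc);
        synth.add(lineOut);
        osc.output.connect(0, lineOut.input, 0);
        osc.output.connect(0, lineOut.input, 1);
        osc.amplitude.set(0.0); // silent until amplitude events arrive
        synth.start();
        lineOut.start();
    }

    /** Pitch change event, in Hz. */
    public void sendFrequency(double frequencyHz) {
        osc.frequency.set(frequencyHz);
    }

    /** Amplitude change event, in [0..1]; 0 is effectively "note off". */
    public void sendAmplitude(double amplitude) {
        osc.amplitude.set(amplitude);
    }
}
```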

Below are a few synthesized sounds and the WAV files that control them.

|        | Piano-controlled                   | Voice-controlled                   |
|--------|------------------------------------|------------------------------------|
| Input  | Do-re-mi piano source              | Do-re-mi voice source              |
| Output | Do-re-mi synth controlled by piano | Do-re-mi synth controlled by voice |
| Chart  |                                    |                                    |

See this examples folder for more input/output/chart results.

To run synthesizer control based on a WAV file, see VocoboxControllerFileRead.

Controlling Synthesizers in real time with available audio inputs (microphone, lines)

When starting the application, the available audio sources are listed by Tarsos, and a pitch estimation algorithm is proposed. We found that Yin performs best. Running live synthesizer control shows that pitch detection is quite efficient.

To run synthesizer control based on live voice, see VocoboxControllerMic.
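
To give a feel for the live pipeline, here is a minimal sketch built on TarsosDSP's default microphone dispatcher and the Yin estimator, forwarding detected pitches to the synthesizer sketch shown earlier. It is not VocoboxControllerMic: the buffer sizes and the confidence threshold are arbitrary choices made for this example.

```java
// Minimal live pipeline sketch (not VocoboxControllerMic): capture the default
// microphone with TarsosDSP, estimate pitch with Yin, and forward frequency
// changes to the synthesizer sketch above. Buffer sizes are arbitrary choices.
import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.AudioEvent;
import be.tarsos.dsp.io.jvm.AudioDispatcherFactory;
import be.tarsos.dsp.pitch.PitchDetectionHandler;
import be.tarsos.dsp.pitch.PitchDetectionResult;
import be.tarsos.dsp.pitch.PitchProcessor;

public class LivePitchToSynthSketch {
    public static void main(String[] args) throws Exception {
        final int sampleRate = 44100;
        final int bufferSize = 1024;
        final int overlap = 0;

        final MonoOscilloSynthSketch synth = new MonoOscilloSynthSketch();

        PitchDetectionHandler handler = new PitchDetectionHandler() {
            @Override
            public void handlePitch(PitchDetectionResult result, AudioEvent e) {
                // Only forward confident, voiced frames; 0.8 is an arbitrary threshold.
                if (result.getPitch() > 0 && result.getProbability() > 0.8f) {
                    synth.sendFrequency(result.getPitch());
                    synth.sendAmplitude(e.getRMS()); // crude amplitude from signal energy
                }
            }
        };

        AudioDispatcher dispatcher =
                AudioDispatcherFactory.fromDefaultMicrophone(sampleRate, bufferSize, overlap);
        dispatcher.addAudioProcessor(new PitchProcessor(
                PitchProcessor.PitchEstimationAlgorithm.YIN, sampleRate, bufferSize, handler));
        dispatcher.run(); // blocks; run in a separate thread in a real application
    }
}
```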

Benchmarking Pitch Detection algorithms on note datasets

This document explains how we use the Human Voice Dataset (a series of WAV files containing sung notes) to evaluate pitch detection algorithms on isolated notes.

Components

Audio analysis

Audio signal analysis is powered by TarsosDSP. Its Yin implementation outperformed the other pitch detection algorithms we tried and has become the default implementation for the voice analysis module.

Vocobox delivers pitch detection through the following analyzers:

| Analyzer | Comment |
|----------|---------|
| VoiceInputStreamListen | Analyzes the audio signal from available inputs (microphones, but also lines, etc.). When running a Jack server, the audio sources made available by Jack appear in the source list! |
| VoiceFileRead | Analyzes the audio signal from (mono) WAV files. After reading, a collection of audio analysis events is gathered and can be sent to a synthesizer. |

Note that you can process FFTs using Spectro Edit, provided by Jzy3d Spectro. It is used below to draw note signal analysis. JSyn and TarsosDSP also provide FFT processing.
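
As a small example of the TarsosDSP side, the sketch below computes per-buffer FFT magnitudes with TarsosDSP's FFT utility; such magnitudes could then feed a spectrum view. The Spectro Edit / Jzy3d Spectro path actually used in the project is not shown here, and the class itself is hypothetical.

```java
// Small sketch of FFT processing with TarsosDSP's FFT utility (the Jzy3d
// Spectro / Spectro Edit path used in the project is not shown here).
import be.tarsos.dsp.AudioEvent;
import be.tarsos.dsp.AudioProcessor;
import be.tarsos.dsp.util.fft.FFT;

public class FftMagnitudeSketch implements AudioProcessor {
    private final int bufferSize;
    private final FFT fft;

    public FftMagnitudeSketch(int bufferSize) {
        this.bufferSize = bufferSize;
        this.fft = new FFT(bufferSize);
    }

    @Override
    public boolean process(AudioEvent audioEvent) {
        float[] buffer = audioEvent.getFloatBuffer().clone(); // transformed in place
        float[] magnitudes = new float[bufferSize / 2];
        fft.forwardTransform(buffer);
        fft.modulus(buffer, magnitudes); // per-bin magnitude, e.g. for a spectrum chart
        return true; // keep processing the stream
    }

    @Override
    public void processingFinished() {
        // nothing to release in this sketch
    }
}
```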

Synthesizers

Synthesizers powered by JSyn are available in a dedicated Maven module. The implementations below are basic; much more can be done with JSyn!

| Synthesizer | Comment |
|-------------|---------|
| JsynMonoscilloSynth | A single oscillator. |
| JsynMonoscilloRampSynth | A single oscillator with a LinearRamp on frequency and amplitude change commands, handling numerous pitch/amplitude change events without audio artifacts (a sketch follows this table). |
| JsynOcclusiveNoiseSynth | A synthesizer that plays a sound with no defined frequency (here: white noise) when the confidence value of pitch detection falls below a threshold. It allows a kind of audio debugging of pitch detection. Brutal tone changes make the synthesizer sound harsh, but smooth changes in tone balance could produce interesting effects. |
| JsynCircuitSynth | A synthesizer based on a JSyn Circuit, allowing easier abstraction of groups of synthesizer elements. Here we use the SynthCircuitBlaster circuit, derived from the JSyn examples. Note that the circuit provides its own control panel to the Vocobox UI. |
| JsynOscilloSpectroHarpSynth | An experimental synthesizer based on the FFT analysis of a file. A file is played, its FFT is processed, and each frequency band's energy defines the amplitude of one of the 93 oscillators covering 0-4 kHz. |
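
To give an idea of the ramp-based approach listed above, here is a sketch in the spirit of JsynMonoscilloRampSynth (not the actual class): a single oscillator whose frequency and amplitude targets go through LinearRamps, so that dense streams of change events do not produce clicks. The 50 ms ramp time is an arbitrary choice for this sketch.

```java
// Sketch in the spirit of JsynMonoscilloRampSynth (not the actual class):
// frequency and amplitude changes go through LinearRamps so that dense
// streams of pitch/amplitude events do not produce clicks.
import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.unitgen.LineOut;
import com.jsyn.unitgen.LinearRamp;
import com.jsyn.unitgen.SineOscillator;

public class RampedMonoOscilloSketch {
    private final Synthesizer synth = JSyn.createSynthesizer();
    private final SineOscillator osc = new SineOscillator();
    private final LinearRamp frequencyRamp = new LinearRamp();
    private final LinearRamp amplitudeRamp = new LinearRamp();
    private final LineOut lineOut = new LineOut();

    public RampedMonoOscilloSketch() {
        synth.add(osc);
        synth.add(frequencyRamp);
        synth.add(amplitudeRamp);
        synth.add(lineOut);

        frequencyRamp.output.connect(osc.frequency); // ramp drives the oscillator frequency
        amplitudeRamp.output.connect(osc.amplitude); // ramp drives the oscillator amplitude
        frequencyRamp.time.set(0.050); // seconds to glide to a new target (arbitrary)
        amplitudeRamp.time.set(0.050);

        osc.output.connect(0, lineOut.input, 0);
        osc.output.connect(0, lineOut.input, 1);
        synth.start();
        lineOut.start();
    }

    public void sendFrequency(double frequencyHz) {
        frequencyRamp.input.set(frequencyHz); // new target, reached over the ramp time
    }

    public void sendAmplitude(double amplitude) {
        amplitudeRamp.input.set(amplitude);
    }
}
```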

Charts

Charts are powered by Jzy3d. They are used as synthesizer command logs: parameter changes of the synthesizer are tracked and mapped to multiple 2D charts. Below is the list of available charts. See here for a video of the charts in action.

| Chart | Comment |
|-------|---------|
| Frequency chart | Shows the synthesizer frequency changes with a pink scatter plot. Confidence is used to define alpha, so nothing is displayed if pitch detection has confidence 0. |
| Amplitude chart | Shows the synthesizer amplitude changes with a cyan scatter plot. Amplitude events below the note relevance threshold (default 0.1) are drawn in gray. |

A few interesting features of Jzy3d:

  • easy charting
  • performance and liveness
  • coming soon: a log-scale chart will let frequency charts look like note charts without having to do the frequency-to-note conversion ourselves.
  • the underlying JOGL lets it run everywhere (any Java windowing toolkit, including Android)

Real time

Human perception of real time

We found in P. Brossier's thesis that humans may fail to perceive two audio events as distinct when they are separated by anywhere from a few milliseconds up to 50 ms or more:

As auditory nerve cells need to rest after firing, several phenomena may occur within the inner ear. Depending on the nature of the sources, two or more events will be merged into one sensation. In some cases, events will need to be separated by only a few millisecond to be perceived as two distinct events, while some other sounds will be merged if they occur within 50 ms, and sometimes even longer. These effects, known as the psychoacoustic masking effects, are complex, and depend not only of the loudness of both sources, masker and maskee, but also on their frequency and timbre [Zwicker and Fastl, 1990]. The different masking effects can be divided in three kinds [Bregman, 1990]. Pre-masking occurs when a masked event is followed immediately by a louder event. Post-masking instead occurs when a loud event is followed by a quiet noise. In both case, the quiet event will not be perceived – i.e. it will be masked. The third kind of masking effect is simultaneous masking, also referred to as frequency masking, as it is strongly dependent on the spectrum of both the masker and the maskee.

We thus consider 5 ms to be the time frame within which we should be able to complete our computational work in order to produce audio without audible cues. It is like rendering the images of an animation in under 1/25 s, so as to display them at a rate suitable for persistence of vision.
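
As a quick sanity check of what this budget means in samples, the snippet below (illustrative numbers only, not project settings) computes how many samples fit in 5 ms at common sample rates and what latency typical audio buffer sizes imply.

```java
// Back-of-the-envelope check of the 5 ms budget at common sample rates:
// how many samples fit in the budget, and what latency a given audio
// buffer size implies before any processing time is added.
public class LatencyBudgetSketch {
    public static void main(String[] args) {
        double budgetMs = 5.0;
        int[] sampleRates = {44100, 48000};
        int[] bufferSizes = {128, 256, 512, 1024};
        for (int rate : sampleRates) {
            double samplesInBudget = rate * budgetMs / 1000.0; // e.g. ~220 samples at 44.1 kHz
            System.out.printf("%d Hz: %.0f samples fit in %.1f ms%n", rate, samplesInBudget, budgetMs);
            for (int size : bufferSizes) {
                double latencyMs = 1000.0 * size / rate; // buffer duration alone
                System.out.printf("  buffer %d -> %.1f ms%n", size, latencyMs);
            }
        }
    }
}
```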

Real time capabilities of the Java platform

In this project we are working on data rather than delivering production-ready software, so it does not really matter that standard versions of Java cannot meet the above timing constraints, due to the unpredictability of garbage collection.

But Java can be a good platform for real time, as shown by the Metronome GC, a garbage collector able to work in deterministic time, with the promise of spending no more than 3 ms collecting garbage. 3 ms is great because it is below the perception threshold discussed above.

Performance of components

We noticed that Tarsos processes files much faster than a player would read them.

Bibliography

Several interesting papers related to voice frequency detection can be found in the doc/papers folder.

Getting and building source code

Create a Vocobox directory

cd dev
mkdir vocobox
cd vocobox
mkdir external
mkdir public
cd public

Get the voice dataset

git clone https://github.com/vocobox/human-voice-dataset

Get and build Vocobox

git clone https://github.com/vocobox/vocobox
cd vocobox/dev/java
mvn clean install

Maven should retrieve TarsosDSP, JSyn, and Jzy3d from Jzy3d's maven repository.

The following is not necessary, but if you want to build the dependencies yourself, you can get our forks enabling JSyn and TarsosDSP on Maven:

cd ../external/
git clone https://github.com/vocobox/jsyn
git clone https://github.com/vocobox/TarsosDSP tarsosdsp
cd jsyn
mvn clean install -DskipTests
cd ../tarsosdsp
mvn clean install -DskipTests

Contributing

Please join us and share your contributions through pull requests.

You can contact [email protected] for questions.

Licensing

IF YOU INTEND TO REUSE THIS SOFTWARE, PLEASE VERIFY THE COMPONENTS' LICENSES!

Thanks

To Phil Burk and Joren Six for their kind help and advice regarding the excellent tools JSyn and TarsosDSP.
