
My-Voice Analysis is a Python library for the analysis of voice (simultaneous speech, high entropy) without the need for a transcription. It breaks utterances and detects syllable boundaries, fundamental frequency contours, and formants.

Home Page: https://shahabks.github.io/my-voice-analysis/

License: MIT License

Topics: speech-analysis, acoustic-model, voice-analysis, python-library, praatscript

my-voice-analysis's Introduction


Update (March 2019): myspsolution.praat has been revised and the new version has been uploaded to the master branch; please use it. my-voice-analysis' setup.py and __init__.py have been revised too, and the my-voice-analysis package on PyPI has been upgraded.

The myprosody package includes all of my-voice-analysis' functions plus new functions that you might consider using instead. The latest myprosody update is available both here on GitHub and on PyPI.

NOTE:

1- Both My-Voice-Analysis and Myprosody work on Python 3.7.
2- If you install My-Voice-Analysis through PyPI, please use:
      mysp=__import__("my-voice-analysis") instead of import myspsolution as mysp
3- It is better to keep folder names as single tokens, for instance "Name_Folder" or "NameFolder", without spaces in the directory path.
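The import workaround in note 2 is needed because the distribution name contains hyphens and so is not a valid Python identifier; `__import__` takes the name as a string instead. A minimal sketch of the mechanism, demonstrated with a standard-library module since my-voice-analysis may not be installed:

```python
# __import__ accepts the module name as a string, so it works even when
# the name is not a valid Python identifier. With the PyPI package
# installed, the call from note 2 would be:
#     mysp = __import__("my-voice-analysis")
# The same mechanism, shown with a standard-library module:
mod = __import__("json")
assert mod.loads("[1, 2]") == [1, 2]
```

`importlib.import_module("my-voice-analysis")` is an equivalent, more explicit spelling of the same idea.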

my-voice-analysis

My-Voice Analysis is a Python library for the analysis of voice (simultaneous speech, high entropy) without the need for a transcription. It breaks utterances and detects syllable boundaries, fundamental frequency contours, and formants. Its built-in functions recognise and measure:

  1. gender,
  2. speech mood (semantic analysis),
  3. pronunciation posterior score,
  4. articulation rate,
  5. speech rate,
  6. filler words,
  7. f0 statistics.

The library was developed based on ideas introduced by Nivja DeJong and Ton Wempe [1], Paul Boersma and David Weenink [2], Carlo Gussenhoven [3], S.M. Witt and S.J. Young [4], and Yannick Jadoul [5]. Peaks in intensity (dB) that are preceded and followed by dips in intensity are considered potential syllable cores. My-Voice Analysis is unique in its aim to provide a complete quantitative and analytical way to study the acoustic features of speech. Moreover, those features can be analysed further with Python to provide deeper insights into speech patterns. This library is for linguists, scientists, developers, speech and language therapy clinics, and researchers.
Please note that My-Voice Analysis is currently at an early stage, though under active development. While the amount of functionality currently present is not huge, more will be added over the next few months.
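The peak-and-dip rule described above can be sketched in a few lines. This toy version is illustrative only; the library's actual Praat implementation also applies dB thresholds and voicing checks:

```python
def candidate_nuclei(intensity):
    """Toy sketch: mark a frame as a candidate syllable core when its
    intensity exceeds both of its neighbours (a peak preceded and
    followed by dips). Not the library's real algorithm."""
    peaks = []
    for i in range(1, len(intensity) - 1):
        if intensity[i] > intensity[i - 1] and intensity[i] > intensity[i + 1]:
            peaks.append(i)
    return peaks

# Two local intensity peaks -> two candidate syllable cores:
assert candidate_nuclei([50, 60, 50, 55, 70, 52]) == [1, 4]
```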

Installation

my-voice-analysis can be installed like any other Python library, using (a recent version of) the Python package manager pip, on Linux, macOS, and Windows:

                                    pip install my-voice-analysis

or, to update your installed version to the latest release:

                                     pip install -U my-voice-analysis

NOTE:

After installing My-Voice-Analysis, copy the file myspsolution.praat from

                                      https://github.com/Shahabks/my-voice-analysis  

and save it in the directory where you will keep the audio files for analysis.

Audio files must be in *.wav format, recorded at a 44 kHz sample rate with 16-bit resolution.
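The format requirement above can be verified before running an analysis. A small helper sketch using only the standard-library wave module (check_wav is a hypothetical helper, not part of the library):

```python
import wave

def check_wav(path):
    """Return (sample_rate, bits, ok) where ok indicates whether the
    file meets the 44 kHz / 16-bit requirement stated above.
    A verification sketch, not part of my-voice-analysis."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        bits = w.getsampwidth() * 8   # sample width is in bytes
    return rate, bits, (rate in (44000, 44100) and bits == 16)
```

If the check fails, re-export the recording at the required rate and depth before passing it to the library.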

Example usage

Gender recognition and mood of speech: Function myspgend(p,c)

                [in]  import myspsolution as mysp
                     
                     p="Walkers" # Audio File title
                     c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                     mysp.myspgend(p,c)
                
                [out] a female, mood of speech: Reading, p-value/sample size= :0.00 5

Pronunciation posteriori probability score percentage: Function mysppron(p,c)

                [in]   import myspsolution as mysp

                       p="Walkers" # Audio File title
                       c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                       mysp.mysppron(p,c)
                       
               [out]   Pronunciation_posteriori_probability_score_percentage= :85.00

Detect and count number of syllables: Function myspsyl(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspsyl(p,c)
                        
                [out]   number_ of_syllables= 154

Detect and count number of fillers and pauses: Function mysppaus(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.mysppaus(p,c)
                        
                [out]   number_of_pauses= 22

Measure the rate of speech (speed): Function myspsr(p,c)

                [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspsr(p,c)
                
                [out]   rate_of_speech= 3 # syllables/sec original duration

Measure the articulation rate (speed): Function myspatc(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspatc(p,c)
                        
                [out]  articulation_rate= 5 # syllables/sec speaking duration

Measure speaking time (excluding fillers and pauses): Function myspst(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspst(p,c)
           
                [out]   speaking_duration= 31.6 # sec only speaking duration without pauses

Measure total speaking duration (inc. fillers and pauses): Function myspod(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspod(p,c)
                        
                [out]   original_duration= 49.2 # sec total speaking duration with pauses

Measure ratio between speaking duration and total speaking duration: Function myspbala(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspbala(p,c)

                [out]   balance= 0.6 # ratio (speaking duration)/(original duration)
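The balance figure is simply the ratio of the two durations reported by myspst and myspod; for the example values above:

```python
# Values taken from the example outputs above.
speaking_duration = 31.6   # sec, from myspst (without pauses)
original_duration = 49.2   # sec, from myspod (with pauses)

balance = speaking_duration / original_duration
# approximately 0.642; the library reports it rounded as 0.6
assert round(balance, 1) == 0.6
```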

Measure fundamental frequency distribution mean: Function myspf0mean(p,c)

                 [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspf0mean(p,c)

                 [out]  f0_mean= 212.45 # Hz global mean of fundamental frequency distribution

Measure fundamental frequency distribution SD: Function myspf0sd(p,c)

                  [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspf0sd(p,c)

                 [out]  f0_SD= 57.85 # Hz global standard deviation of fundamental frequency distribution

Measure fundamental frequency distribution median: Function myspf0med(p,c)

                  [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspf0med(p,c)

                 [out]  f0_MD= 205.7 # Hz global median of fundamental frequency distribution

Measure fundamental frequency distribution minimum: Function myspf0min(p,c)

                  [in]   import myspsolution as mysp

                        p="Walkers" # Audio File title
                        c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                        mysp.myspf0min(p,c)

                 [out]  f0_min= 77 # Hz global minimum of fundamental frequency distribution

Measure fundamental frequency distribution maximum: Function myspf0max(p,c)

                  [in]   import myspsolution as mysp

                         p="Walkers" # Audio File title
                         c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                         mysp.myspf0max(p,c)

                  [out] f0_max= 414 # Hz global maximum of fundamental frequency distribution

Measure the 25th quantile of the fundamental frequency distribution: Function myspf0q25(p,c)

                   [in]   import myspsolution as mysp

                          p="Walkers" # Audio File title
                          c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                          mysp.myspf0q25(p,c)

                   [out]  f0_quan25= 171 # Hz global 25th quantile of fundamental frequency distribution

Measure the 75th quantile of the fundamental frequency distribution: Function myspf0q75(p,c)

                    [in]   import myspsolution as mysp

                           p="Walkers" # Audio File title
                           c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)
                           mysp.myspf0q75(p,c)

                    [out]  f0_quan75= 244 # Hz global 75th quantile of fundamental frequency distribution

Overview: Function mysptotal(p,c)

                     [in]   import myspsolution as mysp

                           p="Walkers" # Audio File title
                           c=r"C:\Users\Shahab\Desktop\Mysp" # Path to the Audio_File directory (Python 3.7)

                           mysp.mysptotal(p,c)

                    [out]  number_ of_syllables     154
                           number_of_pauses          22
                           rate_of_speech             3
                           articulation_rate          5
                           speaking_duration       31.6
                           original_duration       49.2
                           balance                  0.6
                           f0_mean               212.45
                           f0_std                 57.85
                           f0_median              205.7
                           f0_min                    77
                           f0_max                   414
                           f0_quantile25            171
                           f0_quan75                244

Development

My-Voice-Analysis was developed by Sab-AI Lab in Japan (previously called Mysolution). It is part of a project at Sab-AI Lab to develop acoustic models for linguistics. The plan is to enrich the functionality of My-Voice Analysis by adding more advanced functions as well as language models. Please see Myprosody (https://github.com/Shahabks/myprosody) and Speech-Rater (https://shahabks.github.io/Speech-Rater/).

Pronunciation

My-Voice-Analysis and Myprosody are two encapsulated libraries from one of our main projects on speech scoring. The main project, in its early version, employed ASR and used the Hidden Markov Model framework to train simple Gaussian acoustic models for each phoneme for each speaker in the available audio datasets, then calculated all the symmetric K-L divergences for each pair of models for each speaker. What you see in these repos is just an approximation of those models, focused on fluency rather than on the accuracy of each phoneme. In the project's machine learning model we considered audio files of speakers who possessed an appropriate degree of pronunciation, either in general or for a specific utterance, word, or phoneme (in effect, they had been rated by expert human graders). The figure below illustrates some of the factors that the expert human graders considered in assigning an overall score.

[Figure: factors considered by the expert human graders in assigning an overall pronunciation score]

S. M. Witt, 2012 “Automatic error detection in pronunciation training: Where we are and where we need to go,”

References and Acknowledgements

  1. DeJong, N.H. and Wempe, T. [2009]; “Praat script to detect syllable nuclei and measure speech rate automatically”; Behavior Research Methods, 41(2), 385-390.
  2. Paul Boersma and David Weenink; http://www.fon.hum.uva.nl/praat/
  3. Gussenhoven, C. [2002]; “Intonation and Interpretation: Phonetics and Phonology”; Centre for Language Studies, University of Nijmegen, The Netherlands.
  4. Witt, S.M. and Young, S.J. [2000]; “Phone-level pronunciation scoring and assessment for interactive language learning”; Speech Communication, 30 (2000), 95-108.
  5. Jadoul, Y., Thompson, B., & de Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. https://doi.org/10.1016/j.wocn.2018.07.001 (https://parselmouth.readthedocs.io/en/latest/)
  6. Projects: https://parselmouth.readthedocs.io/en/docs/examples.html
  7. “Automatic scoring of non-native spontaneous speech in tests of spoken English”; Speech Communication, Volume 51, Issue 10, October 2009, Pages 883-895.
  8. “A three-stage approach to the automated scoring of spontaneous spoken responses”; Computer Speech & Language, Volume 25, Issue 2, April 2011, Pages 282-306.
  9. “Automated Scoring of Nonnative Speech Using the SpeechRaterSM v. 5.0 Engine”; ETS Research Report, Volume 2018, Issue 1, December 2018, Pages 1-28.

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

my-voice-analysis's People

Contributors

shahabks


my-voice-analysis's Issues

use on streaming audio?

Have you thought about this? I guess I could pass in audio snippets, e.g. every 10 seconds during a conversation. Curious whether you've considered streaming audio.

parselmouth.PraatError: Unknown variable:

This library looks awesome! I'm trying to repeat the first example locally.

I created a folder called my-voice-analysis/Mysp and placed the myspsolution.praat file in that folder, along with a wav file called about_time.wav that is a 4-second clip of a man speaking. From the my-voice-analysis directory, I try running:

mysp=__import__("my-voice-analysis")
p="about_time" # Audio File title
c = r"Mysp"
mysp.myspgend(p,c)

I get the following error:

>>> mysp.myspgend(p,c)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/anaconda3/envs/conda3/lib/python3.7/site-packages/my-voice-analysis/__init__.py", line 231, in myspgend
    objects= run_file(sourcerun, -20, 2, 0.3, "yes",sound,path, 80, 400, 0.01, capture_output=True)
parselmouth.PraatError: Unknown variable:
« (ln(f_one
Script line 295 not performed or completed:
« lnf0 = (ln(f_one)-5.65)/0.31 »
Script not completed.

Any idea? Thanks!

How to analyze uploaded file

Everything works fine locally with the mysp.myspgend(p,c) function, where p is the name of the file and c is its location. But I want an API that receives a file and analyzes it. How can I handle this?

Bytes as an input

I am having trouble building a pipeline with Praat where the input to the script myspsolution.praat is not a path to a file but an already loaded audio file. I would like to pass the audio to Praat as bytes, not as a path to the audio file. Do you know how to do that?

Accessing the time stamps of filler words.

My understanding of the term filler words involves utterances such as 'umm' and 'uhh'. Is this what my-voice-analysis detects? I have been unable to access the timestamps of the filler words from the Textgrid file. Is there a way to access the timestamps of filler words that are distinct from pauses listed in the text grid file?

On a side note, I would like to thank you for your library.

Audio Not Clear Message Despite Rewriting Soundfile to correct sample rate and .wav format

I keep getting the 'Try again the sound of the audio was not clear' message despite rewriting my wav file to meet the requirements using librosa.

#!/Users/UserName/speech-analysis-env/bin/python
import myspsolution as mysp
import librosa
import soundfile as sf

# Audio file title
p="a"

# Path to audio file directory
c=r"/Users/rajka/my-voice-analysis-example/speech-files"

y, s = librosa.load("/Users/rajka/my-voice-analysis-example/speech-files/a.wav", sr=48000)
sf.write('a_1.wav', y, s, "PCM_24")

# Detect and count number of fillers and pauses
mysp.mysppaus(p,c)

Filler words detection

How are filler words being detected?
The function mysp.mysppaus(p,c) seems to only detect the pauses.

Saving results

Hi! I've been trying to save analysis results in some variable, but found out that output is a 'NoneType' object. Is there any chance to keep the results, so I can save them to csv or txt afterwards?

Pronunciation posteriori probability score percentage

I have a question: what exactly does the pronunciation posteriori probability score percentage variable measure? The probability that the person will later pronounce words as they do on the recording?

Thank you very much,

Mood as emotion

Is it possible to receive the emotion (sad/happy/etc.) instead of "reading" or "speaking passionately"?

Pronunication Scoring

How is the pronunciation scored without text and alignment?

Witt S.M and Young S.J [2000]; “Phone-level pronunciation scoring and assessment or interactive language learning”; Speech Communication, 30 (2000) 95-108.

requires the constrained phone loop

I keep getting "Try again the sound of the audio was not clear"

Am I missing something?
I am trying this on my Mac:

import myspsolution as mysp

p="test_audio.wav" # Audio File title
c=r"/Users/c.chengy/Desktop/Projects/my-voice-analysis/my-voice-analysis/audio_test"

mysp.myspgend(p,c)

When I look under Get Info of the audio file, I see:
Duration: 00:11
Sample rate: 44.1kHz
Bits per sample: 16

Error: Try again the sound of the audio was not clear

Hi!
I'm recording a file audio with this code in order to have a 44khz 16bit .wav file:

import pyaudio
import wave

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44000
RECORD_SECONDS = 20
WAVE_OUTPUT_FILENAME = "/Users/federicozanini/Desktop/TesiMello/SpeechLearning/AudioFile/output2.wav"

p = pyaudio.PyAudio()

stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK)

print("* recording")

frames = []

for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
    data = stream.read(CHUNK)
    frames.append(data)

print("* done recording")

stream.stop_stream()
stream.close()
p.terminate()

wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()

But when I try to analyze the file with your library with this code:

import os
import sys
import io
import Functions as myLib
mysp=__import__("my-voice-analysis")

p="output2.wav" # Audio File title
c=r"/Users/federicozanini/Desktop/TesiMello/SpeechLearning/AudioFile" # Path to the Audio_File directory (Python 3.7)
mysp.myspgend(p,c)
mysp.mysptotal(p,c)

I keep getting the same error: "Try again the sound of the audio was not clear"

Can you help me? Sorry, I'm new to Python, so I could have made a stupid or very basic error. Thanks for your help in advance!

Store output as variable

How do I store the output data in a variable? For example, how do I store the balance value in an int variable?

import myspsolution as mysp is not working

I can install with pip install my-voice-analysis,
but then I can't import myspsolution, since no myspsolution is installed; the module is called my-voice-analysis.
A workaround is to take the __init__.py file from the module (../python3.6/site-packages/my-voice-analysis/__init__.py), copy it into the working directory, and rename it myspsolution.py.
It could also be interesting to use os.path.join instead of sound=p+"/"+m+".wav" to make this work better on Linux or Mac.
Anyway, really nice module 🥇
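The portability suggestion in the issue above can be sketched as follows. The directory and title values here are illustrative, mirroring the p/c convention used throughout the README:

```python
import os.path

# Manual concatenation, as in the library's current code:
c = "/Users/example/audio"          # hypothetical directory
m = "Walkers"                       # hypothetical file title
manual = c + "/" + m + ".wav"

# os.path.join picks the separator appropriate for the platform,
# so the same code works on Windows, Linux, and macOS:
portable = os.path.join(c, m + ".wav")
```

On POSIX systems the two produce identical strings; the difference only shows up where the path separator differs.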

Voice Memo for preparing audio files

Could anyone please tell me if Voice Memo can be used to create audio files?

I get "Try again the sound of the audio was not clear" and cannot run the program. Since I'm using the correct version of Python (3.7.17), and 44.1 kHz/16-bit wav audio files longer than 10 sec in duration, I thought the issue might be how I prepare my audio files.

Here is my program:
mysp=__import__("my-voice-analysis")

p="NI" # Audio File title
c=r"C:\Users\i.nanako\Desktop\Data" # Path to the Audio_File directory (Python 3.7)

mysp.mysptotal(p,c)
mysp.myspgend(p,c)
mysp.myspsyl(p,c)
mysp.mysppaus(p,c)
mysp.myspsr(p,c)
mysp.myspatc(p,c)
mysp.myspst(p,c)
mysp.myspod(p,c)
mysp.myspbala(p,c)
mysp.myspf0mean(p,c)
mysp.myspf0sd(p,c)
mysp.myspf0med(p,c)
mysp.myspf0min(p,c)
mysp.myspf0max(p,c)
mysp.myspf0q25(p,c)
mysp.myspf0q75(p,c)
mysp.mysppron(p,c)

Thank you!

Real Time Processing

Hi,
I would like to know if it is possible to use this library in a real-time process with microphone input.
Thank you

Extracting the results from functions

I am working on some audio files and getting the results.
I am looking to save the results in a variable but unable to do so.

eg.

mysp=__import__("my-voice-analysis")
x = mysp.mysppaus(p,c)
print(x)

It returns:
[]
number_of_pauses= 72
None

How do I save the number 72 into the variable x?
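Because the library's functions print their results and return None, one workaround is to capture stdout and parse the number out of the printed line. A sketch (parse_value is a hypothetical helper, not part of the library), simulating the printed output shown in this issue:

```python
import io
from contextlib import redirect_stdout

def parse_value(printed, key):
    """Extract the numeric value following 'key=' from captured output.
    Hypothetical helper; handles outputs like 'number_of_pauses= 72'
    and 'Pronunciation_..._percentage= :85.00'."""
    for line in printed.splitlines():
        if key in line and "=" in line:
            return float(line.split("=")[-1].strip().lstrip(":"))
    return None

# With my-voice-analysis installed, you would capture the real call:
#     with redirect_stdout(buf): mysp.mysppaus(p, c)
# Here we simulate the printed output instead:
buf = io.StringIO()
with redirect_stdout(buf):
    print("number_of_pauses= 72")
x = parse_value(buf.getvalue(), "number_of_pauses")
assert x == 72.0
```

The same pattern works for any of the myspXXX functions, since they all print a single `name= value` line.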
