
sonus's Introduction

sonus


A dead simple STT library in Node

Sonus lets you quickly and easily add a VUI (Voice User Interface) to any hardware or software project. Just like Alexa, Google Assistant, and Siri, Sonus is always listening offline for a customizable hotword. Once that hotword is detected, your speech is streamed to the cloud recognition service of your choice, and you get the results back in real time.

Platform Support

  • Linux - most major distros (including Raspbian)
  • macOS
  • Windows

Streaming Recognition Services

  • Google Cloud Speech
  • Alexa Voice Services
  • Wit.ai
  • Microsoft Cognitive Services
  • Houndify

Installation

npm install --save sonus

Dependencies

Generally, running npm install should suffice. However, this module also requires you to install SoX.

For most Linux distros

Recommended: use arecord, which ships with most Linux distros. Alternatively, install SoX:

sudo apt-get install sox libsox-fmt-all

For macOS

brew install sox
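
Whichever recorder you have installed, you can tell Sonus which one to spawn via the recordProgram option used by the examples further down this page. A minimal sketch, reusing the setup from the Usage section below:

const Sonus = require('sonus')
const speech = require('@google-cloud/speech')
const client = new speech.SpeechClient()

const hotwords = [{ file: 'resources/snowboy.umdl', hotword: 'snowboy' }]
// 'arecord' on most Linux distros; 'rec' (installed with SoX) on macOS
const sonus = Sonus.init({ hotwords, recordProgram: 'arecord' }, client)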

Usage

Configure your cloud speech recognition service of choice, such as the Google Cloud Speech API.

Note: You need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point at your JSON keyfile, or check the examples to see how you can pass in the keyfile path.
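
For example, a minimal sketch (the keyfile path below is a placeholder):

// Either export GOOGLE_APPLICATION_CREDENTIALS before starting your app,
// or pass the keyfile path to the client explicitly:
const speech = require('@google-cloud/speech')
const client = new speech.SpeechClient({
  keyFilename: '/path/to/your-keyfile.json' // placeholder path
})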

Add sonus and said recognizer:

const Sonus = require('sonus')
const speech = require('@google-cloud/speech')
const client = new speech.SpeechClient()

Add your keyword and initialize Sonus with a Snowboy hotword:

const hotwords = [{ file: 'resources/snowboy.umdl', hotword: 'snowboy' }]
const sonus = Sonus.init({ hotwords }, client)

Create your own Alexa in less than a tweet:

Sonus.start(sonus)
sonus.on('hotword', (index, keyword) => console.log("!"))
sonus.on('final-result', console.log)
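
The example.js excerpts later on this page also listen for partial results and errors, and stop Sonus on command. A minimal sketch of those extra listeners:

sonus.on('partial-result', result => console.log('Partial', result))
sonus.on('error', error => console.log('error', error))
sonus.on('final-result', result => {
  console.log('Final', result)
  if (result.includes('stop')) Sonus.stop()
})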

Versioning

This project uses semantic versioning as of v0.1.0

How do I set up Google Cloud Speech API?

Follow these instructions.

How do I make my own hotword?

Sonus uses Snowboy for offline hotword recognition. You can use their website or API to train a model for a new hotword. Hotword training must occur online through their web service.
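
Once your model is trained, point Sonus at the downloaded .pmdl file the same way the examples on this page do. A minimal sketch (the file name and hotword below are placeholders):

const hotwords = [{ file: 'resources/my_hotword.pmdl', hotword: 'my hotword' }]
const sonus = Sonus.init({ hotwords }, client)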

If you've built a project with Sonus, send a PR and include it here!

Authors

Evan Cohen: @_evnc
Ashish Chandwani: @ashishschandwa1

License

Licensed under MIT.

sonus's People

Contributors

ashishsc, black-snow, evancohen, germanbluefox, hackergrrl, jaumard, manzonif, mathiskeller, matrixoperator, rococtz, talsalmona, timaschew


sonus's Issues

Google Speech error stops Sonus from working

When sonus returns this error, nothing works anymore (no keyword detection) and I have to restart my project to get it working again :(

error:  
{ code: 11,
  message: 'Audio data is being streamed too slow. Please stream audio data approximately at real time.',
  details: [] }
error:  
{ streamingError: 
   { code: 11,
     metadata: { _internal_repr: { 'content-disposition': [ 'attachment' ] } } } }

npm run sonus error UnhandledPromiseRejectionWarning after a while

Description:
Sonus is used in the smart-mirror project. In that project I get the error below after roughly 5 to 8 minutes of running; the hotword detection works fine and I have no problem with the Google Speech API.

How to reproduce

pi@raspberrypi:~/smart-mirror-master $ npm run sonus

[email protected] sonus /home/pi/smart-mirror-master
node sonus.js
(node:31118) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: stdout maxBuffer exceeded

Comments
At first I thought it was an issue in smart-mirror or in my config.js file, but then I ran
"npm run sonus" just to test the audio capture, waited around 5 to 8 minutes without doing anything, and the error occurred too.

Configuration
For information, this is my configuration

pi@raspberrypi: ~/smart-mirror-master $ node -v
v6.9.1
pi@raspberrypi: ~/smart-mirror-master $ npm -v
3.10.8

Ideas
From what I have read around the web, it can be caused by two main things

  1. set the maxBuffer option when using child_process.exec (see the sketch below)
    stdout-buffer-issue-using-node-child-process

  2. a missing catch in a promise call
    promise-reject-possibly-unhandled-error
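
A minimal sketch of idea 1 in isolation (illustrative only, not the smart-mirror code): raise maxBuffer when spawning a child process so long-running output doesn't overflow it.

const { exec } = require('child_process')

// Allow up to 10 MB of buffered stdout/stderr instead of the default
exec('node sonus.js', { maxBuffer: 10 * 1024 * 1024 }, (err, stdout, stderr) => {
  if (err) return console.error(err)
  console.log(stdout)
})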

Do you have an idea?

Laurent

Continuous recognition

Great implementation, very instructive :-)

I'm wondering what the best way would be to keep speech recognition going after a first hotword, which is usually the case with a conversational bot: the user won't say the hotword every time, it doesn't make sense.

The problem is that sonus triggers the intent processing after the final-result emitted at the end of recognitionStream, so we need a way to manually trigger the hotword in order to avoid a painful repeat for the user.

Maybe expose the detector, e.g. const detector = sonus.detector?

Sonus hotword is getting detected on any speech and sound

Hi

I am using Sonus.annyang on a Raspberry Pi 3 and configured "smart mirror" as the hotword for triggering.

I also created a smart-mirror pmdl using the Snowboy site.

But it seems the hotword often gets detected by any sound or any speech in the room.

It keeps happening, and I see many entries in the logs stating the hotword was detected even though I don't say "smart mirror".

sonus.on('hotword', (index, keyword) => {
  console.log("hotword triggered");
  new SoundPlayer().sound(ROOT_DIR + '/resources/ding.wav');
});

Some weird behaviour is happening with hotword detection.

code crash

Sometimes when I run the code using the command node index.js, it just exits without any information:

node index.js
Say amie
exit

example.js not working on macOS Sierra

The code I am trying to run via Terminal:

node example.js

'use strict'

const ROOT_DIR = '/Users/Patrick/Documents/sonus/'
const Sonus = require('sonus')
const speech = require('@google-cloud/speech')({
  projectId: 'voice-assistant-123919',
  keyFilename: ROOT_DIR + 'voice assistant-bf4caad39894.json'
})

const hotwords = [{ file: ROOT_DIR + 'jarvis.pmdl', hotword: 'jarvis' }]
const language = "en-US"
const sonus = Sonus.init({ hotwords, language, recordProgram: "rec" }, speech)

Sonus.start(sonus)
console.log('Say "' + hotwords[0].hotword + '"...')
sonus.on('hotword', (index, keyword) => console.log("!" + keyword))
sonus.on('partial-result', result => console.log("Partial", result))
sonus.on('error', error => console.log('error', error))
sonus.on('final-result', result => {
  console.log("Final", result)
  if (result.includes("stop")) {
    Sonus.stop()
  }
})

console.log(`Done!`);

Output:

Say "jarvis"...
Done!

The code above is slightly modified from the original example.js, but is structurally the same.

There are no error messages, which really confuses me, and the code just runs all the way through without even waiting to listen for hotwords.

(It acts as if something is quitting it.)

esct

keyboard went weird

Error event and crash

Hey,

I use the latest version, v0.1.8, and now once the hotword is detected I get an error (even though the sentences are correctly returned).
This is the error:

error:  
{ code: 11,
  message: 'Audio data is being streamed too slow. Please stream audio data approximately at real time.',
  details: [] }
error:  
{ streamingError: 
   { code: 11,
     metadata: { _internal_repr: { 'content-disposition': [ 'attachment' ] } } } }

Then it crashes with:

_stream_writable.js:383
  cb();
  ^

TypeError: cb is not a function
    at afterWrite (_stream_writable.js:383:3)
    at onwrite (_stream_writable.js:374:7)
    at runCallback (timers.js:672:20)
    at tryOnImmediate (timers.js:645:5)
    at processImmediate [as _immediateCallback] (timers.js:617:5)

More info:
OS: Mac OSX
Node: v7.10.0

On version 0.1.7 I didn't get the error event, but I still got the crash :(

Maybe it's because Google Speech has now reached v1?

Sonus.pause()

Hello,

when running sonus I get the following error:

Say "alexa"...
!alexa
Partial test
Final test
/home/pi/NodeJS/node_modules/sonus/index.js:118
Sonus.pause = sonus => sonus.mic.pause()
^

TypeError: Cannot read property 'mic' of undefined
at Object.CloudSpeechRecognizer.init.CloudSpeechRecognizer.startStreaming.recognitionStream.on.Sonus.init.opts.hotwords.forEach.detector.on.csr.on.Sonus.start.Sonus.pause.sonus [as pause] (/home/pi/NodeJS/node_modules/sonus/index.js:118:29)
at Writable. (/home/pi/NodeJS/VIKI/sonus.js:29:9)
at emitOne (events.js:77:13)
at Writable.emit (events.js:169:7)
at Writable. (/home/pi/NodeJS/node_modules/sonus/index.js:98:15)
at emitOne (events.js:77:13)
at Writable.emit (events.js:169:7)
at null. (/home/pi/NodeJS/node_modules/sonus/index.js:42:29)
at emitOne (events.js:77:13)
at emit (events.js:169:7)

I'm running this on my Raspberry Pi 2 (Model B). My node version is 4.6.2.
I took the code from the examples. Is this a bug or am I missing something? (My sonus.js is just placed somewhere else, and I added absolute paths to the /node_modules/sonus/index.js path entries.)

Greetings
Nycon

CloudSpeechRecognizer connection status

Great work BTW!

I'm interested in understanding whether there is a way to detect the status of the cloud speech recognizer. I'm working on an embedded application and would love to be able to display an "offline" indicator if the recognizer socket has disconnected or if the connection fails. Currently it fails silently.

Tested by disconnecting network and running any sonus example.

I looked through the google cloud docs and didn't see anything obvious.

Hopefully it's a quick fix, or you can point me to the google module that does it.

Thanks again,
Leif
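
For reference, the closest existing hook is the error listener used in example.js elsewhere on this page; per the report above it may not fire on a silent disconnect, so this is only a partial answer. A minimal sketch:

sonus.on('error', error => console.log('recognizer error', error))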

Calling Sonus.start() after Sonus.stop() crashes SnowboyDetect with a "write after end"

To reproduce:

sonus = Sonus.init(.......);

Sonus.start(sonus);
Sonus.stop();
Sonus.start(sonus);
events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: write after end
    at writeAfterEnd (_stream_writable.js:166:12)
    at SnowboyDetect.Writable.write (_stream_writable.js:217:5)
    at PassThrough.ondata (_stream_readable.js:555:20)
    at emitOne (events.js:96:13)
    at PassThrough.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at PassThrough.Readable.push (_stream_readable.js:134:10)
    at PassThrough.Transform.push (_stream_transform.js:128:32)
    at afterTransform (_stream_transform.js:77:12)
    at TransformState.afterTransform (_stream_transform.js:54:12)

Using Sonus in Electron

In the interest of documenting this issue:

There are a number of issues using native node modules in electron. Since Sonus depends on Snowboy and Google Cloud Speech, both of which have native components, it's difficult to get Sonus running in an Electron project.

Typically the solution would be to use electron-rebuild but this seems to fail for gRPC (and take a substantial amount of time on a low powered device for snowboy).

Existing Issues:

What's left to try?
Creating a node child process (not a pretty solution, but should work by running the processes outside of the Node runtime). This doesn't work well (read as "at all") for installable Electron apps...

Instructions on troubleshooting

I am attempting to build a library with this project. The hotword detection with the snowboy model works, but I never receive a transcribe event after that. How can I debug/troubleshoot what is going wrong? Any suggestions?

Run on pi3 with minibian

I'm trying to set up sonus on my Pi 3 with minibian OS but it doesn't work. I made a very small script that does:

const Sonus = require('sonus')
const speech = require('@google-cloud/speech')({
  projectId: 'projectID',
  keyFilename: './project-e522...efac7.json'
})

const hotwords = [{ file: './Hey_lisa.pmdl', hotword: 'hey lisa' }]
const language = 'fr-FR'
const sonus = Sonus.init({ hotwords, language }, speech)
Sonus.start(sonus)
sonus.on('hotword', (index, keyword) => console.log("!"))
sonus.on('final-result', console.log)

On my Mac this script works perfectly, but on my Pi 3 the script starts and after 2 seconds just finishes without an error message :(

Any idea? I installed the dependencies as the doc says: sudo apt-get install sox libsox-fmt-all

Continue recognitionStream?

Great implementation, very instructive :-)

I'm wondering what the best way would be to keep speech recognition going after a first hotword, which is usually the case with a conversational bot: the user won't say the hotword every time, it doesn't make sense.

The problem is that sonus triggers the intent processing after the final-result emitted at the end of recognitionStream, so we need a way to manually trigger the hotword in order to avoid a painful UX.

Maybe expose the detector, e.g. const detector = sonus.detector?

why is cloudSpeechRecognizer.listening always false?

Hi!

I'm trying to use your example, but when I debug it, cloudSpeechRecognizer.listening is always false. So I can't stream my voice to send it to Google Cloud.

It can hear the hotword, but the commands in annyang-example.js can't hear me.

I'm on macOS Sierra with Node 6. I installed SoX.
Am I missing something in the installation?

Thanks.

Add different language support

Hi!

I contacted you on Twitter a few days ago; I'm the founder of Gladys.

I just finally tried your module with billing enabled on Google Cloud Platform. It works like a charm :)

Just one piece of feedback: as I speak French, I want to send my voice to Google Cloud Speech with the fr-FR languageCode.

We should be able to specify in your lib which language we are using. It's simple, just a little parameter to add:

    const recognitionStream = _self.recognizer.createRecognizeStream({
      config: {
        encoding: 'LINEAR16',
        sampleRate: 16000,
        languageCode: LANGUAGE_HERE
      },
      singleUtterance: true,
      interimResults: true,
      verbose: true
    })

Maybe it can be an optional parameter of the Sonus constructor.
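
For reference, the examples elsewhere on this page pass the language straight into Sonus.init. A minimal sketch of what that option looks like there:

const language = 'fr-FR'
const sonus = Sonus.init({ hotwords, language }, speech)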

It's just a detail; otherwise your module works great :) I'm planning to write a blog post about it for my community.

If you want, I can submit a PR to add language support, or you can do it yourself, as you prefer.

Thanks again,

Stream closed after final-result?

Hey :)

Since the last version I have seen a strange behavior. I play an Amazon Polly response right after final-result to answer my command/question; the problem is that the answer is sent into the recognition and comes back as another final-result, which is wrong because of course it will not be recognized as a valid command.

Did you experience this? Normally, after final-result, Google Speech should be off and wait for a hotword, right?

sonus stops sending events after overnight idle

index.js.txt
timeStamp.js.txt

Using sonus in the smart mirror project, with the latest code (0.1.9). After the mirror runs overnight (in sleep mode), on wakeup there are no recognition events being posted from sonus to smart mirror, although the sonus node process is still running.

I am adding debug output to see the flow, and will post on Wed 12/22 after capture.
Attached is the index.js with the trace entries so you can see where they happen,
and also the timestamp file used by the logging (it goes in the smart-mirror folder).

sample

11/21/2017 7:55:05 AM module detector trigger hotword
11/21/2017 7:55:05 AM cloudSpeechRecognizer setup to listen
11/21/2017 7:55:05 AM module detector trigger streaming ready
11/21/2017 7:55:05 AM module detector ready for more
11/21/2017 7:55:05 AM module detector silence
11/21/2017 7:55:05 AM module detector silence
11/21/2017 7:55:05 AM module detector silence
11/21/2017 7:55:05 AM module detector silence
11/21/2017 7:55:05 AM module detector silence
11/21/2017 7:55:05 AM module detector silence
11/21/2017 7:55:06 AM module detector silence
11/21/2017 7:55:06 AM module detector silence
11/21/2017 7:55:06 AM module detector silence
11/21/2017 7:55:06 AM module detector silence
11/21/2017 7:55:06 AM module detector sound
11/21/2017 7:55:06 AM module detector sound
11/21/2017 7:55:06 AM module detector sound
11/21/2017 7:55:06 AM module detector sound
11/21/2017 7:55:07 AM module detector sound
11/21/2017 7:55:07 AM module detector sound
11/21/2017 7:55:07 AM module detector sound
11/21/2017 7:55:07 AM module detector sound
11/21/2017 7:55:07 AM module detector silence
11/21/2017 7:55:07 AM module detector silence
11/21/2017 7:55:07 AM module detector silence
11/21/2017 7:55:07 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:08 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:09 AM module detector silence
11/21/2017 7:55:10 AM module detector silence
11/21/2017 7:55:10 AM module detector silence
11/21/2017 7:55:10 AM module detector silence
11/21/2017 7:55:11 AM module detector silence
11/21/2017 7:55:11 AM module detector silence
11/21/2017 7:55:11 AM module detector silence
11/21/2017 7:55:11 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:12 AM module detector sound
11/21/2017 7:55:13 AM module detector silence
11/21/2017 7:55:13 AM module detector silence
11/21/2017 7:55:13 AM module detector sound
11/21/2017 7:55:13 AM module detector sound
11/21/2017 7:55:13 AM module detector sound
11/21/2017 7:55:13 AM module detector sound
11/21/2017 7:55:13 AM module detector silence
11/21/2017 7:55:13 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:14 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:15 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:16 AM module detector silence
11/21/2017 7:55:17 AM module detector silence
11/21/2017 7:55:17 AM module detector silence
11/21/2017 7:55:17 AM module detector silence
11/21/2017 7:55:17 AM module detector silence
11/21/2017 7:55:17 AM cloudSpeechRecognizer data
11/21/2017 7:55:17 AM cloudSpeechRecognizer data something else
11/21/2017 7:55:17 AM cloudSpeechRecognizer data something else no prior results
11/21/2017 7:55:17 AM module detector csr final result
11/21/2017 7:55:17 AM module detector csr no longer listening
11/21/2017 7:55:17 AM cloudSpeechRecognizer data <-------- this seems bad if not listening.
11/21/2017 7:55:17 AM cloudSpeechRecognizer some data
11/21/2017 7:55:17 AM module detector csr partial show
11/21/2017 7:55:17 AM cloudSpeechRecognizer data partial=show
11/21/2017 7:55:17 AM module detector silence
11/21/2017 7:55:17 AM cloudSpeechRecognizer data
11/21/2017 7:55:17 AM cloudSpeechRecognizer some data
11/21/2017 7:55:17 AM module detector csr partial show me
11/21/2017 7:55:17 AM cloudSpeechRecognizer data partial=show me
11/21/2017 7:55:17 AM cloudSpeechRecognizer data
11/21/2017 7:55:17 AM cloudSpeechRecognizer some data
11/21/2017 7:55:17 AM module detector csr partial show map
11/21/2017 7:55:17 AM cloudSpeechRecognizer data partial=show map
11/21/2017 7:55:17 AM cloudSpeechRecognizer data
11/21/2017 7:55:17 AM cloudSpeechRecognizer some data
11/21/2017 7:55:17 AM module detector csr partial show
11/21/2017 7:55:17 AM cloudSpeechRecognizer data partial=show
11/21/2017 7:55:17 AM cloudSpeechRecognizer data
11/21/2017 7:55:17 AM cloudSpeechRecognizer some data
11/21/2017 7:55:17 AM module detector csr partial show map
11/21/2017 7:55:17 AM cloudSpeechRecognizer data partial=show map
11/21/2017 7:55:17 AM module detector silence
11/21/2017 7:55:17 AM cloudSpeechRecognizer data
11/21/2017 7:55:17 AM cloudSpeechRecognizer some data
11/21/2017 7:55:17 AM module detector csr final result show map
11/21/2017 7:55:17 AM cloudSpeechRecognizer data final
11/21/2017 7:55:17 AM module detector csr no longer listening

Security handshake failed and speech recognition stops working

Description
I encountered this random bug on the smart mirror project, but it seems to come from sonus.
When I try the sonus test command (see below)

case 1: I say the hotword, sonus detects it, I wait a while, and sometimes if I say nothing I get a handshake error.

case 2: I say the hotword, sonus detects it, I wait a while, I say "hello" or something, and then I get a handshake error.

case 3: I say the hotword, sonus detects it, I speak, everything goes well, I wait a while, I say the hotword, and then I get a handshake error.

After that it can detect the hotword but it doesn't detect the speech anymore.
Every time I get at least 3 handshake errors.

Unfortunately it's not easy to reproduce, even though it occurs very often.

How to reproduce

pi@raspberrypi:~/smart-mirror $ npm run sonus

> [email protected] sonus /home/pi/smart-mirror
> node sonus.js

!h: 2
E1227 00:11:15.508035697    1888 handshake.c:128]            Security handshake failed: {"created":"@1482793875.507858875","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"@1482793875.507826480","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
E1227 00:11:17.511256783    1885 handshake.c:128]            Security handshake failed: {"created":"@1482793877.511095482","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"@1482793877.511069961","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
E1227 00:11:18.513975132    1885 handshake.c:128]            Security handshake failed: {"created":"@1482793878.513764665","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"@1482793878.513719300","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
E1227 00:11:19.516665305    1885 handshake.c:128]            Security handshake failed: {"created":"@1482793879.516573118","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"@1482793879.516554004","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}

Configuration

pi@raspberrypi:~/smart-mirror $ ulimit -n
65536
pi@raspberrypi:~/smart-mirror $  node -v
v6.9.2
pi@raspberrypi:~/smart-mirror $ npm -v
3.10.9

Comments
I have this error on the smart-mirror master branch and on the dev branch too.
I make sure to clean up everything with "git clean -xdf -e config.js" and "npm install" each time.

Ideas
After some googling, some grpc users seem to have the same issue and it looks related to a connection limit, but I only have the smart-mirror running ... so it's weird.
grpc/grpc#7985

Thanks in advance for your help !

resultNoMatch: possible phrases error

Hi, I am using the resultNoMatch callback:

Sonus.annyang.addCallback('resultNoMatch', function(phrases) {
  console.log("NoresultmatchFound", phrases)
});

But I am getting the following console output:

The phrases are not displayed; instead some random function is printed:

NoresultmatchFound function require(path) {
  try {
    exports.requireDepth += 1;
    return self.require(path);
  } finally {
    exports.requireDepth -= 1;
  }
}

Sonus not installing on raspberry pi3

Hey, having a problem installing sonus on:

RPi3
Debian Jessie with Pixel Version January 2017
Sonus version - 0.1.4
Node v7.4.0
npm v4.0.5

All dependencies installed

  • sudo apt-get install swig3.0 python-pyaudio python3-pyaudio sox libsox-fmt-all
  • pip install pyaudio
  • sudo apt-get install libatlas-base-dev

When I run npm install for sonus to install from the package, it fails at:
[email protected] install: 'node-pre-gyp install fallback-to-build'

It does not create a folder for sonus in the node modules. When I run the Electron app (v1.4.14), it says module sonus cannot be found. This is my barebones app - https://github.com/shekit/small-peeqo-tests/tree/master/sonustest

I was able to successfully install grpc separately using the --unsafe-perm flag but when I try to install sonus it fails at the same point.

Allow singleUtterance: false

Setting singleUtterance: true in recognizer.createRecognizeStream leaves it to Google to detect when an utterance has stopped & end the recognition stream. A problem I encountered is that often Google is too eager to stop the detection, i.e., it detects slight pauses in speech as an end of utterance. This is good for detecting short command-type utterances, such as 'turn off light', etc., but for longer free-form dictation it's problematic.

Adding an option to Sonus to allow configurable singleUtterance would give flexibility to developers to implement their own end-of-utterance detection & stream teardown. So I guess this is a feature request.

But I also wanted to post this here because I made some changes to my fork of Sonus to allow this, & thought I'd share it, even though it's got some stuff specific to my use case hard coded into it & is therefore not pull-request-worthy. But it might contain the seed of something that can be put into Sonus, if desired.

CloudSpeechRecognizer.startStreaming = (options, audioStream, cloudSpeechRecognizer) => {
// . . .
  const recognitionStream = recognizer.createRecognizeStream({
      // . . .
      singleUtterance: options.noSingleUtterance ? false : true,
      // . . .
  })
  // . . .
  // Add recognitionStream to the cloudSpeechRecognizer object so it can be shut down from
  // Sonus if the noSingleUtterance option has been passed in.
  if (options.noSingleUtterance) cloudSpeechRecognizer.recognitionStream = recognitionStream; 
 // . . .
Sonus.init = (options, recognizer) => {
  // . . .
  sonus.trigger = (index, hotword) => {
    // . . .
    let triggerHotword = (index == 0) ? hotword : models.lookup(index)
    // If trigger hotword is 'FreeDescription', set an option to send to CloudSpeechRecognizer
    // so that it will start a stream with singleUtterance: false, & we'll handle our own silence 
    // detection & recognitionStream teardown.
    if (triggerHotword === 'FreeDescription') opts.noSingleUtterance = true;
    // . . .
  // . . .
  // Add a sonus.on listener to receive events from an instantiated sonus in order to shut down 
  //  a recognitionStream.
  sonus.on('recognitionStreamShutdown', function() {
    // recognitionStream will be made available on the csr object, so we can shut it down as follows:
    if (csr.listening && csr.recognitionStream) {
      csr.listening = false;
      sonus.mic.unpipe(csr.recognitionStream);
      csr.recognitionStream.end();
      delete csr.recognitionStream;
    }
  });
}

These changes make it possible to configure singleUtterance to true or false based on the hotword used to trigger Sonus. They also make it possible to stop the recognition stream by emitting a recognitionStreamShutdown event from somewhere in an app. So now deciding when an utterance is over & when to stop the recognition stream with Google is in the developer's hands. Developers will still need to handle the fact that Google is going to issue isFinal properties on its results when it thinks the utterance is over, so if you want your transcription to span the length of your configured timeout period, you'll need to concatenate the various isFinal results that Google issues.

As an example, I used the silence & sound events emitted by Sonus (as determined by the Snowboy detector), along with a setTimeout, to determine when an utterance had ended (i.e., after a certain amount of sustained silence, the utterance is determined to be over). Here's the code I used:

// Set Snowboy hotwords.
var hotwords = [{file: '/home/benja/app/ListenRobot.pmdl', hotword: 'ListenRobot', sensitivity: '0.5'}, {file: '/home/benja/app/FreeDescription.pmdl', hotword: 'FreeDescription', sensitivity: '0.5'}];
// Create an instance of Sonus.
var sonusLanguage = 'en-US';
var sonus = Sonus.init({hotwords, sonusLanguage, recordProgram: 'arecord', device: 'bluealsa:HCI=hci0,DEV=00:6A:8E:16:C5:F2,PROFILE=sco'}, speech);
// Start the Sonus instance.
Sonus.start(sonus);
var silenceTimeout = null, bufferSonus = false, sonusBuffer = '';
// Event: When Snowboy informs us a hotword has been detected.
sonus.on('hotword', function(index, keyword) {
  console.log('!');
  if (keyword === 'FreeDescription') bufferSonus = true;
});
// Event: When Snowboy detects silence.
sonus.on('silence', function() {
  // After an appropriate period of uninterrupted silence, send an event to sonus
  //  to tear down the recognitionStream, but only while streaming from a 
  //  'FreeDescription' trigger hotword.
  if (!silenceTimeout) silenceTimeout = setTimeout(function() {
    if (bufferSonus && sonusBuffer !== '') {
      console.log('Complete sonusBuffer flushed: ', sonusBuffer);
      Sonus.annyang.trigger('sonusBuffer ' + sonusBuffer);
      sonusBuffer = '';
      bufferSonus = false;
    }
    sonus.emit('recognitionStreamShutdown');
  }, 4300);
});
// Event: When Snowboy detects sound.
sonus.on('sound', function() {
  clearTimeout(silenceTimeout);
  silenceTimeout = null;
});
// Event: When a final transcript has been received from Google Cloud Speech.
sonus.on('final-result', function(result) {
  console.log(result);
  if (bufferSonus) sonusBuffer += result;
});

Perhaps this can help someone.

libcblas.so.3: cannot open shared object file: No such file or directory

Known Error

On a fresh install of Raspbian you may encounter this issue when running Sonus:

module.js:583
  return process.dlopen(module, path._makeLong(filename));
                 ^

Error: libcblas.so.3: cannot open shared object file: No such file or directory
    at Error (native)
    at Object.Module._extensions..node (module.js:583:18)
    at Module.load (module.js:473:32)
    at tryModuleLoad (module.js:432:12)
    at Function.Module._load (module.js:424:3)
    at Module.require (module.js:483:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/home/pi/matrix-sonus-test/node_modules/snowboy/lib/node/index.js:8:29)
    at Module._compile (module.js:556:32)
    at Object.Module._extensions..js (module.js:565:10)

Fix

To solve this issue you have to install libatlas-base-dev:

sudo apt-get install libatlas-base-dev

Python interface

Do you have a Python binding?
I'm not totally comfortable with JS at the moment.
I have also been using the Google Speech APIs (gRPC version) for live streaming recognition on Raspbian.

It does work and I have written it all in Python, but I cannot do the wake word yet.

Cheers

Why is google speech needed?

Hello,

I'm curious as to why Google Speech is needed to initialize sonus, while it is advertised as listening offline. The main reason I am not using any cloud software for speech is that I don't want a recording device in my house that is connected to the internet at all times.

Regards!

Add support for TTS

Up for consideration: should we add TTS to Sonus (provide generic wrappers like we'll do for streaming cloud speech), or does that fall outside of the scope of the module?

Sonus not working on v7.9.0

Hi
I am getting the following error while running sonus:
ERROR (Input():snowboy-io.cc:315) Fail to open input file "node_modules/snowboy/resources/common.res"
terminate called after throwing an instance of 'std::runtime_error'
what(): ERROR (Input():snowboy-io.cc:315) Fail to open input file "node_modules/snowboy/resources/common.res"

Even though the binaries are fixed in v7.x as per your comment in the older reported issue #29.

registerCommand method: annyang

Hey

I am trying to add commands using Sonus.annyang.addCommands(commands);

const commands = {
  'hello': function () {
    console.log('You will obey');
  },
  '(give me) :flavor ice cream': function (flavor) {
    console.log('Fetching some ' + flavor + ' ice ceam for you sr')
  },
  'turn (the)(lights) :state (the)(lights)': function (state) {
    console.log('Turning the lights', (state == 'on') ? state : 'off')
  }
}

This is working fine. But in one of my applications, I am not able to create the above commands JSON object dynamically or statically because of some limitations, so I wanted to have a method in lib/annyang-core to add commands to annyang:

file: lib/annyang-core.js

method: registerCommand(phrase, callback function)

which would add commands one by one to annyang.
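
Something along these lines might work as a thin wrapper (a hypothetical sketch; registerCommand is not an existing Sonus/annyang API, it just forwards to the addCommands call shown above):

// Hypothetical helper: register a single phrase/callback pair with annyang
const registerCommand = (phrase, callback) => {
  Sonus.annyang.addCommands({ [phrase]: callback })
}

registerCommand('hello', function () {
  console.log('You will obey');
})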

Uncaught, unspecified "error" event.

Hello again,

I've managed to make your example work.
It works for some time, but stops working after a while.

pi@raspberrypi:~/NodeJS/VIKI $ node sonus.js
Say "alexa"...
!alexa
Partial es
Partial test
Final test
!alexa
Partial t
Partial test
Final test
!alexa
Partial Chef
Partial test
Final test
!alexa
events.js:146
throw err;
^

Error: Uncaught, unspecified "error" event. ([object Object])
at Writable.emit (events.js:144:17)
at Writable. (/home/pi/NodeJS/node_modules/sonus/index.js:92:8)
at emitOne (events.js:77:13)
at Writable.emit (events.js:169:7)
at null. (/home/pi/NodeJS/node_modules/sonus/index.js:37:62)
at emitOne (events.js:82:20)
at emit (events.js:169:7)
at Duplexify._destroy (/home/pi/NodeJS/node_modules/@google-cloud/speech/node_modules/pumpify/node_modules/duplexify/index.js:184:15)
at /home/pi/NodeJS/node_modules/@google-cloud/speech/node_modules/pumpify/node_modules/duplexify/index.js:175:10
at nextTickCallbackWith0Args (node.js:436:9)
My sonus.js looks like this:

'use strict'
var ini = require('node-ini');
var cfg = ini.parseSync('./config.ini');

const ROOT_DIR = '/home/pi/NodeJS/node_modules/sonus/'
const Sonus = require(ROOT_DIR + 'index.js')

const speech = require('@google-cloud/speech')({
  projectId: cfg.google.projectid,
  keyFilename: cfg.google.keyfile
})

const hotwords = [{ file: './resources/alexa.umdl', hotword: 'alexa' }]
const language = "de-DE"
const sonus = Sonus.init({ hotwords, language }, speech)

Sonus.start(sonus)
console.log('Say "' + hotwords[0].hotword + '"...')

sonus.on('hotword', (index, keyword) => console.log("!" + keyword))

sonus.on('partial-result', result => console.log("Partial", result))

sonus.on('final-result', result => {
  console.log("Final", result)
  if (result.includes("stop")) {
    Sonus.stop();
  }
})

My system is Raspi2 (Model B). All npm versions on my system correspond to ones given in your package.json.

Greetings
Nycon

Keep getting "Unexpected token" issue on run

Hi. Hoping you guys could help me with this one.

Whenever I run "npm run sonus" I get this error:

pi@raspberrypi:~/smart-mirror $ npm run sonus

> [email protected] sonus /home/pi/smart-mirror
> node sonus.js

/home/pi/smart-mirror/node_modules/sonus/index.js:5
const {Detector, Models} = require('snowboy')
      ^

SyntaxError: Unexpected token {
    at exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:373:25)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/home/pi/smart-mirror/sonus.js:20:15)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] sonus: `node sonus.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] sonus script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/pi/.npm/_logs/2017-05-29T20_11_45_582Z-debug.log

I'm running npm version 4.6.1 and Nodejs version 6.10.4. All on a Raspberry Pi 3.

Does anyone know what it could be?

Text to speech

Are you planning to use any text-to-speech conversion?

Allow custom grammar

We should be allowing custom grammar (and other configuration options) to be passed into sonus.
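
One way custom grammar might map onto the Google recognizer is its phrase-hint/speech-context configuration. A hypothetical sketch of what a pass-through option could look like (the recognizerConfig option name is invented here for illustration, and the exact Google field name depends on the client version):

const sonus = Sonus.init({
  hotwords,
  // hypothetical pass-through of extra recognizer configuration
  recognizerConfig: {
    speechContexts: [{ phrases: ['smart mirror', 'show map'] }]
  }
}, client)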

Running on raspberry pi

---sonus.js

const Sonus = require('sonus')
const speech = require('@google-cloud/speech')({
  projectId: 'aimevoice2017',
  keyFilename: '/home/pi/AimeVoice-91efae41b5b4.json'
})

const hotwords = [{ file: 'resources/snowboy.umdl', hotword: 'snowboy' }]
const sonus = Sonus.init({ hotwords }, speech)
Sonus.start(sonus)
sonus.on('hotword', (index, keyword) => console.log("!"))
sonus.on('final-result', console.log)

node version v6.10.3

return process.dlopen(module, path._makeLong(filename));
^

Error: Module version mismatch. Expected 48, got 51.
