naomiproject / naomi

The Naomi Project is an open source, technology agnostic platform for developing always-on, voice-controlled applications!

Home Page: https://projectnaomi.com/

License: MIT License

Languages: Python 91.70%, Shell 8.30%
Topics: assistant, hacktoberfest, home-automation, iot, jarvis, jasper, linux, naomi, personal-assistant, raspberry-pi, speech-recognition, speech-synthesis, speech-to-text, text-to-speech, vocal-assistant, voice


naomi's Issues

Adding/updating STT engines

  • Update to the latest version of Phonetisaurus (requires OpenFST 1.6)
  • Add Mozilla's DeepSpeech engine

Fix Wit.AI critical error

Critical "Bad request" error from Wit.ai; the API URL needs updating.

DEBUG:urllib3.connectionpool:https://api.wit.ai:443 "POST /speech?v=20160526 HTTP/1.1" 400 55
CRITICAL:witai_stt_1_0_0.witai:Request failed with response: u'{\n  "error" : "Bad request",\n  "code" : "bad-request"\n}'
Traceback (most recent call last):
  File "/home/seb/ov-dev/plugins/stt/witai-stt/witai.py", line 95, in transcribe
    r.raise_for_status()
  File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 939, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: https://api.wit.ai/speech?v=20160526
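
A minimal sketch of one possible fix, assuming the problem is the outdated v API-version parameter hard-coded in the request URL (the token, version date and audio file below are placeholders, not values from the plugin):

# Hypothetical sketch: resend the transcription request with a newer
# Wit.ai API version string instead of the hard-coded v=20160526.
import requests

WIT_AI_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # placeholder
API_VERSION = "20200513"                   # assumed newer version date

with open("recording.wav", "rb") as f:     # placeholder audio file
    r = requests.post(
        "https://api.wit.ai/speech",
        params={"v": API_VERSION},
        headers={
            "Authorization": "Bearer " + WIT_AI_TOKEN,
            "Content-Type": "audio/wav",
        },
        data=f,
    )
r.raise_for_status()
print(r.json())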

Testing Code :)

IMHO we should strive for 90% code coverage. That way we can be reasonably confident the code just works.

[Feature-Request] - introducing NLP

NLP (Natural Language Processing) would allow us to analyse the structure of the user's speech more effectively and do smarter things with OVIA.

Currently, OVIA is very simple: you give her a list of words to recognize in your plugin, and she simply finds them in the user's speech. Using keyword matching, you can pronounce any sentence containing your command and it will launch the attached task, nothing more.

An example with the current weather module, assuming your location is set to London:

User: "OVIA, whats the weather ?"
or
User: "OVIA, weather"
or
User: "OVIA, can you give me the weather please?"

OVIA will always have the same answer:

OVIA: "Today is a rainy day in London"

But what if I want the New York weather?

User: "OVIA, what's the weather in New York?"

User: "OVIA, weather in New York?"

User: "OVIA, New York weather"

OVIA will always have the same answer:

OVIA: "Today is a rainy day in London"

It does not work

Why is OVIA talking about London's weather? Because it's hard-coded: she only recognizes keywords in the sentence like "weather" or "temperature" using the matching methods, and the city set in profile.yml is London.
Basically, when a module's keywords are detected, that specific module is triggered, it uses the information available in profile.yml without going any further, and then the TTS says the prepared sentence. It's not "smart".

Using NLP, we could intelligently recognize and analyze the function of each word, the structure of sentences, the places they mention, their meaning, and even the user's mood, in order to adapt OVIA's behaviour and make her smarter, and maybe make her comparable to current proprietary solutions, which have no notion of privacy. A rough sketch of the place-detection idea is below.
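
A minimal sketch of what entity extraction could look like, assuming an off-the-shelf NLP library (spaCy is used here purely as an illustration, not something the project has committed to) with its small English model installed:

# Hypothetical sketch: detect the place the user is asking about with an
# NLP library, falling back to the city configured in profile.yml.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def weather_location(utterance, default_city="London"):
    doc = nlp(utterance)
    # GPE = geo-political entity (cities, countries, ...)
    places = [ent.text for ent in doc.ents if ent.label_ == "GPE"]
    return places[0] if places else default_city

print(weather_location("OVIA, what's the weather in New York?"))  # -> New York
print(weather_location("OVIA, weather"))                          # -> London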

Issue in progress

Microphones HAT support

Support various microphone HATs for the Raspberry Pi:

For the HAT boards, we could add light effects like those on the Amazon Echo or Google Home, for example while receiving a notification, while addressing Naomi, or to show Naomi's status (offline, updating software, ...).

  • Matrix boards
    • With LED control support
    • With onboard sensor support
  • ReSpeaker boards
    • With LED control support
  • Wireless Bluetooth microphones with integrated earphones

Friendlier initial configuration

It has always bothered me that when the user first types "./Jasper.py" it just errors out with a message that the configuration file does not exist. The documentation around creating a configuration file has always been awful, and successfully creating one at this point requires reading the code itself looking for options. There is a program called "populate.py" in the jasper directory that starts the job of configuring Jasper, but it is fairly useless, since it is missing a lot of options and doesn't produce a working configuration anyway.

I would like to alter the beginning of "application.py" to ask if you would like to create a configuration file, then alter "populate.py" to at least get you part of the way there.

This project also touches on issue 15 (if you select "pocketsphinx" as your engine, it should download and configure PocketSphinx automagically; if you select DeepSpeech as your engine, it should download and install DeepSpeech automagically), issue 16 (the Gmail password is stored in plain text), and issue 57 (support for email services other than Gmail).

I will not try to solve those issues in this fix, but will at least try to get it to the point where I can successfully use it to create a configuration file, given that I already have PocketSphinx and Festival installed. I am also doing a little cleanup by making the language selection the first question that appears, so that (given that someone has gone to the trouble of writing a translation file) the rest of the configuration can be done in the user's chosen tongue.

Eventually, the goal would be to get this all working in such a way that the configuration can be done primarily verbally, with the text interface only being a backup for when you need to enter a complex email password.
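
A minimal sketch of the startup prompt described above, assuming a hypothetical populate.run() entry point and config path (both names are illustrative only, not the project's actual API):

# Hypothetical sketch of the startup check in application.py.
import os
import sys

CONFIG_PATH = os.path.expanduser("~/.jasper/profile.yml")  # assumed location

def ensure_configuration():
    if os.path.isfile(CONFIG_PATH):
        return
    answer = raw_input(  # Python 2, matching the current codebase
        "No configuration file found. Create one now? [Y/n] "
    ).strip().lower()
    if answer in ("", "y", "yes"):
        import populate          # hypothetical: populate.py turned into an importable module
        populate.run()           # hypothetical entry point
    else:
        sys.exit("Cannot continue without a configuration file.")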

[BugFix] - Flake errors with PEP8

Here are the errors:

jasper/app_utils.py:95:5: E722 do not use bare 'except'
jasper/mic.py:136:17: E722 do not use bare 'except'
jasper/populate.py:115:9: E722 do not use bare 'except'
jasper/populate.py:183:1: E305 expected 2 blank lines after class or function definition, found 1
plugins/speechhandler/birthday/birthday.py:34:9: E722 do not use bare 'except'
plugins/speechhandler/birthday/birthday.py:47:13: E722 do not use bare 'except'
plugins/speechhandler/notifications/notifications.py:32:9: E722 do not use bare 'except'
plugins/stt/pocketsphinx-stt/sphinxvocab.py:7:1: E722 do not use bare 'except'
plugins/stt/pocketsphinx-stt/tests/test_sphinxvocab.py:15:1: E305 expected 2 blank lines after class or function definition, found 1
plugins/stt/snowboy-stt/snowboydetect.py:80:5: E722 do not use bare 'except'
plugins/stt/snowboy-stt/snowboydetect.py:86:1: E305 expected 2 blank lines after class or function definition, found 1
plugins/stt/snowboy-stt/snowboydetect.py:104:9: E722 do not use bare 'except'
plugins/stt/snowboy-stt/snowboydetect.py:148:1: E305 expected 2 blank lines after class or function definition, found 1
plugins/tts/google-tts/google.py:17:9: E722 do not use bare 'except'
plugins/tts/mary-tts/marytts.py:41:9: E722 do not use bare 'except'

These are PEP8 errors (the Python style guide) detected by Travis CI. A sketch of the typical E722 fix follows below.
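
A minimal sketch of the typical E722 fix, naming the exception you actually expect instead of using a bare "except:" (the helper below is an illustrative example, not code from the repository):

# Hypothetical sketch of the typical E722 fix.
def to_int(text):
    # Before (triggers E722):
    #     try:
    #         return int(text)
    #     except:
    #         return None
    try:
        return int(text)
    except ValueError:  # catch the specific exception you expect
        return None

print(to_int("42"), to_int("not a number"))  # 42 None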

Update requirements

Hi guys! The goal is to update the dependencies of Jasper2fork; you can see them HERE.
Don't forget to ...

  • Test all (or at least part of) the plugins with the new dependencies
  • Update python_requirements.txt
  • Make pull requests ONLY against the dev branch
  • Update the wiki if needed
  • Fix PEP8 errors

Thanks ;) !

Benchmarking

We need benchmarks of how the system performs, to see how much room we have for expansion.
Remember the target platforms:

  • RPI2 B
  • RPI2 B+
  • RPI3

If you have suggestions on how this benchmarking should be done, please chime in with your comments; a rough starting point is sketched below.
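
A very basic sketch of what a first benchmark harness could look like, assuming we want wall-clock time and peak memory on the Pi (the workload below is a placeholder and would be replaced by e.g. an STT transcription call):

# Hypothetical sketch: time a callable and report wall-clock time and peak RSS.
import resource
import time

def benchmark(label, func, *args, **kwargs):
    start = time.time()
    result = func(*args, **kwargs)
    elapsed = time.time() - start
    # ru_maxrss is reported in kilobytes on Linux (the target platforms).
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("%s: %.3f s, peak RSS %d kB" % (label, elapsed, peak_kb))
    return result

# Placeholder workload; swap in a real transcription or TTS call.
benchmark("sum of squares", lambda: sum(i * i for i in range(10 ** 6)))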

Auth middleware for naomi and plugins

Is your feature request related to a problem? Please describe.
There is currently no easy way for users to authenticate with third-party services, and each plugin requires the developer to implement its own auth code. This is not very scalable!

Describe the solution you'd like

I'd like to have one codebase to authenticate other services.

E.g. allowing plugins to use these services without implementing auth code for each service in the plugin.

Describe alternatives you've considered

Implementing our own?

Firebase auth?

Still researching options; a rough sketch of the shared-store idea is below.
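
A rough sketch of the shared auth layer idea, where plugins ask a central store for a token instead of implementing OAuth themselves. The class, method names and credentials path are illustrative assumptions, not an existing Naomi API:

# Hypothetical sketch of a shared credential store for plugins.
import json
import os

class AuthStore(object):
    def __init__(self, path=os.path.expanduser("~/.naomi/credentials.json")):
        self._path = path
        self._tokens = {}
        if os.path.isfile(path):
            with open(path) as f:
                self._tokens = json.load(f)

    def get_token(self, service):
        """Return a stored token for e.g. 'gmail' or 'twitter', or None."""
        return self._tokens.get(service)

    def set_token(self, service, token):
        self._tokens[service] = token
        directory = os.path.dirname(self._path)
        if directory and not os.path.isdir(directory):
            os.makedirs(directory)
        with open(self._path, "w") as f:
            json.dump(self._tokens, f)

# A plugin would then just do:
#     token = auth_store.get_token("twitter")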

Gmail plugin check

@chrobione: "The plugin tells me I have like 39k new emails. Even though i have that many emails, they are all marked read."

[Feature-Request] - Re-adding notification plugin functionality

The notification plugin mode that was implemented in 1.0 doesn't seem to be implemented in 2.0.

The interaction here would be passive: Naomi would do some silent monitoring of an information stream (e.g., checking Twitter or email) and tell you when there is something important to report, such as a friend's birthday you don't want to miss or an appointment during the day.

It would be cool to add a sound when it's about to tell you something (like the one played when you trigger Naomi), to avoid surprises.

TestCase pocketsphinx fails

When I run the test cases locally with

coverage run ./setup.py test

the following cases fail:

testTranscribe (pocketsphinx-stt.tests.test_sphinxplugin.TestPocketsphinxSTTPlugin) ... No handlers could be found for logger "pocketsphinx-stt.sphinxplugin"
ERROR
testTranscribeJasper (pocketsphinx-stt.tests.test_sphinxplugin.TestPocketsphinxSTTPlugin) ... ERROR

ERROR: testTranscribe (pocketsphinx-stt.tests.test_sphinxplugin.TestPocketsphinxSTTPlugin)

Traceback (most recent call last):
  File "/home/andreas/Software/j2f/plugins/stt/pocketsphinx-stt/tests/test_sphinxplugin.py", line 17, in setUp
    'unittest-passive', ['JASPER'])
  File "/home/andreas/Software/j2f/jasper/testutils.py", line 42, in get_plugin_instance
    return plugin_class(*args)
  File "/home/andreas/Software/j2f/plugins/stt/pocketsphinx-stt/sphinxplugin.py", line 44, in __init__
    sphinxvocab.compile_vocabulary)
  File "/home/andreas/Software/j2f/jasper/plugin.py", line 82, in compile_vocabulary
    self.profile, compilation_func, self._vocabulary_phrases)
  File "/home/andreas/Software/j2f/jasper/vocabcompiler.py", line 156, in compile
    raise e
ValueError: FST model not specified!

ERROR: testTranscribeJasper (pocketsphinx-stt.tests.test_sphinxplugin.TestPocketsphinxSTTPlugin)
Traceback (most recent call last):
  File "/home/andreas/Software/j2f/plugins/stt/pocketsphinx-stt/tests/test_sphinxplugin.py", line 17, in setUp
    'unittest-passive', ['JASPER'])
  File "/home/andreas/Software/j2f/jasper/testutils.py", line 42, in get_plugin_instance
    return plugin_class(*args)
  File "/home/andreas/Software/j2f/plugins/stt/pocketsphinx-stt/sphinxplugin.py", line 44, in __init__
    sphinxvocab.compile_vocabulary)
  File "/home/andreas/Software/j2f/jasper/plugin.py", line 82, in compile_vocabulary
    self.profile, compilation_func, self._vocabulary_phrases)
  File "/home/andreas/Software/j2f/jasper/vocabcompiler.py", line 156, in compile
    raise e
ValueError: FST model not specified!


Ran 30 tests in 10.660s

FAILED (errors=2, skipped=1)

Just as reference, I will look into this.

crackling output sound when using audio_engine pyaudio

When I use the pyaudio audio_engine, the output sound is very poor quality and crackles. Comparing the debug logs for sound output between pyaudio and alsa doesn't reveal anything special; they are pretty much the same, with the exception of different device identifiers.

Here is the debug log when Jasper plays sound through pyaudio:

profile audio settings:

...
audio_engine: pyaudio

audio:
  input_device: Logitech-USB-Headset-Audio
  output_device: bcm2835-ALSA-hw-0-0
  input_samplerate: 48000
DEBUG:jasper.mic:input_samplewidth not configured, using default.
DEBUG:jasper.mic:input_channels not configured, using default.
DEBUG:jasper.mic:input_chunksize not configured, using default.
DEBUG:jasper.mic:output_chunksize not configured, using default.
DEBUG:jasper.mic:output_padding not configured,using default.
DEBUG:jasper.mic:Input sample rate: 48000 Hz
DEBUG:jasper.mic:Input sample width: 16 bit
DEBUG:jasper.mic:Input channels: 1
DEBUG:jasper.mic:Input chunksize: 1024 frames
DEBUG:jasper.mic:Output chunksize: 1024 frames
DEBUG:jasper.mic:Output padding: no
DEBUG:espeak_tts_1_0_0.espeak:Executing espeak -v english -p 40 -s 160 --stdout 'Hello there, this is a longer introduction sentence for debug purposes. My name is, JASPER.. I like to talk, but not too much and sometimes I don'"'"'t understand what you want from me.'
DEBUG:pyaudio_1_0_0.pyaudioengine:output stream opened on device 'bcm2835-ALSA-hw-0-0' (22050 Hz, 1 channel, 16 bit)
DEBUG:pyaudio_1_0_0.pyaudioengine:output stream closed on device 'bcm2835-ALSA-hw-0-0'
DEBUG:espeak_tts_1_0_0.espeak:Executing espeak -v english -p 40 -s 160 --stdout 'How can I be of service?'
DEBUG:pyaudio_1_0_0.pyaudioengine:output stream opened on device 'bcm2835-ALSA-hw-0-0' (22050 Hz, 1 channel, 16 bit)
DEBUG:pyaudio_1_0_0.pyaudioengine:output stream closed on device 'bcm2835-ALSA-hw-0-0'

and here through alsa:

profile audio settings:

...
audio_engine: alsa

audio:
  input_device: hw-CARD-Headset-DEV-0
  output_device: hw-CARD-ALSA-DEV-0
  input_samplerate: 48000

debug logs:

DEBUG:jasper.mic:input_samplewidth not configured, using default.
DEBUG:jasper.mic:input_channels not configured, using default.
DEBUG:jasper.mic:input_chunksize not configured, using default.
DEBUG:jasper.mic:output_chunksize not configured, using default.
DEBUG:jasper.mic:output_padding not configured,using default.
DEBUG:jasper.mic:Input sample rate: 48000 Hz
DEBUG:jasper.mic:Input sample width: 16 bit
DEBUG:jasper.mic:Input channels: 1
DEBUG:jasper.mic:Input chunksize: 1024 frames
DEBUG:jasper.mic:Output chunksize: 1024 frames
DEBUG:jasper.mic:Output padding: no
DEBUG:espeak_tts_1_0_0.espeak:Executing espeak -v english -p 40 -s 160 --stdout 'Hello there, this is a longer introduction sentence for debug purposes. My name is, JASPER.. I like to talk, but not too much and sometimes I don'"'"'t understand what you want from me.'
DEBUG:alsa_1_0_0.alsaaudioengine:output stream opened on device 'hw-CARD-ALSA-DEV-0' (22050 Hz, 1 channel, 16 bit)
DEBUG:alsa_1_0_0.alsaaudioengine:output stream closed on device 'hw-CARD-ALSA-DEV-0'
DEBUG:espeak_tts_1_0_0.espeak:Executing espeak -v english -p 40 -s 160 --stdout 'How can I be of service?'
DEBUG:alsa_1_0_0.alsaaudioengine:output stream opened on device 'hw-CARD-ALSA-DEV-0' (22050 Hz, 1 channel, 16 bit)
DEBUG:alsa_1_0_0.alsaaudioengine:output stream closed on device 'hw-CARD-ALSA-DEV-0'

There is no difference in sound quality when I switch the sound output to a USB port with pyaudio. The dev branch of the jasperproject/jasper-client repository has the same issue; the master branch of that repository did not have the crackling sound issue.

I am assuming this has something to do with the way sound files used to be played through the aplay command, which was changed in the dev branch to get rid of such dependencies.

Here are links to two recorded audio files, one using pyaudio and one using alsa as the audio engine.
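
A minimal sketch of one thing worth testing, assuming the crackling comes from output buffer underruns: play a file through PyAudio with a larger frames_per_buffer than the 1024-frame chunk size shown in the logs above (the value and file name are placeholders) and compare:

# Hypothetical sketch: play a WAV file through PyAudio with a larger output
# buffer to see whether underruns (crackling) go away.
import wave
import pyaudio

CHUNK = 4096  # assumed larger value; the logs above show a 1024-frame chunk size

wf = wave.open("output.wav", "rb")          # placeholder audio file
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True,
                frames_per_buffer=CHUNK)

data = wf.readframes(CHUNK)
while data:
    stream.write(data)
    data = wf.readframes(CHUNK)

stream.stop_stream()
stream.close()
p.terminate()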

Add support for Mozilla DeepSpeech

The Mozilla project's DeepSpeech is moving forward quickly and provides a much larger vocabulary than PocketSphinx while maintaining a decent word error rate.

I do not yet have DeepSpeech running on a Raspberry Pi, as the current English language model is too big (1.6 GiB), and without it DeepSpeech just returns raw phonemes, like someone who doesn't speak English trying to take notes on a lecture delivered in English. I do have DeepSpeech running directly on my laptop (8 GiB RAM) and also in a VirtualBox virtual machine (2 GiB RAM). I feel like the language-model size problem can be dealt with by training a smaller model with a more limited vocabulary. Also, when DeepSpeech initializes, it issues the following warning:
"Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage."
So that seems like a big clue. There has been quite a bit of interest in getting DeepSpeech working on the Raspberry Pi 3 line.

It would also be possible to use the Kaldi approach and use GStreamer to send audio data to DeepSpeech running on a home server somewhere. But if I had a home server, I think it would make more sense to run Naomi directly on it, close to the STT engine, and use the Raspberry Pi as a relay to the IoT devices, since Naomi would almost certainly be exchanging more data with the STT engine than with the IoT devices.

For now, this is mostly meant to encourage experimentation with DeepSpeech; a rough usage sketch is below.
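
A rough sketch of the kind of integration meant here, assuming the deepspeech Python package's Model/stt interface (exact constructor arguments vary between DeepSpeech releases, and the model and audio paths are placeholders):

# Hypothetical sketch: transcribe a 16 kHz mono WAV file with DeepSpeech.
import wave
import numpy as np
from deepspeech import Model   # assumes the deepspeech package is installed

MODEL_PATH = "deepspeech-model.pbmm"   # placeholder model path

ds = Model(MODEL_PATH)                 # newer releases take just the model path

wf = wave.open("utterance.wav", "rb")  # placeholder 16 kHz mono recording
audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
wf.close()

print(ds.stt(audio))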

mstranslator-tts is not compatible with the new Azure Portal for obtaining keys

Bing Speech API
The Bing Speech API's authentication endpoint and SDK namespaces are changing as of September 21. If you download the new SDK and use the Azure portal to obtain keys, you'll need a new subscription key. Old keys will still work for six months before deprecation. All users will need to change the reference to the SDK namespace.

audio_engine ALSA throws not a whole number of frames

The error always occurs if I try to keep the recording going during passive listening, for example by saying a long sentence or saying "jasper" really slowly. The error does not happen when I change the audio_engine to pyaudio.

audio settings in profile.yaml

...
audio_engine: alsa

audio:
  input_device: hw-CARD-Headset-DEV-0
  output_device: hw-CARD-ALSA-DEV-0
  input_samplerate: 48000

Jasper.py --debug output when the error occurs:

DEBUG:alsa_1_0_0.alsaaudioengine:input stream opened on device 'hw-CARD-Headset-DEV-0' (48000 Hz, 1 channel, 16 bit)
DEBUG:jasper.mic:Started recording on device 'hw-CARD-Headset-DEV-0'
DEBUG:jasper.mic:Triggered on SNR of 15.3423190229dB
DEBUG:jasper.mic:Recording's SNR dB: 20.503474
DEBUG:jasper.mic:Recording's SNR dB: 20.682301
DEBUG:jasper.mic:Recording's SNR dB: 20.763940
DEBUG:jasper.mic:Recording's SNR dB: 20.933343
DEBUG:jasper.mic:Recording's SNR dB: 21.169174
DEBUG:jasper.mic:Recording's SNR dB: 21.306855
DEBUG:jasper.mic:Recording's SNR dB: 21.205832
DEBUG:jasper.mic:Recording's SNR dB: 20.996027
DEBUG:jasper.mic:Recording's SNR dB: 20.878649
DEBUG:jasper.mic:Recording's SNR dB: 20.861750
DEBUG:jasper.mic:Recording's SNR dB: 20.887086
DEBUG:jasper.mic:Recording's SNR dB: 20.832098
DEBUG:jasper.mic:Recording's SNR dB: 20.798086
DEBUG:jasper.mic:Recording's SNR dB: 20.776760
DEBUG:jasper.mic:Recording's SNR dB: 20.721067
DEBUG:jasper.mic:Recording's SNR dB: 20.759662
DEBUG:jasper.mic:Recording's SNR dB: 20.746816
DEBUG:jasper.mic:Recording's SNR dB: 20.781029
DEBUG:jasper.mic:Recording's SNR dB: 20.798086
DEBUG:jasper.mic:Recording's SNR dB: 20.806601
DEBUG:jasper.mic:Recording's SNR dB: 20.878649
DEBUG:jasper.mic:Recording's SNR dB: 20.916551
DEBUG:jasper.mic:Recording's SNR dB: 20.861750
DEBUG:jasper.mic:Recording's SNR dB: 20.840580
DEBUG:jasper.mic:Recording's SNR dB: 20.891301
DEBUG:jasper.mic:Recording's SNR dB: 20.746816
DEBUG:jasper.mic:Recording's SNR dB: 20.669340
DEBUG:jasper.mic:Recording's SNR dB: 20.647695
DEBUG:jasper.mic:Recording's SNR dB: 20.690930
DEBUG:jasper.mic:Recording's SNR dB: 20.712467
DEBUG:jasper.mic:Recording's SNR dB: 20.669340
DEBUG:jasper.mic:Recording's SNR dB: 20.660689
DEBUG:jasper.mic:Recording's SNR dB: 20.472571
DEBUG:jasper.mic:Recording's SNR dB: 20.405978
DEBUG:jasper.mic:Recording's SNR dB: 20.298356
DEBUG:jasper.mic:Recording's SNR dB: 20.253117
DEBUG:jasper.mic:Recording's SNR dB: 20.138977
DEBUG:jasper.mic:Recording's SNR dB: 20.125179
DEBUG:jasper.mic:Recording's SNR dB: 19.858763
DEBUG:jasper.mic:Recording's SNR dB: 19.661948
DEBUG:jasper.mic:Recorded 60 frames
DEBUG:alsa_1_0_0.alsaaudioengine:input stream closed on device 'hw-CARD-Headset-DEV-0'

Traceback (most recent call last):
  File "j2f/Jasper.py", line 5, in <module>
    jasper.main()
  File "/home/pi/j2f/jasper/__main__.py", line 55, in main
    app.run()
  File "/home/pi/j2f/jasper/application.py", line 302, in run
    self.conversation.handleForever()
  File "/home/pi/j2f/jasper/conversation.py", line 49, in handleForever
    input = self.mic.listen()
  File "/home/pi/j2f/jasper/mic.py", line 215, in listen
    self.wait_for_keyword(self._keyword)
  File "/home/pi/j2f/jasper/mic.py", line 178, in wait_for_keyword
    snr = self._snr([frame])
  File "/home/pi/j2f/jasper/mic.py", line 97, in _snr
    rms = audioop.rms(b''.join(frames), int(self._input_bits / 8))
audioop.error: not a whole number of frames
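
A minimal sketch of the kind of defensive fix that might help, assuming the root cause is that the buffer handed to audioop.rms is not a whole multiple of the sample width (the helper name and default below are illustrative, mirroring but not copying mic.py):

# Hypothetical sketch: truncate the joined buffer to a whole number of
# samples before computing the RMS, so audioop does not raise
# "not a whole number of frames".
import audioop

def safe_rms(frames, input_bits=16):
    sample_width = input_bits // 8
    data = b''.join(frames)
    remainder = len(data) % sample_width
    if remainder:
        data = data[:-remainder]   # drop the trailing partial sample
    if not data:
        return 0
    return audioop.rms(data, sample_width)

print(safe_rms([b'\x00\x01\x02\x03\x04']))  # odd-length buffer handled safely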



jasper-client vs j2f

Just stumbled over this and was wondering why it exists. Is there a reason why this is separate from the jasper-client project?

[BugFix] - Update MaryTTS

INPUT_TEXT can contain unicode characters which urllib.urlencode might not like, resulting in an error. Therefore, encode the phrase beforehand; a minimal sketch is below.
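
A minimal sketch of the fix, assuming the Python 2 urllib used by the current codebase; the MaryTTS host and the extra query parameters follow the MaryTTS HTTP interface but the phrase and host are placeholders:

# -*- coding: utf-8 -*-
# Hypothetical sketch: UTF-8-encode the phrase before urlencoding it for the
# MaryTTS HTTP interface, so unicode characters don't break urlencode.
import urllib

phrase = u"État de la météo à Besançon"        # unicode phrase with accents
query = urllib.urlencode({
    'INPUT_TEXT': phrase.encode('utf-8'),      # encode first, then urlencode
    'INPUT_TYPE': 'TEXT',
    'OUTPUT_TYPE': 'AUDIO',
    'AUDIO': 'WAVE_FILE',
    'LOCALE': 'fr_FR',
})
url = 'http://marytts-host:59125/process?' + query   # placeholder host
print(url)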

Documentation

This issue is to remind us to write documentation for everything. And remember: write the docs so the next person can learn from them when they read them.

[Feature-Request] - Today Module

I thought we could create a module that, for example in the morning, tells you the weather, the temperature, and whether you have an appointment and, if so, with whom.

For example: you get up in the morning, you say "Hello -Keyword-" and it answers "Hello -yourname-, today is -date-, it is raining, the temperature outside is 15 °C, and you have no appointments planned. Have a nice day!", with more features of course ;)

Add support for latest git version of Phonetisaurus

The current version of Naomi (and Jasper) is not compatible with the latest Phonetisaurus. While I have no information that the latest Phonetisaurus is significantly superior to the older version in terms of performance, I do know that the instructions for the older version require the user to download a file from https://www.dropbox.com/s/kfht75czdwucni1/g014b2b.tgz. I have no idea whose Dropbox account this is, and would not want to count on it being available forever. The compatibility problem between Naomi and the newer Phonetisaurus is that the grapheme-to-phoneme converter, phonetisaurus-g2p, has been split into phonetisaurus-g2pfst and phonetisaurus-g2prnn. phonetisaurus-g2pfst is the logical successor to phonetisaurus-g2p, but no longer includes the --input command-line option. It also relies on OpenFST 1.6.x, while the older version required OpenFST 1.3.x.

This update uses the instructions currently located at https://github.com/aaronchantrill/jasper-client/blob/master/PocketSphinx_setup.md (I will include that file with the pull request) and, as far as I can tell, all the sources are stable releases direct from the maintainers. If anyone knows of a better source for any of these, please let me know. These are all source-code downloads, so the curious and security-conscious can audit them. Unfortunately, this means it takes about 3 hours to compile everything on a Raspberry Pi.

I will test this on Raspbian Stretch running on a Raspberry Pi 3B and a Raspberry Pi 3B+ before issuing a pull request.

Extending Population

Taking the work aaronchantrill has done updating the population process, I would like to extend it further by addressing a couple of concerns that were mentioned in Issue #67, as well as implementing a better UX.

By that I mean formatting the questions better, rewording some of them, and changing the flow of the questions: if the next question references an option you did not select in a previous answer, it will be skipped. I also mean adding fault tolerance, so that if you mistype something it will raise an error and ask again, instead of silently defaulting and forcing you to start the whole process over.

Finally, add a styled CLI, implementing colors and special characters to make the population process more enjoyable than it actually is! 😄 A rough sketch of the ask-and-retry idea is below.
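
A rough sketch of the colored, fault-tolerant prompt idea, assuming ANSI escape codes and the Python 2 raw_input used by the current codebase (the function name and choices are illustrative only):

# Hypothetical sketch: a colored prompt that re-asks on invalid input instead
# of silently falling back to a default.
GREEN = "\033[92m"
RED = "\033[91m"
RESET = "\033[0m"

def ask_choice(question, choices):
    while True:
        answer = raw_input(  # Python 2, matching the current codebase
            GREEN + question + RESET + " [%s] " % "/".join(choices)
        ).strip().lower()
        if answer in choices:
            return answer
        print(RED + "Sorry, '%s' is not a valid choice. Please try again." % answer + RESET)

# engine = ask_choice("Which STT engine would you like to use?",
#                     ["pocketsphinx", "deepspeech"])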

Snowboy ships with precompiled library?

The STT plugin for Snowboy includes a precompiled 32-bit library.

Is this limitation intended? Shouldn't the library be downloaded on the fly / during install, instead of being included in the source code?

Discord link expired on README

Describe the bug
Discord link expired on README

To Reproduce
Click the discord link

Expected behavior
The link should work :3

System
Regular ol' computer

Additional context
Hullo I want to be in your discord

Script to download and install everything.

This would be a nice thing to do for the community.

Basic outline for user experience:

If you're starting fresh:
Download the Pi image from the Raspberry Pi site and burn it to an SD card.

If you already have an image, or just got your Pi running for the first time:

  1. Run a sh or bash script fetched with wget to start the Jasper install process.
  2. Wait for Jasper to ask for help with the final configuration. This runs populate.py, which asks for Gmail credentials, location and other personal info needed for a personal assistant.
  3. The system is configured, and if you reboot the Pi the Jasper software restarts because it's a service.
  • I know there is a huge number of steps between 1 and 2, and more steps between 2 and 3.
  • I will work on a more detailed outline to explain it better.
  • A later thing: a voice question-and-answer phase for the setup, instead of typed configuration.

Twitter integration - as an independent plugin

It would probably be a good idea to have Twitter work with this too.
Basic functionality:
Get tweets with '@username' mentions.
Have the ability to disable notifications (i.e. during human sleep time).
Send a tweet, with playback of the tweet text before posting (we don't need rogue posts going out to Twitter). A rough sketch of the mentions part is below.
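
A rough sketch of the mentions half, assuming the tweepy library and OAuth 1.0a user credentials (all keys below are placeholders, and tweepy is only one possible choice of library):

# Hypothetical sketch: fetch recent @mentions with tweepy.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")      # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")       # placeholders
api = tweepy.API(auth)

for status in api.mentions_timeline(count=5):
    print("@%s: %s" % (status.user.screen_name, status.text))

# Posting (after reading the text back to the user for confirmation):
#     api.update_status("Hello from Naomi!")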

Local mode does not support accents

Bug: while using French with Jasper in local mode (launching Jasper in text mode, without TTS/STT, offline), it does not support accented characters.

To launch Jasper in this mode: python Jasper.py --local
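
A minimal sketch of the kind of fix that usually helps here, assuming local/text mode reads input with raw_input() under Python 2 and the terminal is UTF-8 (the helper name is illustrative only):

# -*- coding: utf-8 -*-
# Hypothetical sketch: decode terminal input to unicode so accented
# characters (é, à, ç, ...) survive in local/text mode under Python 2.
import sys

def read_utf8_line(prompt="YOU: "):
    raw = raw_input(prompt)                     # bytes under Python 2
    encoding = sys.stdin.encoding or "utf-8"    # fall back to UTF-8
    return raw.decode(encoding)

# text = read_utf8_line()
# print(text)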
