
melissa-core's Introduction

Melissa


Melissa is a virtual assistant for OS X, Windows and Linux systems. She currently uses Google Chrome's speech-to-text engine, OS X's say command, Linux's espeak command or Ivona TTS, along with some magical scripting that brings her to life. Melissa was developed by Tanay Pant and a group of sorcerers. The web UI for Melissa was designed by Nakul Saxena.

Check out our wiki, where we have installation and configuration instructions, a usage guide and other relevant documentation.

Discussion and Support

If you face an issue or require support, please take a look through the GitHub Issues, as you may find some useful advice there. If you are still facing issues, feel free to create a post at our Google Group Forum describing the issue and the steps you have taken to debug it.

Licence

The MIT License (MIT)

melissa-core's Issues

Error messages while main.py is executing.

After I type sudo python main.py into the terminal, I get the following error messages.
aniketk@aniket-ThinkPad:~/Melissa-Core$ sudo python main.py
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
Say something!
Melissa thinks you said 'what is the current time'
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
These messages continue until I say 'go to sleep'. Any suggestions on this issue?
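These ALSA/JACK messages come from the C audio libraries writing to stderr, not from Melissa itself, and are generally harmless. One common workaround (a sketch, not existing Melissa code) is to silence file descriptor 2 while the microphone is being opened:

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stderr():
    """Temporarily redirect file descriptor 2 to /dev/null so that
    C-level libraries (ALSA, JACK) cannot print warnings to the
    terminal; Python's own stderr is restored afterwards."""
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(2)
    sys.stderr.flush()
    os.dup2(devnull, 2)
    try:
        yield
    finally:
        sys.stderr.flush()
        os.dup2(saved, 2)
        os.close(saved)
        os.close(devnull)
```

Wrapping the PyAudio/recognizer initialisation in `with suppress_stderr():` would hide the warnings without affecting Melissa's own output.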

Refactoring STT for interactive conversations

As @neilnelson mentioned in the code of stt.py, we need a better implementation of the STT and its interaction with brain.py so that we can hold spontaneous conversations with Melissa.

For example, a sample conversation for a better Twitter module might be:

User: Send tweet
Melissa: What would you like to tweet?
User: I'm feeling drunk and high
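A minimal sketch of such a two-turn flow; `speak`, `listen` and the `api` object are hypothetical stand-ins for the TTS engine, the STT engine and the Twitter client, not existing Melissa code:

```python
def send_tweet(api, listen, speak):
    """Ask a follow-up question and use the next utterance as the
    tweet body. 'listen' and 'speak' are hypothetical callables
    wrapping the STT and TTS engines."""
    speak("What would you like to tweet?")
    text = listen()
    if text:
        api.update_status(status=text[:140])  # old Twitter length limit
        speak("Tweet sent.")
    else:
        speak("Sorry, I did not catch that.")
```

The key point is that the action keeps control between the prompt and the next recognition pass, instead of returning to the passive listening loop.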

Add macOS speech-to-text input?

Perhaps add macOS speech-to-text input, but do it in a way where privacy is respected. Please note Apple's disclaimer on this feature of OS X:

When you use Dictation, you can choose to have either your Mac or Apple’s servers perform the speech recognition for you. If you use Enhanced Dictation, your Mac will convert what you say into text without sending your dictated speech to Apple.

If you use server-based Dictation, the things you dictate will be recorded and sent to Apple to convert what you say into text and your computer will also send Apple other information, such as your name and nickname; and the names, nicknames, and relationship with you (for example, “my dad”) of your address book contacts (collectively, your “User Data”). All of this data is used to help the dictation feature understand you better and recognize what you say. It is not linked to other data that Apple may have from your use of other Apple services.

ALSA, jack connection errors - Melissa does not listen

$ python main.py 
ALSA lib pcm_dmix.c:1024:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_dmix.c:1024:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1024:(snd_pcm_dmix_open) unable to open slave
connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)
attempt to connect to server failed
Say something!

It doesn't seem to listen to anything at this point.

Review of AI projects - Wit

The idea here is to get a handle on the capabilities of similar AI projects for possible inclusion in Melissa. At this point we are not saying that these capability ideas will or will not work for Melissa, only that this is the landscape we can see.

I found Melissa in the comments area of Five Simple Ways to Build Artificial Intelligence in 2016.

Here is the first one.

Wit

I just bypassed the page requesting an email address and went to the Documentation. It looks like providing an email address means accepting Wit's terms of service, which we do not want to do.

  • 'Build your first app' page
    • Wit appears to be web-page based and text driven.
    • The idea of collecting user responses of particular information to build a user profile that enables tailored responses to questions such as "What is the weather?" is a good one.
    • What is a Wit 'intent', 'entity', 'intent attribute'? Is this Wit language or is there a recognized language for this AI area?
    • A Wit 'story' is a path of interaction between the user and Wit.
  • 'Recipes for conversational apps' page
    • Duckling - "Duckling is a Clojure library that parses text into structured data." Duckling has a BSD license that can be used without issue.
    • There is a lot of good information on this page showing how to use Duckling and matching words and sentences on queries, some of which Melissa does.
    • The implications of getting information such as motion picture theater movie times and airplane scheduling is something to think about.
  • HTTP API Reference
    • This page provides web-page interface to Wit's server.
    • Speech-recognition capability is shown on this page with a transfer of an audio file to the Wit server. This would not appear to have the immediate response that Melissa can provide.
    • I am getting the sense that the collected user information and some portion of the user assembled application is maintained on Wit's server but could easily be maintained on the web-site provider's server.
  • Node.js.
    • Wit JavaScript and JSON at GitHub, to be used for web pages. Uses Wit's restrictive license.
  • Python
    • "pywit is the Python SDK for Wit.ai." At GitHub. Uses Wit's restrictive license.
  • Ruby
    • "wit-ruby is the Ruby SDK for Wit.ai." At GitHub. Uses Wit's restrictive license.
  • Wit conclusion.
    • There are a number of useful ideas that can be obtained from a study of Wit, such as the use of Duckling, a web-page interface, server-based services, and the structure and design of an AI API, among others not listed here.
    • A web-page interface is something I have thought of here for my LAMP computer/server. And this direction suggests that Wit is targeting developers who write web-pages for their sites that will interface to Wit's server. For example, getting airplane tickets, restaurant locations, event schedules and other commercial connectivity has been promoted by Apple's Siri. This then becomes a way to encourage interest to the developer's site with possible revenues from routing customers to businesses, similar to web advertising revenues.

    I am not against this model, which may become useful for Melissa, but I see these server-based services as part of the many available that would be centered on a user app that can maximize any useful connectivity without capture by any particular outside interest. The web page is controlled by the web-page provider, which in this case is tightly integrated with Wit through Wit's API and restrictive license.

    The request on Melissa for a Python GUI could substitute for and take ideas from Wit's web-page API.

Error from pywapi when installing on Ubuntu 14.04 LTS

From the command-line output:
Downloading/unpacking pywapi==0.3.8 (from -r requirements.txt (line 7))
Could not find any downloads that satisfy the requirement pywapi==0.3.8 (from -r requirements.txt (line 7))
Cleaning up...
No distributions at all found for pywapi==0.3.8 (from -r requirements.txt (line 7))
Storing debug log for failure in /home/nnelson/.pip/pip.log

From .pip/pip.log:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1178, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "/usr/lib/python2.7/dist-packages/pip/index.py", line 277, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for pywapi==0.3.8 (from -r requirements.txt (line 7))

Integration of CMU Sphinx

Integrate CMU Sphinx so that users have the option to select either Google STT or Sphinx STT. Integrating CMU Sphinx will give us the advantage that Melissa will be able to run offline.
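One possible shape for the switch, assuming the SpeechRecognition package (which wraps both engines); the `engine` value is a hypothetical profile setting, not an existing Melissa option:

```python
def recognise(recognizer, audio, engine="google"):
    """Dispatch to the configured STT back end. recognize_sphinx
    runs offline via PocketSphinx; recognize_google needs a network
    connection. 'engine' would come from the user's profile."""
    if engine == "sphinx":
        return recognizer.recognize_sphinx(audio)
    return recognizer.recognize_google(audio)
```

Keeping the choice behind one function would let brain.py stay unaware of which engine is in use.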

Review of AI projects - Jasper

Jasper

After looking through a few of Jasper's python modules and noticing the MIT license I suggest we get all we can from Jasper. We would understand their code and mix it with ours, taking all that they have that we do not. This also gets us to a question of whether or not we will be ahead by merging with the Jasper project or whether our focus is sufficiently different to run a parallel effort.

I see they are using a direct word-action association that, though effective, limits the greater variation in user expressions that Api.ai would provide.

At the moment I am thinking we get all we can from Jasper, use a better user-expression-to-action method along Api.ai or as I have noted under the ConceptNet discussion, and begin developing domains along the Api.ai line.

Various suggestions

In main.py

It looks like fetchThreshold is not being used.

From line 22 ('profile =') to line 44 ('client_secret =') looks like initialization that would go into a class; then when 'brain' is called at lines 154 and 198, the class instance would be the only parameter. These initialization lines could grow to a much larger number with new feature additions.

'main' has a recursive loop through itself and 'passiveListen' that should eventually fail with "maximum recursion depth exceeded" if a person runs Melissa long enough. This should likely be the usual server loop, such as 'while True:'.
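The mutual recursion could be flattened into such a loop; a sketch with the collaborators injected (the names are illustrative, not Melissa's actual functions):

```python
def run_loop(listen, handle, should_stop=lambda: False):
    """Iterative replacement for main() calling passiveListen()
    calling main(): no recursion, so no recursion depth limit."""
    while not should_stop():
        utterance = listen()
        if utterance:          # ignore silence / failed recognition
            handle(utterance)
```

The loop would run until a shutdown condition (e.g. the user saying 'go to sleep') flips `should_stop`.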

It looks like we can change THRESHOLD_MULTIPLIER in passiveListen to 0.6 and replace the getScore calls with 'audioop.rms(data, 2)' directly.

The idea of passiveListen appears to be to wait until a person starts talking, then wait a second and give a prompt, after which the recognizer listens and speech recognition is done. This sequence confused me when I was using Melissa, in that I would start to say something, Melissa would immediately respond with 'Yes?' and ignore what I said, and I would have to say it again.

I suspect that 'r.listen(source)' at line 141 already manages this threshold activity since it must decide when a person stops speaking, when the speech sentence ends, which is much the same logic as figuring out when it begins. It may be that we would wake Melissa when we want to speak for a while and sleep (put in standby) when we did not speak for a while if that seemed important.

If the 'audio' object at line 141 had the same format as 'data' at line 100 and we measured the time taken at line 141, we might be able to wake and sleep automatically. That is, if the time taken was long and/or the audio was very short, in the manner of noise, a sleep mode could be applied.

'r.adjust_for_ambient_noise(source)' may be something to put before the listen line.

The computation requirement of lines 103-105 can be reduced, given that the loop runs 15 times with a sum over a 30-element list, 435-450 additions for a second of audio. If we divide the score by the length before appending to the list, we can subtract the first list element from the average, pop the first element, append the newly divided element, and add that element to the average.
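The constant-time update described above might look like this (a sketch, not the current main.py code):

```python
from collections import deque

class RunningScore:
    """Rolling mean over a fixed window: add the incoming value and
    subtract the one that falls out, instead of re-summing the whole
    list on every sample."""
    def __init__(self, window=30):
        self.window = window
        self.values = deque()
        self.total = 0.0

    def add(self, score):
        self.values.append(score)
        self.total += score
        if len(self.values) > self.window:
            self.total -= self.values.popleft()
        return self.total / len(self.values)
```

Each `add` costs one addition, one subtraction and one deque rotation, regardless of the window size.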

For tts.py I changed line 12 to
tts_engine = 'espeak -v mb-en1 -s 150'

If this is a common user-interest parameter it might go in profile.yaml.

For play_music.py I set 'music_path:' to a music files directory and Melissa did not find those files. I realized that play_music.py only looks for mp3 files, and I use wav and flac files. sox, available on Linux and Windows, will play these and other common audio file types (though not mp3), and it looks like it would not take much to add that.
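A sketch of an extension-agnostic file scan; whether a given type can actually be played would still depend on the player in use (sox, afplay, etc.):

```python
import os

# Extension list is illustrative; extend to match the chosen player.
AUDIO_EXTENSIONS = (".mp3", ".wav", ".flac", ".ogg")

def find_music(music_path):
    """Collect playable files of several audio types, not just .mp3,
    walking the music directory recursively."""
    return sorted(
        os.path.join(root, name)
        for root, _dirs, files in os.walk(music_path)
        for name in files
        if name.lower().endswith(AUDIO_EXTENSIONS)
    )
```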

I do not mind trying to work on some of these. Let me know.

Dynamically loading modules in brain.py

We should set up a mechanism to dynamically load the modules from GreyMatter into brain.py, which will allow contributors to add third-party modules by just putting them in the GreyMatter folder.

In each of the modules, we can set a KEYWORDS constant to specify the keywords for our check_message() mechanism.
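A sketch of what the loader could look like, using only the standard library; the package layout and the KEYWORDS convention are the proposal above, not existing code:

```python
import importlib
import pkgutil

def load_actions(package):
    """Scan an actions package and register every module that
    declares a KEYWORDS constant, mapping keyword -> module."""
    registry = {}
    for _finder, name, _ispkg in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(package.__name__ + "." + name)
        for keyword in getattr(module, "KEYWORDS", []):
            registry[keyword] = module
    return registry
```

brain.py would then look up the spoken keyword in the registry instead of hard-coding each module.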

Integration of NLTK

We need to integrate NLTK (Natural Language Toolkit) for better understanding of the user's speech.
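As a rough illustration of the first step, a naive stop-word filter; NLTK's tokenizers and stopwords corpus would do this properly, and the hand-written word list below is only a stand-in:

```python
# Tiny illustrative stop-word set; NLTK's stopwords corpus is far
# more complete and language-aware.
STOPWORDS = {"a", "an", "the", "is", "are", "what", "please"}

def keywords(sentence):
    """Strip punctuation and filter stop words, leaving candidate
    keywords for intent matching."""
    words = sentence.lower().split()
    return [w.strip("?.,!") for w in words if w not in STOPWORDS]
```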

Python GUI for Melissa

I believe that it is time to build a Python GUI for Melissa. Any thoughts on which GUI framework we should select?

Create GitHub Wiki for Melissa

Create a GitHub wiki for Melissa to display project information, installation details, configuration details and information for developers, as the README.md is getting cluttered.

Non-fatal build failure on requirement PyAudio

After pip install -r requirements.txt:

Building wheels for collected packages: oauthlib, pyaml, PyAudio, pywapi, PyYAML, selenium, wikipedia
  Running setup.py bdist_wheel for oauthlib
  Stored in directory: /home/akshay/.cache/pip/wheels/24/97/b6/b1f31b9c4b7710fe4e5a28e591349f68e43d6027aef320d056
  Running setup.py bdist_wheel for pyaml
  Stored in directory: /home/akshay/.cache/pip/wheels/b6/7f/d5/2e78837e29363d9634b407813124643aad805535b69d956808
  Running setup.py bdist_wheel for PyAudio
  Complete output from command /home/akshay/tech/applications/Melissa/venv/bin/python -c "import setuptools;__file__='/tmp/pip-build-odobg1_q/PyAudio/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpkujwmp94pip-wheel-:
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.5
  copying src/pyaudio.py -> build/lib.linux-x86_64-3.5
  running build_ext
  building '_portaudio' extension
  creating build/temp.linux-x86_64-3.5
  creating build/temp.linux-x86_64-3.5/src
  gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -fPIC -I/usr/include/python3.5m -c src/_portaudiomodule.c -o build/temp.linux-x86_64-3.5/src/_portaudiomodule.o
  src/_portaudiomodule.c: In function ‘_stream_callback_cfunction’:
  src/_portaudiomodule.c:43:8: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       _a < _b ? _a : _b;      \
          ^
  src/_portaudiomodule.c:1351:32: note: in expansion of macro ‘min’
       memcpy(output_data, pData, min(output_len, bytes_per_frame * frameCount));
                                  ^
  src/_portaudiomodule.c:43:18: warning: signed and unsigned type in conditional expression [-Wsign-compare]
       _a < _b ? _a : _b;      \
                    ^
  src/_portaudiomodule.c:1351:32: note: in expansion of macro ‘min’
       memcpy(output_data, pData, min(output_len, bytes_per_frame * frameCount));
                                  ^
  src/_portaudiomodule.c:1354:20: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       if (output_len < (frameCount * bytes_per_frame)) {
                      ^
  gcc -pthread -shared -Wl,-O1,--sort-common,--as-needed,-z,relro build/temp.linux-x86_64-3.5/src/_portaudiomodule.o -L/usr/lib -lportaudio -lpython3.5m -o build/lib.linux-x86_64-3.5/_portaudio.cpython-35m-x86_64-linux-gnu.so
  installing to build/bdist.linux-x86_64/wheel
  running install
  running install_lib
  creating build/bdist.linux-x86_64
  creating build/bdist.linux-x86_64/wheel
  copying build/lib.linux-x86_64-3.5/_portaudio.cpython-35m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/wheel
  copying build/lib.linux-x86_64-3.5/pyaudio.py -> build/bdist.linux-x86_64/wheel
  running install_egg_info
  running egg_info
  writing top-level names to src/PyAudio.egg-info/top_level.txt
  writing src/PyAudio.egg-info/PKG-INFO
  writing dependency_links to src/PyAudio.egg-info/dependency_links.txt
  warning: manifest_maker: standard file '-c' not found

  reading manifest file 'src/PyAudio.egg-info/SOURCES.txt'
  reading manifest template 'MANIFEST.in'
  warning: no files found matching '*.c' under directory 'test'
  writing manifest file 'src/PyAudio.egg-info/SOURCES.txt'
  Copying src/PyAudio.egg-info to build/bdist.linux-x86_64/wheel/PyAudio-0.2.9-py3.5.egg-info
  running install_scripts
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-build-odobg1_q/PyAudio/setup.py", line 122, in <module>
      extra_link_args=extra_link_args)
    File "/usr/lib64/python3.5/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/usr/lib64/python3.5/distutils/dist.py", line 955, in run_commands
      self.run_command(cmd)
    File "/usr/lib64/python3.5/distutils/dist.py", line 974, in run_command
      cmd_obj.run()
    File "/home/akshay/tech/applications/Melissa/venv/lib/python3.5/site-packages/wheel/bdist_wheel.py", line 213, in run
      archive_basename = self.get_archive_basename()
    File "/home/akshay/tech/applications/Melissa/venv/lib/python3.5/site-packages/wheel/bdist_wheel.py", line 161, in get_archive_basename
      impl_tag, abi_tag, plat_tag = self.get_tag()
    File "/home/akshay/tech/applications/Melissa/venv/lib/python3.5/site-packages/wheel/bdist_wheel.py", line 155, in get_tag
      assert tag == supported_tags[0]
  AssertionError

  ----------------------------------------
  Failed building wheel for PyAudio

[..and continues]

I hadn't installed PyAudio before doing this step. Could this be because of that? If yes, it might be a good idea to mention PyAudio before the pip install instruction.

Add Spelling Action Module

Create a spelling.py file in melissa/actions/ and use the keyword "spell". The rest of the dynamics of this module are similar to #58.
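A minimal sketch of what spelling.py could contain; the KEYWORDS constant follows the dynamic-loading convention proposed elsewhere, and the helper name is illustrative:

```python
KEYWORDS = ["spell"]

def spell_word(word):
    """Return the word spelled out letter by letter so the TTS
    engine reads each letter, e.g. 'cat' -> 'C A T'."""
    return " ".join(word.upper())
```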

Error when saying something other than "Who are you"?

I am at Chapter 3 of your book and I am working on the brain/general conversation module. While the "Who are you?" command works, the undefined-command handling doesn't: I get an unexpected error when I say anything other than "Who are you?". Can you help?


Improve the documentation of the Wiki

The wiki needs a serious facelift and more elaborate explanations, content and guidelines for new contributors. Fixing this issue will also help you get familiar with Melissa's codebase.

I am available for mentorship on this issue if someone wishes to take this.

Easier Installation Process

We need to improve the installation process of Melissa, so that non-developers and beginners can also easily run Melissa on their machines.

Review of AI projects - Api.ai

Api.ai

  • Home page This looks like a well-designed site. The rotating pictures and text in the middle of the page show how English sentences are translated into JSON entities. The bottom of the page has nine topic areas for review.
  • Complete Solution A nice high-level page. The three components are:
    • Speech Recognition with topic specific language models and choice of recognizer.
    • Natural Language Processing This looks like the basis for the rotating pictures on the home page.
    • Fulfillment is what the AI responds with to the user's requests.
  • Get started in 5 steps A very useful page using the AI language seen on Wit.
    • (1) Create agent, (2) Create entities, (3) Create intents, (4) Test and train your agent, (5) Integrate.
      Create intents uses machine learning on the designer's word and phrase list to generalize how the information from user's statements can be obtained.
  • Knowledge Included "Domains are pre-defined knowledge packages."
    • The bottom of the page provides a list of domains and the languages available for them.
    • The response JSON item in the middle of the page is interesting.
    • Fulfillment is a response assembled by Api.ai for a request in a given domain.
  • Machine Learning "An intent represents a mapping between what a user says and what action should be taken by your software."
    • This page details how to enter data that provides an ability to translate a user's expression into an action. An entity appears to be a symbol (set of characters) to which some part of the user's expression is translated. Identified entities are connected to actions.
  • Conversation Support "Contexts are strings that represent the current context of a user’s request."
    • A context provides clarity to what would be a user's vague statement, such as in the use of pronouns, that do not identify a specific object without a context.
    • Examples are given showing how vague statements without context can be clarified and how contexts are created and used.
  • Integrations "You can now use your agents as knowledge extensions within the Assistant.ai app."
    • The idea here appears to be to mix Api.ai responses with other user applications.
  • Api.ai conclusion
    • The items that standout for Api.ai are
      • The domains and their multi-lingual support.
      • The use of machine learning to generalize the entity identification from a user's expressions.
      • Well designed AI construction pages.
      • Translation of recognized speech into a JSON return. (I can see now how Wit could do this similarly.)
    • Along with Wit, Api.ai is for web-page and app designers where expressions are obtained from users, translated and responded to by Api.ai or responded to by the designer app. The domains and machine learning aspects are particularly interesting.

Always on with Keyword detection

It would be great if Melissa put itself in a suspended state after a pre-determined, configurable period of inactivity (configurable in profile.json) and could be woken up only by its keyword, which would basically be the name of the user's VA (specified in profile.json).
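Detecting the wake word and tracking inactivity could be as simple as the following sketch; the profile plumbing and the listening loop itself are omitted, and all names are illustrative:

```python
import time

def is_wake_word(transcript, name="melissa"):
    """True if the configured VA name appears in the transcript."""
    return name.lower() in transcript.lower().split()

class InactivityTimer:
    """Suspend after 'timeout' seconds without activity; the clock
    is injectable so the behaviour can be tested deterministically."""
    def __init__(self, timeout, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_activity = clock()

    def touch(self):
        self.last_activity = self.clock()

    def should_suspend(self):
        return self.clock() - self.last_activity > self.timeout
```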

Unittests for Different Components

Melissa needs tests, a lot of tests, for the various components of its codebase, to be run on Travis CI and locally before submitting a PR.

Add Date and Day Functions

Giving Melissa the ability to tell the date and day would be great. For that purpose, two new functions will have to be added: what_is_date() and what_is_day(). The keywords would be date and day respectively.

The file to touch is this.
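A sketch of the two proposed functions; they return the sentence rather than calling the TTS engine directly, and the `today` parameter is added only to make them testable:

```python
import datetime

def what_is_date(today=None):
    """E.g. 'Today is October 08, 2016.'"""
    today = today or datetime.date.today()
    return today.strftime("Today is %B %d, %Y.")

def what_is_day(today=None):
    """E.g. 'It is Saturday.'"""
    today = today or datetime.date.today()
    return today.strftime("It is %A.")
```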

Requirement pywapi externally hosted and unverified

On doing pip install -r requirements with python3 on archlinux,

Collecting pywapi==0.3.8 (from -r requirements.txt (line 4))                                                                                         
  Could not find a version that satisfies the requirement pywapi==0.3.8 (from -r requirements.txt (line 4)) (from versions: )                        
  Some externally hosted files were ignored as access to them may be unreliable (use --allow-external pywapi to allow).                              
No matching distribution found for pywapi==0.3.8 (from -r requirements.txt (line 4))

pip install -r requirements.txt --allow-external pywapi

Collecting pywapi==0.3.8 (from -r requirements.txt (line 4))
  Could not find a version that satisfies the requirement pywapi==0.3.8 (from -r requirements.txt (line 4)) (from versions: )
  Some insecure and unverifiable files were ignored (use --allow-unverified pywapi to allow).
No matching distribution found for pywapi==0.3.8 (from -r requirements.txt (line 4))

pip install -r requirements.txt --allow-external pywapi --allow-unverified pywapi

Collecting pywapi==0.3.8 (from -r requirements.txt (line 4))
  pywapi is potentially insecure and unverifiable.
  Downloading https://launchpad.net/python-weather-api/trunk/0.3.8/+download/pywapi-0.3.8.tar.gz

works.

This might be a problem if everyone faces it and is not warned about it.

Runtime Performance Issues

Melissa currently has a very poor runtime, which can be verified by running time python main.py. The real values in the output are impossibly high for me, while the user and sys values are quite reasonable.

Customizable VA Name and Gender

I think it would be a good idea to allow an option in the profile.json to customize the name and gender (the voice) of the VA.

Action-Hub Repository For Melissa

This would involve removing the melissa/actions folder so that contributors who wish to add a new module won't have to touch Melissa's main engine. Furthermore, a new repository named Action-Hub would be added under Melissa-AI, containing the list of action modules that users might wish to add to their installation of Melissa. This would also include the creation of an action_hub.py under melissa/ to provide a command-line interface to the action repository. melissa/actions/ would have to be added to the .gitignore file as well. We will also have to separate the requirements of action modules from the core program.

The new workflow would look abstractly like the following:

  1. User runs Melissa for the first time, which does not yet contain any action modules. The program notices the absence of the actions folder and presents the user with a welcome screen for the Action Hub. The user can access the Action Hub any time later with the command Action Hub.
  2. The screen might look something like the following:
Welcome to the Melissa Action Hub, here you can add new functionalities for your Melissa installation.

1. Weather Module
    Description: ........

2. Conversation Module
     Description: ........

......

You can enter the serial number of a particular module for installation, enter serial numbers separated by commas (,) to install multiple modules together, or enter zero (0) to exit the Action Hub.

Select modules for installation: 

Some questions and challenges:

  1. What is the best approach for separating the requirements.txt of individual modules and organising them in a proper structure in the Action-Hub repository?
  2. What is the best approach for parsing the description of each action module?

The successful completion of this issue will help us concentrate on developing and enhancing Melissa's core program and its functionalities separately, with better emphasis on each.

Fix The Readout Of Date

When I ran the tell_time module today (8th Oct, 2016), Melissa told me that the date was 10th Aug, 2016. This needs to be fixed. The appropriate file to touch is this.
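Reading 8 Oct as 10 Aug is the classic symptom of day and month being swapped somewhere in the format handling. A defensive sketch (not the actual tell_time code) that keeps the two fields unambiguous by spelling out the month:

```python
import datetime

def readable_date(today=None):
    """Day-of-month before the spelled-out month ('08 October, 2016')
    cannot be mis-read the way '10/08' can; 'today' is a parameter
    only to make the function testable."""
    today = today or datetime.date.today()
    return today.strftime("%d %B, %Y")
```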

PocketSphinx is broken

Okay, somewhere along the line, we have broken the PocketSphinx integration. Now setting stt to sphinx in profile.json gives me the following traceback:

Traceback (most recent call last):
  File "main.py", line 20, in <module>
    main()
  File "main.py", line 18, in main
    stt(profile_data)
  File "/Users/tanay/Desktop/Melissa-Core/GreyMatter/SenseCells/stt.py", line 75, in stt
    brain(profile_data, sphinx_stt())
  File "/Users/tanay/Desktop/Melissa-Core/GreyMatter/SenseCells/stt.py", line 40, in sphinx_stt
    config.set_string('-hmm', os.path.join(modeldir, hmm))
  File "/usr/local/lib/python2.7/site-packages/sphinxbase/sphinxbase.py", line 137, in set_string
    return _sphinxbase.Config_set_string(self, key, val)
TypeError: in method 'Config_set_string', argument 3 of type 'char const *'

Can't seem to figure out what's wrong, help needed @neilnelson!
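One common cause of this SWIG error is a unicode object (e.g. read from a YAML profile) reaching an argument declared as char const *. Coercing paths to the native str type before calling set_string is a plausible fix, sketched below; this is a guess at the cause, not a confirmed diagnosis:

```python
def native_str(value):
    """Coerce bytes/unicode to the interpreter's native str type,
    which SWIG 'char const *' arguments expect."""
    if isinstance(value, bytes):
        return value.decode("utf-8")
    return str(value)

# Hypothetical use at the failing call site in stt.py:
# config.set_string('-hmm', native_str(os.path.join(modeldir, hmm)))
```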

Add Threading to Melissa

As suggested by @neilnelson here: #22.

  • Instead of waiting for an action to complete before another user expression is accepted, a thread could be launched for each user expression and the return provided in a notification stack (not necessarily the notification procedure given above). The user could either wait for the task-completion response from the stack, which would seem the same in timing and flow as the current response, or continue providing expressions, with Melissa giving the responses once the user allowed sufficient quiet time, a few seconds. It is reasonable to think that some requests to Melissa could take a while. A speaking collision, where the user speaks while Melissa is giving its response, might be handled by Melissa providing the response again.
  • Separate threads to handle camera input and screen output.
  • Another threading possibility is that if Melissa was used to interact with a family spread throughout a home then threads for each room might be useful.
  • Speaker recognition is similar to speech recognition and so having a different thread for each speaker in a group may be useful. See Alize for speaker recognition.

RuntimeError upon running main.py

After answering the configuration questions and running main.py, the speech synthesis begins and then I get the following error:

Traceback (most recent call last):
  File "main.py", line 20, in <module>
    main()
  File "main.py", line 15, in main
    main()
  File "main.py", line 18, in main
    stt(profile_data)
  File "/Users/xxxxxxxx/Melissa-Core/GreyMatter/SenseCells/stt.py", line 43, in stt
    decoder = Decoder(config)
  File "/usr/local/lib/python2.7/site-packages/pocketsphinx/pocketsphinx.py", line 271, in __init__
    this = _pocketsphinx.new_Decoder(*args)
RuntimeError: new_Decoder returned -1
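`new_Decoder returned -1` is commonly pocketsphinx failing to load one of its model files, so a quick check that the configured paths exist can narrow this down before the Decoder is constructed. The paths below are assumptions for illustration, not Melissa's actual configuration:

```python
import os

# Hypothetical model locations; substitute the values your stt.py
# passes to config.set_string for -hmm, -lm and -dict.
modeldir = '/usr/local/share/pocketsphinx/model'
paths = {
    '-hmm': os.path.join(modeldir, 'en-us'),
    '-lm': os.path.join(modeldir, 'en-us.lm.bin'),
    '-dict': os.path.join(modeldir, 'cmudict-en-us.dict'),
}

# Any flag reported here would make new_Decoder fail.
missing = [flag for flag, path in paths.items()
           if not os.path.exists(path)]
for flag in missing:
    print('missing model file for %s: %s' % (flag, paths[flag]))
```

Pocketsphinx also writes its own diagnostics to stderr before the RuntimeError; the lines just above the traceback usually name the exact file it could not open.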

No module named PocketSphinx==0.8 on Ubuntu 14.04.3

While executing sudo pip install -r requirements.txt, pip reports that there is no matching distribution for PocketSphinx==0.8. After changing it to PocketSphinx==0.0.9 (the version of PocketSphinx on PyPI) and removing SphinxBase==0.8 (SphinxBase is installed as a dependency of PocketSphinx), all requirements install successfully.
Is this problem occurring only on Ubuntu or on all platforms?
If it occurs on all platforms, should I update requirements.txt and open a PR?
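For clarity, the change described above would amount to the following edit to requirements.txt (assuming PocketSphinx==0.0.9 pulls in SphinxBase itself, as reported):

```diff
-PocketSphinx==0.8
-SphinxBase==0.8
+PocketSphinx==0.0.9
```
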

Multiple Runs Result in IndexError

I found that running Melissa multiple times results in an IndexError. Here's the traceback that I received:

Traceback (most recent call last):
  File "main.py", line 7, in <module>
    main()
  File "main.py", line 5, in main
    stt()
  File "/Users/tanay/Desktop/Melissa-Core/melissa/stt.py", line 39, in stt
    brain.query(speech_text)
  File "/Users/tanay/Desktop/Melissa-Core/melissa/brain.py", line 81, in query
    scoring_row['group'][scoring_row['order']]:
IndexError: list index out of range
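Without seeing brain.py, a likely cause is that scoring_row['order'] can exceed the length of scoring_row['group'] after repeated runs. A bounds check before indexing would turn the crash into a scored miss; the field names below mirror the traceback, but the data shape and the guard itself are assumptions:

```python
# Reconstructed shape of the failing data (hypothetical values that
# reproduce the crash: order is past the end of group).
scoring_row = {'group': ['weather', 'time'], 'order': 5}

group = scoring_row['group']
order = scoring_row['order']
if order < len(group):
    keyword = group[order]  # the expression brain.py indexes unguarded
else:
    keyword = None          # treat an out-of-range order as "no match"

print(keyword)
```

If the guard fires on every run after the first, the real bug is probably that `order` accumulates state between runs instead of being reset, and that reset is the proper fix.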

pywapi cannot be installed anymore

$ pip install -r requirements.txt --allow-external pywapi --allow-unverified pywapi
DEPRECATION: --allow-external has been deprecated and will be removed in the future. Due to changes in the repository protocol, it no longer has any effect.
DEPRECATION: --allow-unverified has been deprecated and will be removed in the future. Due to changes in the repository protocol, it no longer has any effect.
Collecting beautifulsoup4==4.4.1 (from -r requirements.txt (line 1))
  Using cached beautifulsoup4-4.4.1-py3-none-any.whl
Collecting imgurpython==1.1.6 (from -r requirements.txt (line 2))
  Using cached imgurpython-1.1.6.tar.gz
Collecting oauthlib==1.0.3 (from -r requirements.txt (line 3))
  Using cached oauthlib-1.0.3.tar.gz
Collecting pyaml==15.8.2 (from -r requirements.txt (line 4))
  Using cached pyaml-15.8.2.tar.gz
Collecting PyAudio==0.2.9 (from -r requirements.txt (line 5))
  Using cached PyAudio-0.2.9.tar.gz
Collecting pywapi==0.3.8 (from -r requirements.txt (line 6))
  Could not find a version that satisfies the requirement pywapi==0.3.8 (from -r requirements.txt (line 6)) (from versions: )
No matching distribution found for pywapi==0.3.8 (from -r requirements.txt (line 6))
$ pip --version
pip 8.0.2 from /usr/lib/python3.5/site-packages (python 3.5)
