
tjdashboard [Beta]

process representation for TJBot

This app displays a dashboard that shows the processes that occur on TJBot behind the scenes. It also provides UI buttons that can be used to control the bot, e.g. change the LED color, make the bot look around and describe what it sees, wave its arm, etc.

How It Works

  • Starts up a web server (Express) and serves an interface that can be accessed via a browser on the Pi's localhost, port 8068, or over the network at http://<pi.ip.address>:8068 (see the sketch after this list).
  • TJBot events are streamed to the interface, which displays them. Examples of events include hearing a new message, receiving a response from a call to the conversation service, etc.
  • Note: This recipe uses the experimental TJBot library to encapsulate simple functions for the bot.
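A minimal sketch of this pattern, assuming Express with socket.io for pushing events to the browser (the actual dashboard.js may be structured differently):

// sketch: serve the dashboard UI and stream TJBot events to connected browsers
var express = require('express');
var http = require('http');
var socketio = require('socket.io');

var app = express();
var server = http.createServer(app);
var io = socketio(server);

app.use(express.static('public'));   // 'public' is an assumed folder name for the UI files

// push a TJBot event (e.g. "heard a new message") to every connected browser
function broadcastEvent(type, detail) {
  io.emit('tjbot_event', { type: type, detail: detail, time: Date.now() });
}

broadcastEvent('hear', 'hello watson');

server.listen(8068, function () {
  console.log('Dashboard available at http://<pi.ip.address>:8068');
});

On the browser side, the page would subscribe with socket.on('tjbot_event', ...) and render each event as it arrives.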

Hardware

Follow the full set of instructions on Instructables to get your TJBot ready to run the code.

Note: You must have a servo motor connected to your Pi.

Wiring Your Servo Motor

Your servo motor has three wires: Power, Ground, and Data in. In this recipe I use the Tower Pro servo motor; its wires are Red (Power), Brown (Ground), and Yellow (Data in). A software PWM library is used to control the servo motor, and I wire my setup as follows.

  • Red (+5v, Pin 2)
  • Brown (Ground, Pin 14)
  • Yellow (Data in, Pin 26, GPIO7)

Note: In the code, you can always change the pins used.
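For illustration, here is a minimal servo sketch using the pigpio Node.js module (an assumption; the recipe itself may use a different software PWM library). It drives the data pin on GPIO7, which is physical pin 26 as wired above:

// servo sketch: sweep the arm between two positions (run with sudo)
var Gpio = require('pigpio').Gpio;

var servo = new Gpio(7, { mode: Gpio.OUTPUT });  // GPIO7 = physical pin 26

var up = 2300;     // pulse widths in microseconds
var down = 700;
var raised = false;

setInterval(function () {
  servo.servoWrite(raised ? down : up);
  raised = !raised;
}, 1000);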

Build

Get the sample code (download or clone) and go to the application folder.

git clone git@github.com:victordibia/tjwave.git
cd tjwave

Update your Raspberry Pi. Please see the guide here to set up the network and update your Node.js installation: http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/step2/Set-up-your-Pi/

sudo apt-get update
sudo apt-get upgrade
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs

Note: The Raspberry Pi comes with a very old version of Node.js and npm (0.10), hence the need to upgrade to a recent version.

Install ALSA tools (required for recording audio on the Raspberry Pi), since some of the sample code integrates voice commands.

sudo apt-get install alsa-base alsa-utils
sudo apt-get install libasound2-dev
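Once ALSA is set up, audio can be captured from Node.js. A minimal sketch using the mic npm module (an assumption; the TJBot library may capture audio differently):

// record a few seconds of raw PCM audio from the microphone
var mic = require('mic');
var fs = require('fs');

var micInstance = mic({ rate: '44100', channels: '2', debug: false });
var micStream = micInstance.getAudioStream();

// write the raw audio to disk; in the real recipe the stream would instead be
// sent to the Watson Speech to Text service
micStream.pipe(fs.createWriteStream('test-recording.raw'));

micInstance.start();
setTimeout(function () { micInstance.stop(); }, 5000);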

Vision Recognition with OpenCV

You can run simple machine vision algorithms locally on your Raspberry Pi. This is done using the Node.js OpenCV wrapper. But first, you have to install OpenCV on your Pi.

sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install libopencv-dev
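With OpenCV installed, the opencv npm wrapper can run simple detection locally. A minimal face-detection sketch (assuming the node-opencv module; the recipe's own vision code may differ, and capture.jpg is a placeholder path for a frame grabbed from the camera):

// detect faces in an image file
var cv = require('opencv');

cv.readImage('capture.jpg', function (err, im) {
  if (err) throw err;

  im.detectObject(cv.FACE_CASCADE, {}, function (err, faces) {
    if (err) throw err;
    console.log('Found ' + faces.length + ' face(s)');
  });
});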

Set Up Watson Conversation

The app uses Watson Conversation to understand the intent behind text (a minimal usage sketch follows the list below).

  • You will need to set up your Watson Conversation flow and create a workspace. More on that here.
  • You can import the sample conversation flow in the folder (workspace.json) to get started. This creates intents for actions like "hello", "see", "wave", "introduce", etc.
  • Finally, this sample uses both audio and the LED. These two hardware devices are known to conflict; a workaround is to disable onboard audio and use USB audio on your Pi.
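A minimal sketch of sending recognized text to the Watson Conversation service, assuming the watson-developer-cloud Node.js SDK of that era (the credential and workspace values are placeholders):

// classify the intent behind a piece of text
var ConversationV1 = require('watson-developer-cloud/conversation/v1');

var conversation = new ConversationV1({
  username: '<service-username>',   // placeholder Bluemix credentials
  password: '<service-password>',
  version_date: ConversationV1.VERSION_DATE_2016_09_20
});

conversation.message({
  workspace_id: '<workspace-id>',   // the workspace created from workspace.json
  input: { text: 'can you raise your arm' }
}, function (err, response) {
  if (err) return console.error(err);
  // the top-ranked intent (e.g. "wave") drives which action the bot performs
  console.log(response.intents);
});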

Install Dependencies

npm install

If you run into errors installing dependencies, try:

sudo rm -rf node_modules
sudo npm install --unsafe-perm

Set the audio output to your audio jack. For more audio channels, check the config guide.

amixer cset numid=3 1
# This sets the audio output to option 1, which is the Pi's audio jack. Option 0 = Auto, Option 2 = HDMI. An alternative is to run sudo raspi-config and change the audio to the 3.5mm audio jack.

Create config.js

# On your local machine, rename the config.default.js file to config.js.
cp config.default.js config.js

# Open config.js using your favorite text editor (e.g. nano) and update it with your Bluemix credentials for the Watson services you use.
nano config.js

Note: do not add your credentials to the config.default.js file.
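For illustration, config.js typically exports credential objects of roughly this shape (the field names here are illustrative; use the ones already present in config.default.js):

// config.js -- illustrative shape only, with placeholder values
exports.credentials = {};

// Watson Conversation credentials
exports.credentials.conversation = {
  workspace_id: '<workspace-id>',
  username: '<username>',
  password: '<password>'
};

// Watson Speech to Text credentials
exports.credentials.speech_to_text = {
  username: '<username>',
  password: '<password>'
};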

Test Your Servo

Before running the main code (voice + wave + dance, etc.), you may test your LED setup and your servo motor to make sure the connections are correct and the library is properly installed. When you run the test module, it should turn your LED different colors and wave your robot arm at intervals.

sudo node wavetest.js

If the LED does not light up, try moving the power from 3.3 to 5 volts. If neither the 3.3v nor the 5v pin works, you will need a 1N4001 diode. The diode is inserted between the power pin of the LED (the shorter of the two middle pins) and the 5v pin on the Raspberry Pi.

If your robot arm does not respond, confirm that you have connected it correctly. See the pin diagram here for more information on Raspberry Pi pins.

Running

Start the application. (Note: you need sudo access)

sudo node dashboard.js     

Then you should be able to speak to the microphone. Sample utterances are:

can you raise your arm?
can you introduce yourself?
What is your name?
can you dance?
what do you see?
Can you hear me?

You can add more utterances by creating additional intents in your Watson Conversation dialog.
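Conceptually, each recognized intent maps to one bot action. A hedged sketch of that dispatch (the handler names below are illustrative, not the actual dashboard.js code):

// map Watson Conversation intents to bot actions (names are illustrative)
var actions = {
  wave: function () { /* move the servo arm */ },
  see: function () { /* capture an image and run vision recognition */ },
  dance: function () { /* play a .wav file from the sounds folder */ },
  introduce: function () { /* speak a canned introduction */ }
};

function handleIntent(intents) {
  if (!intents || intents.length === 0) return;
  var top = intents[0].intent;
  if (actions[top]) {
    actions[top]();
  } else {
    console.log('No handler for intent: ' + top);
  }
}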

For the dance command, your robot processes .wav files in the sounds folder. Please ensure you have a .wav file there and set it as your sound file.
