
leobot's Introduction

Service Status

  • Docker Image: Docker Hub
  • Code Style Checks: CircleCI
  • Unit Tests: Build Status
  • Install Tests: Build Status
  • Code Coverage: Coverage Badge

LeoBot

LeoBot telepresence robot

License

Linux

Docker

For convenience, it is recommended to use Docker containers. Please follow these steps to run the Docker container on your machine.

  1. Install Desktop OS Ubuntu Trusty or Xenial on your machine or in a virtual machine
  2. Install Docker-CE using these instructions
  3. In order to execute Docker without sudo, run
sudo usermod -aG docker $USER
  4. Logout and login to your machine again :)
  5. If you have an NVidia graphics card, a customized Docker build that utilizes your GPU can be installed. Please follow these extra steps.
  6. For development, the following Docker image will be used (for NVidia Docker, this one).
  7. Use the following command to start an ordinary Docker container
docker run -it --name leobot_dev -p 8080:8080 -p 8090:8090 -p 9090:9090 -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw rosukraine/leobot:latest

For NVidia Docker, please use

nvidia-docker run -it --name leobot_dev -p 8080:8080 -p 8090:8090 -p 9090:9090 -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw rosukraine/leobot-dev-nvidia:latest
  8. A black Terminator UI console window will appear after some time.
  9. You can use its features to split the terminal window into smaller terminals and run several commands in parallel (Ctrl+Shift+E).
  10. If you want to run the real robot, add the user to the dialout group and restart the Docker container
sudo usermod -a -G dialout user

To relaunch the Docker container after you have closed the Terminator window or rebooted the machine, please run

docker start leobot_dev

and for NVidia Docker

nvidia-docker start leobot_dev

After some time the Terminator window will reappear.
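If you need an extra shell inside the running container in addition to the Terminator window, you can attach one with docker exec (the container name is the one given via --name above):

docker exec -it leobot_dev bash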

IDEs

To run PyCharm in the Docker container, please run

pycharm

To launch QtCreator please run

qtcreator

For VSCode type

vscode

URDF and RViz

To debug the URDF, please launch

roslaunch leobot_launch view_urdf.launch

To have a look at the state of the robot in RViz, run

roslaunch leobot_launch rviz.launch

Windows

Docker Desktop

On Windows, it is recommended to use Docker Desktop containers. Please follow these steps to run the Docker container on your machine.

  1. Install Windows 10 on your machine or in a virtual machine
  2. Install Docker Desktop using these instructions
  3. For development, the following Docker image will be used.
  4. Use the following command to start an ordinary Docker container
docker run -d --name leobot_dev -p 8080:8080 -p 8181:8181 -p 8282:8282 -p 8090:8090 -p 9090:9090 rosukraine/leobot-dev-web:latest
  5. The command will spawn the Docker container and exit.

To relaunch the Docker container, please run

docker start leobot_dev

IDEs

In Docker Desktop, only the Cloud9 web IDE is available. Open http://localhost:8181 in your browser.

General

Starting the web server

Once you have installed all project dependencies, you can start the web server with the following command

roslaunch leobot_launch web_server.launch

Additionally, you can specify a custom port for the web server in the Docker container

roslaunch leobot_launch web_server.launch port:=1234

In this case you'll need to re-build the Docker container to publish the specified port to your host machine (see the docker run -p command in the Docker section).
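For example, for the port 1234 used in the command above, you would stop and remove the old container and re-create it with an extra -p mapping (a sketch; the remaining options are the same as in the Docker section):

docker stop leobot_dev && docker rm leobot_dev
docker run -it --name leobot_dev -p 1234:1234 -p 8080:8080 -p 8090:8090 -p 9090:9090 -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw rosukraine/leobot:latest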

If everything goes well, you'll see the message

Web server started at port 8080

After that, the web server will be available on your host Ubuntu OS at http://localhost:8080, as well as from the LAN.

Navigating on known map

Start office simulation

Linux in Terminator

roslaunch leobot_launch simulation.launch

Windows in Cloud9 IDE Terminal

roslaunch leobot_launch gzweb.launch

Please note that simulation URL is http://localhost:8282

If you want to reduce the machine's resource usage and increase simulation speed on a Linux machine, you can run the simulation without a GUI in headless mode, using the following command

roslaunch leobot_launch simulation.launch headless:=true gui:=false

Please note that the Windows-based Docker container already runs Gazebo in headless mode

Start art gallery simulation

Linux in Terminator

roslaunch leobot_launch simulation.launch world_file:=artgallery

Windows in Cloud9 IDE Terminal

roslaunch leobot_launch gzweb.launch world_file:=artgallery

Please note that simulation URL is http://localhost:8282

Launch navigation stack

Linux in Terminator

Please note that in order to launch the second command, split the Terminator window in two using Ctrl+Shift+E. More information on Terminator shortcuts can be found here.

roslaunch leobot_launch navigation.launch

In RViz, which appears after some time, select "2D Nav Goal" and the robot will travel to it, as shown in this video.

Windows in Cloud9 IDE Terminal

Please create another terminal in the Cloud9 IDE. More details can be found here.

roslaunch leobot_launch navigation.launch gui:=False

You can send goal commands manually using the rostopic command.
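For example, a goal can be published directly to the standard move_base_simple/goal topic (a minimal sketch, assuming the default move_base topic names and a map frame; the coordinates are just illustrative):

rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}'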

Building the map (Linux only)

Start simulation

roslaunch leobot_launch simulation.launch

Launch gmapping node

roslaunch leobot_launch gmapping.launch

Drive around the environment to build the map using the keyboard teleoperation keys. Please note that the console window with the gmapping launch file should be active in order to teleoperate the robot using the keys

Save map to file

rosrun map_server map_saver -f <map_file_name>
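For example, with a hypothetical map name office_map:

rosrun map_server map_saver -f office_map

This writes office_map.pgm and office_map.yaml to the current directory; the saved map can later be served by the map_server node.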

Using a USB joystick (Linux only)

To use a USB joystick you need to rebuild the Docker container (see item 7 in the Docker section).

Add the following parameter right after docker run

--device=/dev/input/js0
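Combined with the command from the Docker section, the full invocation would look like this (a sketch; use the NVidia image and nvidia-docker instead if that is your setup):

docker run --device=/dev/input/js0 -it --name leobot_dev -p 8080:8080 -p 8090:8090 -p 9090:9090 -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw rosukraine/leobot:latest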

Notice: in this case you must have the joystick plugged in when you trigger docker run and every time you docker start the corresponding container. Otherwise these commands will fail with an error

docker: Error response from daemon: linux runtime spec devices: error gathering device information while adding custom device "/dev/input/js0": no such file or directory.

To avoid this, you can create one container that supports the USB joystick and another default one (using different --name parameters).
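For example (leobot_dev_joy is just a hypothetical name for the second container; note that only one of the two containers can run at a time, since they publish the same host ports):

docker run --device=/dev/input/js0 -it --name leobot_dev_joy -p 8080:8080 -p 8090:8090 -p 9090:9090 -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw rosukraine/leobot:latest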

You can test that the joystick is available from the container with the command

cat /dev/input/js0

It should print strange symbols in the console when you press joystick buttons. Press Ctrl+C to exit.

When the joystick is available in the container, launch simulation.launch and then start teleoperation with joystick support using the command

roslaunch leobot_control teleop.launch joy_enabled:=true

If your USB joystick is connected to a device other than /dev/input/js0, you can configure it by adding a parameter such as

joy_device:=/dev/input/js1
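So the full command for a joystick on the second device node would be

roslaunch leobot_control teleop.launch joy_enabled:=true joy_device:=/dev/input/js1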

To operate the robot, you need to hold the so-called deadman button while pressing the arrow buttons or moving the thumbstick. Most often it is one of the main buttons on the right side of the joystick; experiment to find the right one. If you don't hold the deadman button, the robot won't move.

The joystick will keep working even if the console window is minimized, unlike keyboard teleoperation.

leobot's People

Contributors

abench, andriypt, lyubomyrd, maxzhaloba, systemdiagnosticss, vyosypenko, yarynad


leobot's Issues

Update the credits on web page

Currently, our web page contains the following credit notice

LEOBOT©2018 Created by SYSTEMDI 
Developed by Yaryna Demkiv

I suggest updating it to include everyone who works on the project :) Here's how I see it

LeoBot©2018 Created by ROS Ukraine Community

where ROS Ukraine Community links to one of our sites, such as

  1. Page with list of LeoBot contributors https://github.com/ros-ukraine/leobot/graphs/contributors
  2. ROS Ukraine GitHub page https://github.com/ros-ukraine
  3. Facebook page https://www.facebook.com/groups/279366215828155

Bug in README instruction

The "Start art gallery simulation" command doesn't work
roslaunch leobot_launch simulation.launch world_file:=artgallery.world

Error:
[Err] [SystemPaths.cc:410] File or path does not exist["/home/user/workspace/leobot/base/src/leobot/leobot_gazebo/worlds/artgallery.world.world"]
[Err] [Server.cc:380] Could not open file[/home/user/workspace/leobot/base/src/leobot/leobot_gazebo/worlds/artgallery.world.world]
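Judging by the doubled .world.world extension in the error, the launch file appends the extension itself, so the world name should be passed without it, as the README section above does:

roslaunch leobot_launch simulation.launch world_file:=artgallery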

Investigate and implement the VR APIs

This task consists of the following steps:

  1. Investigate the VR APIs and estimate if they would work for us better than the current approach
  2. If that appears to be the case, implement the most suitable API

Technologies to consider:

  1. WebVR API. Resources:
  2. WebXR API, which is designed as a replacement and successor of WebVR. Extended reality combines virtual reality, augmented reality and mixed reality (promo).

Disadvantages of current implementation:

  1. Need to touch the screen and confirm the fullscreen mode.
  2. Fullscreen mode in Android works only when vr.html is launched from a desktop icon as an HTML application.
  3. No fullscreen mode in iOS. I tested the current code from the B#99_Add_landscape_and_lockorientation branch and could not initialise the VR page under iOS in Chrome, Safari or Firefox. The fullscreen popup appeared on the screen, but when I confirmed it, all browsers remained in windowed mode.

Create HTML and CSS for design

Create HTML and CSS for the design which was created in #4.
Please place it in the folder ./docs/design/management.
You can use the design link from the #4 comments.

image_transport ideas. h264?

Hello everyone.
I wonder what the plans are for delivering a consumer live stream from the robot.

Since it is a museum robot project, I was thinking the picture quality should be better, at least HD, for the consumer camera. Teleop can be done via SD or lower quality, of course.

  • Should the HD camera use ROS transport, or just push the stream to an RTMP server directly so that the main web page only needs to know the live stream URL?
  • Since it is a learning platform, we may utilize ROS transport and send the live stream via messages. We could use this stream later for NN segmentation and put overlays over paintings, etc.

Any plans for the type of camera and implementation of delivering this stream to the user?

Robot oscillates in simulation

Steps to reproduce:

  1. Run simulation.launch
  2. Run teleop.launch from leobot_control
  3. Operate the robot and turn left or right
  4. The robot starts swaying left and right at a frequency of about 1 Hz

Stage #1 Requirements

Create simulation of the robot with the following features:

  • Ability to connect remotely and get the video stream from the camera
  • Teleoperate the robot remotely
  • Automatic obstacle avoidance

Product implementation milestones

  1. Run RViz and watch camera stream plus teleoperate robot from keyboard
  2. Run web browser and watch camera stream and teleoperate using buttons on the page
  3. Run web browser and watch camera stream and send goals to robot using some easy input (navigation with obstacle avoidance)
  4. Use stereo camera and project image into VR headset and use keyboard or joystick to teleoperate robot.

Implement head teleoperation from web using keyboard

I suggest adding bindings for the following keys

Action              Standard Key  Laptop Keyboard Key
Turn head to left   Delete        `
Centre the head     End           1
Turn head to right  Page Down     2

Feel free to post your suggestions regarding usability, as well as on other questions.

Related task: #52

Usage of branches

I'm rather new to GitHub so I wanted to ask for advice on branching and merging of project code.

I've performed such steps:

  1. Created a branch F#16_web_server for the task "Create web package #16"
  2. Added some preliminary code to this branch (empty package)
  3. Created a pull request for this code
  4. @AndriyPt merged it into the default branch and closed the pull request. When I open the pull request page on GitHub, it suggests deleting this branch, but so far it remains in the repository.

Next, I'm going to add more code to implement this issue.

Could you please help me with the following questions:

  1. Should I delete the branch F#16_web_server as suggested by GitHub and create a new one with further commits for this task?
  2. Should I create another pull request when I finish this task?

Thanks :)
Max

Create web package

Create a ROS package which will:

  • start a web server which can run nodejs
  • render a page with the text "Hello World"

Prevent screen locking on timeout when web pages are open

This issue should be fixed on all pages: main.html and vr.html

In theory, when there's a playing video on the page, the browser prevents the screen from locking. However, even though we do have video streams on our pages, screen locking still takes place in our case.

Course of action:

  1. First of all, investigate how to prevent the browser from locking the screen using the video streams we have, and why this currently does not happen. This is the recommended solution.
  2. Only if we reasonably cannot use the first solution should we use a third-party library for this purpose.

Create Logo

Create a logo for the robot.
The robot's name is LeoBot, so it should be something with a lion :)

Hide the navigation and address bars (go fullscreen)

The web page should occupy the full screen size in Chrome, Safari and preferably Firefox (mobile versions).

iOS Safari and Firefox show 2 additional elements by default: the address bar at the top and the navigation bar at the bottom of the page.

Integrate Bluetooth Joystick to mobile VR set

The joystick will be used for teleoperation.
The mobile browser should read commands from the joystick and move the robot accordingly.
There should be an ability to calibrate the joystick on the web page.

Create simple look of robot

It should be a wheeled robot.
It will contain a camera, speakers and a microphone.
There should also be a monitor or some other device to visualise the remote person.

Optimise CSS

Optimise the duplicated CSS code in selectors:

#head-control-button-left
#head-control-button-left:active
#head-control-button-left:focus
#head-control-button-center
#head-control-button-center:active
#head-control-button-center:focus
#head-control-button-right
#head-control-button-right:active
#head-control-button-right:focus

Related tasks: #52, #62

Starting a GUI application from docker container

I have a task to render a URDF file using this tutorial: http://wiki.ros.org/urdf/Tutorials/Building%20a%20Visual%20Robot%20Model%20with%20URDF%20from%20Scratch. I faced several issues when trying to accomplish it, so I want to share my solutions with the community.

I've tried to use the ROS docker containers as described here: http://wiki.ros.org/docker/Tutorials/Docker.

Here are the steps I've taken:

  1. Downloaded the ROS docker image
    docker pull ros:kinetic-robot-xenial
  2. Tried to start the container with the regular commands docker run -it ros and connect to it in a new terminal window using docker exec -it <container_name> bash. In this case, terminal commands wrapped onto the same line and overwrote the beginning of the command, so I couldn't actually see the entire command and the terminal ended up rather unusable for me. As I found out, this happens because of this issue: https://unix.stackexchange.com/a/264236

So I decided to install Terminator within the docker container. This requires some additional steps to make it connect to the host X window environment. Here's my solution.

  1. Delete the existing container created from the ros image (if you created any) with docker container rm <container_name>

  2. Create a new container, passing the environment variables and virtual FS (copied from LeoBot):
    docker run -it -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw ros
    Triggering printenv in the docker container should list the variables you passed.
    During this step, while the container is running, you can establish a new connection to it by finding out the container name with docker container list and connecting with docker exec -it <container_name> bash.

  3. When you try to install any program inside the container (e.g. apt-get install terminator), apt will fail:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package terminator

For some reason this container comes with no preconfigured package lists, so you need to fetch them with apt-get -qq update and then run apt-get install terminator.
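That is, inside the container:

apt-get -qq update
apt-get install terminator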

Then, when you try to start Terminator from the container, it will fail with the following message:

root@55d1d562bb95:/# terminator
No protocol specified
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
  warnings.warn(str(e), _gtk.Warning)
You need to run terminator in an X environment. Make sure $DISPLAY is properly set

If you passed the parameters when creating this container using docker run, the remaining bit of configuration is to allow connections to the X window system on your local machine, i.e. run xhost local:root on your host. After this, it should be possible to launch Terminator from the container, even without restarting it.
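To summarize, the minimal working sequence is roughly the following (a sketch, assuming the ros:kinetic-robot-xenial image from step 1):

# On the host: allow local connections to the X server
xhost local:root
# Create the container, forwarding the X11 environment
docker run -it -e DISPLAY -e LOCAL_USER_ID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix:rw ros:kinetic-robot-xenial
# Inside the container: fetch package lists, install and start Terminator
apt-get -qq update && apt-get install -y terminator
terminator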

Other useful commands:

List all docker containers (running and stopped):

docker container list -a

Each time you trigger docker run, it creates a new container, so it's a good idea to delete the container afterwards, using the name or ID from the previous command:

docker container rm <container_name>

More info on ROS docker images:
https://store.docker.com/images/ros
https://hub.docker.com/_/ros/

Refactor Docker images

In order to get effective integration test runs on CI servers and apply the fix for Gazebo 7, the following changes need to be made

  1. Introduce a leobot base Docker image with a fixed Gazebo version (maybe it will be based on the OSRF libgazebo 7 image)
  2. Introduce an IDE image with all source code, IDEs and the Gazebo fixes (inherited from the first one)

The first image will be used for CI servers; the second one will be used for development.

Optimise the HTML for teleoperation block

Currently the "control-panel" and "small-buttons" blocks have almost identical markup. I suggest using the "content" CSS property to put labels into the buttons and merging these HTML blocks into one. Feel free to leave your comments and considerations :)

Stage #2 Requirements

Creation of the physical robot. Extra functionality:

  1. Two-way communication
  2. Automatic finding of the charging station
  3. Web UI
  4. Follow a predefined path with a set of waypoints. There should be a possibility to pause execution, e.g. for an excursion in an art museum along Van Gogh's works.

Builds fail on kinetic-devel

When I run catkin_make install from kinetic-devel, it fails with the following message

CMake Error at leobot/leobot_launch/cmake_install.cmake:51 (file):
  file INSTALL cannot find
  "/home/user/workspace/leobot/base/src/leobot/leobot_launch/launch/display.launch".
Call Stack (most recent call first):
  cmake_install.cmake:124 (include)

The Travis build also fails (tested in PR #64 after merging changes from the main branch).

Feel free to assign this to me if the issue is still relevant.

Management Web Site design

Create a simple design for a web site with a left-hand side menu.
It will contain the camera output in the middle and navigation arrows at the bottom with Forward, Back, Left and Right directions.
