
aws-robomaker-sample-application-persondetection's Issues

Update path to tts.voicer

With aws-robotics/tts-ros2#13, voicer.py was moved from under tts to tts.scripts. So after the next bloom release of tts-ros2, this package needs to be updated, with references to tts.voicer replaced with tts.scripts.voicer.
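A minimal sketch of the eventual change, assuming the references live in this package's robot workspace sources (the search path below is an assumption for illustration, not taken from the repository layout):

# Find and update references once the new tts-ros2 release is out.
grep -rn "tts.voicer" robot_ws/src/person_detection_robot/
grep -rl "tts.voicer" robot_ws/src/person_detection_robot/ | xargs sed -i 's/tts\.voicer/tts.scripts.voicer/g'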

Need better documentation for running on physical turtlebot

Users who are trying to deploy and run the sample applications on physical TurtleBots are having difficulty getting them to run; the PersonDetection app seems to be one of the worst offenders. I have included the troubleshooting steps provided by one user who managed to get the person detection application running on a TurtleBot. We should use this input to develop new documentation on how to get it running.

Troubleshooting RoboMaker sample app: Navigation and Person Recognition

Environment

  • Turtlebot 3 (Waffle Pi) with Raspberry Pi 3 B+, Raspbian OS, and ROS Kinetic full install.
  • Following the instructions from https://docs.aws.amazon.com/robomaker/latest/dg/gs-deploy.html to build and bundle the “Navigation and Person Recognition” sample app in Cloud9. Note: I created a new Development Environment in Cloud9 today and tried to re-download, build, and bundle the sample app from scratch. Unfortunately, the build command colcon build --build-base armhf_build --install-base armhf_install failed with the following messages:
[  8%] Performing build step for 'KVS_SDK_IMPORT'
cmake version 3.5.1 CMake suite maintained and supported by Kitware (kitware.com/cmake).
Checking log4cplus at /ws/armhf_build/kinesis_manager/external/kinesis-video-native-build/downloads/local/lib/liblog4cplus.dylib/.so
log4cplus lib not found. Installing
/usr/bin/xz: (stdin): File format not recognized
/bin/tar: Child returned status 1
/bin/tar: Error is not recoverable: exiting now
kvssdk/CMakeFiles/KVS_SDK_IMPORT.dir/build.make:113: recipe for target 'kvssdk/KVS_SDK_IMPORT-prefix/src/KVS_SDK_IMPORT-stamp/KVS_SDK_IMPORT-build' failed
make[2]: *** [kvssdk/KVS_SDK_IMPORT-prefix/src/KVS_SDK_IMPORT-stamp/KVS_SDK_IMPORT-build] Error 2
CMakeFiles/Makefile2:122: recipe for target 'kvssdk/CMakeFiles/KVS_SDK_IMPORT.dir/all' failed
make[1]: *** [kvssdk/CMakeFiles/KVS_SDK_IMPORT.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Failed   <<< kinesis_manager    [ Exited with code 2 ]

Therefore, I had to give up and use the bundle file created two weeks ago (a possible cleanup-and-retry for this failure is sketched after this list).

  • Download the bundle to local PC, and copy it to Turtlebot.
  • Untar the bundle file to /home/pi/person-detection/.
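For completeness, a possible cleanup-and-retry for the build failure above, assuming the xz/tar errors were caused by a corrupted log4cplus download in the kinesis_manager build cache (the cache path is taken from the log output; this is an untested sketch, not a confirmed fix):

# Remove the cached downloads so the next build fetches the archive again,
# then re-run the same build command from the Cloud9 environment.
rm -rf armhf_build/kinesis_manager/external/kinesis-video-native-build/downloads
colcon build --build-base armhf_build --install-base armhf_install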

Attempt 1: Failed

According to the documentation at https://github.com/aws-robotics/aws-robomaker-sample-application-persondetection, run the following commands:

export BUNDLE_CURRENT_PREFIX=/home/pi/person-detection
source $BUNDLE_CURRENT_PREFIX/setup.sh
roslaunch person_detection_robot deploy_person_detection.launch

This consistently fails to bring up the raspicam_node, and displays the following error messages:

[raspicam_node-5] process has died [pid 24637, exit code -11, cmd /opt/ros/kinetic/lib/raspicam_node/raspicam_node __name:=raspicam_node __log:=/home/pi/.ros/log/31a66b7a-4576-11e9-9605-b827eb6ec255/raspicam_node-5.log].
log file: /home/pi/.ros/log/31a66b7a-4576-11e9-9605-b827eb6ec255/raspicam_node-5*.log
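Exit code -11 is a segmentation fault, so before digging into the bundle it is worth confirming that the camera itself works outside ROS. A hedged diagnostic sketch using standard Raspberry Pi tools (not part of the sample app):

# Check that the camera interface is enabled and detected (expect "supported=1 detected=1").
vcgencmd get_camera
# Capture a still image to confirm the camera works outside ROS.
raspistill -o /tmp/test.jpg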

Attempt 2: Failed

  1. Open a new Terminal.
  2. Go to the person_detection_robot package folder:
cd /home/pi/person-detection/opt/install/person_detection_robot/share/person_detection_robot/

a. Open file ./config/h264_encoder_config.yaml and update line 6 to subscription_topic: "/raspicam_node/image".
b. Open file ./launch/deploy_person_detection.launch and comment out the entire raspicam_node node (a scripted version of steps 2a and 2b is sketched after this list).
3. Go to the raspicam_node package’s launch folder:

cd /home/pi/catkin_ws/src/raspicam_node/launch

a. Create a new file camerav3.launch, with the following content:

<launch>
  <arg name="enable_raw" default="true"/>
  <arg name="enable_imv" default="false"/>
  <arg name="camera_id" default="0"/>
  <arg name="camera_frame_id" default="raspicam"/>
  <arg name="camera_name" default="camerav2_1280x960"/>

  <node type="raspicam_node" pkg="raspicam_node" name="raspicam_node" output="screen">
    <param name="private_topics" value="true"/>

    <param name="camera_frame_id" value="$(arg camera_frame_id)"/>
    <param name="enable_raw" value="$(arg enable_raw)"/>
    <param name="enable_imv" value="$(arg enable_imv)"/>
    <param name="camera_id" value="$(arg camera_id)"/>

    <param name="camera_info_url" value="package://raspicam_node/camera_info/camerav2_1280x960.yaml"/>
    <param name="camera_name" value="$(arg camera_name)"/>
    <param name="width" value="640"/>
    <param name="height" value="480"/>

    <param name="framerate" value="30"/>
  </node>
</launch>
  4. Run roslaunch raspicam_node camerav3.launch. This should bring up the camera correctly. To confirm, try rqt or rostopic echo /raspicam_node/image.
  5. Open a new Terminal.
  6. Run the following commands:
export ROS_PACKAGE_PATH=/home/pi/catkin_ws/src:/opt/ros/kinetic/share:/home/pi/person-detection/opt
export LAUNCH_ID=1554865002672
export BUNDLE_CURRENT_PREFIX=/home/pi/person-detection
source $BUNDLE_CURRENT_PREFIX/setup.sh
  7. Run roslaunch person_detection_robot deploy_person_detection.launch. Most of the nodes should start correctly (except soundplay_node).
  8. However, NO video streaming happens in Kinesis Video Streams. To confirm, run rostopic echo /video/encoded; you will see an error message saying that kinesis_video_msgs isn't available or wasn't built properly.
  9. Press Ctrl+C to terminate the process.
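A scripted version of the file edits in steps 2a and 2b, as referenced above. The sed pattern is a sketch that assumes subscription_topic appears exactly once in the config file; the launch-file edit is left manual:

# Step 2a: point the H.264 encoder at the raspicam_node image topic.
cd /home/pi/person-detection/opt/install/person_detection_robot/share/person_detection_robot/
sed -i 's|^\( *subscription_topic:\).*|\1 "/raspicam_node/image"|' config/h264_encoder_config.yaml
# Step 2b: comment out the raspicam_node <node> element in
# launch/deploy_person_detection.launch by hand; its extent varies, so no one-liner is given here.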

Attempt 3: Failed

  1. Continue from the previous attempt, and stay in the current Terminal window.
  2. Run sudo apt-get install ros-kinetic-kinesis-video-msgs to install the missing library.
  3. Re-run roslaunch person_detection_robot deploy_person_detection.launch, followed by rostopic echo /video/encoded to check video encoding. This time you will see messages being posted to this topic.
  4. However, there is still NO video streaming in Kinesis Video Streams.
  5. Go back to the terminal window and examine the console output; you will see an error about Curl returning error code 28, followed by streaming errors every few seconds:
[ERROR] [1554947202.196626556]: [CurlHttpClient] Curl returned error code 28
[ERROR] [1554947202.196984158]: [AWSClient] HTTP response code: 1995602712
Exception name: 
Error message: Unable to connect to endpoint
0 response headers:
...
[ERROR] [1554947211.574767159]: [streamErrorReportHandler] Reporting stream error. Errored timecode: 15549472006740000 Status: 1375731806
[ERROR] [1554947216.594701267]: [streamErrorReportHandler] Reporting stream error. Errored timecode: 15549472006740000 Status: 1375731806
[ERROR] [1554947221.610790484]: [streamErrorReportHandler] Reporting stream error. Errored timecode: 15549472006740000 Status: 1375731806
...
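Curl error code 28 is a timeout, so a reasonable first check is whether the Raspberry Pi can reach the regional Kinesis endpoints at all, and whether its clock is sane. A hedged sketch; the region below is an assumption, substitute the one configured for the app:

# Basic reachability checks against the Kinesis Video Streams and Kinesis Data Streams endpoints.
curl -sv https://kinesisvideo.us-west-2.amazonaws.com/ -o /dev/null
curl -sv https://kinesis.us-west-2.amazonaws.com/ -o /dev/null
# A badly skewed system clock also makes signed AWS requests fail.
date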

Attempt 4: Succeeded (sort of)

  1. Continue from the previous attempt, and stay in the current Terminal window.
  2. Since the application interacts with two different streams, 1/ the Kinesis video stream and 2/ the Kinesis data stream used by Rekognition, I needed to determine which one was causing the error.
  3. Go to the person_detection_robot package’s config folder:
cd /home/pi/person-detection/opt/install/person_detection_robot/share/person_detection_robot/config

a. Open file ./kvs_config.yaml and update line 11 to topic_type: 1.
4. Re-run roslaunch person_detection_robot deploy_person_detection.launch.
5. Kinesis video streaming worked correctly!

This means that the Kinesis data stream used by Rekognition is the one having trouble, but that’s a problem for another day!
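As a follow-up for that other day, a hedged sketch of CLI checks that could help isolate the data-stream side (the stream name is a placeholder, not taken from the sample app):

# List Rekognition stream processors and inspect the Kinesis data stream they read from.
aws rekognition list-stream-processors
aws kinesis describe-stream --stream-name <your-data-stream-name>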

Launch files are incorrect for physical TB application

I suspect the changes at #4 were only tested on RoboMaker and not on a physical robot.

Specifically, moving
<rosparam if="$(eval image_topic != '')" param="/$(arg param_prefix)/subscription_topic" subst_value="true">$(arg image_topic)</rosparam>
to the global scope caused it to override the subscription topic of the video streamer node, when it should have affected only the encoder node.
And I think that's not the only issue.
Opened #30 to try to address some of the problems found.
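A quick way to confirm the scoping problem on a running robot is to inspect what each node actually received. A sketch, assuming the encoder and streamer nodes use their node names as param_prefix (the exact names are an assumption):

# With deploy_person_detection.launch running, compare the subscription topics:
rosparam get /h264_video_encoder/subscription_topic
rosparam get /kinesis_video_streamer/subscription_topic
# If both show the image topic, the global-scope <rosparam> overrode the streamer's setting.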

simulation_common has incorrect dependency definition

In: https://github.com/aws-robotics/aws-robomaker-sample-application-persondetection/blob/master/simulation_ws/src/aws_robomaker_simulation_common/package.xml

The package declares dependencies on moveit_ros and dwa_local_planner, but these do not appear to be actually used.

On the other hand, the package uses move_base_msgs and geometry_msgs but doesn't declare them as dependencies: https://github.com/aws-robotics/aws-robomaker-sample-application-persondetection/blob/master/simulation_ws/src/aws_robomaker_simulation_common/nodes/route_manager#L24

Please audit and fix the dependency definitions.
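A hedged audit sketch, run from the aws_robomaker_simulation_common package directory, to check the mismatches described above:

# Expect no hits outside package.xml if moveit_ros and dwa_local_planner are truly unused.
grep -rn "moveit_ros\|dwa_local_planner" . | grep -v package.xml
# Expect hits once the missing dependencies are declared.
grep -n "move_base_msgs\|geometry_msgs" package.xml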
