
turtlebot3_slam_3d's Introduction

Turtlebot3 3D-SLAM using RTAB-Map with Jetson TX2

tb3_slam_3d

Sample repository for creating a three-dimensional map of the environment in real time and navigating through it. Object detection with YOLO is also performed, showing how a neural network can take advantage of the image database stored by RTAB-Map, e.g. to localize detected objects in the map.

system_chart

Screencast:

Click for better quality.

turtlebot3_slam_3d_mapping

Objects found by the robot are mapped as below:

semantic_map

Quick Start:

mkdir -p catkin_ws/src
cd catkin_ws
wstool init src
wstool merge -t src https://raw.githubusercontent.com/ROBOTIS-JAPAN-GIT/turtlebot3_slam_3d/master/.rosinstall
wstool update -t src
rosdep install -y -r --from-paths src --ignore-src
catkin build turtlebot3_slam_3d

Launching Demo Program

Download one of the bag files:

# Small version (300MB)
https://drive.google.com/a/robotis.com/uc?export=download&confirm=SLgY&id=1sfMhQV5ipJm0ghrvQ8HpOw2tTr179aiP

# Full version (1.2GB)
https://drive.google.com/a/robotis.com/uc?export=download&confirm=BkOw&id=1BUQdcuxshEC-W6O9Jkel6sUP9-FxkLca

Set up the environment with:

cd catkin_ws
source devel/setup.bash
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$(pwd)/src

Then, launch the sample mapping demo with the following. It takes a while to load the nodes and to play the bag file.

roslaunch turtlebot3_slam_3d demo_bag.launch bag_file:=/absolute/path/to/bag_file.bag

The default bag_file path is turtlebot3_slam_3d/rtab_bag.bag.

By default, the map is saved to ~/.ros/rtabmap.db.

Generate Semantic Map

tb3_semantic_map

The semantic map can be generated using the code in the scripts directory. We perform this procedure offline to save computation time during execution and to simplify the clustering step.
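The idea behind the clustering can be sketched as follows: each YOLO detection contributes a 3D point, and nearby points are merged into one object instance using DBSCAN-style density clustering (hence detections_dbscan.db). Below is a minimal pure-Python sketch of the technique; the parameters and names are illustrative, not the repository's actual implementation:

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Toy DBSCAN: group 3D detection points into clusters; label -1 = noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # tentatively noise; may become a border point
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core point: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:  # j is itself a core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels
```

Points that never gather min_pts neighbors stay labeled -1 (noise), which naturally discards spurious one-off detections.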

To generate a detection database from a bag file:

  1. Start the bag file:
roslaunch turtlebot3_slam_3d demo_bag.launch bag_file:=/absolute/path/to/bag_file.bag
  2. Wait for the bag to load and for the robot to be displayed properly.
  3. Run the collector script. This will collect detection results and save them to files in the current directory (detections_raw.db and detections_dbscan.db).
rosrun turtlebot3_slam_3d detection_collector.py
  4. Wait until the rosbag playback has finished and all detections have been collected.

To display the detected objects in the map, start the bag file with publish_detection:=true.

$ roslaunch turtlebot3_slam_3d demo_bag.launch publish_detection:=true detection_db:=/absolute/path/to/detections_dbscan.db

Run with Turtlebot3

Hardware Components:

Main Components
  • Waffle Pi ×1
  • Jetson TX2 ×1
  • ZED Mini or RealSense D435 ×1
Supplementary Items
  • 19V Battery for TX2 ×1
  • USB Hub (micro-B to Type A) ×1
Camera Bracket
  • ZED Mini Camera Bracket ×1
  • M2x4mm ×2
  • M3x8mm ×2
Chassis Parts
  • Waffle Plate ×3
  • Plate Support M3x45mm ×11
  • M3x8mm ×22

Software Components:

ZED Mini Software Setup

Download ZED SDK for Jetson TX2 from https://www.stereolabs.com/developers/. Then, build the ROS wrapper with:

cd catkin_ws
catkin build zed_wrapper

Intel RealSense D435 Software Setup

Run the installation script from jetsonhacks to install the RealSense SDK on Jetson TX2:

cd catkin_ws/src/buildLibrealsense2TX
./installLibrealsense.sh 

Then, build the ROS wrapper with:

cd catkin_ws
catkin build realsense2_camera

Launch Files:

First, set up the environment:

cd catkin_ws
source devel/setup.bash
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$(pwd)/src

To create your own map, run:

roslaunch turtlebot3_slam_3d turtlebot3_slam_3d.launch

To navigate through your map use:

roslaunch turtlebot3_slam_3d turtlebot3_zed_bringup.launch
roslaunch turtlebot3_slam_3d rtabmap.launch localization:=true

With Intel RealSense D435, use the following instead:

roslaunch turtlebot3_slam_3d turtlebot3_slam_3d.launch use_zed:=false
roslaunch turtlebot3_slam_3d turtlebot3_d435_bringup.launch
roslaunch turtlebot3_slam_3d rtabmap.launch localization:=true use_zed:=false

turtlebot3_slam_3d's People

Contributors

affonso-gui

turtlebot3_slam_3d's Issues

Test video

Could you post the test video on YouTube? I want to see how your algorithm performs.

I have a question about getting the yolo_output TF value with the RealSense D435.

Hello.
Over the past few months, I have been using your code very successfully on a Gazebo simulation base.
Thank you.
Recently I bought a RealSense D435, but there is a problem.
I have been debugging for a few weeks and cannot solve it.
I would really appreciate it if you answered my question.

My goal is to get the yolo_output TF value with the RealSense D435.

I run the commands below to get the yolo_output TF value with the RealSense D435:

roslaunch turtlebot3_slam_3d turtlebot3_d435_bringup.launch
roslaunch turtlebot3_slam_3d darknet.launch generate_point_cloud:=true use_zed:=false
However, I cannot get the yolo_output TF value. The TF tree is below.

I thought that yolo_output should come after camera_color_optical_frame, but I do not see it:
base_link -> camera_link -> camera_aligned_depth_to_color_frame -> camera_color_optical_frame -> nothing
https://github.com/hyunoklee/test/blob/master/frames_real.png?raw=true

Is there anything additional I need to modify to get the yolo_output TF value with the D435?

Should I add the following command to turtlebot3_d435_bringup.launch?

Also, in the .rosinstall file, should I use the without_tf_publish version, like below?

/////// /.rosinstall ////////////////////////////////////
uri: https://github.com/Affonso-Gui/realsense.git
version: without_tf_publish

Unlike with the ZED Mini, why should the without_tf_publish version be used for the D435? I would really appreciate an explanation.

Actually, I have already tried all of the things mentioned above, but I still cannot get the yolo_output TF value with the RealSense D435. Please help.

Bring up zed camera

Hi @Affonso-Gui. I successfully ran this code. Now I want to adapt it to my own environment, where I am using only a ZED 2 camera with YOLOv4 object detection. As a first step, I am trying to switch the input to the live ZED camera feed. FYI, I have changed several things:

  1. changed the image topic to /zed_node/left/image_rect_color
  2. changed the depth topic to /zed_node/depth/depth_registered
  3. changed the camera info topic to /zed_node/left/camera_info
  4. changed the depth camera info topic to /zed_node/depth/camera_info
  5. changed the odometry topic to /zed_node/odom
  6. changed the point cloud topic to /zed_node/point_cloud/fused_cloud_registered
  7. commented out turtlebot3_robot_launch and publish_model.launch in turtlebot3_zed_bringup
  8. commented out the visual_odom line in turtlebot3_slam_3d.launch, since it is already set to false
  9. added the depthimage_to_laserscan nodelet in rtabmap.launch
  10. merged the YoloObjectDetector.cpp file with the one from https://github.com/AkellaSummerResearch/darknet_ros.git, because I already tested it and was able to run live ZED input for YOLO object detection in ROS
  11. added these lines in darknet_config.yaml

subscribers:
  camera_reading:
    topic: /zed_node/left/image_rect_color
    queue_size: 1

image_view:
  enable_opencv: true
  wait_key_delay: 1
  enable_console_output: true
  use_darknet: true

  12. added these lines in yolov3.yaml

image_view:
  enable_opencv: true
  wait_key_delay: 1
  enable_console_output: true
  use_darknet: true

When I run roslaunch turtlebot3_slam_3d turtlebot3_slam_3d.launch, I get the warning below, and the YOLO V3 window does not pop up.

/rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). If topics are not published at the same rate, you could increase "queue_size" parameter (current=600).
/rtabmap/rtabmap subscribed to (approx sync):
/zed_node/odom
/zed_node/left/image_rect_color
/zed_node/depth/depth_registered
/zed_node/left/camera_info
/rtabmap/scan

Also, I checked that all the changed topics are working except /zed_node/point_cloud/fused_cloud_registered.

I have been stuck on this for about a week. I hope you can help. Thank you.

Which node publishes the darknet_ros/label_image topic?

hi ~
I am looking for a 3D object detection solution to run on turtlebot3.
So I am very happy to find ROBOTIS-JAPAN-GIT/turtlebot3_slam_3d.

However, there is an issue when executing turtlebot3_slam_3d.launch.
I would very much appreciate your help.

No package publishes the darknet_ros/label_image topic. The darknet_ros package publishes only the three topics below:
  • object_detector ([std_msgs::Int8])
  • bounding_boxes ([darknet_ros_msgs::BoundingBoxes])
  • detection_image ([sensor_msgs::Image])

So I remapped darknet_ros/detection_image to darknet_ros/label_image, like below.

  <!-- COORDINATES -->
  <node pkg="nodelet" type="nodelet" name="label_mask" args="load jsk_pcl_utils/LabelToClusterPointIndices $(arg MANAGER)">
    <!--<remap from="~input" to="darknet_ros/label_image"/>-->
    <remap from="~input" to="darknet_ros/detection_image"/>
    <remap from="~output" to="darknet_ros/cluster_points"/>
  </node>

Then an error occurs in the convert function of label_to_cluster_point_indices_nodelet.cpp.

// error log

terminate called after throwing an instance of 'cv_bridge::Exception'
  what():  [bgr8] is a color format but [32SC1] is not so they must have the same OpenCV type, CV_8UC3, CV16UC1 ....

// error location in the source code

  void LabelToClusterPointIndices::convert(
    const sensor_msgs::Image::ConstPtr& label_msg)
  {
    vital_checker_->poke();
    cv_bridge::CvImagePtr label_img_ptr = cv_bridge::toCvCopy(
     label_msg, sensor_msgs::image_encodings::TYPE_32SC1);

I think no package publishes the darknet_ros/label_image topic.
It would be very helpful if you could let me know where the darknet_ros/label_image topic is published.
Thank you.
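For context on the error above: LabelToClusterPointIndices expects a 32SC1 label image, in which each pixel stores an integer cluster ID, whereas detection_image is a bgr8 visualization image; that encoding mismatch is exactly what the cv_bridge exception reports. A hypothetical NumPy sketch of what such a label image looks like (not code from this repository):

```python
import numpy as np

def boxes_to_label_mask(height, width, boxes):
    """Build a 32-bit integer label image: 0 = background,
    and each bounding box paints its own cluster ID.

    boxes: iterable of (xmin, ymin, xmax, ymax) pixel bounds, inclusive.
    """
    mask = np.zeros((height, width), dtype=np.int32)  # matches 32SC1
    for label_id, (xmin, ymin, xmax, ymax) in enumerate(boxes, start=1):
        mask[ymin:ymax + 1, xmin:xmax + 1] = label_id
    return mask
```

So a node that converts darknet_ros bounding boxes into such a mask (or a proper label-image topic) would be needed; simply remapping detection_image cannot work, because its encoding is wrong.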

I can’t publish the 3D coordinate of the detected objects using intel cameras

I'm trying to use an Intel RealSense depth camera (D435) and a tracking camera (T265) in this project to detect objects and localize them in real time, along with RTAB-Map mapping.
I made my own launch file to run the cameras and the detection with darknet_ros, then added the part that publishes the TF of the detected objects, which I took from your launch file, but it does not seem to work for me.

Screenshot from 2023-03-10 17-30-27

Screenshot from 2023-03-10 16-01-50

I tried to check the topic /cluster_decomposer/centroid_pose_array

But what I get is:

subscribed to [/cluster_decomposer/centroid_pose_array]
no new messages
no new messages
no new messages

For reference, I run this on my laptop with:
CUDA 11.2
Ubuntu 20.04
ROS Noetic

Screenshot from 2023-03-10 16-10-35

grid map segmentation

Hi,
Thanks for this great package. I would like to know if it is possible to segment the grid map into classes based on the detected object annotations.

Thanks
Alex

detection_publisher drops all the time when the localisation flag is on

Hi,
detection_publisher dies every time the localisation flag is on. This is the error I get:

[detection_collector-13] process has died [pid 24949, exit code 1, cmd /home/sscrovers/catkin_ws/src/slam_3d/scripts/detection_publisher.py __name:=detection_collector __log:=/home/sscrovers/.ros/log/903a794a-968b-11e9-9dec-7c5cf8826c11/detection_collector-13.log].
log file: /home/sscrovers/.ros/log/903a794a-968b-11e9-9dec-7c5cf8826c11/detection_collector-13*.log

Thanks,

Object Annotated grid map generation

Hi,
In the repo, there is an image of a grid map annotated with the names of the detected objects. Is this also generated by the code?
When I view rtabmap.db in the database viewer, there are no annotations present.
Looking forward to a reply
Thanks

Error concerning the xacro file

Hello,

I'm getting this error when I run:
roslaunch turtlebot3_slam_3d turtlebot3_slam_3d.launch

when processing file: /home/nvidia/deconbot/src/turtlebot3_slam_3d/urdf/waffle_deep.urdf.xacro
while processing /home/nvidia/deconbot/src/turtlebot3_slam_3d/launch/turtlebot3_zed_bringup.launch:
while processing /home/nvidia/deconbot/src/turtlebot3_slam_3d/launch/publish_model.launch:
Invalid tag: Cannot load command parameter [robot_description]: command [/opt/ros/kinetic/lib/xacro/xacro --inorder '/home/nvidia/deconbot/src/turtlebot3_slam_3d/urdf/waffle_deep.urdf.xacro'] returned with code [2].

Param xml is
The traceback for the exception was written to the log file

I'm using the ZED camera and a Jetson TX2 as well. Additionally, I'm using catkin_make to build everything (the only difference between our setups; not sure if this is the problem).

Any help would be much appreciated! Thanks

Can Raspberry Pi3 replace Jetson TX2?

I am using an Intel D435, a Raspberry Pi, and a remote PC.

I know RTAB-Map works in the following way on the Raspberry Pi 3:

image

http://wiki.ros.org/rtabmap_ros/Tutorials/RemoteMapping

http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot

But I think the Raspberry Pi 3 cannot compute the YOLO algorithm in these source files.

In this case, what parts of the source need to be modified so that the YOLO algorithm can be computed on the remote PC?

In other words, is there any problem with compressing the image published by the Intel D435, importing it on the remote PC, and running the Detection Image and Coordinates parts of the figure below?

image

No tf generated and No objects detected by detection collector

Hello,

Firstly, thank you for open sourcing such a great implementation.

I am facing a problem while using the repository. I am providing the odometry and RTAB-Map data from another package and using this setup for obstacle detection. The issues I am facing are as follows:

  1. Objects are detected by the YOLO algorithm and I can see the data in the bounding box topic, but the TF is not always generated.
  2. The centroid pose array always comes out as 0 for all objects.
  3. Because of the above, the detection collector does not print any poses or values. I wanted to confirm whether the code generates TF only for specific objects or for all objects.

I have added the launch file snippets to the issue
darknet.txt
turtlebot3_slam_3d.txt

Would be glad if you could help with this.

Thank You

Transform error: Failed to lookup transformation from map to zed_left_camera_frame

I have a TX2 and a ZED camera and have set up the environment, but when I run turtlebot3_zed_bringup.launch or turtlebot3_slam_3d.launch, ROS does not receive the /map. Looking at rostopic, I find the camera and map topics, but RViz cannot display the map data and point cloud. What should I do?
[ERROR] [1547872278.454724]: Error opening serial: [Errno 2] could not open port /dev/ttyACM0: [Errno 2] No such file or directory: '/dev/ttyACM0'

ValueError: Input contains NaN, infinity or a value too large for dtype('float64').

Hi,
I am running the script on my own dataset, collected on a mobile robot with a Kinect sensor. YOLO is working fine and RTAB-Map is also running OK. However, I am getting NaN values during detection; this kills the node and I cannot collect all the positions. I am not sure where this error comes from. Can you help me with this? Thanks.

detection_collector.py 

Searching for objects...
Found a chair at [5.076215744018555, 0.1553346812725067, 1.0857899188995361]
Found a chair at [5.0635666847229, 0.27497056126594543, 1.1331884860992432]
Found a chair at [4.256170272827148, 0.3564226031303406, 0.823308527469635]
Found a chair at [nan, nan, nan]
Found a chair at [4.907439708709717, 0.2746533155441284, 0.9894275665283203]
Found a chair at [4.988054275512695, 0.2638717293739319, 1.059159755706787]
Found a chair at [4.907907009124756, 0.3024069666862488, 0.980076789855957]
Found a chair at [nan, nan, nan]
Found a chair at [4.8220624923706055, 0.2997586727142334, 0.9472278356552124]
Found a chair at [4.847901821136475, 0.22644220292568207, 0.9880084991455078]
Found a chair at [4.278569221496582, 0.40826913714408875, 0.7901732325553894]
Found a chair at [4.832341194152832, 0.28574058413505554, 0.9498862028121948]
Found a chair at [5.10916445246115e+18, -3.1987569355256644e+23, 1.5332093507875635e+17]
Found a chair at [4.783604145050049, 0.2373742163181305, 0.904579758644104]
Found a chair at [4.783695220947266, 0.28647559881210327, 0.9150972962379456]
Found a chair at [nan, nan, nan]
Found a chair at [nan, nan, nan]
^CWriting to detections_raw.db...
Writing to detections_dbscan.db...
Exception ValueError: ValueError("Input contains NaN, infinity or a value too large for dtype('float64').",) in <bound method DetectionCollector.__del__ of <__main__.DetectionCollector object at 0x7f71c2a0c190>> ignored
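A possible workaround sketch (my assumption, not code from this repository): filter out non-finite and implausibly large centroids before they are stored or clustered, so a single bad depth reading cannot crash the DBSCAN step.

```python
import math

def valid_points(points, limit=1000.0):
    """Drop detections whose coordinates are NaN/inf or absurdly large.

    `limit` (meters) is an illustrative bound on plausible map coordinates.
    """
    return [p for p in points
            if all(math.isfinite(c) and abs(c) < limit for c in p)]

detections = [
    [5.076, 0.155, 1.086],
    [float("nan"), float("nan"), float("nan")],
    [5.109e18, -3.199e23, 1.533e17],  # finite but clearly bogus
    [4.832, 0.286, 0.950],
]
print(valid_points(detections))  # keeps only the two plausible chairs
```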

Start up procedure

I built the TurtleBot3 with the ZED Mini camera. I have the following networked on ROS: my PC, the TB3 Pi, and the TX2.
My question is: what is the procedure to start creating my own map?
Currently I am doing the following:
my PC: run -> roslaunch turtlebot3_slam_3d turtlebot3_slam_3d.launch
I get the following:

[ERROR] [1594135484.764127]: Error opening serial: [Errno 2] could not open port /dev/ttyACM0: [Errno 2] No such file or directory: '/dev/ttyACM0'
Waiting for image.
Waiting for image.
[ERROR] [1594135487.769527]: Error opening serial: [Errno 2] could not open port /dev/ttyACM0: [Errno 2] No such file or directory: '/dev/ttyACM0'
[ WARN] [1594135488.665722321]: /rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). If topics are not published at the same rate, you could increase "queue_size" parameter (current=100).
/rtabmap/rtabmap subscribed to (approx sync):
   /odom,
   /stereo_camera/left/image_rect_color,
   /stereo_camera/depth/depth_registered,
   /stereo_camera/left/camera_info,
   /scan
Waiting for image.

Could someone please let me know whether I have to start up the TurtleBot first or run a command on the TX2 first?

thank you for your help

low fps, rtabmap parameters

Hello!
I have a Jetson TX2 and a D435 camera and tried to run RTAB-Map from your repo, but I get a very low mapCloud frame rate, approximately one refresh per two seconds. What is your FPS? I don't have wheels yet, so I am publishing zeros to the odom topic, which means the camera appears not to move. Could that be the problem?
Here are my launch files: launch files

How to find an object's location?

Hello, I need guidance. Thank you in advance.
How can I find the 3D coordinates of a detected object? Is there any way to also segment that part of the point cloud from the global map of the environment?
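For reference on the first question: the 3D coordinate of a detection comes from back-projecting the bounding-box center pixel (u, v) with its depth d through the pinhole model, x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d, and then transforming from the camera frame into the map frame via TF. A minimal sketch of that math; the intrinsic values below are made-up examples, and real ones come from the camera_info topic:

```python
def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel with known depth into the camera frame (meters)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics for illustration only.
FX = FY = 525.0
CX, CY = 319.5, 239.5
print(pixel_to_camera_point(400.0, 300.0, 2.0, FX, FY, CX, CY))
```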
