
Pose Graph Backend

This repository contains an optimization-based pose-graph backend for multi-agent sensor fusion. It is used within a wider framework for multi-robot path planning, available here.

If you use this code in your academic work, please cite (PDF):

@inproceedings{bartolomei2020multi,
  title={Multi-robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments},
  author={Bartolomei, Luca and Karrer, Marco and Chli, Margarita},
  booktitle={2020 {IEEE/RSJ} International Conference on Intelligent Robots and Systems ({IROS})},
  year={2020}
}

This project is released under a GPLv3 license.

Overview

This sensor-fusion framework is built as part of a collaborative SLAM client-server system consisting of the following software packages:

  • Client side (onboard the UAV):

    • vins_client_server - link
    • image_undistort - link
    • pcl_fusion - link
  • Server side (backend PC):

    • pose_graph_backend - this repository
    • Multi-agent Voxblox - link
    • comm_msgs - link

All of the above packages must be installed to test the full system. The framework was developed and verified with ROS Melodic on Ubuntu 18.04 LTS.

Installation

This software has been tested under Ubuntu 18.04 LTS and ROS Melodic. Here we assume that ROS has been properly installed.

First, install these dependencies:

$ sudo apt-get install python-catkin-tools python-wstool libeigen3-dev

Then, create a catkin workspace:

$ mkdir -p catkin_ws/src
$ cd catkin_ws

Set up the workspace:

$ catkin init
$ catkin config --extend /opt/ros/melodic
$ catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
$ catkin config --merge-devel

Clone the dependencies:

$ cd ~/catkin_ws/src
$ wstool init
$ wstool merge pose_graph_backend/dependencies_ssh.rosinstall # To clone over HTTPS, use pose_graph_backend/dependencies_https.rosinstall instead
$ wstool up -j8

Finally, build the workspace:

$ cd ~/catkin_ws
$ catkin build

Once the build is complete, source the workspace:

$ source devel/setup.bash
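
Optionally, make every new terminal source the workspace automatically by adding it to your shell start-up file (assuming the workspace lives at ~/catkin_ws):

$ echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc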

Topics

The following are the ROS topics the pose-graph backend node subscribes to, where X is the ID of the X-th agent:

  • /keyframeX - custom keyframe message described in comm_msgs, sent by the pose-graph node in vins_client_server.
  • /odometryX - current odometry estimate, sent by vins_client_server.
  • /gpsX - GPS measurement taken directly from the sensor.
  • /fused_pclX - custom pointcloud message described in comm_msgs, sent by the pcl_fusion node.
  • /fake_gps2_X - Leica laser measurement data used in the EuRoC dataset to simulate GPS messages.

The following topics are published by the pose-graph backend:

  • /pcl_transformX - pointcloud and its world frame pose, described in comm_msgs.
  • /pathX - RVIZ trajectory estimate.
  • /camera_pose_visualX - RVIZ camera visualization.
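
For a quick sanity check that an agent is connected, the standard ROS command-line tools can be used. The example below assumes agent 0 and the topic names listed above; the exact names depend on the remappings in your launch files:

$ rostopic hz /keyframe0     # keyframes arriving from vins_client_server
$ rostopic hz /fused_pcl0    # filtered pointclouds arriving from pcl_fusion
$ rostopic hz /path0         # trajectory estimate published by the backend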

Parameters

The main parameters of the pose-graph backend are the following:

  • gps_align_num_corr - number of GPS-odometry correspondences required to start the initial GPS alignment.
  • gps_align_cov_max - maximum covariance allowed for the initial GPS alignment to be accepted. Increase this value if GPS initialization fails (the system requires this initialization for agents with active GPS).
  • rel_pose_corr_min - minimum number of correspondences remaining after the loop-detection relative-pose optimization needed to accept the loop closure.
  • rel_pose_outlier_norm_min - residual norm above which a residual of the loop-detection relative-pose optimization is considered an outlier and removed from the subsequent optimization.
  • local_opt_window_size - size of the window of recent keyframes used for the sliding-window local pose-graph optimization.
  • loop_image_min_matches - minimum number of keypoint matches in an image to continue with a loop-closure candidate.
  • loop_detect_sac_thresh - threshold for classifying a point as an inlier in the 3D-2D P3P RANSAC used in loop detection. A lower value is a stricter condition.
  • loop_detect_sac_max_iter - maximum number of iterations of the 3D-2D P3P RANSAC in loop detection.
  • loop_detect_min_sac_inliers - minimum number of inliers of the 3D-2D P3P RANSAC in loop detection to continue with a loop-closure candidate.
  • loop_detect_min_sac_inv_inliers - minimum number of inliers of the inverse 3D-2D P3P RANSAC in loop detection to continue with a loop-closure candidate.
  • loop_detect_min_pose_inliers - final inlier threshold of the relative-pose optimization in the loop-detection process used to verify a loop closure.
  • loop_detect_reset_time - amount of time after a loop closure during which no new loop closures are searched for that agent.
  • loop_detect_skip_kf - number of keyframes to skip in loop detection (1 processes every keyframe, 2 every other keyframe, etc.).
  • information_XXX - information values for the different residuals in the pose-graph optimization. Higher values indicate more certainty about the measurement.
  • ignore_gps_altitude - typically set to true in a live setup because of large fluctuations in GPS altitude; the odometry altitude is used instead.
  • gps_active_X - whether agent X has an active GPS.
  • gps_referenceX - the reference point for local GPS coordinates.
  • gps_offsetX - the translational offset between the IMU and the GPS antenna onboard the UAV.
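
As an illustration only, a per-agent parameter file might contain entries along the following lines. This is a sketch assembled from the parameter names above with placeholder values; refer to the example configuration files in pose_graph_backend/conf for the actual file structure and meaningful values:

# Hypothetical excerpt of a pose-graph configuration file (placeholder values)
gps_align_num_corr: 20
gps_align_cov_max: 0.5
local_opt_window_size: 10
loop_image_min_matches: 30
loop_detect_skip_kf: 2
ignore_gps_altitude: true
gps_active_0: true
gps_reference0: [47.3769, 8.5417, 400.0]  # reference point for local GPS coordinates (placeholder)
gps_offset0: [0.05, 0.0, 0.10]            # IMU-to-GPS-antenna translation in metres (placeholder)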

General setup

The following describes how to run the system.
A series of configuration files is necessary to run the different components:

  • A VINS-Mono yaml configuration file is required for the vins_client_server package.
  • A stereo yaml configuration file is required for the dense_stereo package from image_undistort, to generate a dense point cloud from a pair of stereo images.
  • An ncamera yaml configuration file is required for the pcl_fusion package.

Examples of these configuration files can be found in the folder pose_graph_backend/conf.
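
The provided examples are a convenient starting point: copy one and adapt the calibration values and topic names to your own sensor set-up. The file names below are placeholders, assuming the repository was cloned into ~/catkin_ws/src:

$ ls ~/catkin_ws/src/pose_graph_backend/conf
$ cp ~/catkin_ws/src/pose_graph_backend/conf/<example_ncamera>.yaml my_ncamera.yaml  # placeholder names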

Usage examples

1. EuRoC Dataset

1.1 Single Agent

The following instructions show how to run a single agent on the EuRoC dataset, simulating the full pipeline (the Leica position measurements are used as a fake GPS).

First, launch the backend software in separate terminals:

$ roslaunch pose_graph_backend pose_graph_node_euroc.launch num_agents:=1
$ roslaunch voxblox_ros euroc_dataset.launch

Wait for the messages [PGB] Sucessfully read the parameters and the vocabulary and [PGB] Set callbacks to be printed in the terminal. This indicates that the pose-graph is ready. Note that the Voxblox node outputs visuals in RVIZ every 2 seconds.

Launch the client software in separate terminals (make sure the agent_id parameters match in the launch files):

$ roslaunch vins_estimator multi_euroc_0.launch
$ roslaunch pcl_fusion pcl_fusion_node_euroc.launch

Finally, play the EuRoC rosbag, specifying the correct path:

$ roslaunch pose_graph_backend single_agent_play.launch path:=/path/to/euroc/bagfile.bag

1.2 Multi Agent

Running the full pipeline with more than one agent on a single PC is not recommended unless the PC is powerful, since it would run visual-inertial odometry, dense stereo point-cloud construction and point-cloud filtering for every agent, in addition to pose-graph optimization, loop detection and mesh reconstruction. A better alternative is to run the client-side software on each EuRoC trajectory individually, record its output in a bag file, and then play those bag files simultaneously while running the backend packages; this more closely reflects the actual load on the server PC.
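
As a sketch, such a bag file could be recorded for a single agent (here agent 0) while the client-side nodes process one EuRoC sequence. The topic names below are the subscribed topics listed above and depend on your launch-file remappings:

$ rosbag record -O MH_01_PreRecordedUAV.bag /keyframe0 /odometry0 /fused_pcl0 /fake_gps2_0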

An example of the client software output for the first three EuRoC sequences can be found in the folder pose_graph_backend/data as prerecorded bag files. To simulate the multi-agent system, follow these instructions.

Launch the backend software in separate terminals:

$ roslaunch pose_graph_backend pose_graph_node_euroc.launch
$ roslaunch voxblox_ros euroc_dataset.launch

Wait for the messages [PGB] Sucessfully read the parameters and the vocabulary and [PGB] Set callbacks to be printed in the terminal. This indicates that the pose-graph is ready.

Navigate to the folder pose_graph_backend/data and play each bag file in separate terminals:

$ rosbag play MH_01_PreRecordedUAV.bag
$ rosbag play MH_02_PreRecordedUAV.bag
$ rosbag play MH_03_PreRecordedUAV.bag

2. General Usage

The following instructions show how to run the Pose Graph for a custom sensor set-up. First, create the necessary configuration files as described above. Then, adapt the launch files to the number of agents and their IDs, and remap the topics to the right names.

To start up the Pose Graph, run the following launch files in separate terminals on the server side:

$ roslaunch pose_graph_backend pose_graph_node.launch num_agents:=NUM_AGENTS
$ roslaunch voxblox_ros voxblox_server_node.launch num_agents:=NUM_AGENTS

Wait for the message [PGB] Sucessfully read the parameters and the vocabulary to be printed in the terminal. This indicates that the pose-graph is ready. Note that the Voxblox node outputs visuals in RVIZ every 2 seconds.

On the robots' side, run the following commands in separate terminals on every robot. The launch files must be edited so that the agent_id parameter reflects each UAV's assigned ID:

$ roslaunch vins_estimator vins_estimator.launch
$ roslaunch pcl_fusion pcl_fusion_node.launch

The pcl_fusion launch file should launch dense_stereo to create a dense point cloud from stereo images. The UAVs should now be able to begin flying, and the agents' trajectories should appear in RVIZ after the GPS frame initialization has been completed in the Pose Graph.

Contributing

Contributions that help to improve the code are welcome. If you want to contribute, please follow the Google C++ coding style and run bash clang-format-all . on your code before any commit.

pose_graph_backend's People

Contributors

lucabartolomei


pose_graph_backend's Issues

Got stuck at "params.voc_ptr = std::make_shared<BRISKVocabulary>(bow_voc_file)" in parameter-reader.cpp

Hi, thank you for your great work!

I'm trying to build a project on top of pose_graph_backend, but somehow I cannot initialize the node, i.e., the topics are never established.

While debugging, I found that the whole system gets stuck at params.voc_ptr = std::make_shared<BRISKVocabulary>(bow_voc_file) in parameter-reader.cpp. I don't know what is happening; could you please help me with this?

By the way, I'm using ROS Noetic on Ubuntu 20.04. Really looking forward to your reply!

connections in comm_msgs::keyframe

Hi @lucaBartolomei !

I'm trying to understand the comm_msgs exchanged between client and server. In comm_msgs::keyframe I found the "connections" field, but I don't know its meaning: why do we need this message, and how does it work on the client side and on the server side?

This really confuses me. I'm really looking forward to your reply, thank you!

ThreadSafeQueue blocking

Hello, I am trying to adapt this repo in my work, but it feels like it is not fully polished. While debugging, I found that ThreadSafeQueue methods such as PushIfBlocking block very often, which feels like a locking issue; I had to change them to non-blocking variants to make it work. LoopDetection::addKeyFrame also blocks and never returns. Am I using this repo in the wrong way, or do you have a further update that has not been pushed yet?

How to run it without GPS?

Hi Luca!

Great work of yours!

I'm trying to run multi-agent Voxblox in a real-world demo, but I want to do it without GPS, i.e., using pure VIO. I would like the agents to gradually merge into one map, as CCM-SLAM or COVINS do, and I believe pose_graph_backend is capable of this because it has a map-merging function.

So Luca, do you have any suggestions on how to modify this? Or is there some way I can fake the GPS using pure visual-inertial odometry?

I'm really looking forward to your reply! Thank you!

How to measure bandwidth?

Hi Luca!

I'm trying to reproduce some experiment results from your paper.

But when I measure the bandwidth of the keyframe topic using rostopic bw, the mean bandwidth is around 100 KB/s, not 24 KB/s as your paper states.

So I'm wondering whether I used the wrong tool to measure the bandwidth. My question is: what tool did you use to measure the bandwidth?

I'm really looking forward to your reply! Thank you!
