
Occupancy-SLAM

C++ implementation of *Occupancy-SLAM: An Efficient and Robust Algorithm for Simultaneously Optimizing Robot Poses and Occupancy Map* by Yingyu Wang, Liang Zhao, and Shoudong Huang, and *Occupancy-SLAM: Simultaneously Optimizing Robot Poses and Continuous Occupancy Map* by Liang Zhao, Yingyu Wang, and Shoudong Huang, published in Robotics: Science and Systems (RSS), 2022.

This paper considers the SLAM problem using 2D laser scans (and odometry). We propose an optimization-based SLAM approach that optimizes the robot trajectory and the occupancy map simultaneously. The key novelty is that the robot poses and the 2D occupancy map are optimized together, which differs significantly from existing occupancy mapping strategies, where the robot poses must be obtained before the map can be estimated. In this formulation, the map is represented as a continuous occupancy map in which each 2D point in the environment has a corresponding evidence value, and the state variables comprise all the robot poses and the occupancy values at the discrete grid cell nodes of the occupancy map. Based on this formulation, we introduce a multi-resolution optimization framework that uses occupancy maps of different resolutions at different stages. A variation of the Gauss-Newton method is proposed to solve the optimization problem at each stage, yielding the optimized occupancy map and robot trajectory. The proposed algorithm is very efficient and converges easily when initialized from either odometry inputs or scan matching, even when only a limited number of keyframe scans are used. Furthermore, we propose an occupancy submap joining method so that large-scale problems can be handled more effectively by integrating it with the proposed Occupancy-SLAM. Evaluations on simulated and practical 2D laser datasets demonstrate that the proposed approach robustly obtains more accurate robot trajectories and occupancy maps than state-of-the-art techniques, with comparable computational time.
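The paper's solver is a variation of Gauss-Newton over the joint pose-map state, which is not reproduced here. As a reminder of the underlying method only, below is a minimal, generic Gauss-Newton iteration on a toy 1-D least-squares problem (fitting `x` so that the residual `x**2 - 2` vanishes); all names in it are illustrative and not part of this repository.

```python
# Minimal generic Gauss-Newton sketch on a toy 1-D problem.
# Standard update: x <- x - (J^T J)^-1 J^T r, here with scalar J and r.
# This only illustrates the textbook method, NOT the paper's variant,
# which jointly optimizes robot poses and occupancy grid values.
def gauss_newton(x0, residual, jacobian, iters=10):
    x = x0
    for _ in range(iters):
        r = residual(x)          # current residual r(x)
        j = jacobian(x)          # current Jacobian dr/dx
        x = x - (j * r) / (j * j)  # scalar form of (J^T J)^-1 J^T r
    return x

# Solve x^2 - 2 = 0 in a least-squares sense, i.e. x -> sqrt(2).
root = gauss_newton(1.0, lambda x: x * x - 2.0, lambda x: 2.0 * x)
print(root)  # converges to approximately 1.41421356
```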

Dependencies

  1. Eigen >= 3.4.0
  2. CMake >= 3.16
  3. OpenMP
  4. OpenCV
  5. libigl
  6. Intel MKL (Optional)

Eigen 3.4.0

Download Eigen 3.4.0

cd eigen-3.4.0
mkdir build
cd build
cmake ..
sudo make install

CMake

sudo apt-get install cmake

OpenMP

sudo apt-get install libomp-dev

OpenCV

git clone https://github.com/opencv/opencv.git
cd opencv
mkdir build
cd build
cmake ..
make
sudo make install

libigl

Download libigl

Showcase Video


Quickstart

Download datasets

Download the datasets and place the files under the project's main folder.

Modify the path of libigl

Specify the path to libigl in CMakeLists.txt.

Modify the path of MKL or disable it

Specify the path to MKL in CMakeLists.txt, or remove all MKL-related content from CMakeLists.txt if you do not use MKL.

Compile

git clone https://github.com/WANGYINGYU/Occupancy-SLAM.git
cd Occupancy-SLAM
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j12

Run

./Occupancy_SLAM

and then, when prompted, input the paths to the range and pose files:

../Data/Museum/b0/Range.txt ../Data/Museum/b0/Pose.txt

Guidance (For Your Own Datasets)

Data Format

Preprocess the required data into the following formats and store them as .txt files.

Laser Scan

An $m \times n$ matrix, where $m$ is the number of scans and $n$ is the number of beams per scan. Each column holds the ranges for one beam angle, with columns ordered from the smallest angle to the largest.
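As a hypothetical illustration of this layout, the snippet below writes a tiny $2 \times 5$ range matrix to `Range.txt`, one scan per row with beams ordered by increasing angle. The whitespace delimiter is an assumption; match whatever delimiter the project's loader actually expects.

```python
# Hypothetical example: write an m x n range matrix (m scans, n beams)
# to Range.txt, one scan per row, beams ordered by increasing angle.
# Whitespace as the delimiter is an assumption, not confirmed by the repo.
scans = [
    [1.20, 1.25, 1.31, 1.40, 1.52],  # scan 0: ranges in metres
    [1.18, 1.24, 1.30, 1.42, 1.55],  # scan 1
]
with open("Range.txt", "w") as f:
    for row in scans:
        f.write(" ".join(f"{r:.4f}" for r in row) + "\n")
```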

Pose

An $m \times 3$ matrix, where $m$ is the number of poses. The three elements of each row correspond to x, y, and heading angle.

Odometry (Optional)

An $m \times 3$ matrix with the same format as the poses, except that this input must be given as increments (relative motion between consecutive poses) rather than absolute poses.
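If your odometry is recorded as absolute poses, it must first be converted to increments. The sketch below does this by expressing each step's displacement in the previous pose's frame; whether the code expects body-frame increments (as assumed here) or plain world-frame differences is an assumption to verify against the loader.

```python
import math

# Hypothetical conversion of absolute poses (x, y, theta) into per-step
# increments. Each increment is the previous-to-current motion expressed
# in the previous pose's frame (a common convention); this is an
# assumption and should be checked against the project's data loader.
def pose_increments(poses):
    odom = []
    for (x0, y0, t0), (x1, y1, t1) in zip(poses, poses[1:]):
        dx, dy = x1 - x0, y1 - y0
        c, s = math.cos(t0), math.sin(t0)
        # rotate the world-frame displacement into the previous frame
        odom.append((c * dx + s * dy, -s * dx + c * dy, t1 - t0))
    return odom

poses = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, math.pi / 2)]
print(pose_increments(poses))
```

The resulting rows can then be written out in the same whitespace-separated layout as the pose file.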

Parameters Setting

Set parameters in config.txt. Refer to the comments in MyStruct.cpp for the effect of each parameter.

Citation

If you find our work useful to your research, please cite the following paper:

  
@INPROCEEDINGS{Zhao-RSS-22, 
    AUTHOR    = {Liang Zhao AND Yingyu Wang AND Shoudong Huang}, 
    TITLE     = {{Occupancy-SLAM: Simultaneously Optimizing Robot Poses and Continuous Occupancy Map}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2022}, 
    ADDRESS   = {New York City, NY, USA}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2022.XVIII.003} 
} 

License

Our code is released under the MIT License.

Our Results of Occupancy Grid Map

Comparison maps (initialization vs. ours; images not reproduced here) cover the following datasets: ACES, Intel Lab, C5, Museum b0, Freiburg Building 079, Garage, MIT, Simu 1, Simu 2, and Simu 3.

