
orb-slam2_with_semantic_label's Introduction

There were too many large files in the .git folder, so the code has been moved to https://github.com/qixuxiang/orb-slam2_with_semantic_labelling

orb-slam2_with_semantic_label

Authors: Xuxiang Qi ([email protected]), Shaowu Yang ([email protected]), Yuejin Yan ([email protected])

Current version: 1.0.0

  • Note: This repository is mainly built upon ORB_SLAM2 and YOLO. Many thanks for their great work.

0. Introduction

orb-slam2_with_semantic_label is a visual SLAM system based on ORB_SLAM2 [1-2]. ORB-SLAM2 is an excellent visual SLAM method that has been widely applied in robotics. However, it cannot provide semantic information for environment mapping. In this work, we present a method to build a 3D dense semantic map that combines 2D image labels from YOLOv3 [3] with 3D geometric information.

image

1. Related Publications

Deep Learning Based Semantic Labelling of 3D Point Cloud in Visual SLAM

2. Prerequisites

2.1 Requirements

  • Ubuntu 14.04/Ubuntu 16.04/Ubuntu 18.04

  • ORB-SLAM2

  • CUDA

  • GCC >= 5.0

  • cmake

  • OpenCV

  • PCL 1.7 or PCL 1.8 (may not work with PCL 1.9)

  • libTorch 1.4

    PS: Ubuntu 18.04 + CUDA 10.1 + OpenCV 3.4 + Eigen 3.2.10 + PCL 1.8 has been tested successfully.

2.2 Installation

Refer to the corresponding original repositories (ORB_SLAM2 and YOLO) for installation tutorials.

2.3 Build

git clone https://github.com/qixuxiang/orb-slam2_with_semantic_label.git

sh build.sh

3. Run the code

  1. Download yolov3.weights, yolov3.cfg and coco.names from darknet and put them in the bin folder. These files can also be found in YOLO V3. Then create a directory named img inside the bin folder, i.e. execute sudo mkdir img there. You can use libtorch-yolov3 to replace libYOLOv3SE; see details at https://blog.csdn.net/TM431700/article/details/105889614.

  2. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it to the data folder.

  3. Associate RGB images and depth images using the python script associate.py. We already provide associations for some of the sequences in Examples/RGB-D/associations/. You can generate your own associations file by executing:

python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
  4. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. You can then run the project:
cd bin
./rgbd_tum ../Vocabulary/ORBvoc.txt ../Examples/RGB-D/TUM2.yaml ../data/rgbd-data ../data/rgbd-data/associations.txt

image

Update

  1. Update 2020-07-05: fixed a segmentation fault, made the system run faster, and switched to libtorch. Thanks to vayneli!

Reference

[1] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.

[2] Mur-Artal R, Tardos J D. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras[J]. arXiv preprint arXiv:1610.06475, 2016.

[3] Redmon J, Farhadi A. YOLOv3: An Incremental Improvement[J]. arXiv preprint arXiv:1804.02767, 2018.

License

Our system is released under a GPLv3 license.

If you want to use code for commercial purposes, please contact the authors.

Other issue

  • We have not tested the code with a ROS bridge/node. The system relies on an extremely fast, tight coupling between mapping and tracking on the GPU, which we do not believe ROS supports natively in terms of message passing.

  • Feel free to submit an issue if you have problems, and please include your software and system details, such as Ubuntu 14/16, OpenCV 2/3, CUDA 9.0, GCC 5.4, etc.

  • We provide a video here.


orb-slam2_with_semantic_label's Issues

munmap_chunk(): invalid pointer

Hello, when I run the author's program, the viewer window flashes briefly and then the program stops with the error: munmap_chunk(): invalid pointer Aborted (core dumped). Have you ever encountered this?

Floating point exception (core dumped)

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 0.25s

Camera Parameters:

  • fx: 517.306
  • fy: 516.469
  • cx: 318.643
  • cy: 255.314
  • k1: 0.262383
  • k2: -0.953104
  • k3: 1.16331
  • p1: -0.005358
  • p2: 0.002628
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 3.09294


Start processing sequence ...
Images in the sequence: 792

New map created with 834 points
GLib-GIO-Message: 15:48:18.827: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.
receive a keyframe, id = 1
receive a keyframe, id = 2
receive a keyframe, id = 3
receive a keyframe, id = 4
receive a keyframe, id = 5
receive a keyframe, id = 6
receive a keyframe, id = 7
receive a keyframe, id = 8
receive a keyframe, id = 9
receive a keyframe, id = 10
receive a keyframe, id = 11
receive a keyframe, id = 12
receive a keyframe, id = 13
receive a keyframe, id = 14
receive a keyframe, id = 15
receive a keyframe, id = 16
receive a keyframe, id = 17
receive a keyframe, id = 18
receive a keyframe, id = 19
receive a keyframe, id = 20
receive a keyframe, id = 21
receive a keyframe, id = 22
receive a keyframe, id = 23
Floating point exception (core dumped)

What should I do about this floating point exception...

An update on my setup:
GPUs with the 7.5 compute-capability architecture must use CUDA 10. Most of the CUDA-related segmentation faults are probably caused by a mismatch between the CUDA version and the compute-capability variables. I use CUDA 10, and the YOLOv3 side now compiles without problems.
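The compute-capability mismatch described above is usually fixed in darknet's Makefile. As a sketch (the variable names follow darknet's stock Makefile convention; this fork's build files may differ), a compute-capability-7.5 GPU under CUDA 10 needs matching -gencode entries:

```makefile
# In Thirdparty/darknet's Makefile: select the gencode entries that match
# your GPU's compute capability (7.5 here); a mismatch can cause kernels
# to fail or crash at runtime even though the build succeeds.
GPU=1
ARCH= -gencode arch=compute_75,code=sm_75 \
      -gencode arch=compute_75,code=compute_75
```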

error: segmentation fault (core dumped)

./rgbd_tum ../Vocabulary/ORBvoc.txt ../Examples/RGB-D/TUM1.yaml ../data/rgbd-data ../data/rgbd-data/associate.txt

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 6.45s

Camera Parameters:

  • fx: 517.306
  • fy: 516.469
  • cx: 318.643
  • cy: 255.314
  • k1: 0.262383
  • k2: -0.953104
  • k3: 1.16331
  • p1: -0.005358
  • p2: 0.002628
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 3.09294


Start processing sequence ...
Images in the sequence: 792

New map created with 835 points
Segmentation fault (core dumped)

A question about the Poisson reconstruction result

poisson_reconstruction(globalMap);
After uncommenting this Poisson-reconstruction line I obtained the reconstructed .ply file and visualized it in Python with the open3d library, but it still shows a set of discrete points. Is there any way to reconstruct a surface from the point cloud?

Build error

My environment is Ubuntu 16.04, ROS Kinetic, CUDA 9.0. After running sh build.sh, I get the following errors:
make[2]: *** No rule to make target '../Thirdparty/darknet/build/libYOLOv3SE.so', needed by '../lib/libORB_SLAM2.so'. Stop.
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/ORB_SLAM2.dir/all' failed
make[1]: *** [CMakeFiles/ORB_SLAM2.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Converting vocabulary to binary
build.sh: 46: build.sh: ./tools/bin_vocabulary: not found
How can I solve this?

How are object detection and point-cloud segmentation fused?

image
After point-cloud segmentation and object detection have each finished, should the commented-out part below be uncommented?
final_process() projects the labels inside the 2D detection boxes onto the point cloud; where in the code is this fused with the point-cloud segmentation result?

error with segmentation.cc

@qixuxiang Hi, I get a compile error in segmentation.cc.

CMakeFiles/ORB_SLAM2.dir/build.make:542: recipe for target 'CMakeFiles/ORB_SLAM2.dir/src/segmentation.cc.o' failed
make[2]: *** [CMakeFiles/ORB_SLAM2.dir/src/segmentation.cc.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/ORB_SLAM2.dir/all' failed
make[1]: *** [CMakeFiles/ORB_SLAM2.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

The error occurs here:

/home/jin/orb-slam2_with_semantic_label/src/segmentation.cc:129:22: error: ‘isnan’ was not declared in this scope
       if (isnan(theta) | isnan(phi) | isnan(rho)) continue;

How can I solve this? Many thanks!

@Moonkisscy

fatal error: cudnn.h: No such file or directory

Try solving this one yourself; once it is fixed, the project will run.

Welcome to SLAM!

Originally posted by @qixuxiang in #9 (comment)

Which package should we use?

Is it orb-slam2_with_semantic_label or orb-slam2_with_semantic_labeling? I don't quite understand what moving to orb-slam2_with_semantic_labeling means, so which package should I use when setting up?

Why does the window flash briefly and then crash?

zc@zc-6688:~/orb-slam2_with_semantic_label-master/bin$ ./rgbd_tum ../Vocabulary/ORBvoc.txt ../Examples/RGB-D/TUM2.yaml ../data/rgbd_dataset_freiburg2_360_kidnap ../data/rgbd_dataset_freiburg2_360_kidnap/associations.txt

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 6.38s

Camera Parameters:

  • fx: 520.909
  • fy: 521.007
  • cx: 325.141
  • cy: 249.702
  • k1: 0.231222
  • k2: -0.784899
  • k3: 0.917205
  • p1: -0.003257
  • p2: -0.000105
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 3.07156


Start processing sequence ...
Images in the sequence: 1413

New map created with 781 points
Segmentation fault (core dumped)

segment result and semantic map

Great job! I ran your project successfully with CUDA 8.0 + OpenCV 3.4 + PCL 1.8 and got the point cloud with semantic labels, but how can I get the segmentation result and semantic map as shown in Figure 6 of your paper?
I am a beginner in SLAM and hope for your response! @qixuxiang

run build.sh failed

CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/ORB_SLAM2.dir/all' failed
make[1]: *** [CMakeFiles/ORB_SLAM2.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Converting vocabulary to binary
BoW load/save benchmark
./tools/bin_vocabulary: symbol lookup error: ./tools/bin_vocabulary: undefined symbol: _ZN5DBoW24FORB10fromStringERN2cv3MatERKSs

environment:
ubuntu16.04
cuda10.1
opencv2.4.11
pcl1.8.0
CMakeError.log
CMakeOutput.log

libcurand.so.8.0: cannot open shared object file: No such file or directory

hello~
I compiled the code, and when it ran it failed as described in the title.
I installed CUDA 9.0, and I see cuda>=6.5 in your readme, but I still hit this problem:

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 5.92s
libcurand.so.8.0: cannot open shared object file: No such file or directory
libYOLOv3SE.so not found. or can't load dependency dlls

My computer runs Ubuntu 16.04 with CUDA 9.0 installed; does the environment have to be CUDA 8.0?

segmentation.cc

Screenshot from 2021-03-30 20-19-12
Screenshot from 2021-03-30 20-19-33
In segmentation.cc, line 211 constructs "Eigen::MatrixXf normals_mat(num_planes, num_super_voxels);", where num_super_voxels is not equal to 3. But at line 268 each row is filled with "normals_mat.row(count_idx) << p_coeffs(0), p_coeffs(1), p_coeffs(2);". At runtime this raises an error because of normals_mat.
Screenshot from 2021-03-30 20-33-51

So I changed line 211 to "Eigen::MatrixXf normals_mat(num_planes, 3);" and it works.
I wonder which normals_mat is correct?

The code builds successfully but crashes

The executable rgbd_tum was generated, but while running on the dataset, as soon as the viewer window appears the program crashes immediately with "Floating point exception (core dumped)".

Error about ORB_SLAM2 testing

After I built ORB_SLAM2, I wanted to test it, so I downloaded "rgbd_dataset_freiburg1_xyz" from "http://vision.in.tum.de/data/datasets/rgbd-dataset/download". Then I used the command

./Examples/Monocular/mono_tumVocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml Data/rgbd_dataset_freiburg1_xyz

But the system told me there was an error

bash: ./Examples/Monocular/mono_tumVocabulary/ORBvoc.txt: There is no file or directory

What should I do?
Need Help.

Can this pytorch version run without CUDA?

For example, my old computer has no CUDA, but I really want to test this fantastic network for my SLAM project (to handle the .pcd files). What should I do, or how should I change the run.py file?
Sincerely, please~

OpenCV Error

Hi, thanks for the work.
I compiled and gave a test.
The code seems to work, but it fails with a core dump after several keyframes.
Have you met with this during the development?
Thanks.

HW:
Ubuntu 16.06, Cuda-9.0, Opencv-3.4.0

-------
Start processing sequence ...
Images in the sequence: 573

New map created with 945 points
receive a keyframe, id = 1
receive a keyframe, id = 2
receive a keyframe, id = 3
receive a keyframe, id = 4
receive a keyframe, id = 5
receive a keyframe, id = 6
receive a keyframe, id = 7
receive a keyframe, id = 8
receive a keyframe, id = 9
receive a keyframe, id = 10
receive a keyframe, id = 11
receive a keyframe, id = 12
receive a keyframe, id = 13
receive a keyframe, id = 14
receive a keyframe, id = 15
receive a keyframe, id = 16
receive a keyframe, id = 17
receive a keyframe, id = 18
receive a keyframe, id = 19
receive a keyframe, id = 20
receive a keyframe, id = 21
receive a keyframe, id = 22
receive a keyframe, id = 23
./rgbd_tum: 
OpenCV Error: Assertion failed (key_ != -1 && "Can't fetch data from terminated TLS container.") in getData, file /home/USERNAME/Downloads/opencv-3.4.0/modules/core/src/system.cpp, line 1532
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/USERNAME/Downloads/opencv-3.4.0/modules/core/src/system.cpp:1532: error: (-215) key_ != -1 && "Can't fetch data from terminated TLS container." in function getData

Aborted (core dumped)

fail to build the program

[100%] Linking CXX executable ../bin/rgbd_tum
../lib/libORB_SLAM2.so: undefined reference to 'pcl::SupervoxelClustering<pcl::PointXYZRGBA>::SupervoxelClustering(float, float)'
../lib/libORB_SLAM2.so: undefined reference to 'pcl::SupervoxelClustering<pcl::PointXYZRGBA>::setUseSingleCameraTransform(bool)'

segment result

I have reproduced your code, great work! But I can't get your result: as shown in the video, I can't get the segmentation picture; in other words, my result is not good. Also, the dataset you use is small and mine is bigger; how do you deal with that?
Thank you. My email is [email protected].

I think there is a small problem in the code of pointcloudmapping.cc

At lines 633-637:
sor.setInputCloud(input_cloud_ptr);
sor.setLeafSize (0.005f, 0.005f, 0.005f);
sor.filter(*cloud_filtered);
std::cerr << "Number of points after filtered " << cloud_filtered->size() << std::endl;
seg.setPointCloud(input_cloud_ptr);
Why, after filtering input_cloud_ptr, does the subsequent segmentation stage still use input_cloud_ptr instead of the filtered cloud_filtered?
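If the downsampled cloud is indeed what segmentation should consume, the fix would be a one-token change; a sketch under that assumption (seg and the cloud variables are the surrounding code's, reproduced for context, and whether the original behavior is intentional is for the authors to confirm):

```cpp
sor.setInputCloud(input_cloud_ptr);
sor.setLeafSize(0.005f, 0.005f, 0.005f);   // 5 mm voxel grid
sor.filter(*cloud_filtered);
std::cerr << "Number of points after filtered " << cloud_filtered->size() << std::endl;
// Feed the *filtered* cloud to segmentation instead of the original:
seg.setPointCloud(cloud_filtered);          // was: seg.setPointCloud(input_cloud_ptr);
```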
