
airvo's Introduction

AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System

Kuan Xu1, Yuefan Hao2, Shenghai Yuan1, Chen Wang2, Lihua Xie1

1: Centre for Advanced Robotics Technology Innovation (CARTIN), Nanyang Technological University
2: Spatial AI & Robotics Lab (SAIR Lab), Department of Computer Science and Engineering, University at Buffalo

📄 [Arxiv] | 💾 [Project Site]

📜 About AirSLAM

AirSLAM is an efficient visual SLAM system designed to tackle both short-term and long-term illumination challenges. Our system adopts a hybrid approach that combines deep learning techniques for feature detection and matching with traditional backend optimization methods. Specifically, we propose a unified convolutional neural network (CNN) that simultaneously extracts keypoints and structural lines. These features are then associated, matched, triangulated, and optimized in a coupled manner. Additionally, we introduce a lightweight relocalization pipeline that reuses the built map, where keypoints, lines, and a structure graph are used to match the query frame with the map. To enhance the applicability of the proposed system to real-world robots, we deploy and accelerate the feature detection and matching networks using C++ and NVIDIA TensorRT. Extensive experiments conducted on various datasets demonstrate that our system outperforms other state-of-the-art visual SLAM systems in illumination-challenging environments. Efficiency evaluations show that our system can run at a rate of 73 Hz on a PC and 40 Hz on an embedded platform.

👀 Updates

  • [2024.08] We release the code and paper for AirSLAM.
  • [2023.07] AirVO is accepted by IROS 2023.
  • [2022.10] We release the code and paper for AirVO. The code for AirVO can now be found here.

🏁 Test Environment

Dependencies

  • OpenCV 4.2
  • Eigen 3
  • Ceres 2.0.0
  • G2O (tag:20230223_git)
  • TensorRT 8.6
  • CUDA 12
  • python
  • ROS noetic
  • Boost

Docker (Recommended)

docker pull xukuanhit/air_slam:v4
docker run -it --env DISPLAY=$DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix --privileged --runtime nvidia --gpus all --volume ${PWD}:/workspace --workdir /workspace --name air_slam xukuanhit/air_slam:v4 /bin/bash

📖 Data

The data for mapping should be organized in the following Autonomous Systems Lab (ASL) dataset format:

dataroot
├── cam0
│   └── data
│       ├── 00001.jpg
│       ├── 00002.jpg
│       ├── 00003.jpg
│       └── ......
├── cam1
│   └── data
│       ├── 00001.jpg
│       ├── 00002.jpg
│       ├── 00003.jpg
│       └── ......
└── imu0
    └── data.csv

After the map is built, relocalization requires only monocular images. Therefore, you only need to place the query images in a folder.
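As a quick sanity check before mapping, the layout above can be verified with a short script. This is only a sketch; the helper name and the checked paths are assumptions mirroring the tree shown above:

```python
import os

def check_asl_layout(root):
    """Return the expected ASL-format paths that are missing under root.

    Hypothetical helper, not part of AirSLAM: it only mirrors the
    directory tree shown above (cam0/data, cam1/data, imu0/data.csv).
    """
    expected = ["cam0/data", "cam1/data", "imu0/data.csv"]
    return [p for p in expected if not os.path.exists(os.path.join(root, p))]

# Example: an empty result means the dataset root looks complete.
# missing = check_asl_layout("/path/to/dataroot")
```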

💻 Build

    cd ~/catkin_ws/src
    git clone https://github.com/sair-lab/AirSLAM.git
    cd ../
    catkin_make
    source ~/catkin_ws/devel/setup.bash

🏃 Run

The launch files for VO/VIO, map optimization, and relocalization are placed in the VO folder, MR folder, and Reloc folder, respectively. Before running them, you need to modify the corresponding configurations according to your data path and the desired map-saving path. The following is an example of mapping, optimization, and relocalization with the EuRoC dataset.

Mapping

1: Change "dataroot" in VO launch file to your own data path. For the EuRoC dataset, "mav0" needs to be included in the path.

2: Change "saving_dir" in the same file to the path where you want to save the map and trajectory. It must be an existing folder.

3: Run the launch file:

roslaunch air_slam vo_euroc.launch 

Map Optimization

1: Change "map_root" in MR launch file to your own map path.

2: Run the launch file:

roslaunch air_slam mr_euroc.launch 

Relocalization

1: Change "dataroot" in Reloc launch file to your own query data path.

2: Change "map_root" in the same file to your own map path.

3: Run the launch file:

roslaunch air_slam reloc_euroc.launch 

Other datasets

The launch folder and config folder provide the launch files and configuration files, respectively, for the other datasets in the paper. If you want to run AirSLAM with your own dataset, you need to create your own camera file, configuration file, and launch file.

✍️ TODO List

  • Initial release. 🚀
  • Support SuperGlue as feature matcher
  • Optimize the TensorRT acceleration of PLNet

📝 Citation

@article{xu2024airslam,
  title = {{AirSLAM}: An Efficient and Illumination-Robust Point-Line Visual SLAM System},
  author = {Xu, Kuan and Hao, Yuefan and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
  journal = {arXiv preprint arXiv:2408.03520},
  year = {2024},
  url = {https://arxiv.org/abs/2408.03520},
  code = {https://github.com/sair-lab/AirSLAM},
}

@inproceedings{xu2023airvo,
  title = {{AirVO}: An Illumination-Robust Point-Line Visual Odometry},
  author = {Xu, Kuan and Hao, Yuefan and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year = {2023},
  url = {https://arxiv.org/abs/2212.07595},
  code = {https://github.com/sair-lab/AirVO},
  video = {https://youtu.be/YfOCLll_PfU},
  addendum = {SAIR Lab Recommended}
}

airvo's People

Contributors

jaafarmahmoud1, thien94, wang-chen, xukuanhit, yuefanhao


airvo's Issues

Compilation failure issues in Ubuntu18.04

My compilation environment is:
ubuntu18.04
g++ and gcc 7.5.0
cmake 3.24

The following error occurred when I executed the command "catkin_make":

In file included from /usr/local/include/g2o/core/base_fixed_sized_edge.h:39:0,
from /usr/local/include/g2o/core/base_binary_edge.h:30,
from /usr/local/include/g2o/types/slam3d/types_slam3d.h:31,
from /home/uestc213/data/airvio_ws/src/AirVO/include/utils.h:25,
from /home/uestc213/data/airvio_ws/src/AirVO/src/camera.cc:4:
/usr/local/include/g2o/stuff/tuple_tools.h: In function ‘void g2o::tuple_apply_i(F&&, T&, int)’:
/usr/local/include/g2o/stuff/tuple_tools.h:45:35: error: ‘tuple_size_v’ is not a member of ‘std’
std::make_index_sequence<std::tuple_size_v<std::decay_t>>());
^~~~~~~~~~~~
/usr/local/include/g2o/stuff/tuple_tools.h:45:35: note: suggested alternative: ‘tuple_size’

So I made a change to CMakeLists.txt:
set(CMAKE_CXX_STANDARD 17)

However, another error occurred:
error: ‘make_unique’ is not a member of ‘g2o’; did you mean ‘std::make_unique’?

CUDA version question

Due to the limited performance of my laptop, can CUDA 11.3 be tested?

Question about buffers.h

Hello, thank you for your work!
I would like to ask: in buffers.h, for the host-side memory allocated for data transfer, would page-locked memory be better? That is, should cudaMallocHost()/cudaFreeHost() be used instead of malloc()/free(), which allocate pageable memory?

CMake Error: g2o

Excuse me, I encountered a g2o problem when compiling. How can I solve it?

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
G2O_INCREMENTAL_LIBRARY
linked by target "air_vo_lib" in directory /home/zjj/AirVO_ws/src/AirVO-master
G2O_INTERACTIVE_LIBRARY
linked by target "air_vo_lib" in directory /home/zjj/AirVO_ws/src/AirVO-master
G2O_SOLVER_CHOLMOD
linked by target "air_vo_lib" in directory /home/zjj/AirVO_ws/src/AirVO-master
G2O_SOLVER_CSPARSE
linked by target "air_vo_lib" in directory /home/zjj/AirVO_ws/src/AirVO-master
G2O_SOLVER_CSPARSE_EXTENSION
linked by target "air_vo_lib" in directory /home/zjj/AirVO_ws/src/AirVO-master
G2O_VIEWER_LIBRARY
linked by target "air_vo_lib" in directory /home/zjj/AirVO_ws/src/AirVO-master

-- Configuring incomplete, errors occurred!
See also "/home/zjj/AirVO_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/zjj/AirVO_ws/build/CMakeFiles/CMakeError.log".

Thanks!

UMA-VI Dataset

Hello, I want to use your great work in my paper. However, I cannot download the UMA-VI dataset from https://mapir.isa.uma.es/mapirwebsite/?p=2108&page=2. I also tried the link provided by the authors (see screenshot), but I cannot sign in to the website. Can you provide a OneDrive or Google Drive link for the dataset? Thanks!

sorry, fatal error: NvInferRuntime.h: No such file or directory

[ 2%] Building CXX object Thirdparty/TensorRTBuffer/CMakeFiles/TensorRTBuffer.dir/src/logger.cpp.o

In file included from /home/kkk/workspace/llll/Thirdparty/TensorRTBuffer/include/logger.h:20,

             from /home/kkk/workspace/llll/Thirdparty/TensorRTBuffer/src/logger.cpp:17:

/home/kkk/workspace/llll/Thirdparty/TensorRTBuffer/include/logging.h:20:10: fatal error: NvInferRuntime.h: No such file or directory

20 | #include <NvInferRuntime.h>

  |          ^~~~~~~~~~~~~~~~~~

compilation terminated.

make[2]: *** [Thirdparty/TensorRTBuffer/CMakeFiles/TensorRTBuffer.dir/build.make:76:Thirdparty/TensorRTBuffer/CMakeFiles/TensorRTBuffer.dir/src/logger.cpp.o] Error 1

make[1]: *** [CMakeFiles/Makefile2:162:Thirdparty/TensorRTBuffer/CMakeFiles/TensorRTBuffer.dir/all] Error 2

make: *** [Makefile:91:all] Error 2

camera_config

Thank you for your work. As a beginner, I have learned a lot from it. I am not very familiar with some parameters in your camera intrinsics file. I understand that LEFT.D and LEFT.K are obtained from the dataset. Could you please explain how LEFT.R and LEFT.P are obtained? (See screenshot.)

No viewer for the outputs

Hi, I have run the algorithm successfully, but there is no viewer for me to check the results on the images. What can I do to solve this problem? Thanks a lot.

Lightglue.

Hi, I am looking to speed up the runtime of this project, and am thinking that swapping superglue for lightglue might help.

Do you have any plan to implement this? Or pointers that might help me?

Thanks!

Could not load library libcublasLt.so.12. Error: libcublasLt.so.12: cannot open shared object file: No such file or directory

After I ran the command "roslaunch air_vo oivio.launch", I got the following error:
config_file = /home/ru/catkin_ws/src/AirVO/configs/configs_oivio.yaml
path = /home/ru/oivio/MN_015_GV_01/husky0/cam0/data
Could not load library libcublasLt.so.12. Error: libcublasLt.so.12: cannot open shared object file: No such file or directory // libcublasLt.so exists, but libcublasLt.so.12 does not. I don't know why.
[air_vo-2] process has died [pid 517129, exit code -6, cmd /home/ru/catkin_ws/devel/lib/air_vo/air_vo __name:=air_vo __log:=/home/ru/.ros/log/71b0e9b6-b836-11ed-b3a2-cb4f55628a0f/air_vo-2.log].
log file: /home/ru/.ros/log/71b0e9b6-b836-11ed-b3a2-cb4f55628a0f/air_vo-2*.log
Another question: what is the meaning of "exit code -6"?
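On the second question: roslaunch reports a node killed by a POSIX signal as a negative exit code, so "exit code -6" means the process was terminated by signal 6 (SIGABRT), typically raised by abort(), a failed assertion, or an uncaught C++ exception. A minimal check:

```python
import signal

# Negative exit codes from roslaunch follow Python's subprocess convention:
# exit code -N means the node was terminated by signal N.
print(signal.Signals(6).name)  # SIGABRT
```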

Low performance platform testing

Hello, thank you so much for open-sourcing the code.
I have read your paper. The experimental platform is an RTX 3090, and the feature extraction time is only 15 ms. Have you tried a lower-performance computing platform, such as a laptop RTX 3060, and how long does it take?
I failed to run your code, so I am asking you this question. I really hope to get your answer.

About the running environment

Thanks for your great efforts. Will it run in an environment with CUDA 10.2, TensorRT, and ROS Melodic?

How to convert the SuperGlue model correctly?

Great work! Thanks a lot!
I need your help with some details of the project.
I followed the steps in the convert2onnx subdirectory:

Step 1: convert the torch model to ONNX format:

python convert_superglue_to_onnx.py

Step 2: simplify the ONNX model:

python -m onnxsim ../output/superglue_indoor.onnx ../output/superglue_indoor_sim.onnx --dynamic-input-shape --input-shape keypoints_0:1,512,2 scores_0:1,512 descriptors_0:1,256,512 keypoints_1:1,512,2 scores_1:1,512 descriptors_1:1,256,512

Step 3: convert int64 to int32:

python convert_int32.py

Now I have three models (see screenshot).

When I run the code, an error occurs in the "build" method of super_glue (see screenshot).

But when I use the provided "superglue_outdoor_sim_int32.onnx" in the output directory, everything works well.

So, can you provide the model conversion process?

Is it possible to adapt AirVO with monocular cameras?

I've already tried feeding the same images from cam0 and cam1 and using the EuRoC configuration without any modification. Fortunately, although stereo matching is one of the core components of AirVO, it can localize a little. However, it still gets stuck near its initial position and generates point clouds at wrong locations.

_keyframe_ids.size ==0 why?

I want to ask a question.
First, I used the commands below:
docker pull xukuanhit/air_slam:v1
docker run -it --env DISPLAY=$DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix --privileged --runtime nvidia --gpus all --volume ${PWD}:/workspace --workdir /workspace --name air_slam xukuanhit/air_slam:v1 /bin/bash
I use a 2080 Ti GPU.
Second, I ran the build commands and everything succeeded (see screenshot).
Third, I ran "roslaunch air_vo uma_bumblebee_indoor.launch" but got the output shown in the screenshots.

I want to ask why "_keyframe_ids.size = 0"? Thank you!

OIVIO Dataset

Thanks for your work. The OIVIO dataset cannot be downloaded, and the official website returns a 404. Can you share the MN_015_GV_01 dataset via a cloud drive?

Results better than reported in paper for UMA-VI

Hello, thank you so much for your work. This is a great VO system.

The results I get when comparing the GT trajectory to the estimated trajectory using ATE (after alignment) are noticeably better than what is reported in your paper. Have changes been made in the meantime to optimize the code? I'm using an RTX 4070 and thus a newer version of CUDA. Could this be the reason for the improvement?

The results I am getting vs. yours is:

conference-csc1 -> 0.2816 (vs. 0.5236)
conference-csc2 -> 0.1420 (vs 0.1607)
third-floor-csc1 -> 0.1101 (vs. 0.1760)
third-floor-csc2 -> 0.1510 (vs. 0.1312)

Please also see the attached xy plot for the conference-csc2 sequence as a reference. As you'll see, both the start and end points are closer to the ground truth than shown in the figure in the paper as well.

(xy plot attachment)

Error in SuperPoint building

Hello, why do I get the following error when I run the launch file after compiling successfully?

Error in SuperPoint building

MapBuilder::MapBuilder(Configs& configs): _init(false), _track_id(0), _line_track_id(0),
    _to_update_local_map(false), _configs(configs){
  _camera = std::shared_ptr<Camera>(new Camera(configs.camera_config_path));
  _superpoint = std::shared_ptr<SuperPoint>(new SuperPoint(configs.superpoint_config));
  if (!_superpoint->build()){
    std::cout << "Error in SuperPoint building" << std::endl;
    exit(0);
  }
}

SuperGlue inference result is NaN

Hello, thanks for your open-source work!
When running AirVO on the public EuRoC dataset, the map could never be initialized. After investigating, I found that during SuperGlue inference, in auto *output_score = static_cast<float *>(buffers.getHostBuffer(superglue_config_.output_tensor_names[0])); the *output_score value is NaN, so the number of matches is always 0. Have you encountered this during SuperGlue inference? SuperPoint inference works fine.
My environment is Ubuntu 20.04, CUDA 12.1 + TensorRT 8.6.16.

question

Thank you for your great work!
I'm not very familiar with using TensorRT. I have a question: in the function "bool SuperPoint::build()", what is the code after the "if(deserialize_engine()){}" part for?

Is it for leveraging TensorRT's API to build the network when deserialization fails?

Error in SuperPoint building

root@d09797bf0618:/workspace# roslaunch air_vo oivio.launch
... logging to /root/.ros/log/3ab20a38-c3a5-11ed-94a1-0242ac110002/roslaunch-d09797bf0618-806.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://d09797bf0618:42449/

SUMMARY

PARAMETERS

  • /air_vo/camera_config_path: /workspace/catkin...
  • /air_vo/config_path: /workspace/catkin...
  • /air_vo/dataroot: /workspace/oivio/...
  • /air_vo/model_dir: /workspace/catkin...
  • /air_vo/saving_dir: /workspace/catkin...
  • /air_vo/traj_path: /workspace/catkin...
  • /rosdistro: noetic
  • /rosversion: 1.15.14

NODES
/
air_vo (air_vo/air_vo)

ROS_MASTER_URI=http://localhost:11311

process[air_vo-1]: started with pid [843]
config_file = /workspace/catkin_ws/src/AirVO/configs/configs_oivio.yaml
path = /workspace/oivio/TN_100_GV_01/husky0/cam0/data
[03/16/2023-10:51:34] [I] [TRT] [MemUsageChange] Init CUDA: CPU +572, GPU +0, now: CPU 608, GPU 980 (MiB)
[03/16/2023-10:51:35] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1, GPU +0, now: CPU 628, GPU 980 (MiB)
[03/16/2023-10:51:35] [I] [TRT] ----------------------------------------------------------------
[03/16/2023-10:51:35] [I] [TRT] Input filename: /workspace/catkin_ws/src/AirVO/output/superpoint_v1_sim_int32.onnx
[03/16/2023-10:51:35] [I] [TRT] ONNX IR version: 0.0.8
[03/16/2023-10:51:35] [I] [TRT] Opset version: 12
[03/16/2023-10:51:35] [I] [TRT] Producer name: onnx-typecast
[03/16/2023-10:51:35] [I] [TRT] Producer version:
[03/16/2023-10:51:35] [I] [TRT] Domain:
[03/16/2023-10:51:35] [I] [TRT] Model version: 0
[03/16/2023-10:51:35] [I] [TRT] Doc string:
[03/16/2023-10:51:35] [I] [TRT] ----------------------------------------------------------------
[03/16/2023-10:51:35] [W] [TRT] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/16/2023-10:51:36] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1340, GPU +378, now: CPU 1976, GPU 1358 (MiB)
[03/16/2023-10:51:36] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +254, GPU +60, now: CPU 2230, GPU 1418 (MiB)
[03/16/2023-10:51:36] [W] [TRT] TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.4.0
[03/16/2023-10:51:36] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[03/16/2023-10:51:42] [E] [TRT] 1: [caskBuilderUtils.cpp::trtSmToCaskCCV::548] Error Code 1: Internal Error (Unsupported SM: 0x809)
[03/16/2023-10:51:42] [E] [TRT] 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Error in SuperPoint building
[air_vo-1] process has finished cleanly
log file: /root/.ros/log/3ab20a38-c3a5-11ed-94a1-0242ac110002/air_vo-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
root@d09797bf0618:/workspace#

Is it a bug?

(screenshot)

In line 184 shown in the screenshot, is it a bug? I think it should be if (!good_infer). Thanks!

running crash

I have just followed the instructions to set up the environment, but when I run the EuRoC dataset, it crashes without any error information. Could you help me solve the problem?
(screenshot)

Parameters of configs_realsense (D435i)

Hello.
Thank you for sharing the code.

While running an outdoor 640*480 rosbag dataset recorded on the D435i, I found that the first half of the path was normal, but in the second half the path suddenly drifted severely around a corner. It seems there may be a problem with my parameters; do I need to change some parameters in configs_realsense?

roslaunch crashed

When I run "roslaunch air_vo euroc_ros.launch", I get an error:

setting /run_id to b3188950-19ab-11ee-8292-ac1f6ba0835e
process[rosout-1]: started with pid [37919]
started core service [/rosout]
process[air_vo_ros-2]: started with pid [37925]
process[rviz-3]: started with pid [37927]
config_file = /home/213/data/airvo_ws/src/AirVO/configs/configs_euroc.yaml
[ INFO] [1688393573.997696746]: rviz version 1.13.29
[ INFO] [1688393573.997757365]: compiled against Qt version 5.9.5
[ INFO] [1688393573.997776307]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1688393574.002567755]: Forcing OpenGl version 0.
[air_vo_ros-2] process has died [pid 37925, exit code -11, cmd /home/213/data/airvo_ws/devel/lib/air_vo/air_vo_ros __name:=air_vo_ros __log:=/home/213/.ros/log/b3188950-19ab-11ee-8292-ac1f6ba0835e/air_vo_ros-2.log].
log file: /home/213/.ros/log/b3188950-19ab-11ee-8292-ac1f6ba0835e/air_vo_ros-2*.log
[ INFO] [1688393574.461469073]: Stereo is NOT SUPPORTED
[ INFO] [1688393574.461561817]: OpenGL device: NVIDIA GeForce RTX 2080 Ti/PCIe/SSE2
[ INFO] [1688393574.461588794]: OpenGl version: 4.6 (GLSL 4.6).

I only modified the dataroot in the launch file.

In addition, can you provide an offline download of OIVIO? The official OIVIO website is no longer available for download.
Thanks!

How to Evaluate VO?

Thank you for your open-source contribution. I have two questions:
1. How can the performance of VO be evaluated, e.g. the RMSE metrics and the running times of the principal components listed in the tables of your paper?
2. When will AirVO be extended to a full SLAM system?

catkin_make error

edge_project_line.cc:20:10: error: 'readInformationMatrix' was not declared in this scope

How can I solve this problem?
Thank you.

roslaunch euroc.launch ;process has died

On a Jetson Orin NX:
ubuntu 20.04
opencv 4.5.4
CUDA 11.4.315
TensorRT 8.5.2.2
ceres 2.0.0
g2o (tag: 20230223_git)
Compilation passes without any problem, the dataset has been downloaded, and the paths in the launch file have been changed accordingly. SuperPoint-SuperGlue-TensorRT runs without problems in the same environment.
When running roslaunch air_vo euroc_ros.launch, it finally shows "process has died". The terminal output is as follows:

nvidia@nvidia-desktop:~/AirVO_ws/devel$ roslaunch air_vo euroc.launch
... logging to /home/nvidia/.ros/log/7bea6782-3d8c-11ee-858e-788a8639a8d0/roslaunch-nvidia-desktop-25251.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://nvidia-desktop:33595/

SUMMARY

PARAMETERS

  • /air_vo/camera_config_path: /home/nvidia/AirV...
  • /air_vo/config_path: /home/nvidia/AirV...
  • /air_vo/dataroot: /home/nvidia/MH_0...
  • /air_vo/model_dir: /home/nvidia/AirV...
  • /air_vo/saving_dir: /home/nvidia/AirV...
  • /air_vo/traj_path: /home/nvidia/AirV...
  • /rosdistro: noetic
  • /rosversion: 1.16.0

NODES
/
air_vo (air_vo/air_vo)

auto-starting new master
process[master]: started with pid [25259]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 7bea6782-3d8c-11ee-858e-788a8639a8d0
process[rosout-1]: started with pid [25269]
started core service [/rosout]
process[air_vo-2]: started with pid [25272]
config_file = /home/nvidia/AirVO_ws/src/AirVO/configs/configs_euroc.yaml
path = /home/nvidia/MH_01_easy/mav0/cam0/data
[air_vo-2] process has died [pid 25272, exit code -11, cmd /home/nvidia/AirVO_ws/devel/lib/air_vo/air_vo __name:=air_vo __log:=/home/nvidia/.ros/log/7bea6782-3d8c-11ee-858e-788a8639a8d0/air_vo-2.log].
log file: /home/nvidia/.ros/log/7bea6782-3d8c-11ee-858e-788a8639a8d0/air_vo-2*.log

superglue and superpoint failed

Hi, I installed CUDA 11.6.2 and TensorRT 8.4 and successfully compiled the code, but it gets stuck at the network-building step; the SuperGlue engine build just hangs there.

I wish to create the ONNX file myself using the convert2onnx folder; do you have an environment.yml file for creating the conda env?
Thanks a lot.

Error in SuperPoint building

I tried to run uma_bumblebee_indoor.launch and euroc_ros.launch on docker (xukuanhit/air_slam:v1) as you recommend, but got the SuperPoint building error.

root@arl-Thin-GF63-12VE:/workspace# roslaunch air_vo uma_bumblebee_indoor.launch

... logging to /root/.ros/log/83057fb2-1671-11ee-bd38-adcfe6f4f8ef/roslaunch-arl-Thin-GF63-12VE-2796.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://arl-Thin-GF63-12VE:33725/

SUMMARY
========

PARAMETERS
 * /air_vo/camera_config_path: /workspace/src/Ai...
 * /air_vo/config_path: /workspace/src/Ai...
 * /air_vo/dataroot: /workspace/src/Ai...
 * /air_vo/model_dir: /workspace/src/Ai...
 * /air_vo/saving_dir: /workspace/src/Ai...
 * /air_vo/traj_path: /workspace/src/Ai...
 * /rosdistro: noetic
 * /rosversion: 1.15.14

NODES
  /
    air_vo (air_vo/air_vo)

ROS_MASTER_URI=http://localhost:11311

process[air_vo-1]: started with pid [2815]
config_file = /workspace/src/AirVO/configs/configs_uma_bumblebee_indoor.yaml
path = /workspace/src/AirVO/dataset/third-floor-csc2_2019-03-04-20-32-22_IllChange/cam0/data
Error in SuperPoint building
[air_vo-1] process has finished cleanly
log file: /root/.ros/log/83057fb2-1671-11ee-bd38-adcfe6f4f8ef/air_vo-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done

Possible memory leak

Hello!
Thanks for the great work!
I am facing a memory leak in the code; the source of the leak is probably the function MapBuilder::ExtractFeatureAndMatch.
Every time a new image is processed, the memory usage increases and is not released. I even turned off the tracking thread and still face the issue!
Do you have any idea why it is happening?
With respect!

A question about evaluating ORB-SLAM2 on OIVIO

Because the coordinate frames differ, the two trajectories cannot be aligned; the evo -a command also fails to align them because the timestamps are inconsistent. Moreover, the trajectory txt generated by ORB-SLAM has more than 7000 lines, while the ground truth has only 2040. How can I solve this problem?
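A timestamp mismatch like this is usually handled by nearest-neighbour association before alignment: each ground-truth timestamp is paired with the closest estimated pose within a tolerance, so trajectories with different rates (7000+ estimated poses vs. 2040 ground-truth rows) can still be compared. Below is a sketch of that association step; the function name and the 0.02 s tolerance are illustrative assumptions:

```python
import bisect

def associate(gt_times, est_times, max_diff=0.02):
    """Pair each ground-truth timestamp with the nearest estimated
    timestamp within max_diff seconds. Both lists must be sorted.
    Returns (gt_index, est_index) pairs; unmatched stamps are dropped."""
    pairs = []
    for i, t in enumerate(gt_times):
        j = bisect.bisect_left(est_times, t)
        # Only the neighbours around the insertion point can be nearest.
        cands = [k for k in (j - 1, j) if 0 <= k < len(est_times)]
        if cands:
            k = min(cands, key=lambda k: abs(est_times[k] - t))
            if abs(est_times[k] - t) <= max_diff:
                pairs.append((i, k))
    return pairs
```

Tools such as evo perform this association internally with a configurable tolerance, so loosening that tolerance may already resolve the alignment failure.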

Error when running the examples

Hi,

Thank you for sharing code. I tried to run the AirVO code but meet the following problems:

One Frame Processinh Time: 17 ms.
Save file to /home/asrlab/catkin_ws/src/AirVO/debug/traj.txt
_keyframe_ids.size = 155
terminate called without an active exception
[air_vo-2] process has died [pid 148170, exit code -6, cmd /home/asrlab/catkin_ws/devel/lib/air_vo/air_vo __name:=air_vo __log:=/home/asrlab/.ros/log/df79dbc6-ec4a-11ee-b1c8-5b0f24582382/air_vo-2.log].
log file: /home/asrlab/.ros/log/df79dbc6-ec4a-11ee-b1c8-5b0f24582382/air_vo-2*.log

My system:
OS: Ubuntu 20.04
GPU: RTX 3070
CUDA: 11.8
cuDNN: 8.5
OpenCV: 4.2.0
I compiled OpenCV 4.2.0 with C++14 and CUDA.

About the parameters of the algorithm?

What are the meanings represented by each parameter in the configs_euroc.yaml file?
I have found that in some scenarios, although line features increase robustness, they also increase translation error. I guess it is the line-merging parameter that affects the results. I am trying to modify the parameters in the hope of achieving better results, but most of the parameters are not explained.

Windows, without ROS?

Hi! Thank you for this code. Would it be possible to run it on Windows, without ROS?

Can AirVO run a bag dataset?

Hello, in order to test AirVO I downloaded other datasets, but they are bag files. Can AirVO run a bag dataset? If so, how?

Nice work

Excellent work. I'm also a HIT student, who will join Prof. Xie's group with CSC support. Can I add your WeChat to talk more? My WeChat: 18845770280.

Is it possible to run w/ a bag file?

Hello,

Thank you for working on this research project. I got everything to work perfectly - one thing I was just wondering was if I could use this with my bag file (I have a bag file recorded using flir_boson sensor).

Thank you.

error: no match for ‘operator=’ (operand types are ‘std::shared_ptr<cv::ximgproc::FastLineDetector>’ and ‘cv::Ptr<cv::ximgproc::FastLineDetector>’)

When I run catkin_make, I get the following error. How can I solve it?

AirVO_ws/src/AirVO-master/src/line_processor.cc:465:118: error: no match for ‘operator=’ (operand types are ‘std::shared_ptr<cv::ximgproc::FastLineDetector>’ and ‘cv::Ptr<cv::ximgproc::FastLineDetector>’)
line_detector_config.canny_th1, line_detector_config.canny_th2, line_detector_config.canny_aperture_size, false);
