
mlcc's Introduction

Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras using Adaptive Voxelization

Our journal paper is available on IEEE TIM. The pre-release code has been updated. Our experiment video is available on YouTube and Bilibili. Please consider citing our paper if you find our code useful.

@ARTICLE{9779777,
  author={Liu, Xiyuan and Yuan, Chongjian and Zhang, Fu},
  journal={IEEE Transactions on Instrumentation and Measurement},
  title={Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras Using Adaptive Voxelization},
  year={2022},
  volume={71},
  number={},
  pages={1-12},
  doi={10.1109/TIM.2022.3176889}
}

Introduction

In this paper, we propose a fast, accurate, and targetless extrinsic calibration method for multiple LiDARs and cameras based on adaptive voxelization. On the theory level, we incorporate the LiDAR extrinsic calibration into the bundle adjustment framework and derive the second-order derivatives of the cost function w.r.t. the extrinsic parameters to accelerate the optimization. On the implementation level, we apply adaptive voxelization to dynamically segment the LiDAR point cloud into voxels of non-identical sizes, which reduces the computation time of feature correspondence matching.
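As a rough sketch (our notation, simplified; see the paper for the exact formulation), the voxel-wise plane cost being minimized can be written as:

```latex
% Simplified sketch, our notation: each voxel v contributes the smallest
% eigenvalue of the covariance of the points it contains, which is zero
% exactly when those points are coplanar. The points depend on the LiDAR
% poses T and the extrinsics E being optimized.
\min_{\mathcal{T},\,\mathcal{E}} \;\sum_{v}
  \lambda_{\min}\!\left(
    \frac{1}{N_v} \sum_{i=1}^{N_v}
      \left(\mathbf{p}_i - \bar{\mathbf{p}}_v\right)
      \left(\mathbf{p}_i - \bar{\mathbf{p}}_v\right)^{\top}
  \right)
```

Here the second-order derivatives of this cost w.r.t. the extrinsic parameters are what the paper derives in closed form to speed up the optimization.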

Fig. 1 Dense colorized point cloud reconstructed with the LiDAR poses and extrinsics calibrated by our proposed method.

Adaptive Voxelization

In both LiDAR-LiDAR and LiDAR-camera extrinsic calibration, we implement adaptive voxelization to accelerate the feature correspondence matching process. The point cloud map is dynamically segmented into voxels of non-identical sizes, such that each voxel contains only one plane feature. This substantially reduces the execution time spent on k-d tree searching in our previous works (see Fig. 2 and Fig. 3).

Fig. 2 Adaptive voxelization in LiDAR-LiDAR extrinsic calibration.

Fig. 3 Adaptive voxelization in LiDAR-camera extrinsic calibration. A) real-world image. B) raw point cloud of this scene. C) voxelization of previous work where the yellow circles indicate the false edge estimation. D) edges extracted with our proposed method.
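The split-until-planar recursion described above can be sketched in a few lines. This is an illustrative, language-agnostic sketch (the actual implementation is C++); the helper names, eigen-ratio threshold, and minimum voxel size here are our own illustrative choices, not the repository's:

```python
import numpy as np

def is_planar(points, eigen_ratio=0.1):
    """A voxel is 'planar' if the smallest eigenvalue of the point
    covariance is small relative to the largest, i.e. the voxel holds
    (at most) one plane feature. Threshold is illustrative."""
    if len(points) < 10:
        return True  # too few points to justify splitting further
    w = np.sort(np.linalg.eigvalsh(np.cov(points.T)))
    return w[0] < eigen_ratio * w[2]

def adaptive_voxelize(points, center, size, min_size=0.25):
    """Recursively split a cubic voxel into octants until each leaf
    contains only one plane feature (or hits the minimum size).
    Returns a list of (center, size, points) leaves."""
    if size <= min_size or is_planar(points):
        return [(center, size, points)]
    leaves = []
    half = size / 2.0
    # assign each point to one of 8 octants via a 3-bit index
    octant = ((points >= center) * np.array([1, 2, 4])).sum(axis=1)
    for i in range(8):
        sub = points[octant == i]
        if len(sub) == 0:
            continue
        offset = (np.array([i & 1, (i >> 1) & 1, (i >> 2) & 1]) - 0.5) * half
        leaves += adaptive_voxelize(sub, center + offset, half, min_size)
    return leaves
```

A single planar patch stays in one large voxel, while regions containing several surfaces get subdivided, which is what removes the need for a per-point k-d tree search during correspondence matching.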

1. Prerequisites

Our code has been tested on Ubuntu 16.04 with ROS Kinetic, Ubuntu 18.04 with ROS Melodic, and Ubuntu 20.04 with ROS Noetic, with Ceres Solver 1.14.x, OpenCV 3.4.14, Eigen 3.3.7, and PCL 1.8.

2. Build and Run

Clone the repository and catkin_make:

cd ~/catkin_ws/src
git clone git@github.com:hku-mars/mlcc.git
cd .. && catkin_make
source ~/catkin_ws/devel/setup.bash

3. Run Our Example

The parameters, such as the base LiDAR (AVIA or MID), the test scene (scene-1 or scene-2), and adaptive_voxel_size, can be modified in the corresponding launch file. We also provide the original rosbag files (scene-1 and scene-2) for your reference.

3.1 Multi-LiDAR Extrinsic Calibration

Step 1: base LiDAR pose optimization (the initial pose is stored in scene-x/original_pose)

roslaunch mlcc pose_refine.launch

Step 2: LiDAR extrinsic optimization (the initial extrinsic is stored in config/init_extrinsic)

roslaunch mlcc extrinsic_refine.launch

Step 3: pose and extrinsic joint optimization

roslaunch mlcc global_refine.launch

3.2 Multi-LiDAR-Camera Extrinsic Calibration

roslaunch mlcc calib_camera.launch

3.3 Single LiDAR-Camera Calibration

We have added code for single LiDAR-camera extrinsic calibration using adaptive voxelization, which supports both pinhole and fisheye camera models. The FISHEYE macro is defined in calib_single_camera.hpp. You can try our provided fisheye camera data.

roslaunch mlcc calib_single_camera.launch

Fig. 4 Extrinsic calibration of the fisheye camera and LiDAR in a single scene using adaptive voxelization. Left: distorted image. Right: colorized point cloud.

4. Run Your Own Data

To test on your own data, you need to save the LiDAR point clouds in .pcd format. For optimal precision, collect the point clouds and images only when the LiDAR (sensor platform) is not moving (or segment them from a complete rosbag). The base LiDAR poses and initial extrinsic values shall also be provided (in tx ty tz qw qx qy qz format). These initial values can be obtained by general SLAM and hand-eye calibration algorithms.
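For reference, a minimal sketch of reading such a pose file follows. The tx ty tz qw qx qy qz line format is as stated above; the function names and the 4x4-matrix representation are our own illustration, not part of the package:

```python
import numpy as np

def quat_to_rot(qw, qx, qy, qz):
    """Convert a (w, x, y, z) quaternion to a 3x3 rotation matrix,
    normalizing first so slightly unnormalized inputs still work."""
    n = np.sqrt(qw*qw + qx*qx + qy*qy + qz*qz)
    qw, qx, qy, qz = qw/n, qx/n, qy/n, qz/n
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])

def load_poses(path):
    """Read one pose per line in 'tx ty tz qw qx qy qz' order and
    return a list of 4x4 homogeneous transforms."""
    poses = []
    with open(path) as f:
        for line in f:
            vals = [float(v) for v in line.split()]
            if len(vals) != 7:
                continue  # skip blank or malformed lines
            T = np.eye(4)
            T[:3, :3] = quat_to_rot(*vals[3:])
            T[:3, 3] = vals[:3]
            poses.append(T)
    return poses
```

Note the quaternion is scalar-first (qw before qx qy qz); mixing this up with scalar-last conventions is a common source of bad initial values.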

You may need to modify the parameters voxel_size (adaptive voxel size), feat_eigen_limit (feature eigen ratio), and downsmp_sz_base (downsampling size) to trade off precision against speed in LiDAR-LiDAR extrinsic calibration. You also need to change the corresponding paths and topic names in the yaml files in the config folder.
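A hypothetical config entry might look like the following. The three tuning parameter names are taken from the text above; the path and topic keys and all values here are illustrative placeholders, so check the yaml files shipped in the config folder for the real keys:

```yaml
# Illustrative only -- match key names against the shipped config/*.yaml files
data_path: /path/to/your/scene     # where your .pcd files live (hypothetical key)
lidar_topic: /livox/lidar          # point cloud topic name (hypothetical key)
voxel_size: 4.0        # adaptive voxel size (m); larger = faster, coarser
feat_eigen_limit: 9.0  # feature eigen ratio threshold (illustrative value)
downsmp_sz_base: 0.1   # downsampling leaf size (m) for the base LiDAR
```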

5. Known Issues

Currently, we separate the LiDAR extrinsic calibration process into three steps for debugging purposes. In a future release, we plan to combine them into a single step for convenience.

6. License

The source code is released under the GPLv2 license.

We are still working on improving the performance and reliability of our code. For any technical issues, please contact us via email at [email protected] and [email protected].

For commercial use, please contact Dr. Fu Zhang at [email protected].


mlcc's Issues

Pose_refine reported the following error: Assertion `!std::isnan(residual2)'

Thanks for your excellent work.
When I converted the KITTI360 data into the required format for pose optimization, the error "Assertion !std::isnan(residual2)" was reported.
After a series of tests, I found that when the number of frames I used was less than or equal to 35 (0.pcd~34.pcd), pose_refine had no problems. But as soon as the number of frames was greater than 35, the error was reported.

process[pose_refine-1]: started with pid [26669]
process[rviz-2]: started with pid [26676]
---------------------
iteration 0
pose_refine: /home/ly/catkin_ws/src/mlcc/include/pose_refine.hpp:272: void LM_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.
[pose_refine-1] process has died [pid 26669, exit code -6, cmd /home/ly/catkin_ws/devel/lib/mlcc/pose_refine __name:=pose_refine __log:=/home/ly/.ros/log/f4dda018-eeec-11ec-a53c-04d4c45d3faf/pose_refine-1.log].
log file: /home/ly/.ros/log/f4dda018-eeec-11ec-a53c-04d4c45d3faf/pose_refine-1*.log

calibration of multi-lidar and camera

Thanks for your excellent work.
I want to use the code to calibrate multiple LiDARs and two cameras on my custom robot.
Can you give me some advice on how to make the .dat files, such as src/mlcc/scene0/00/patch0.dat?

extrinsic refine did not work on my PC!

When I run extrinsic refine using the scene2 data from the repo, I get the following error.

---------------------
iteration 0
extrinsic_refine: /home/eric/workspace/mlcc_new_ws/src/mlcc-main/include/extrinsic_refine.hpp:171: void EXTRIN_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.
[extrinsic_refine-2] process has died [pid 159904, exit code -6, cmd /home/eric/workspace/mlcc_new_ws/devel/lib/mlcc/extrinsic_refine __name:=extrinsic_refine __log:=/home/eric/.ros/log/8a525178-9d94-11ee-92d4-2b86260c0bb7/extrinsic_refine-2.log].
log file: /home/eric/.ros/log/8a525178-9d94-11ee-92d4-2b86260c0bb7/extrinsic_refine-2*.log

Could please tell me how to fix it? Thanks in advance.

Results on the same data vary a lot with different initial extrinsics between camera and LiDAR

Devices: Livox Mid-360 and a fisheye camera (182-degree FoV).
I tested the single-pose method as well as the multi-pose method using different initial extrinsic data. The optimization loop looks fine, since the cloud edges keep approaching the RGB edges, and the matching results judging from the image look good (the cloud edges overlap the RGB edges very well). However, the results still differ significantly (by around 10 cm) when I slightly change the initial extrinsics (by 0~10 cm).
The cloud and RGB edges are extracted well, since we manually made objects with clear geometric features for calibration.

I wonder what kind of scene is more suitable for this algorithm and why the result varies so much even though the edges overlap with each other well?


judge_eigen

Hello~
In judge_eigen(), by computing max-eigenvalue/min-eigenvalue, will it extract lines besides planes?

Thank you very much for adding calibration for a single camera and LiDAR. I encountered the following error

Iteration:0 Distance:30
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!

Iteration:1 Distance:29
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!

My LiDAR is a Velodyne 16.

Optimization crash during `extrinsic_refine` step

Hi there,

I am following the calibration steps with the provided dataset (scene2) and ran into a crash in the extrinsic_refine step.
Could you please share some thoughts on the issue? I tried scene1 and got the same issue.

Thanks

roslaunch mlcc extrinsic_refine.launch

process[extrinsic_refine-1]: started with pid [945409]
process[rviz-2]: started with pid [945410]
---------------------
iteration 0
extrinsic_refine: /home/cyngn/github/catkin_mlcc_ws/src/mlcc/include/extrinsic_refine.hpp:178: void EXTRIN_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.

Expected Results with your data?

What are the expected results of the camera calibration with your data, in both the AVIA and MID base LiDAR cases?
I am having issues obtaining the results written in results.txt:

LEFT CAMERA with AVIA as base and your provided pose.json file:
-0.000398216,-0.999997,0.00253489,0.0667963
-0.0157211,-0.00252831,-0.999873,0.0417173
0.999876,-0.000438017,-0.01572,-0.0450952
0,0,0,1

but the expected is
0.00138404,-0.999995,0.00282654,0.0202171
-0.0190134,-0.00285234,-0.999815,0.110485
0.999818,0.00133004,-0.0190172,-0.0218314
0,0,0,1

Is there something missing in the uploaded code?

MID360 and camera calibration

[Screenshot from 2024-01-02 16-09-41]
Hello, thank you very much for your open-source project. I am not able to get good results when calibrating a single LiDAR and camera; the alignment results are shown in the picture. How should I adjust the parameters? The alignment works well when testing the dataset you posted. Do I need to change other parameters for the pinhole camera model?

My data link : https://1drv.ms/u/s!AuQ1Ilp0ikW-i3H-ZFp6POdNKQST?e=gLYFPf

Question about vpnp cost function

Dear author, thanks for your excellent work, but I am puzzled by the vpnp cost function. I do not understand why we need the line direction to compute the matrix V, or the physical meaning of the formula V R V^T.
[Screenshot of the formula]
Look forward to your reply

Questions about OctoTree parameter Settings

if (use_ada_voxel)
{
  if (base_number == 3) // if AVIA is the base LiDAR
  {
    adaptVoxel(adapt_voxel_map, 3, 0.0025);
    debugVoxel(adapt_voxel_map);
    down_sampling_voxel(*lidar_edge_clouds, 0.05);
  }
  else
  {
    adaptVoxel(adapt_voxel_map, 4, 0.0009);
    debugVoxel(adapt_voxel_map);
    down_sampling_voxel(*lidar_edge_clouds, 0.02);
  }
}
May I ask how you arrived at the numerical values of the adaptVoxel function's parameters?
And how are the parameters (double voxel_size, double eigen_threshold) determined?

Is the intensity field used?

Hi,
is the intensity field used in the calibration process?
Is it mandatory?
Thank you for your time.

Error for calib_camera.launch

Hello! Thanks for your excellent work!

When I follow the guide and run roslaunch mlcc calib_camera.launch, some errors occur; the messages are below.
My env is Ubuntu 18.04 and ROS Melodic.

[ INFO] [1645761068.171630692]: Camera 0 Configuration
Camera Matrix:
[863.590518437255, 0.116814496662654, 621.666074631063;
0, 863.100180533059, 533.971978652819;
0, 0, 1]
Distortion Coeffs:
[-0.0944205499243979;
0.09467276777765039;
-0.00807970960613932;
8.07461209775283e-05;
6.527166468758789e-05]
Extrinsic Params:
[0, -1, 0, 0;
0, 0, -1, 0;
1, 0, 0, 0;
0, 0, 0, 1]
Rotation error:0
[ INFO] [1645761068.172541533]: Camera 1 Configuration
Camera Matrix:
[863.081687640302, 0.176140650303666, 628.941349825503;
0, 862.563371991533, 533.00290953509;
0, 0, 1]
Distortion Coeffs:
[-0.0943795554942897;
0.09829982415249131;
-0.0125418048527694;
0.000456235380677041;
-8.73113795357082e-05]
Extrinsic Params:
[0, -1, 0, 0;
0, 0, -1, 0;
1, 0, 0, 0;
0, 0, 0, 1]
Rotation error:0
total ext_number:3
[calib_camera-2] process has died [pid 7304, exit code -9, cmd /home/A/ros_ws/catkin_ws_mlcc/devel/lib/mlcc/calib_camera /home/A/ros_ws/catkin_ws_mlcc/src/mlcc/config/left.yaml /home/A/ros_ws/catkin_ws_mlcc/src/mlcc/config/right.yaml /home/A/ros_ws/catkin_ws_mlcc/src/mlcc/config/avia_stereo.yaml /home/A/ros_ws/catkin_ws_mlcc/src/mlcc/result __name:=calib_camera __log:=/home/A/.ros/log/29b15922-95ee-11ec-ad7f-a8a15920a9d6/calib_camera-2.log].
log file: /home/A/.ros/log/29b15922-95ee-11ec-ad7f-a8a15920a9d6/calib_camera-2*.log

Is there any config or setting that I need to change? Thanks for your reply.

NaN observed after solving AX=B, leading to failure when optimizing extrinsics

Hello, I ran into a problem with my own dataset: matX contains NaN elements after solving the linear equation AX=B with the cv::solve() function in extrinsic_refine.hpp, causing a failure when optimizing the extrinsic between the base LiDAR and the ref LiDAR.

I have checked and made sure that my input point cloud does not contain NaN elements. What are possible solutions to this problem to keep the optimization process running? Thank you!

run extrinsic_refine.launch error

When I follow the guide and run roslaunch mlcc pose_refine.launch, it's fine. After that, I run roslaunch mlcc extrinsic_refine.launch and some errors occur; the messages are below. By the way, my env is Ubuntu 18.04 with ROS Melodic. Thanks for your reply.

setting /run_id to 6b384e2a-355c-11ec-850e-8cec4b814673
process[rosout-1]: started with pid [7627]
started core service [/rosout]
process[extrinsic_refine-2]: started with pid [7634]
process[rviz-3]: started with pid [7635]

iteration 0
extrinsic_refine: /home/admini//ws3/src/mlcc/include/extrinsic_refine.hpp:308: void EXTRIN_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.

calibration of a camera and a lidar

Hello, thank you for your great work. Can this work calibrate a single camera and a single LiDAR? The effect is obviously better than your original work. Could you suggest how to use this package to calibrate a LiDAR and a camera?

Extrinsic refine - NaN residual

Hi Team.
I'm having a hard time replicating your experiment on your data (scene1 and scene2). I'm working inside a ros:melodic-based Docker container. I've installed your tested dependencies (Eigen, Ceres, OpenCV). I have no errors during compilation, just a few warnings.

The first stage (pose_refine) works fine, but I am not able to successfully run the second stage (extrinsic_refine): the program stops in the first iteration on a NaN assert at extrinsic_refine.hpp:308:

iteration 0
residual0: 2.757675 nan u: 0.010000 v: 2.000000 q: nan -nan nan
extrinsic_refine: /home/robo/catkin_ws/src/mlcc/include/extrinsic_refine.hpp:308: void EXTRIN_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.

The final stage then works fine. The final camera-LiDAR alignment eats all of my 32 GB of RAM and crashes on std::bad_alloc, same as mentioned in an earlier issue.

I am thankful for any advice how to make this calibration working.

problem: extrinsic_refine.hpp:309: void EXTRIN_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.

I used the scene* data from your project to test this program and got this problem:

---------------------
iteration 0
residual0: 5.352935 nan u: 0.010000 v: 2.000000 q: nan -nan nan
extrinsic_refine: ******/rrrlive_/mlcc/src/mlcc/include/extrinsic_refine.hpp:309: void EXTRIN_OPTIMIZER::optimize(): Assertion `!std::isnan(residual2)' failed.
---------------------

How can I fix it?

Cameras Calib: FoV Check not passed and pts_3d empty

Hello,
I am trying to reproduce this work with my own data. So far I am using a simulated environment and sensors in CARLA.
I calibrated the LiDARs successfully, but when it comes to calibrating the cameras I get a very generic empty-matrix error. After debugging, I found out that pts_3d at this line https://github.com/hku-mars/mlcc/blob/main/include/calib_camera.hpp#L1003 is (sometimes) empty.
This is because the FoV check is not passed; by printing cos_angle I obtain numbers like:
0.0664861
-0.0557674
-0.0600844
-0.062242
-0.0643988
-0.0600814

1. Can you help me understand what this can depend on?

2. What could happen if I remove the FoV check?

This is what I got in rviz. I am not confident about the result, given that the voxels don't cover the whole scene and there are some parts with black holes.
[rviz screenshot]

Thank you very much

Question about your voxelization: is it only for Livox LiDARs?

Dear authors:
In your project, I found that loc_xyz seems to have to be greater than 0.
And I found that the Livox LiDAR's frame is like this: x>0, y>0, z>0.
But I want to use it for a RoboSense LiDAR.
Do I need to modify this parameter?
Thanks for your help.


  for (int j = 0; j < 3; j++)
  {
    loc_xyz[j] = pt_trans[j] / voxel_size;
    if (loc_xyz[j] < 0)
      loc_xyz[j] -= 1.0;
  }

!std::isnan(residual2)

Hello, I met a problem with this:
[Screenshot from 20220615-012739]
It's in the first step, pose_refine.launch. I have read the details about this same phenomenon, but it doesn't work.
Can you give me some advice?

camera_calib std::bad_alloc

16 GB memory.
When I run roslaunch mlcc calib_camera.launch, I get the error:

terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc

Livox Mid-70 and IMU Calibration

Hi Team,

Great work. I am struggling with the calibration of the Livox Mid-70 and an IMU. Which package or process would you suggest for tackling this issue? Thanks!
