
joint-lidar-camera-calib's Introduction

joint-lidar-camera-calib

1 Introduction

Joint intrinsic and extrinsic LiDAR-camera calibration in targetless environments. This work aims to calibrate camera intrinsic and LiDAR-camera extrinsic parameters even without a chessboard, while maintaining accuracy comparable to that of target-based methods. Our method merely requires several textured planes in the scene. As textured planes are ubiquitous in urban environments, the method enjoys broad usability across diverse calibration scenes. A detailed description of the method can be found in our paper. The calibration pipeline is summarized in the following figure.

2 Installation

2.1 Prerequisites

Ubuntu
ROS
OpenCV
PCL
Eigen
Ceres Solver

2.2 Build

Clone the repository and catkin_make:

cd ~/$A_ROS_DIR$/src
git clone https://github.com/hku-mars/joint-lidar-camera-calib.git
cd ..
catkin_make
source devel/setup.bash

3 Sample data

Data can be downloaded from this link.

4 Usage

This calibration method works for both solid-state and mechanically spinning LiDARs. As for cameras, the current version supports the pinhole model with radial and tangential distortion.

4.1 Data Collection

The ideal calibration scene usually consists of multiple textured planes, as shown in the following figure. In urban environments, such scenes are ubiquitous. However, please note that at least three planes with non-coplanar normal vectors are needed. Otherwise, the scene leads to degeneration of point-to-plane registration and thus compromises the extrinsic calibration accuracy.

In general, users roughly know the extrinsic parameters of the sensor suite. Given initial extrinsic parameters (initial rotation error < 5 degrees, initial translation error < 0.5 m), 6~8 frames of data are typically enough to calibrate the parameters. We suggest keeping the sensor suite static when recording each frame, to avoid point cloud distortion and sensor synchronization problems. Change the yaw angle (by about 5 degrees) and the x-y translation (by about 10 cm) of the sensor suite slightly when recording a new frame. Note that pure translation (no rotation) should be avoided because it leads to degeneration of camera self-calibration. Continuous motion is also acceptable if the above-mentioned problems can be handled.

If an initial guess of the extrinsic parameters is unavailable, users can recover them using hand-eye calibration. Sample code (src/hand_eye_calib.cpp) and pose files (sample_data/hand_eye_calib) are provided.
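For intuition, below is a minimal, self-contained sketch of the classical hand-eye formulation A X = X B, where A and B are paired relative motions of the camera and the LiDAR and X maps points from the LiDAR frame to the camera frame. It is illustrative only and does not reproduce src/hand_eye_calib.cpp; the function name and interface are hypothetical.

// Hand-eye sketch (hypothetical; not src/hand_eye_calib.cpp).
// A[i], B[i]: relative camera and LiDAR motions of the same time step, as 4x4 transforms.
// Returns X = (R, t) with A[i] X = X B[i], i.e. the LiDAR-to-camera extrinsics.
#include <vector>
#include <Eigen/Dense>

Eigen::Matrix4d HandEyeCalib(const std::vector<Eigen::Matrix4d> &A,
                             const std::vector<Eigen::Matrix4d> &B) {
  // Rotation: the rotation axes satisfy axis(R_A) = R * axis(R_B); align them with Kabsch/SVD.
  Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
  for (size_t i = 0; i < A.size(); ++i) {
    Eigen::AngleAxisd aa_A(Eigen::Matrix3d(A[i].topLeftCorner<3, 3>()));
    Eigen::AngleAxisd aa_B(Eigen::Matrix3d(B[i].topLeftCorner<3, 3>()));
    H += aa_B.axis() * aa_A.axis().transpose();
  }
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
  if (R.determinant() < 0) {  // enforce a proper rotation (det = +1)
    Eigen::Matrix3d V = svd.matrixV();
    V.col(2) *= -1.0;
    R = V * svd.matrixU().transpose();
  }
  // Translation: (R_A - I) t = R t_B - t_A, stacked over all motion pairs, solved in least squares.
  Eigen::MatrixXd M(3 * A.size(), 3);
  Eigen::VectorXd b(3 * A.size());
  for (size_t i = 0; i < A.size(); ++i) {
    M.block<3, 3>(3 * i, 0) = A[i].topLeftCorner<3, 3>() - Eigen::Matrix3d::Identity();
    b.segment<3>(3 * i) = R * B[i].topRightCorner<3, 1>() - A[i].topRightCorner<3, 1>();
  }
  Eigen::Vector3d t = M.colPivHouseholderQr().solve(b);

  Eigen::Matrix4d X = Eigen::Matrix4d::Identity();
  X.topLeftCorner<3, 3>() = R;
  X.topRightCorner<3, 1>() = t;
  return X;
}

Note that the translation is only observable when the recorded motions contain at least two non-parallel rotation axes; motion with a single fixed rotation axis leaves the translation component along that axis unconstrained.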

4.2 Initialization

As shown in the calibration pipeline, the Initialization stage first conducts Camera Self-Calibration and LiDAR Pose Estimation.

4.2.1 Camera Self-Calibration

We use the open-source software COLMAP, and we have a video detailing how to use it. It can be accessed on YouTube and OneDrive.

4.2.2 LiDAR Pose Estimation

A slightly modified version of BALM2 is provided here. First, estimate each LiDAR pose using incremental point-to-plane registration (input your data path in config/registration.yaml):

roslaunch balm2 registration.launch

Next, conduct LiDAR bundle adjustment (input your data path in config/conduct_BA.yaml):

roslaunch balm2 conduct_BA.launch
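For intuition, the following is a minimal sketch of a point-to-plane residual of the kind used in the incremental registration above, written as a Ceres cost functor. It is not the BALM2 implementation; the struct name and the plane representation (unit normal n and offset d, so that n·x + d = 0 on the plane) are assumptions made for illustration.

// Hypothetical point-to-plane residual (illustration only; not the BALM2 code).
#include <Eigen/Dense>
#include <ceres/ceres.h>

struct PointToPlaneError {
  PointToPlaneError(const Eigen::Vector3d &p, const Eigen::Vector3d &n, double d)
      : p_(p), n_(n), d_(d) {}

  // q: unit quaternion (w, x, y, z) of the scan pose; t: its translation.
  template <typename T>
  bool operator()(const T *q, const T *t, T *residual) const {
    Eigen::Quaternion<T> rot(q[0], q[1], q[2], q[3]);
    Eigen::Matrix<T, 3, 1> p_world =
        rot * p_.cast<T>() + Eigen::Matrix<T, 3, 1>(t[0], t[1], t[2]);
    // Signed distance of the transformed point to the plane n . x + d = 0.
    residual[0] = n_.cast<T>().dot(p_world) + T(d_);
    return true;
  }

  static ceres::CostFunction *Create(const Eigen::Vector3d &p,
                                     const Eigen::Vector3d &n, double d) {
    return new ceres::AutoDiffCostFunction<PointToPlaneError, 1, 4, 3>(
        new PointToPlaneError(p, n, d));
  }

  Eigen::Vector3d p_, n_;
  double d_;
};

When such residuals are added to a ceres::Problem, the quaternion block should use a quaternion parameterization/manifold (ceres::QuaternionParameterization in older Ceres, ceres::QuaternionManifold in newer versions, both with the (w, x, y, z) ordering used here) so that it remains a unit quaternion during optimization.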

4.3 Joint Calibration

Organize your data folder as follows:

.
├── clouds
│   ├── 0.pcd
│   ├── ...
│   └── x.pcd
├── config
│   └── config.yaml
├── images
│   ├── 0.png
│   ├── ...
│   └── x.png
├── LiDAR_pose
│   └── lidar_poses_BA.txt
├── result
└── SfM
    ├── cameras.txt
    ├── images.txt
    └── points3D.txt

Then conduct Joint Optimization (input your data path in launch/calib.launch):

roslaunch joint_lidar_camera_calib calib.launch

Note that the step Refinement of Visual Scale and Extrinsic Parameters in the Initialization stage is also executed here.

4.4 Adaptability

4.4.1 Extrinsic Calibration Only

If you are very confident in your intrinsic parameters (e.g., you calibrated them using a standard target-based method) and only want to optimize the extrinsic parameters, set keep_intrinsic_fixed to true in config/config.yaml.

4.4.2 Other Pinhole Models

For the pinhole model with or without distortion, there are multiple combinations of camera intrinsic parameters, for instance (fx = fy, cx, cy), (fx = fy, cx, cy, k1, k2), (fx, fy, cx, cy, k1, k2, p1, p2), and so on. The current version of this repository implements (fx = fy, cx, cy, k1, k2), a commonly used set of parameters. To use other combinations, adapt the corresponding functions (all functions that involve camera projection) in include/calib.hpp to the specific intrinsic parameters.
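As a reference for such adaptations, here is a minimal sketch of the projection under the (fx = fy, cx, cy, k1, k2) model. It is illustrative only; the function name and signature are hypothetical, and the actual projection code lives in include/calib.hpp.

// Hypothetical projection of a camera-frame 3D point to pixel coordinates
// under the (fx = fy = f, cx, cy, k1, k2) model (radial distortion only).
#include <Eigen/Dense>

bool ProjectPoint(const Eigen::Vector3d &p_cam, double f, double cx, double cy,
                  double k1, double k2, Eigen::Vector2d *pixel) {
  if (p_cam.z() <= 0.0) return false;  // point is behind the camera
  const double x = p_cam.x() / p_cam.z();
  const double y = p_cam.y() / p_cam.z();
  const double r2 = x * x + y * y;
  const double distortion = 1.0 + k1 * r2 + k2 * r2 * r2;
  (*pixel)(0) = f * x * distortion + cx;
  (*pixel)(1) = f * y * distortion + cy;
  return true;
}

Switching to, say, (fx, fy, cx, cy, k1, k2, p1, p2) means decoupling fx and fy, adding the tangential terms, and updating every function (and analytic Jacobian, if any) that relies on this projection consistently.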

4.4.3 Fisheye/Panoramic Camera

We have not tested the performance on fisheye/panoramic cameras, so we are not sure about the calibration accuracy on them. Code adaptation is also required: users should adapt the corresponding functions (all functions that involve camera projection) in include/calib.hpp to the specific camera model and intrinsic parameters.

5 Debug

If you get unexpected calibration results on your own dataset, here are some tips for locating the problem(s).
(1) Visualize the accumulated point cloud after LiDAR BA. After conducting LiDAR BA (see Section 4.2.2), you might want to check the accuracy of the LiDAR poses. While it is challenging to quantitatively evaluate the pose accuracy from the poses stored in LiDAR_pose/lidar_poses_BA.txt, you can qualitatively evaluate it through the accumulated point cloud. The code transforms the point cloud of each frame into the first LiDAR frame and saves the result as clouds/cloud_all.pcd. Visualize it in CloudCompare/PCL Viewer and check the point cloud quality, e.g. whether the roads/walls are flat and thin enough.
(2) Make sure that the extrinsic parameters you provide in config/config.yaml are those that transform a point from the LiDAR frame to the camera frame. They should not be extrinsics that transform a point from the camera frame to the LiDAR frame or represent any other transformation.
(3) Make sure you have a good initial guess of the extrinsic parameters. Although the initial parameters will of course differ from the ground-truth values, they should not deviate too much from them. A rotation error of 5~10 degrees and a translation error of less than 0.5 m are generally acceptable. A good way to check the extrinsic parameters is to colorize the point cloud using the initial extrinsic and intrinsic parameters and visualize it in CloudCompare/PCL Viewer. We provide an implementation of colorization, i.e. the function void Calib::colorize_point_cloud in calib.hpp; a simplified sketch of the idea is given after this list.
(4) Check whether the refinement module (Section 4.3) works. Visualize the LiDAR point cloud (result/LiDAR_cloud.pcd) and the visual points (result/SfM_cloud_before_opt.pcd) in CloudCompare/PCL Viewer. They should align well, as shown in the following figure, where white points are the LiDAR point cloud and colored points are visual points.
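The sketch below shows the idea behind such a colorization check, assuming a plain pinhole model without distortion; the function name and interface are hypothetical, and the repository's own implementation is Calib::colorize_point_cloud in include/calib.hpp.

// Hypothetical colorization check: project each LiDAR point into the image with the
// initial extrinsics (R, t: LiDAR frame -> camera frame) and intrinsics (f, cx, cy),
// then copy the pixel color. Distortion is omitted for brevity.
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZRGB> ColorizeCloud(
    const pcl::PointCloud<pcl::PointXYZ> &cloud, const cv::Mat &image,
    const Eigen::Matrix3d &R, const Eigen::Vector3d &t,
    double f, double cx, double cy) {
  pcl::PointCloud<pcl::PointXYZRGB> colored;
  for (const auto &pt : cloud.points) {
    Eigen::Vector3d p_cam = R * Eigen::Vector3d(pt.x, pt.y, pt.z) + t;
    if (p_cam.z() <= 0.0) continue;  // behind the camera
    const int u = static_cast<int>(f * p_cam.x() / p_cam.z() + cx);
    const int v = static_cast<int>(f * p_cam.y() / p_cam.z() + cy);
    if (u < 0 || u >= image.cols || v < 0 || v >= image.rows) continue;
    const cv::Vec3b bgr = image.at<cv::Vec3b>(v, u);  // OpenCV stores pixels as BGR
    pcl::PointXYZRGB q;
    q.x = pt.x; q.y = pt.y; q.z = pt.z;
    q.r = bgr[2]; q.g = bgr[1]; q.b = bgr[0];
    colored.push_back(q);
  }
  return colored;
}

With a reasonable initial guess, the colors should land on the right structures (e.g. building facades take the facade texture); large, systematic misalignment indicates a poor initial extrinsic guess.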

6 Acknowledgements

In developing this work, we build on the state-of-the-art works COLMAP and BALM2.

7 License

The source code is released under the GPLv2 license.

For commercial use, please contact Dr. Fu Zhang [email protected].

joint-lidar-camera-calib's People

Contributors

llihku


joint-lidar-camera-calib's Issues

Unable to open config.yaml

Thanks for your amazing work. However, I got errors when running the command
roslaunch joint_lidar_camera_calib calib.launch.

process[calib-2]: started with pid [85717]
[ERROR] [1699574319.919059511]: Unable to open xxx/joint-lidar-camera-calib/data_/00/config/config.yaml
[ERROR] [1699574319.919465603]: Check configuration file.

The config/config.yaml file shown in the directory tree is not provided in this project. Since it contains default parameters, could you share it on GitHub? I would really appreciate your reply.

Hello, thank you very much for open-sourcing the code of this work. I have a few questions about it and would appreciate your answers.

1. For a 32-line mechanically spinning LiDAR, should the data be collected continuously, or should the suite be moved and then a single frame recorded, repeating until 6~8 frames are obtained? If the data are collected continuously, are there any restrictions on the camera type?
2. The data collection instructions say that small motions in the x-y plane and in yaw are needed to avoid degeneration. Do the directions have to be exactly these, or could one instead make small motions in the x-z plane and in pitch?
3. Could you add more details on how to obtain the camera poses with COLMAP?

Looking forward to your reply; thank you.

About the selection of calibration scene

[image]

In section 4.1, it says "The ideal calibration scene usually consists of multiple textured planes", but I think the floor plane in the above image is not a textured plane. Am I right?

calib.refine_scale_ext(): this step reports an error

[ INFO] [1705077562.793562945]: SfM point cloud saved as /home/wyw/ROS1_PROJECT/2023/camera_lidar_new/joint_camera_lidar/data/result/SfM_cloud_init.pcd
[ INFO] [1705077563.347440279]: LiDAR point clouds saved as /home/wyw/ROS1_PROJECT/2023/camera_lidar_new/joint_camera_lidar/data/result/LiDAR_cloud.pcd

[ INFO] [1705077563.359045062]: Initial visual scale recovered. Value: 0.134602
calib: /usr/include/eigen3/Eigen/src/Core/DenseCoeffsBase.h:425: Eigen::DenseCoeffsBase<Derived, 1>::Scalar& Eigen::DenseCoeffsBase<Derived, 1>::operator()(Eigen::Index) [with Derived = Eigen::Matrix<double, 3, 1>; Eigen::DenseCoeffsBase<Derived, 1>::Scalar = double; Eigen::Index = long int]: Assertion `index >= 0 && index < size()' failed.
[calib-1] process has died [pid 1377675, exit code -6, cmd /home/wyw/ROS1_PROJECT/2023/camera_lidar_new/joint_camera_lidar/devel/lib/joint_lidar_camera_calib/calib __name:=calib __log:=/home/wyw/.ros/log/06631870-b0fe-11ee-8f7d-4d0d94364953/calib-1.log].
log file: /home/wyw/.ros/log/06631870-b0fe-11ee-8f7d-4d0d94364953/calib-1*.log

Calibration Performance - Implementation on KITTI Dataset

Hello,

I just wanted to say thanks for the fantastic work you've been doing. It's been a real pleasure to go through.
I've got a bunch of questions about joint LiDAR calibration with the Velodyne HDL-64:

  • Would it be possible for you to provide the source code you utilized in experiments with the KITTI Dataset? I believe I can learn a lot from this implementation.

  • I have observed that when I change camera_coeffs in config/config.yaml, nothing changes; in the code it does not seem to affect anything. Why do we provide camera_coeffs (the intrinsic parameters)? Also, I know the intrinsic parameters beforehand and want to treat them as ground truth. How can I do that?

  • When I tried to do the calibration on my own using a custom dataset, I ran into some issues. The calibration results were quite different from what's suggested in the paper.
    For example, my ground-truth translation was [0.303, 0.596, -1.890], but I ended up with [3.16, 0.642, -4.33]. Also, the ground-truth rotation, expressed as Euler angles in radians, was [-1.572, 0.032, -0.787], but my result was [-1.4831931, 0.1939253, -1.2888357]. (I converted the rotation matrices to Euler angles for easier comparison.)

Data Collection:

  • Initially, I thought there might be an issue with my data collection. So, I tried various scenes with different camera-lidar translations and rotations, making sure to adhere to the "at least three planes constraint." What should I focus on during data collection to improve calibration results?

  • I found that forward motion is problematic for COLMAP's reconstruction. To avoid this, I tried to minimize forward motion in my scenes, but it didn't seem to help.
    Reference: colmap/colmap#1490

  • I experimented with using more than 8 images to see if it would improve reconstruction quality. However, it didn't seem to affect the self-calibration process. Is it better or worse to use more images for self-calibration?

If you have any thoughts or ideas on these issues, I'd really appreciate your input.

Thanks again! I'll be waiting for your response.

catkin_make error: undefined reference to tf::TransformBroadcaster::sendTransform; tf is missing from the CMakeLists.txt of BALM

If anyone meets the error: undefined reference to `tf::TransformBroadcaster::sendTransform during catkin_make, add tf to the CMakeLists.txt under the BALM directory. It seems this line was forgotten when the CMakeLists.txt was modified, since there is a tf dependency in BALM's package.xml.

find_package(catkin REQUIRED COMPONENTS
roscpp
rospy
std_msgs
pcl_conversions
pcl_ros
tf
)


Input point cloud has no data

Hello, first, thanks for sharing your work!

I tried with the provided example data and your software worked without issues, so I guess there are no installation problems. Then I tried with my own data: I was able to run the COLMAP software and the commands of the LiDAR Pose Estimation section without errors. However, I am receiving the following output when running the last command, roslaunch joint_lidar_camera_calib calib.launch:

terminate called after throwing an instance of 'pcl::IOException'
  what():  : [pcl::PCDWriter::writeBinary] Input point cloud has no data!

Here is my data with the folder structure; there must be something I am missing. I obtain the data by recording a rosbag and then converting it into a PCD. I am using a Blickfeld Cube 1 sensor; is the point cloud dense enough?
