
Comments (5)

darrenjkt commented on July 17, 2024

The calibration needs to be done on an undistorted image. See below. You need to figure out how to undistort your specific camera image if the provided one does not work. No need to undistort the lidar points.

```cpp
if (i_params.fisheye_model)
{
  // Undistort the image by applying the fisheye intrinsic parameters.
  // The final input param is the camera matrix in the new or rectified coordinate frame.
  // We set this to i_params.cameramat, or else it defaults to an empty matrix.
  std::vector<cv::Point2d> corners_undistorted;
  cv::fisheye::undistortPoints(corners, corners_undistorted, i_params.cameramat, i_params.distcoeff,
                               i_params.cameramat);
  cv::solvePnP(corners_3d, corners_undistorted, i_params.cameramat, cv::noArray(), rvec, tvec);
  cv::fisheye::projectPoints(corners_3d, inner_cbcorner_pixels, rvec, tvec, i_params.cameramat, i_params.distcoeff);
  cv::fisheye::projectPoints(board_corners_3d, board_image_pixels, rvec, tvec, i_params.cameramat,
                             i_params.distcoeff);
}
```

When I project into the image in assess_calibration.cpp, I choose to leave the image distorted and instead distort the lidar points according to the fisheye model. The alternative way to project lidar to image is to undistort the image and then project the lidar points into the undistorted image.
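To make the "distort the lidar points" direction concrete, here is a minimal Python sketch of the equidistant fisheye model that cv::fisheye::projectPoints applies per point (function and parameter names are my own, not from the codebase):

```python
import math

def fisheye_project(pt_cam, K, D):
    """Project a 3D point (camera frame, Z > 0) into a distorted fisheye image.

    K: 3x3 intrinsics as nested tuples ((fx, 0, cx), (0, fy, cy), (0, 0, 1)).
    D: (k1, k2, k3, k4) fisheye distortion coefficients.
    Mirrors the equidistant model used by cv::fisheye::projectPoints.
    """
    X, Y, Z = pt_cam
    x, y = X / Z, Y / Z                      # normalised pinhole coordinates
    r = math.hypot(x, y)
    theta = math.atan(r)                     # angle from the optical axis
    k1, k2, k3, k4 = D
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0
    xd, yd = x * scale, y * scale            # distorted normalised coordinates
    u = K[0][0] * xd + K[0][2]
    v = K[1][1] * yd + K[1][2]
    return u, v
```

Note that even with D all zeros the fisheye mapping is still nonlinear (theta versus tan(theta)), which is why the pinhole branch cannot simply be reused for a fisheye lens.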

If you can undistort your image, I think the calibration result would look a lot better. We need to solve PnP from the chessboard in the 3D world frame to the chessboard in the image plane; if the image plane is distorted, the transforms estimated by PnP will not be accurate, and this carries over to the other aspects of the calibration. Distortion is smallest in the middle of the frame, which is probably why your calibration is best there. Even if your chessboard is in the exact middle of the frame, its edges may still be distorted, which may be why your calibration at the middle is still not so good.

from cam_lidar_calibration.

mserour commented on July 17, 2024

Hello @darrenjkt

Great work on the tool; it's very easy to interface with.

If we publish an undistorted image from the camera driver, what should the values in the camera_info topic be in this case?

Also, my camera isn't fisheye; it's pinhole with three radial (k) distortion parameters and two tangential parameters. In the code snippet you attached you only undistort the image for the fisheye camera, not the pinhole camera. So should I publish an undistorted image from the camera driver and remove all distortion parameters from here?

```cpp
} else {
  // Pinhole model
  cv::solvePnP(corners_3d, corners, i_params.cameramat, i_params.distcoeff, rvec, tvec);
  cv::projectPoints(corners_3d, rvec, tvec, i_params.cameramat, i_params.distcoeff, inner_cbcorner_pixels);
  cv::projectPoints(board_corners_3d, rvec, tvec, i_params.cameramat, i_params.distcoeff, board_image_pixels);
}
```
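For reference, this is the distortion that cv::projectPoints applies in the pinhole branch when distcoeff is nonzero; a minimal Python sketch of the Brown-Conrady (plumb bob) model (names are mine, not from the codebase). With all coefficients zero it reduces to a plain pinhole projection, which is why a rectified image needs a zero D:

```python
def pinhole_project(pt_cam, K, D):
    """Brown-Conrady (plumb bob) projection, as cv::projectPoints applies it.

    K: 3x3 intrinsics as nested tuples; D: (k1, k2, p1, p2, k3),
    i.e. three radial and two tangential coefficients.
    """
    X, Y, Z = pt_cam
    x, y = X / Z, Y / Z                      # normalised pinhole coordinates
    k1, k2, p1, p2, k3 = D
    r2 = x*x + y*y
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)   # radial + tangential terms
    yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y
    u = K[0][0] * xd + K[0][2]
    v = K[1][1] * yd + K[1][2]
    return u, v
```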

Also, you use the distortion parameters in the optimizer to calculate the projection errors; should they be removed from there as well?

I'll appreciate your feedback.


darrenjkt commented on July 17, 2024

This codebase has really only been tested with a fisheye camera; I have not tested it on a pinhole camera despite there being code for it. You could try publishing the undistorted image from the camera driver. That's the easiest way to do it in my opinion, and hopefully it turns out well.

If you publish a rectified image, the distortion matrix (D) in the camera_info should also be all zeros. If D is all zeros, then you can leave that line of code as-is. Otherwise, simply feed in a zero matrix in place of distcoeff.
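For illustration, a rectified stream's camera_info would look something like this (all values here are made up; only the zero D is the point):

```yaml
# Illustrative camera_info for a rectified (undistorted) stream; values are examples only.
image_width: 1280
image_height: 720
distortion_model: plumb_bob
D: [0.0, 0.0, 0.0, 0.0, 0.0]   # all zeros for a rectified image
K: [700.0,   0.0, 640.0,
      0.0, 700.0, 360.0,
      0.0,   0.0,   1.0]
```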

Yes, you have to make sure the optimizer's projection errors are also computed with a zero matrix for the distortion coefficients.


mserour commented on July 17, 2024

Thanks @darrenjkt for your feedback

I've tried feeding a zero matrix for the distortion coefficients in this code snippet and in the optimizer, using the pinhole camera model, but the VOQ was very high during the optimization, ranging from 100 to 400. How could the distortion coeffs affect the optimization this severely?

Also, as you can see from the images I attached in the original post, my image is more distorted than the sample images in your readme. So I tried setting the model to fisheye with the same intrinsics and distortion coeffs from the pinhole camera, but the optimization resulted in a calibration even worse than the attached images. Could you guide me on how to use the package with the fisheye camera model?

I also wonder what you used for intrinsic estimation: ROS calibration or Matlab?

One last question: is it possible to feed an initial estimate to the optimizer?

Thanks again.


darrenjkt commented on July 17, 2024

Sorry, it took me a while to get back; I must've missed this response and been a bit busy.

There is already an initial estimation for the rotation fed to the optimizer which was estimated using matrix equations. Theoretically, this should be all you need but because of sensor errors and other physical factors, just using the maths doesn't give a great result and hence we optimise with other metrics, using this as a starting point.

```cpp
cv::Mat NN = camera_normals_.t() * camera_normals_;
cv::Mat NM = camera_normals_.t() * lidar_normals_;
cv::Mat UNR = (NN.inv() * NM).t();  // Analytical rotation matrix for real data

// euler holds the Euler angles extracted from UNR
const Rotation initial_rotation{ euler[0], euler[1], euler[2] };
```
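The matrix equations above are a least-squares fit of the rotation from the board-plane normals. A numpy sketch of the same computation (names are mine; it assumes each row of the normal matrices is one board normal, with lidar_normals[i] ≈ camera_normals[i] @ R):

```python
import numpy as np

def analytical_rotation(camera_normals, lidar_normals):
    """Least-squares rotation estimate from paired board-plane normals.

    camera_normals, lidar_normals: (n, 3) arrays, one board normal per row.
    Mirrors UNR = ((N_c^T N_c)^{-1} (N_c^T N_l))^T from the snippet above.
    """
    NN = camera_normals.T @ camera_normals   # 3x3 normal matrix
    NM = camera_normals.T @ lidar_normals
    return np.linalg.solve(NN, NM).T         # UNR
```

At least three non-coplanar board normals are needed for NN to be invertible, which is one reason the tool wants board poses spread around the frame.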

The initial estimation for translation is also fed to the optimizer using the centres of the boards in lidar and camera frames. See below.

```cpp
// Analytical translation
cv::Mat cp_trans = tmp_rot * camera_centres_.t();
cv::Mat trans_diff = lidar_centres_.t() - cp_trans;
cv::Mat summed_diff;
cv::reduce(trans_diff, summed_diff, 1, CV_REDUCE_SUM, CV_64F);
summed_diff = summed_diff / trans_diff.cols;
const RotationTranslation initial_rotation_translation{ best_rotation_, summed_diff.at<double>(0),
                                                        summed_diff.at<double>(1), summed_diff.at<double>(2) };
```
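In other words, the initial translation is the mean offset between the lidar board centres and the rotated camera board centres. A numpy sketch of that step (names are mine; centres are assumed to be (n, 3), one board per row):

```python
import numpy as np

def analytical_translation(rotation, camera_centres, lidar_centres):
    """Mean offset between lidar centres and rotated camera centres.

    rotation: (3, 3); camera_centres, lidar_centres: (n, 3), one board per row.
    Mirrors the cv::reduce sum-and-divide averaging in the snippet above.
    """
    cp_trans = (rotation @ camera_centres.T).T      # rotate camera centres
    return np.mean(lidar_centres - cp_trans, axis=0)
```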

I didn't do the intrinsic calibration for these cameras, but I'm aware that OpenCV has a calibration function.

