
cgsaxner commented on June 9, 2024

Hi Nova!

I think your camera parameters look reasonable. The intrinsics as well as extrinsics of the cameras might be different for every HL2. The parameters I obtained and provide in this repo can be seen as a reference, and might even work for other HL2s, but there is no guarantee.

As for the larger differences, e.g. cx and cy being approximately swapped, and the sign difference in the extrinsic matrix, these can be explained by rotating the images. I calibrated both with the images "as are", and with rotated images, and obtained similar results as yours with the rotated images.
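To make the rotation effect concrete, here is a rough sketch (Python/NumPy, not part of the plugin) of how pinhole intrinsics transform under a 90° image rotation; this is why cx and cy appear roughly swapped. It ignores the remapping of the distortion coefficients and assumes the simple pinhole model:

```python
import numpy as np

def rotate_K_90cw(K, height):
    """Intrinsics after rotating the image 90 deg clockwise.

    A pixel (u, v) in the original image maps to (height - 1 - v, u)
    in the rotated image, so fx and fy swap and the principal point
    moves accordingly. `height` is the original image height in pixels.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([[fy, 0.0, height - 1 - cy],
                     [0.0, fx, cx],
                     [0.0, 0.0, 1.0]])

def rotate_K_90ccw(K, width):
    """Intrinsics after rotating the image 90 deg counter-clockwise:
    (u, v) maps to (v, width - 1 - u)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([[fy, 0.0, cy],
                     [0.0, fx, width - 1 - cx],
                     [0.0, 0.0, 1.0]])
```

Calibrating on rotated images recovers these values directly, of course; the functions just show why the two sets of numbers are consistent with each other.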

When using your calibration parameters, note that the plugin does NOT rotate the images. So you would have to modify the plugin to rotate the images or detected blobs accordingly, otherwise, the 3D reconstruction will not work!
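Concretely, if you keep the plugin feeding un-rotated frames, the detected blob coordinates have to be remapped between the two conventions. A minimal sketch (Python, hypothetical helper names) for the 90° clockwise case:

```python
def blob_as_is_to_rotated_cw(u, v, height):
    """Map a blob detected in the un-rotated frame to its position in
    the frame rotated 90 deg clockwise. `height` is the un-rotated
    image height in pixels."""
    return height - 1 - v, u

def blob_rotated_cw_to_as_is(u_r, v_r, height):
    """Inverse mapping: rotated-frame coordinates back to the
    un-rotated frame."""
    return v_r, height - 1 - u_r
```

Whichever direction you pick, the intrinsics used for triangulation must match the convention of the coordinates you feed in.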

Not a trivial question at all; camera calibration is essential for accurate tracking, so this is actually a key step :)

from hl2tooltracking.

nocardio commented on June 9, 2024

Hi cgsaxner, thank you for your quick reply 👍

Just as you mentioned, I modified the plugin to rotate the images and detect blobs accordingly. Using my existing calibration, I can successfully calculate the triangulated points and match them to the points of the instrument markers.
Now that I have the estimated pose of the marker, do you think I need to rotate/invert the matrix, or just use it "as is" with your code?

Back to the calibration process, I also tried to follow the same calibration procedure as yours. I got similar intrinsic parameters to yours (cx and cy are not swapped). However, I got a really weird extrinsic. Do you perhaps know what I am missing here?

"K1": [[374.7974483204609, 0.0, 326.73365358676296], [0.0, 373.81077210981357, 245.79527427004624], [0.0, 0.0, 1.0]],
"D1": [[-0.015265481008251279, 0.06359039857765747, 0.002794160375673042, 0.005908780510357354, 0.0]],
"K2": [[372.7694238525443, 0.0, 318.18290019438365], [0.0, 372.2345752817822, 233.61868821911386], [0.0, 0.0, 1.0]],
"D2": [[-0.030133088719034466, 0.08979537726539062, -0.0030097401391500096, -0.003940485405324927, 0.0]],
"R": [[0.4292777049358266, 0.38645072182454976, 0.8163188664035115], [-0.29300277251440143, -0.7953622972622787, 0.530611148952361], [0.8543243104020297, -0.4669632273476736, -0.22820016862645276]],
"T": [[-0.359926220903595], [-0.47292623355191216], [0.9347921061650112]],

Once again, thank you very much ^^


cgsaxner commented on June 9, 2024

I think the last thing you need to consider is the rigToWorld / camToWorld transform, which is needed to transform the detected 3D points from the camera reference frame to the world reference frame. Since you effectively change the camera reference frame by rotating the images, you would also need to rotate this transform by 90° around the z-axis.
However, I'm not 100% sure; I would recommend writing some offline evaluation code using recordings or known points and just trying it out. This additional complexity is the reason why I abandoned rotating the images in the first place :)
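As a sketch of that composition (NumPy; `cam_to_world` stands for whatever 4x4 matrix the plugin obtains from the rig pose, and the sign of the 90° is exactly the part to verify offline):

```python
import numpy as np

def rz(deg):
    """Homogeneous 4x4 rotation about the z-axis."""
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

def cam_to_world_rotated(cam_to_world, deg=90.0):
    """Adjust camToWorld when the camera frame was redefined by
    rotating the images: first map points from the rotated camera
    frame back into the original camera frame, then into world.
    Try both deg=+90 and deg=-90 against known points."""
    return cam_to_world @ rz(deg)
```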

As for the calibration problem, indeed, the extrinsic matrix looks strange. Not sure what is going wrong there without seeing the data. Are you using the same images as for the flipped calibration, just without rotating them?
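For what it's worth, two quick sanity checks on the numbers you posted (NumPy): the rotation angle encoded in R, via angle = arccos((trace(R) - 1) / 2), and the baseline |T|. For rotated, nearly parallel images you would expect an angle near 0°, for the as-is images something near 180° (the two frames differ by roughly 180° about the optical axis), and a baseline on the order of 0.1 m:

```python
import numpy as np

# Extrinsics as posted above.
R = np.array([[0.4292777049, 0.3864507218, 0.8163188664],
              [-0.2930027725, -0.7953622973, 0.5306111490],
              [0.8543243104, -0.4669632273, -0.2282001686]])
T = np.array([-0.3599262209, -0.4729262336, 0.9347921062])

# Angle of the relative rotation between the two camera frames.
angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)))
baseline = np.linalg.norm(T)

print(f"angle = {angle:.1f} deg, baseline = {baseline:.3f} m")
# angle ≈ 143 deg and baseline ≈ 1.1 m: neither is plausible for the
# HL2 front camera pair, which points at bad correspondences or a
# mixed-up image order rather than a small parameter error.
```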


nocardio commented on June 9, 2024

Hi, sorry for the late reply.
For the weird extrinsic result (flipped calibration), I used the "as-is" images without additional rotation, with the left and right images originally rotated by 90 degrees counter-clockwise and clockwise, respectively.


commented on June 9, 2024

Hello,
I have a question about how to compute the transformation from the right camera to the left camera.

// Code from instrument tracker.h where we have to place the calibration information

	// definition of the model points
	cv::Mat pointsModel = (cv::Mat_<float>(5, 3) <<
		2.57336e-7f, 2.42391e-6f, 3.36463e-6f,
		0.0442368f, 0.0349529f, 0.0726091f,
		-0.0936104f, 0.0113667f, 0.0732692f,
		0.000176568f, -0.000130322f, 0.108399f,
		-0.00748328f, 0.0856192f, 0.0642646f);

	// definition of camera parameters
	float lfFxHL2 = 362.76819f;
	float lfFyHL2 = 363.63627f;
	float lfCxHL2 = 316.65636f;
	float lfCyHL2 = 231.20529f;
	cv::Mat lfDistHL2 = (cv::Mat_<float>(1, 5) <<
		-0.01818f, 0.01685f, -0.00494f, 0.00170f, 0.0f);

	float rfFxHL2 = 364.68835f;
	float rfFyHL2 = 364.63003f;
	float rfCxHL2 = 321.11678f;
	float rfCyHL2 = 233.32648f;
	cv::Mat rfDistHL2 = (cv::Mat_<float>(1, 5) <<
		-0.02002f, 0.01935f, -0.00306f, -0.00216f, 0.0f);
	// ---------How to find this matrix? Rf2Lf ------------
	cv::Mat rfToLf = (cv::Mat_<float>(4, 4) <<
		-0.999988716987f, 0.004749329171f, 0.000098853715f, -0.001400949679f,
		-0.004750308829f, -0.999856389093f, -0.016267629061f, -0.098276565847f,
		0.000021579193f, -0.016267915098f, 0.999867668481f, 0.002508806869f,
		0.0f, 0.0f, 0.0f, 1.0f);

Using OpenCV camera calibration, I was able to find the intrinsic parameters and distortion coefficients for both cameras.
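Regarding rfToLf: cv::stereoCalibrate also returns R and T, which map points from the first camera's frame into the second one's, i.e. x_2 = R x_1 + T. If you pass the left camera as the first one (an assumption; check your call order), rfToLf is the inverse of that transform. A sketch in Python/NumPy; the C++ version is the same arithmetic:

```python
import numpy as np

def rf_to_lf(R, T):
    """Build the 4x4 right-to-left transform from stereo calibration
    output, assuming the OpenCV convention x_right = R @ x_left + T
    (left camera passed as the first camera). The right-to-left map
    is then the inverse: x_left = R.T @ (x_right - T)."""
    M = np.eye(4)
    M[:3, :3] = R.T
    M[:3, 3] = -R.T @ np.asarray(T, dtype=float).reshape(3)
    return M
```

As a sanity check, composing rf_to_lf with the forward transform should give the identity, and the translation magnitude should be close to the roughly 0.1 m baseline visible in the rfToLf matrix above.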

Thanks in advance!

