jakarto3d / py-ocamcalib

License: GNU General Public License v2.0
Hello Mr. Vazquez!
First and foremost, thank you for open-sourcing this wonderful Python implementation of OCamCalib. It's been incredibly helpful for fisheye camera calibration and I truly appreciate the effort you've put into it.
I've noticed that the repository doesn't currently have an overall license. Some files have the GNU GPLv2 header, but not all. Specifically, the following files are without a license header:
I am using your work at an academic institution for research, and we would like to open-source the code, our modifications to it, and the results responsibly. Having a clear license on all files necessary to perform the calibration would guide me on how best to do this.
Would you consider adding a license to these files or possibly the entire repository?
Thank you once again for your work!
Best regards,
Valentin Bauer
How can I calculate K and D in OpenCV's intrinsic parameter format from the polynomial coefficients and stretch matrix?
Great work!!
How do I convert from the camera perspective (pixel coordinates) to world coordinates for a bird's-eye view? How do I get the corresponding R and T?
Your work is perfect. I want to know the difference between the fov parameter in your undistort code and the ScaleFactor parameter in MATLAB's undistortFisheyeImage function. Thank you!
Hi Hugo!
I hope you're well. I'm curious about the fisheye camera's maximum angle of view: is there a limit beyond which the calibration no longer works effectively? Are there other specific guidelines I should follow, like keeping the chessboard aligned with the horizon or maintaining the pitch and roll of the images?
I've observed a significant curve in the center of my image where it should be straight. Do you think I might be missing something? Just for reference, the fisheye lens has an AoV of 200º.
Thank you for your guidance!
Hello Hugo!
I've observed a systematic corner detection issue for a specific fisheye camera calibration. Corners are frequently detected with a slight offset, which may be linked to the combination of quick detection on downsized/thresholded images and a static win_size = (5, 5) in CalibrationEngine.detect_corners().

To address this, I suggest calculating win_size for each image as half the minimum distance between all detected corners in that image. The rationale is to allow corners to be refined accurately while reducing the chance of snapping to a nearby, incorrect corner. Here's a parameterized implementation approach:
    # src/pyocamcalib/modelling/calibration.py
    from typing import Union

    import cv2 as cv
    import numpy as np
    from scipy.spatial.distance import cdist

    # ...

    def detect_corners(self, window_size: Union[str, int], check: bool = False, max_height: int = 520):
        # ...
        # after the call to cv.findChessboardCornersSB()
        if ret:
            corners = np.squeeze(corners)
            corners[:, 0] *= r_w
            corners[:, 1] *= r_h
            if window_size == "auto":
                # Distances between all corners: a len(corners) x len(corners) matrix.
                pairwise_distances = cdist(corners, corners, 'euclidean')
                # Keep only _one_ distance for each pair of corners, and discard distances
                # between a corner and itself, by taking the upper triangular part of the
                # matrix (k=1 excludes the diagonal).
                pairwise_distances = pairwise_distances[np.triu_indices(pairwise_distances.shape[0], k=1)]
                # The minimum distance between any two corners gives a good estimate of the window size.
                distance_min = np.min(pairwise_distances)
                # Use half the minimum distance, truncated towards zero since the window size
                # must be an integer; never go below the current default of 5 pixels.
                win_size = max(int(distance_min / 2), 5)
            else:
                win_size = window_size
            zero_zone = (-1, -1)
            criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_COUNT, 40, 0.001)
            corners = np.expand_dims(corners, axis=0)
            cv.cornerSubPix(gray, corners, (win_size, win_size), zero_zone, criteria)
Here are some visual comparisons showing current detection vs. proposed:
I've assembled various datasets from the images in test_images, Scaramuzza's example images in OCamCalib, custom images, and one set of images uploaded by bapossatto to their fork of this repository. I pushed the images through the parameter estimation pipeline without making manual corrections to the detected corners.
Impact on Reprojection RMSE: Most datasets showed an improved or similar Bundle Adjustment RMSE with the proposed change. Errors were normalized by the image diagonal to allow comparison between large and small images.
Performance Evaluation: The proposed change didn't substantially alter the calibration runtime.
Only the Gopro dataset (from test_images) had a considerably worse Bundle Adjustment RMSE. I've identified several wrong detections of the chessboard corners (GOPR5.jpg, GOPR7.jpg, GOPR9.jpg and GOPR10.jpg). The proposed change worsens those wrong detections; I suspect the corners get moved even further because of the bigger window size. You can see an example of this effect on the GOPR5.jpg image:

After removing the four badly detected images, both normalized RMSEs (after Bundle Adjustment) increased for the Gopro dataset. I think this is to be expected with fewer calibration images. The normalized RMSE with the proposed change was 0.75 vs. 1.04 for the default window size.
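For clarity, the diagonal normalization used in these comparisons can be sketched as follows. This is a minimal sketch: the function name and the shape of the residual array are my own assumptions, and the figures reported above may additionally use a fixed scale factor.

```python
import numpy as np

def normalized_rmse(residuals: np.ndarray, image_size: tuple) -> float:
    """RMSE of per-corner reprojection errors, normalized by the image diagonal.

    residuals:  (N, 2) array of reprojection errors in pixels (an assumed layout).
    image_size: (width, height) of the calibration images.
    """
    # Root mean square of the per-corner Euclidean error norms.
    rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
    # Dividing by the diagonal makes errors comparable across image sizes.
    diagonal = np.hypot(image_size[0], image_size[1])
    return rmse / diagonal
```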
Based on these findings, I believe this dynamic approach to determining win_size could be a beneficial enhancement to the current implementation, with the caveat that a very bad initial detection by cv.findChessboardCornersSB() can cause an even worse estimate of the parameters. I would argue, though, that those bad detections should be caught by the necessary inspection of the detected corners.
I'd love to hear your thoughts on this proposed change and any potential implications I might have overlooked. If you're open to it, I'd be happy to submit a pull request with the necessary modifications.
Best regards,
Valentin