- 👋 Hi, I’m @ZhenghaoFei, interested in robotics, especially robots in agriculture.
zhenghaofei / visual_odom
This repository is a C++ OpenCV implementation of Stereo Visual Odometry.
License: MIT License
Hi, Zhenghao! Thanks for sharing your code.
I modified your code to save the estimated trajectory, and I tested sequences 00–10 of the KITTI dataset with the modified code. My procedure was the same for every sequence, but what bothers me is that the results on sequences 04 and 06 are not right: the error is very large. I don't think the code should produce such bad results, so I am puzzled by the current situation. Have you tested on the other KITTI sequences? Have you encountered this situation? I couldn't find the cause.
My estimated results on KITTI are as follows:
I am working on a project focused on feature selection and would like to use your stack as a foundation, since it appears to be very robust for VO. My application is an RC-scale car with a ZED camera, and I have modified your software to work with this platform.

One issue I am having is that the estimated-motion red line does not "draw" when the car moves (I am not using the KITTI set or ground truth). My scale is very small, ~0.001, so frame_pose changes only by very small values, which I think is why there is no noticeable change in the position of the red dot in the display. Could the small scale value be related to the fact that an RC car's motion is much smaller than that of the full-size car in the KITTI data?

Additionally, each line of points3D follows the format -#.#####, -#.#####, #.#####. I assume this is X, Y, Z in 3D space, but it is unclear where the origin of this space is (left camera frame, baseline center, etc.). The Z distance appears to be forward-positive (I have not evaluated accuracy), but X and Y are always negative. Since the features are distributed across the image frame, I would have expected a mostly even split of X, Y signs (++, +-, -+, --) over the four image quadrants. I'm not sure whether this also contributes to the scale I am seeing.

Do you have any thoughts on what may be causing these issues? I plan to cite your work in my paper and will be glad to share the results with you when it is complete. Thank you
EDIT: I should add that I am using the master branch.
Hi
I apologize for opening an issue here; I have a question rather than an issue.
When I plot the 3D pose, the trajectory starts to drift in the Z direction very quickly. Could you help me understand why this happens and how to solve it?
I am attaching a file that shows the ground truth versus the estimated trajectory: the orange line is the ground truth and the blue line is the estimate.
First, thank you for your project; it helped me learn visual odometry. I see this program can be tested with an RGB-D dataset, but I can't find where the RGB-D sequence is.
Can you tell me where I can find and use an RGB-D dataset? Thanks very much!
cerr << "Usage: ./run path_to_sequence(rgbd for using intel rgbd) path_to_calibration [optional]path_to_ground_truth_pose" << endl;
Hi professor
I would first like to thank you for sharing such important code. I ran this code, but the results I obtained on KITTI are much worse than those published in the SOFT-VO paper. Could you explain the reason for that (perhaps some parts are missing from this code)?
Regards
Hi! Thank you for making this code available. I have it running on the KITTI data, and it looks great in rotation and in X and Y translation (left/right and forward/back). In the Y axis (up/down), however, it is inverted (down where it should be up) and drifts off the ground truth straight away. Did you see this same behavior? Can you think of anything I can do to improve this result?
Thanks again.
Hello
So I got this error:
"Input parameters must be matrices in function cvTriangulatePoints"
Then I looked at the cvTriangulatePoints implementation and found:
if( !CV_IS_MAT(projMatr1) || !CV_IS_MAT(projMatr2) ||
    !CV_IS_MAT(projPoints1) || !CV_IS_MAT(projPoints2) ||
    !CV_IS_MAT(points4D) )
    CV_Error( CV_StsUnsupportedFormat, "Input parameters must be matrices" );
source: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/triangulate.cpp
Then I checked your code in main.cpp: you are passing vectors, not Mat, to the triangulation function:
std::vector<cv::Point2f> pointsLeft_t0, pointsRight_t0, pointsLeft_t1, pointsRight_t1;
cv::triangulatePoints(projMatrl, projMatrr, pointsLeft_t0, pointsRight_t0, points4D_t0);
I'm not sure about this, but could you please check it?
Thank you
Hi, thanks for sharing the code!
But I have a question about the function trackingFrame2Frame() in src/visualOdometry.cpp. In that function, the camera intrinsic matrix is:
cv::Mat intrinsic_matrix = (cv::Mat_<float>(3, 3) <<
    projMatrl.at<float>(0, 0), projMatrl.at<float>(0, 1), projMatrl.at<float>(0, 2),
    projMatrl.at<float>(1, 0), projMatrl.at<float>(1, 1), projMatrl.at<float>(1, 2),
    projMatrl.at<float>(1, 1), projMatrl.at<float>(1, 2), projMatrl.at<float>(1, 3));
But I think the camera intrinsic matrix should be

    fx   0  cx
     0  fy  cy
     0   0   1

so the matrix should be
cv::Mat intrinsic_matrix = (cv::Mat_<float>(3, 3) <<
    projMatrl.at<float>(0, 0), projMatrl.at<float>(0, 1), projMatrl.at<float>(0, 2),
    projMatrl.at<float>(1, 0), projMatrl.at<float>(1, 1), projMatrl.at<float>(1, 2),
    projMatrl.at<float>(2, 0), projMatrl.at<float>(2, 1), projMatrl.at<float>(2, 2));
Hello,
I want to run SOFT-SLAM on the images published by a ZED stereo camera's ROS topics. There is a ROS branch; however, I did not really understand how to use it with ROS. It would be very helpful if you could write a short guide on how to use this with ROS. Thank you
Hi Zhenghao,
In feature.h you create a struct FeaturePoint, and in main.cpp you define the variables oldFeaturePointsLeft and currentFeaturePointsLeft, but it doesn't appear that these variables are ever used. I assume the intention was to describe features with unique identifiers and age, as described by Fig. 2 in the Cvisic paper. Can you confirm that this struct is not used? It appears that feature age is implemented in the FeatureSet struct, and that features are identified by row when necessary.
Thanks,
Chris
I noticed that this code never runs bundleAdjustment when doing visual odometry? Also, you split the image into a 10×10 grid with at most one point per cell, so a single image has at most 100 feature points. I think that is far too few.