
visual_odom's Introduction

  • 👋 Hi, I’m @ZhenghaoFei, interested in robotics, especially robots in agriculture

visual_odom's People

Contributors

ramsafin, temburuyk, zhenghaofei


visual_odom's Issues

About the KITTI estimated results

Hi, Zhenghao! Thanks for sharing your code.
I modified your code to save the estimated trajectory, and I tested the modified code on KITTI sequences 00-10. My operations on each sequence are the same, but what bothers me is that the results on sequences 04 and 06 are not right: the error is very large. I don't think the code should produce such bad results, so I am puzzled by this situation. Have you tested on the other KITTI sequences? Have you encountered anything like this? I couldn't find the reason for it.
My estimated KITTI results are as follows: [trajectory plots for sequences 00-10]

estimated motion not presenting in display

I am working on a project focused on feature selection and would like to use your stack as a foundation, since it appears to be very robust for VO. My application is an RC-sized car using a ZED camera, and I have made modifications to adapt your software to my platform. The issue I am having is that the estimated-motion red line does not "draw" when the car moves (I am not using the KITTI set or ground truth). My scale is very small, ~0.001, so frame_pose changes only by very small values, which I think is why there is no noticeable change in the position of the red dot in the display. Could the small scale value be related to the fact that an RC car's motion is much smaller than that of the full-size car in the KITTI data?

Additionally, in points3D each line follows the format -#.#####, -#.#####, #.#####. I assume the format is X, Y, Z in 3D space, but it is unclear where the origin of this space is (left camera frame, baseline center, etc.). The Z distance appears to be forward-positive (I have not evaluated accuracy), but X and Y are always negative. Since the features are distributed across the image frame, I would have expected a roughly even split of XY signs (++, +-, -+, --) across the four frame quadrants. I'm not sure whether this also contributes to the scale I am seeing.

Do you have any thoughts on what may be causing these issues? I plan to cite your work in my paper and will be glad to share the results with you when it is complete. Thank you

EDIT: I should add that I am using the master branch.
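The coordinate-frame question above comes up often with this kind of stereo pipeline: when the left projection matrix is used for triangulation, points are typically expressed in the left camera's frame (X right, Y down, Z forward, origin at the left optical center). A minimal back-projection sketch (all intrinsic values here are hypothetical, not taken from this repository) shows how the signs of X and Y follow from a pixel's position relative to the principal point:

```cpp
#include <cassert>
#include <cmath>

// Back-project a pixel (u, v) at depth Z into the left-camera frame.
// Convention: origin at the left camera's optical center, X right,
// Y down, Z forward. fx, fy are focal lengths; (cx, cy) is the
// principal point. All numeric values used below are hypothetical.
struct Point3 { double X, Y, Z; };

Point3 backProject(double u, double v, double Z,
                   double fx, double fy, double cx, double cy) {
    return { (u - cx) * Z / fx,   // negative when the pixel is left of cx
             (v - cy) * Z / fy,   // negative when the pixel is above cy (Y is down)
             Z };
}
```

Under this convention, features spread across the image should indeed produce a mix of X and Y signs, so uniformly negative X and Y would point to a mis-set principal point or projection matrix rather than to the convention itself.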

Why does the trajectory drift so much in 3D?

Hi

I apologize for opening an issue here; I have a question rather than an issue.

If you plot the 3D pose, the trajectory starts to drift in the Z direction very quickly. Could you help me understand why this happens and how to solve it?

I am attaching a file that shows ground truth vs. estimated trajectory. The orange line is the ground truth and the blue line is the estimated trajectory.

trajectory

How to use the RGB-D sequence?

First, thank you for your project; it helped me learn visual odometry. I see this program can be tested with an RGB-D dataset, but I can't find the RGB-D sequence.
Can you tell me where I can find and use the RGB-D dataset? Thanks very much!
cerr << "Usage: ./run path_to_sequence(rgbd for using intel rgbd) path_to_calibration [optional]path_to_ground_truth_pose" << endl;

About the obtained results

Hi Professor,

I would first like to thank you for sharing such important code. I ran this code, but the results I obtained on KITTI are much worse than those published in the SOFT-VO paper. Could you explain the reason for that (maybe some parts are missing from this code)?

Regards

X and Z axes look great, Y is much worse...

Hi! Thank you for making this code available. I have it running on the KITTI data, and it looks great in rotation and in translation along X and Z (left/right and forward/back). In the Y axis (up/down), it is inverted (down where it should be up) and drifts off the ground truth straight away. Did you see this same behavior? Can you think of anything I can do to improve this result?

Thanks again.

Results on KITTI 00 dataset

Hello,
When I run your code, this is the result I get on KITTI sequence 00:
kitti_00

Your results seem to be better. Did you use the same calibration file when you recorded the demo video? Does anyone else get the same result, or am I doing something wrong?
Note: I am using the grayscale dataset.

CV_StsUnsupportedFormat "Input parameters must be matrices" in cvTriangulatePoints

Hello
So I got this error:
"Input parameters must be matrices in function cvTriangulatePoints"
Then I looked at the cvTriangulatePoints implementation and found:
if( !CV_IS_MAT(projMatr1) || !CV_IS_MAT(projMatr2) || !CV_IS_MAT(projPoints1) || !CV_IS_MAT(projPoints2) || !CV_IS_MAT(points4D) ) CV_Error( CV_StsUnsupportedFormat, "Input parameters must be matrices" );
source: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/triangulate.cpp
Then I checked your code in main.cpp: you are passing vectors to the triangulation function, not cv::Mat:
std::vector<cv::Point2f> pointsLeft_t0, pointsRight_t0, pointsLeft_t1, pointsRight_t1; cv::triangulatePoints(projMatrl, projMatrr, pointsLeft_t0, pointsRight_t0, points4D_t0);

I'm not sure about this, but could you please check it?
Thank you

About the camera intrinsic matrix in trackingFrame2Frame() of src/visualOdometry.cpp

Hi, thanks for sharing the code!
But I have a question about the function trackingFrame2Frame() in src/visualOdometry.cpp. In that function, your camera intrinsic matrix is:
cv::Mat intrinsic_matrix = (cv::Mat_<float>(3, 3) << projMatrl.at<float>(0, 0), projMatrl.at<float>(0, 1), projMatrl.at<float>(0, 2), projMatrl.at<float>(1, 0), projMatrl.at<float>(1, 1), projMatrl.at<float>(1, 2), projMatrl.at<float>(1, 1), projMatrl.at<float>(1, 2), projMatrl.at<float>(1, 3));
But I think the camera intrinsic matrix should be

fx  0 cx
 0 fy cy
 0  0  1

so the matrix should be:
cv::Mat intrinsic_matrix = (cv::Mat_<float>(3, 3) << projMatrl.at<float>(0, 0), projMatrl.at<float>(0, 1), projMatrl.at<float>(0, 2), projMatrl.at<float>(1, 0), projMatrl.at<float>(1, 1), projMatrl.at<float>(1, 2), projMatrl.at<float>(2, 0), projMatrl.at<float>(2,1), projMatrl.at<float>(2, 2));
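The proposed fix amounts to copying the left 3x3 block of the 3x4 projection matrix, since for a KITTI-style P = K[R|t] stored row-major the intrinsics occupy exactly that block. A minimal sketch with plain arrays standing in for cv::Mat (the P values in the test are KITTI-like examples, not taken from this repository's calibration files):

```cpp
#include <cassert>

// Extract the 3x3 intrinsic matrix K from a 3x4 projection matrix P.
// K is simply P's left 3x3 block, so the third row of K must come from
// P's third row -- P(2,0), P(2,1), P(2,2) -- not from a repeat of
// second-row entries as in the code quoted above.
void intrinsicsFromProjection(const float P[3][4], float K[3][3]) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            K[r][c] = P[r][c];
}
```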

Use SOFT-SLAM with ROS using ZED camera

Hello,
I want to run SOFT-SLAM on the images published by the ZED stereo camera's ROS topics. There is a ROS branch; however, I did not really understand how to use it with ROS. It would be very helpful if you could write a small guide on using this with ROS. Thank you

FeaturePoint struct

Hi Zhenghao,

In feature.h you create a struct FeaturePoint, and in main.cpp you define the variables oldFeaturePointsLeft and currentFeaturePointsLeft, but these variables never appear to be used. I assume the intention was to describe features with unique identifiers and ages, as described by Fig. 2 in the Cvisic paper. Can you confirm that this struct is not used? It appears that feature age is implemented in the FeatureSet struct, and that features are identified by row when necessary.

Thanks,

Chris

Did you delete part of the code?

I noticed that this visual odometry code never runs bundleAdjustment. Also, you divide the image into a 10x10 grid with at most one point per cell, which means an image has at most 100 feature points; I think that is too few.
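For context, the bucketing scheme described above can be sketched generically as follows. This is an illustration, not the repository's exact implementation; the grid dimensions and the per-cell cap are parameters, so keeping more than one feature per cell is a one-argument change:

```cpp
#include <cassert>
#include <vector>

// Feature bucketing: divide the image into gridCols x gridRows cells and
// keep at most maxPerCell features per cell, so features are spread evenly
// across the frame instead of clustering on high-texture regions.
// Point2f stands in for cv::Point2f to keep the sketch self-contained.
struct Point2f { float x, y; };

std::vector<Point2f> bucketFeatures(const std::vector<Point2f>& pts,
                                    int imgW, int imgH,
                                    int gridCols, int gridRows,
                                    int maxPerCell) {
    std::vector<int> count(gridCols * gridRows, 0);
    std::vector<Point2f> kept;
    for (const Point2f& p : pts) {
        int cx = (int)(p.x * gridCols / imgW);  // cell column
        int cy = (int)(p.y * gridRows / imgH);  // cell row
        if (cx < 0 || cx >= gridCols || cy < 0 || cy >= gridRows) continue;
        int idx = cy * gridCols + cx;
        if (count[idx] < maxPerCell) {  // cap reached? drop the feature
            ++count[idx];
            kept.push_back(p);
        }
    }
    return kept;
}
```

With a 10x10 grid and maxPerCell = 1 this caps the frame at 100 features, as the issue observes; raising maxPerCell keeps more.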
