
kitti_visual_odometry's Introduction

KITTI Odometry in Python and OpenCV - Beginner's Guide to Computer Vision

This repository contains a Jupyter Notebook tutorial for guiding intermediate Python programmers who are new to the fields of Computer Vision and Autonomous Vehicles through the process of performing visual odometry with the KITTI Odometry Dataset. There is also a video series on YouTube that walks through the material in this tutorial.

The tutorial starts with a review of the computer vision fundamentals necessary for this task, then lays out and implements functions to perform visual odometry using stereo depth estimation, utilizing the opencv-python package. Motion is estimated by reconstructing the 3D positions of matched feature keypoints in one frame using the estimated stereo depth map, then estimating the pose of the camera in the next frame using the solvePnPRansac() function. This framework is then used to compare the performance of different combinations of stereo matchers, feature matchers, distance thresholds for filtering feature matches, and lidar correction of the stereo depth estimates.

The final estimated trajectory given by the approach in this notebook drifts over time, but is accurate enough to show the fundamentals of visual odometry. This will be an ongoing project to improve these results in the future, and more tutorials will be added as developments occur.

Please reach out with any comments or suggestions!

How to use this repository:

Clone this repository into a folder which also contains your download of the KITTI odometry dataset in a separate folder called 'dataset'. The tutorial is contained in the KITTI_visual_odometry.ipynb jupyter notebook.
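For reference, the expected layout looks roughly like this (the parent folder name is arbitrary; only the `dataset` folder name matters):

```
parent_folder/
├── dataset/                        # your KITTI odometry dataset download
│   └── sequences/
└── kitti_visual_odometry/          # this repository
    └── KITTI_visual_odometry.ipynb
```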

Sample Images:

Stereo disparity map of first sequence image:

Disparity Map

Estimated depth map from stereo disparity:

Estimated Depth Map

Final estimated trajectory vs ground truth:

Estimated Trajectory

Contents:

  • KITTI_visual_odometry.ipynb - Main tutorial notebook with complete documentation.
  • functions_codealong.ipynb - Notebook from the video tutorial series.

kitti_visual_odometry's People

Contributors

foamofthesea, hamzaouajhain


kitti_visual_odometry's Issues

os.listdir does not list the name of the images in ascending order

In the Dataset_handler class, os.listdir lists the image names in arbitrary order, which causes the function match_features to give inaccurate results: image 000000.png (self.first_image_left) is compared not to 000001.png (self.second_image_left) but to some other arbitrary image.

self.left_image_files = os.listdir(self.seq_dir + 'image_0')
self.right_image_files = os.listdir(self.seq_dir + 'image_1')
self.velodyne_files = os.listdir(self.seq_dir + 'velodyne')

This can be solved by sorting the generated lists of filenames:

self.left_image_files.sort()
self.right_image_files.sort()
self.velodyne_files.sort()
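A small self-contained demonstration of why the sort matters, using a throwaway temporary directory rather than the actual dataset:

```python
import os
import tempfile

# os.listdir returns entries in arbitrary, platform-dependent order;
# sorted() restores the zero-padded numeric ordering that consecutive
# frame pairing relies on.
with tempfile.TemporaryDirectory() as d:
    for name in ('000002.png', '000000.png', '000001.png'):
        open(os.path.join(d, name), 'w').close()
    files = sorted(os.listdir(d))
    print(files)  # ['000000.png', '000001.png', '000002.png']
```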

Questions for P0~3 matrix in calib.txt

Hello! Thanks for creating this amazing repo along with the videos on Youtube for beginners of KITTI odometry datasets. Here I got a question about the projection matrix in calib.txt.

In my understanding, you said in the videos that the P0–P3 matrices are the rectified projection matrices, which project 3D coordinates expressed in the global reference frame (cam0, the left grayscale camera) into the image plane of the associated camera (0–3). You demonstrated this by decomposing the projection matrix into its intrinsic and extrinsic matrices:

P1 = np.array(calib.loc['P1:']).reshape(3,4)
k1, r1, t1, _, _, _, _ = cv2.decomposeProjectionMatrix(P1)
t1 /= t1[3]
>>> print(k1)
[[718.856    0.     607.1928]
 [  0.     718.856  185.2157]
 [  0.       0.       1.    ]]
>>> print(r1)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
>>> print(t1) 
[[ 5.37165719e-001]
 [ 1.40649083e-017]
 [-4.94065646e-324]
 [ 1.00000000e+000]]

Then, I tried to reverse the above operation:

Rt = np.hstack([r1, t1[:3]])
P = k1 @ Rt
>>>print(P.round(3))
[[718.856   0.    607.193 386.145]
 [  0.    718.856 185.216   0.   ]
 [  0.      0.      1.     -0.   ]]
>>>print(P1.round(3))
[[ 718.856    0.     607.193 -386.145]
 [   0.     718.856  185.216    0.   ]
 [   0.       0.       1.       0.   ]]

I found that the last column of P is the negative of that of P1, as if cv2.decomposeProjectionMatrix returned -t instead of t for the translation vector. This seems strange to me; could you kindly help me understand it? Thanks.
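One likely explanation for the sign (offered here as a hedged note, not the repository author's answer): cv2.decomposeProjectionMatrix returns the homogeneous camera center C, not the extrinsic translation t. The two are related by t = -R @ C, so reconstructing P with that relation, using the values printed above, recovers P1 with the correct sign:

```python
import numpy as np

# Intrinsics, rotation, and dehomogenized camera center taken from the
# decomposition output quoted above (KITTI P1, right grayscale camera)
K = np.array([[718.856, 0.0, 607.1928],
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
C = np.array([[0.537165719], [0.0], [0.0]])  # camera CENTER, not translation

t = -R @ C                     # extrinsic translation: t = -R @ C
P = K @ np.hstack([R, t])
print(P.round(3))              # last column is -386.145, matching P1
```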

Thank you so much for your help.

Number of channels in cv2.StereoSGBM_create

Hi,

First of all, congratulations! It's probably the best explanation of Visual Odometry on the entire internet.

In your application, you used cv2.StereoSGBM_create with these parameters:

matcher = cv2.StereoSGBM_create(numDisparities=num_disparities,
                                minDisparity=0,
                                blockSize=block_size,
                                P1=8 * 3 * sad_window ** 2,
                                P2=32 * 3 * sad_window ** 2,
                                mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY)

But seeing the documentation in opencv website, it seems that:

According to the documentation on the OpenCV website, P1 and P2 should be 8 * number_of_image_channels * blockSize * blockSize and 32 * number_of_image_channels * blockSize * blockSize, respectively. The images you used in the examples have just one channel, so the factor of 3 in your code differs from the recommendation.

It's just a suggestion to improve your work, and I hope you take it in the constructive spirit it's intended.

Again, thank you for what you've been doing. Great job!
