- cmake >= 2.8
  - All OSes: click here for installation instructions
- make >= 4.1 (Linux, Mac), 3.81 (Windows)
  - Linux: make is installed by default on most Linux distros
  - Mac: install Xcode command line tools to get make
  - Windows: click here for installation instructions
- Git LFS
  - Weight files are handled using LFS
- OpenCV >= 4.1
  - This must be compiled from source using the `-D OPENCV_ENABLE_NONFREE=ON` cmake flag in order to test the SIFT and SURF detectors.
  - The OpenCV 4.1.0 source code can be found here
- gcc/g++ >= 5.4
  - Linux: gcc/g++ is installed by default on most Linux distros
  - Mac: same deal as make - install Xcode command line tools
  - Windows: recommend using MinGW
- Clone this repo.
- Make a build directory in the top level project directory: `mkdir build && cd build`
- Compile: `cmake .. && make`
- Run it: `./3D_object_tracking`
  (If you hit a run-time error, you may need to re-download `yolov3.weights`: run `wget "https://pjreddie.com/media/files/yolov3.weights"` and copy `yolov3.weights` into `dat/yolo/`.)
The `matchBoundingBoxes` function is implemented as follows: I take one keypoint match at a time, find the corresponding points in the previous and current frames, determine which bounding box each point belongs to in its frame, and increment a count for that box pair; the pair with the highest count becomes the match.
See lines 283-332 in `camFusion_student.cpp`.
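The counting scheme above can be sketched as follows. This is a minimal illustration, not the project's actual code: the `Point`, `BoundingBox`, and `Match` structs are simplified stand-ins for the real OpenCV/project types (`cv::KeyPoint`, `cv::DMatch`, the project's `BoundingBox`).

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Simplified stand-ins for the OpenCV / project types (assumptions, not the real API).
struct Point { float x, y; };
struct BoundingBox {
    int boxID;
    Point tl, br;  // top-left / bottom-right corners of the ROI
    bool contains(const Point& p) const {
        return p.x >= tl.x && p.x <= br.x && p.y >= tl.y && p.y <= br.y;
    }
};
struct Match { Point prevPt, currPt; };  // one keypoint match across frames

// For every match, find the box containing it in each frame and count the
// (prevBox, currBox) pair; keep the highest-voted current box per previous box.
std::map<int, int> matchBoundingBoxes(const std::vector<Match>& matches,
                                      const std::vector<BoundingBox>& prevBoxes,
                                      const std::vector<BoundingBox>& currBoxes)
{
    std::map<std::pair<int, int>, int> counts;
    for (const auto& m : matches)
        for (const auto& pb : prevBoxes)
            if (pb.contains(m.prevPt))
                for (const auto& cb : currBoxes)
                    if (cb.contains(m.currPt))
                        ++counts[{pb.boxID, cb.boxID}];

    std::map<int, int> bestMatches;  // prevBoxID -> currBoxID
    std::map<int, int> bestCount;
    for (const auto& [key, n] : counts)
        if (n > bestCount[key.first]) {
            bestCount[key.first]   = n;
            bestMatches[key.first] = key.second;
        }
    return bestMatches;
}
```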
First, I find the closest distance to the Lidar points within the ego lane in both the previous and current frames, then I apply the constant-velocity TTC equation from the lesson to compute the TTC.
See lines 238-280 in `camFusion_student.cpp`.
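A minimal sketch of this computation, assuming a simplified `LidarPoint` stand-in type and an assumed ego-lane width of 4 m (both not from the original code). The constant-velocity model gives TTC = d1 · dT / (d0 − d1), where d0 and d1 are the closest forward distances in the previous and current frame:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct LidarPoint { double x, y; };  // x: forward distance, y: lateral offset (stand-in type)

// Constant-velocity TTC from Lidar: TTC = d1 * dT / (d0 - d1).
// laneWidth restricts points to the ego lane (assumed value).
double computeTTCLidar(const std::vector<LidarPoint>& prev,
                       const std::vector<LidarPoint>& curr,
                       double frameRate, double laneWidth = 4.0)
{
    auto closestX = [laneWidth](const std::vector<LidarPoint>& pts) {
        double minX = 1e9;
        for (const auto& p : pts)
            if (std::abs(p.y) <= laneWidth / 2.0)  // keep ego-lane points only
                minX = std::min(minX, p.x);
        return minX;
    };
    double d0 = closestX(prev);
    double d1 = closestX(curr);
    double dT = 1.0 / frameRate;  // time between frames
    return d1 * dT / (d0 - d1);
}
```

Note that taking the raw minimum is sensitive to stray points below the bumper; a robust variant would use the median or a filtered minimum instead.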
I take one match pair at a time: if both the previous and the current keypoint lie within the same bounding box, the match is added to a vector. After the loop, I assign this vector to `boundingBox.kptMatches` and then filter out outliers based on their point-to-point distances.
See lines 145-178 in `camFusion_student.cpp`.
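The clustering and outlier filtering can be sketched like this. The `Rect`, `Point`, and `Match` structs and the `ratioThreshold` cutoff are simplified assumptions for illustration, not the project's actual types or tuning:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified stand-ins for the project types (assumptions).
struct Point { float x, y; };
struct Rect {
    float x, y, w, h;
    bool contains(const Point& p) const {
        return p.x >= x && p.x <= x + w && p.y >= y && p.y <= y + h;
    }
};
struct Match { Point prevPt, currPt; };

// Keep matches whose keypoints fall inside the box ROI in both frames,
// then drop outliers whose point-to-point distance deviates from the mean.
std::vector<Match> clusterKptMatchesWithROI(const Rect& roi,
                                            const std::vector<Match>& matches,
                                            double ratioThreshold = 1.5)
{
    std::vector<Match> inBox;
    for (const auto& m : matches)
        if (roi.contains(m.prevPt) && roi.contains(m.currPt))
            inBox.push_back(m);

    auto dist = [](const Match& m) {
        return std::hypot(m.currPt.x - m.prevPt.x, m.currPt.y - m.prevPt.y);
    };
    double mean = 0.0;
    for (const auto& m : inBox) mean += dist(m);
    if (!inBox.empty()) mean /= inBox.size();

    std::vector<Match> filtered;  // matches whose motion is close to the mean
    for (const auto& m : inBox)
        if (dist(m) <= ratioThreshold * mean)
            filtered.push_back(m);
    return filtered;
}
```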
This section is heavily based on the earlier exercise code. First, I compute the distance ratios between all pairs of matched keypoints, then I take the median of these ratios to suppress outliers, and finally apply the TTC equation from the lesson.
See lines 182-235 in `camFusion_student.cpp`.
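A sketch of the median-ratio approach, again with a simplified `Match` stand-in type and an assumed minimum-distance guard of 100 px (both illustrative assumptions). With the median ratio r, the constant-velocity model yields TTC = −dT / (1 − r):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Point { float x, y; };
struct Match { Point prevPt, currPt; };  // stand-in for a matched keypoint pair

// Median of pairwise distance ratios, then TTC = -dT / (1 - medianRatio).
double computeTTCCamera(const std::vector<Match>& matches, double frameRate)
{
    std::vector<double> ratios;
    for (size_t i = 0; i < matches.size(); ++i)
        for (size_t j = i + 1; j < matches.size(); ++j) {
            double dCurr = std::hypot(matches[i].currPt.x - matches[j].currPt.x,
                                      matches[i].currPt.y - matches[j].currPt.y);
            double dPrev = std::hypot(matches[i].prevPt.x - matches[j].prevPt.x,
                                      matches[i].prevPt.y - matches[j].prevPt.y);
            double minDist = 100.0;  // ignore tiny distances dominated by noise
            if (dPrev > 1e-9 && dCurr >= minDist)
                ratios.push_back(dCurr / dPrev);
        }
    if (ratios.empty()) return NAN;

    // The median is robust against mismatched keypoints, unlike the mean.
    std::sort(ratios.begin(), ratios.end());
    size_t n   = ratios.size();
    double med = (n % 2) ? ratios[n / 2] : 0.5 * (ratios[n / 2 - 1] + ratios[n / 2]);
    double dT  = 1.0 / frameRate;
    return -dT / (1.0 - med);
}
```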
Based on the given sequence, I think the faulty Lidar measurement is at frame 4: some points fall on another object rather than on the car in front, which skews the closest-distance estimate.
- The detector/descriptor combinations are listed here. From this spreadsheet, the FAST/BRIEF combination is the best because it is the most efficient combination and its TTC estimates are reasonably good.
- From the image sequence provided, the faulty camera measurement is at frame 5; I think this is due to too many mismatched keypoints.