This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
### Team
Team Members | Email Address | Location |
---|---|---|
Onur Ucler (Team lead) | [email protected] | Folsom, CA |
Srikant Rao | [email protected] | Folsom, CA |
Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
If using a Virtual Machine to install Ubuntu, use at least the following configuration:
- 2 CPUs
- 2 GB system memory
- 25 GB of free hard drive space

The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.
Follow these instructions to install ROS:
- ROS Kinetic if you have Ubuntu 16.04.
- ROS Indigo if you have Ubuntu 14.04.
- Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
Download the Udacity Simulator.
- Clone the project repository

```bash
git clone https://github.com/srikantrao/System_Integration.git
```

- Install Python dependencies

```bash
cd CarND-Capstone
pip install -r requirements.txt
```
- Make and run

```bash
cd ros
./clean.sh
source devel/setup.sh
```
- Running different launch files for different scenarios

To run against the simulator:

```bash
roslaunch launch/sim.launch
```

To run on the car:

```bash
roslaunch launch/site.launch
```

To see a visualization of the detector working:

```bash
roslaunch launch/site_visualization.launch
```
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car (a bag demonstrating the correct predictions in autonomous mode can be found here)
- Unzip the file

```bash
unzip traffic_light_bag_files.zip
```

- Play the bag file

```bash
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
```
- Launch your project in site mode

```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real-life images

You can confirm that the detector works on real-life images by playing back the rosbag:

```bash
cd CarND-Capstone/ros
roslaunch launch/site_visualization.launch
```

Open another terminal, go to CarND-Capstone/ros, and play the bag file:

```bash
source devel/setup.bash
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
```

Open a third terminal, go to CarND-Capstone/ros, and run the visualization tool:

```bash
source devel/setup.bash
rosrun tools rosbagVisual.py
```
We explored multiple options to detect traffic lights: the first was building our own deep Convolutional Neural Network (CNN), and the second was using pre-trained models.
We noticed that we would need more data to train our own CNN well. Due to the limited training data and training time constraints, we ended up using a pre-trained object detection model (Faster R-CNN, a Faster Region-based Convolutional Network). Here is the model link: https://github.com/tensorflow/models/tree/master/research/object_detection
A Jupyter notebook was created to test the model's accuracy, and some of its outputs are shown below. Please refer to the notebook, ~/classifier/faster-R-CNN/object_detection.ipynb, for more details.
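As an illustration of how a detector's output can be post-processed, here is a minimal sketch (not the project's actual code) that filters the boxes, scores, and class IDs returned by a TensorFlow Object Detection API model, keeping only confident traffic-light detections. The score threshold is an assumed value; class ID 10 is "traffic light" in the COCO label map these pre-trained models use.

```python
TRAFFIC_LIGHT_CLASS = 10  # "traffic light" in the COCO label map
SCORE_THRESHOLD = 0.5     # assumed confidence cutoff, not from the project

def filter_traffic_lights(boxes, scores, classes, min_score=SCORE_THRESHOLD):
    """Return (box, score) pairs for confident traffic-light detections.

    boxes   -- list of [ymin, xmin, ymax, xmax] in normalized coordinates
    scores  -- list of confidence scores in [0, 1]
    classes -- list of integer class IDs
    """
    return [(box, score)
            for box, score, cls in zip(boxes, scores, classes)
            if cls == TRAFFIC_LIGHT_CLASS and score >= min_score]

# Example with made-up detections: two traffic lights and one person.
boxes = [[0.1, 0.2, 0.3, 0.25], [0.4, 0.4, 0.6, 0.5], [0.0, 0.0, 0.9, 0.9]]
scores = [0.92, 0.30, 0.85]
classes = [10, 10, 1]

kept = filter_traffic_lights(boxes, scores, classes)
# Only the first detection passes both the class and the score check.
```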
The twist controller node was implemented using a PID controller for brake/throttle.
At a high level, the throttle value was given by

```
throttle = Kp * vel_err + Kd * vel_err_delta + Ki * vel_err_integral
```
If the throttle value was negative, it was treated as a brake command. The steering angle was determined based on the code provided in `yaw_controller.py`.
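The throttle computation above can be sketched as a simple PID update. This is a minimal illustration with made-up gain values (the actual gains live in the project's twist controller), where a negative output would be interpreted as a brake command:

```python
class ThrottlePID:
    """Minimal PID controller on the velocity error (illustrative gains)."""

    def __init__(self, kp=0.3, ki=0.003, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_err = 0.0
        self.integral = 0.0

    def step(self, vel_err, dt):
        # Accumulate the error integral and compute the error delta.
        self.integral += vel_err * dt
        derivative = (vel_err - self.prev_err) / dt
        self.prev_err = vel_err
        return (self.kp * vel_err
                + self.kd * derivative
                + self.ki * self.integral)

pid = ThrottlePID()
out = pid.step(vel_err=2.0, dt=0.02)
is_brake = out < 0  # a negative output would be treated as braking
```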
For the same turning radius r, the linear and angular velocities are related by

```
v_obs = w_obs * r and v_ctrl = w_ctrl * r
```

therefore,

```
w_ctrl = w_obs * v_ctrl / v_obs
```

which can then be extended to calculate the steering angle. A low-pass filter using exponentially weighted averages was used to smooth the steering output. The steering angle output is effectively a moving-window average of the last 20 values for simulation and the last 5 values for the site.
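The smoothing step can be sketched as an exponentially weighted moving average. This is a minimal illustration; the time constant `tau` and sample time `ts` below are assumed values, not taken from the project:

```python
class LowPassFilter:
    """Exponentially weighted moving average for smoothing a signal."""

    def __init__(self, tau, ts):
        # Weight on the previous output; a larger tau means heavier smoothing.
        self.a = tau / (tau + ts)
        self.last = None

    def filt(self, value):
        if self.last is None:
            self.last = value  # first sample passes through unchanged
        else:
            self.last = self.a * self.last + (1.0 - self.a) * value
        return self.last

# Smooth a step change in the steering command (made-up numbers).
lpf = LowPassFilter(tau=0.5, ts=0.1)
outputs = [lpf.filt(v) for v in [0.0, 1.0, 1.0, 1.0]]
# The output rises gradually toward 1.0 instead of jumping immediately.
```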