The result: https://youtu.be/gtjwOifrxhQ
When we drive, we use our eyes to decide where to go. The lines on the road that show us where the lanes are act as our constant reference for where to steer the vehicle. The very first step in developing a self-driving car is to automatically detect lane lines in images, which this project does using Python and OpenCV.
Step 1: Set up the CarND Term1 Starter Kit
Step 2: Open the code in a Jupyter Notebook
> jupyter notebook
Click on the file called "P1.ipynb". Another browser window will appear displaying the notebook.
The program mainly consists of 7 steps:
original image
- Convert images to grayscale.
- Extract the brighter pixels of the image, since both the yellow and white lane markings appear nearly white in grayscale.
- Apply Gaussian blur to remove noise from the image.
- Apply Canny edge detection to find the edges of the lanes in the image.
- Mask with a polygon to extract the region where the lanes roughly are.
- Apply probabilistic Hough transform to find straight line segments to draw.
- Combine drawn lines and the original image.
Because my method crops the image to a trapezoid covering roughly the bottom half, I expect the output lines to deviate considerably from the actual lanes in more realistic images, since the given test videos and images were captured under nearly ideal conditions: straight lanes, a sunny day, and no shade on the road.
My algorithm always represents the lanes as two straight lines, no matter what. Clearly this is not sufficient when the car encounters a sharp curve.
In bad weather, or on a shaded road, the grayscale value of each pixel could be much lower, so the lanes would be cropped out by the current color masking.
To adapt to different lighting conditions, I would suggest using HSB space instead of RGB space for color masking, since it is easier to select similar colors across different brightness levels. That is, specifying a small range of hue and a somewhat wider range of saturation and brightness easily extracts the pixels of a certain color.