This repository contains the vision component of the robot project for the "Intelligent Robots Design and Implementation" course of the Department of Electronic Engineering, Tsinghua University; it is the part I mainly contributed to. It uses non-deep-learning methods for object detection.
Console output:

```
possible track: area 76027.5, solidity 0.9976183915285596, current
candidate track: area 76023.5, solidity 0.9976968201682437, current
landmine: area 1677.0, solidity 0.982, area_ratio 0.636
{'light': [], 'landmine': [(76, 133)]}
```
- It can detect red, yellow, or green signal lights.
- It can detect the track by color matching and Canny edge detection.
- It can detect landmines by color, area, shape, and solidity.
- It can detect the ditch from discontinuous tracks and parallel edges.
Just specify the address of the video stream and you are all set. For example:

```python
from image_processor import ImageProcessor

stream = 0  # device index or URL of the video stream
img_proc = ImageProcessor(stream, True)  # `True` enables debug mode
data = img_proc.analyze_objects()
```
Every time you call `analyze_objects`, `ImageProcessor` will detect the objects and return the information. In the above example:
- `data['light']` stores information about the signal light
- `data['track']` stores information about the current track (i.e. the one the robot stands on)
- `data['track_next']` stores information about the next track (i.e. the one the robot does not stand on)
- `data['landmine']` stores information about the landmines
- `data['ditch']` stores information about the ditch
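A caller might consume the returned dictionary like this; the sketch below reuses the sample console output from above, and the exact format of each entry (e.g. landmine centers as `(x, y)` tuples) is an assumption:

```python
# Sample result in the shape shown in the console output above.
data = {'light': [], 'landmine': [(76, 133)]}

# React to each detected landmine center (assumed (x, y) tuples).
for x, y in data.get('landmine', []):
    print(f"landmine at ({x}, {y})")

# An empty list means no signal light was detected in this frame.
if not data.get('light'):
    print("no signal light detected")
```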