
chickenvision's Introduction

Deprecated! Use ChameleonVision or OpenSight instead

Thank you to all who participated. I hope we helped kickstart the open-source movement in FRC vision. Because the Screaming Chickens have disbanded, I can no longer maintain ChickenVision for the 2020 season.

Welcome to Chicken Vision

To tune vision, use GRIP (http://wpiroboticsprojects.github.io/GRIP/#/).

Credit

You don't have to give credit, but if you do, we would greatly appreciate it if you told us how Chicken Vision has helped you and your team: https://docs.google.com/spreadsheets/d/1YWcWk0oOwUUU_g2qIJem4bmJQUaB20VSqDUPqsKFyJk/edit?usp=sharing. Tweet #ChickenVision! Let's try to get it trending!

Also, a huge thanks to Team 3216, who seriously improved the README and really went the extra mile in producing excellent instructions.

Requirements

Tuning For Reflective Tape


Using GRIP

Finding an IP

Once GRIP is downloaded, open it. Then click Sources and select IP Camera. To find the IP of the Raspberry Pi, use a network scanner such as Fing for Android (https://play.google.com/store/apps/details?id=com.overlook.android.fing&hl=en_US). Once you have found the IP, enter it in the box. Sometimes the hostname works as well, letting you use frcvision.local/ rather than the IP.
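If you'd rather check from a laptop than a phone app, here is a minimal sketch of testing the hostname from Python (this assumes your OS resolver supports mDNS .local names; if it doesn't, fall back to the network scanner):

    import socket

    # Try to resolve the Pi's mDNS hostname; if this fails, scan the
    # network with a tool like Fing instead.
    try:
        ip = socket.gethostbyname("frcvision.local")
        print("Raspberry Pi found at", ip)
    except socket.gaierror:
        print("Could not resolve frcvision.local; scan the network instead")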

Configuring GRIP

The address should be frcvision.local:1181/stream.mjpg
From the Operation Palette (the right side of GRIP; picture below)
GRIP palette
drag the appropriate image-processing functions into the area below (e.g. Blur, Filter Lines). Arrange the functions in the order shown in the next two pictures. To connect two functions, click and drag one of the small empty circles
Draggable Dot
to another function. It should look like this (MAKE SURE YOU CONNECT ONE FUNCTION'S OUTPUT TO ANOTHER FUNCTION'S INPUT):
completed function
GRIP image demo picture

After you have set up all of the functions, it's time to apply them. Simply drag the sliders for the values (e.g. H, S, and V) to match the picture below. You can change them if the example doesn't work. You may also need to change the Raspberry Pi's settings (https://github.com/MRT3216/MRT3216-2019-DeepSpace/wiki/FRC-2019-Vision#configuring-the-raspberry-pi).
GRIP image demo picture
Finally, click the eye icon on each of the functions (picture below) to show the functions being applied.
eye icon from GRIP

What HSV stands for

H = Hue
S = Saturation
V = Value (brightness)

Changing Chicken Vision to Detect Vision Tape

When you are done setting the values in the steps above, click the Tools tab in GRIP and click Export Code. Change the language to Python and set the Pipeline class name to GripPipeline. Then choose a save location you can access later. Lastly, change the Module Name to whatever you want. It should look like this.
Export Code
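For reference, the exported module defines a class with a process() method that stores each step's result on the instance. Here is a minimal sketch of driving it by hand (the module name grip and the output attribute name hsv_threshold_output are examples; check your own export). Note that Chicken Vision does not run this class directly; the steps below copy its HSV values instead.

    import cv2
    from grip import GripPipeline  # module name chosen during export

    pipeline = GripPipeline()
    frame = cv2.imread("test_frame.png")  # any saved camera frame
    pipeline.process(frame)
    # GRIP's generated code stores each step's output as an attribute:
    cv2.imwrite("threshold.png", pipeline.hsv_threshold_output)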

Once you've exported the code, pull up Chicken Vision and the GRIP Python file you made side by side.
Code side by side

In the Chicken Vision code, you have to change the HSV values. Below is where you change them (lines 176 and 177).
chicken vision HSV Values
To get those values, you need the HSV values from the GRIP file. Below is where you find them (lines 21, 22, and 23).
GRIP HSV Values

GRIP and Chicken Vision store HSV thresholds differently. In GRIP, each channel is an array, e.g. hue = [low_value, high_value]. Chicken Vision instead splits the lows and highs into two arrays, e.g.
lower_green = np.array([low_hue_value, ...])
upper_green = np.array([high_hue_value, ...])
So, spelled out in full, it goes like this for Chicken Vision:
lower_green = np.array([low_GRIP_hue, low_GRIP_saturation, low_GRIP_value])
upper_green = np.array([high_GRIP_hue, high_GRIP_saturation, high_GRIP_value])
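Here is a minimal sketch of the conversion (the threshold numbers are placeholders; copy the ones from your own exported GRIP file):

    import numpy as np

    # From the exported GRIP file (lines 21-23 in the screenshot above);
    # each is [low, high] for one channel. These numbers are examples.
    grip_hue = [60.0, 100.0]
    grip_saturation = [100.0, 255.0]
    grip_value = [50.0, 255.0]

    # Chicken Vision wants one array of all three lows and one of all
    # three highs (lines 176-177 of the Chicken Vision code):
    lower_green = np.array([grip_hue[0], grip_saturation[0], grip_value[0]])
    upper_green = np.array([grip_hue[1], grip_saturation[1], grip_value[1]])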

Once you are done with that, if you used a blur value other than 1 (it MUST be odd!), change it on line 172.
Blur Set

The next thing to do is upload it to the Raspberry Pi.

The Raspberry Pi Image

Configuring the Raspberry Pi

Go to the IP of the Raspberry Pi in a web browser (if you've lost it, refer back to https://github.com/MRT3216/MRT3216-2019-DeepSpace/wiki/FRC-2019-Vision#finding-an-ip). After that, follow the picture below.
Upload the code

After that, go to the Raspberry Pi's IP on port 1181 (or frcvision.local:1181). You should see the camera stream and a whole bunch of settings you can change (picture below). Change your settings accordingly.
camera settings
When you have done so, go to the IP of your Raspberry Pi and open the Vision Settings tab (picture below). Click on camera one, and some settings should appear below it.
Camera JSON
Click the Copy Source Config From Camera button, and you're done with the Raspberry Pi!

Shuffleboard Configuration

After that, open the FRC Shuffleboard that you downloaded in the requirements section and plug an Ethernet cable from the RoboRIO into the Raspberry Pi (https://github.com/MRT3216/MRT3216-2019-DeepSpace/wiki/FRC-2019-Vision#requirements).

Then open the tab on the left, under where it says Configuration (top left corner; there should also be a close-application X to the left of Configuration). Drag the arrows to the right to pull the tab open (image below).
arrow to pull tab open

To view the camera stream, click on the CameraServer dropdown and drag the stream entry onto the dashboard, and voilà! Your video stream is in Shuffleboard! (picture below)
CameraServer tab

To view NetworkTables values, click the NetworkTables tab; under the ChickenVision dropdown are the values being output by the Chicken Vision Python program (picture below).
Network Tables Dropdown

Network Tables

In the code, you can publish NetworkTables values, e.g. networkTable.putNumber("VideoTimestamp", timestamp). You can also publish other types of data, such as booleans.
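Here is a minimal sketch of publishing values with pynetworktables (the server address and the tapeYaw/tapeDetected key names are illustrative examples, not required names):

    from networktables import NetworkTables

    # Connect to the robot; replace with your RoboRIO's address.
    NetworkTables.initialize(server="10.0.0.2")
    networkTable = NetworkTables.getTable("ChickenVision")

    # Publish values for the robot program to read:
    networkTable.putNumber("VideoTimestamp", 1234.5)
    networkTable.putNumber("tapeYaw", -3.2)
    networkTable.putBoolean("tapeDetected", True)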

Functionality/Features

  • Sanity checks: filters out contours whose rays form a V, and only recognizes targets whose contours are adjacent
  • Returns the angle (in degrees) to the closest target for easy integration with the robot program (gyro)
  • If the angles to two targets are the same, it picks the left target; you can change this in the code
  • Pre-calculated (but sub-optimal) built-in HSV threshold range
  • Should be plug-and-play
  • All contours get green shapes drawn around them, along with a white dot and a vertical line running through their center point
  • Targets get a vertical blue line drawn between their contours; yaw is calculated from that line's x coordinate (see the sketch after this list). There should only be one blue line (one target) at a time.
  • Rounded yaw (horizontal angle from the camera) is displayed in a large white font at the top of the screen
  • Team 254's explanations are linked in the comments of the angle-calculation functions
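The yaw above comes from standard pinhole-camera math. A sketch, assuming a 320-pixel-wide image and a 60-degree horizontal field of view (both placeholders; use your camera's actual numbers):

    import math

    def calculate_yaw(pixel_x, image_width=320, h_fov_deg=60.0):
        """Horizontal angle from the camera centerline to a pixel column."""
        center_x = image_width / 2.0
        # Focal length in pixels, from the pinhole camera model.
        focal_px = image_width / (2.0 * math.tan(math.radians(h_fov_deg) / 2.0))
        return math.degrees(math.atan((pixel_x - center_x) / focal_px))

    print(calculate_yaw(200))  # positive means the target is right of center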

Resources and Links

chickenvision's People

Contributors

aaron-sc, archduketim, codeninjadev, dblitt


chickenvision's Issues

Semicolon

You have a semicolon on line 750 that shouldn't be there

Uploading Code to Raspberry Pi

So, you have a new version, and I took it and want to use it. How do I apply this to the vision on the Raspberry Pi? When I upload this as an application, it doesn't work.
image

It gives the stream a 404 error.

image

Also how do you implement the GRIP pipeline? I have exported it, and don't know how to use it.

image

'[...] not in biggestCnts' throws ValueError

We are occasionally having Exceptions thrown on (currently line 430 of master):

                    # Appends important info to array
                    if not biggestCnts:
                        biggestCnts.append([cx, cy, rotation, cnt])
                    elif [cx, cy, rotation, cnt] not in biggestCnts:
                        biggestCnts.append([cx, cy, rotation, cnt])

we are seeing

File "./uploaded.py", line xxx, in findTape

    elif[cx, cy, rotation, cnt] not in biggestCnts:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

I did some offline testing, and you will sometimes get this ValueError when there is a numpy array inside the list whose membership you are checking in the biggestCnts list of lists. cnt is a numpy array, hence the "boom".
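A minimal sketch of that offline test (the contour values are made up):

    import numpy as np

    cnt_a = np.array([[0, 0], [4, 0], [4, 6]])  # a contour-like array
    cnt_b = np.array([[0, 0], [4, 0], [4, 6]])  # equal values, new object

    lst = [[10, 20, 0.5, cnt_a]]
    print([10, 20, 0.5, cnt_a] in lst)  # True: identity short-circuits ==
    # [10, 20, 0.5, cnt_b] in lst  # raises ValueError, because
    # cnt_b == cnt_a yields a boolean *array* that Python cannot
    # collapse to a single True/False

The array comparison is only reached when cx, cy, and rotation all match, which is why it only throws occasionally.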

I can't make it happen consistently, but I noticed that the cnt we are putting into biggestCnts is never used again in findTape. Can we just not put it in the list, and make the problem go away?

Do we even need to check whether [cx, cy, rotation, cnt] already exists in biggestCnts? What's the chance of two contours in the same image having the exact same center [cx, cy] and shape [cnt]?

Do we need to check whether biggestCnts is empty before appending to it?

Can we reduce lines 430-433 to:

                    biggestCnts.append([cx, cy, rotation])

Troubles grabbing settings.json

I have the newest available image for frcvision

When I attempt to upload your code I get the following error:

Traceback (most recent call last):
  File "./uploaded.py", line 684, in <module>
    if not readConfig():
  File "./uploaded.py", line 631, in readConfig
    j = json.load(f)
  File "/usr/lib/python3.5/json/__init__.py", line 265, in load
    return loads(fp.read(),
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 174: ordinal not in range(128)

Waiting 5 seconds...

I attempted to comment that section out, to no avail.

I apologize if this is something obvious; I am new and rather frustrated with the lack of documentation on the frcvision environment.
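For what it's worth, byte 0xc2 is the start of a UTF-8 multibyte character, so the config file is UTF-8 but is being read with the locale-default ASCII codec. A sketch of the likely fix (/boot/frc.json is the FRCVision image's usual config path, assumed here) is to force the encoding when opening the file:

    import json

    # Forcing UTF-8 avoids depending on the Pi's locale-default codec.
    with open("/boot/frc.json", encoding="utf-8") as f:
        config = json.load(f)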

Mask Not Being Applied

The output image is not masked when applying HSV.

This is what we are seeing in the web app
image
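For reference, the usual OpenCV pattern for actually applying an HSV threshold as a mask (the threshold numbers are placeholders) looks like this:

    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")  # a captured camera frame
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower_green = np.array([60, 100, 50])    # placeholder thresholds
    upper_green = np.array([100, 255, 255])

    mask = cv2.inRange(hsv, lower_green, upper_green)
    # Without this bitwise_and, the output shows the raw frame rather
    # than the masked image:
    masked = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imwrite("masked.png", masked)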

Apply Functionality/Features

Hello!

So I followed your tutorial and uploaded the file to the Raspberry Pi. I then proceeded to pipe it through GRIP. I set the blur and HSV the way you did, but had no idea how to apply some of the included features/functions documented in your README.

Startup

Saving the file as a Python file and then putting it on the Pi using the FRC WPILib GUI does absolutely nothing. Did I do something wrong?
