
SUAS-2022

Missouri S&T Multirotor Robot Design Team's code for the Association for Unmanned Vehicle Systems International's 2022 Student Unmanned Aerial Systems Competition (AUVSI SUAS 2022)

Table of Contents

  • Codebase Structure
  • Requirements
  • Installation and Setup
  • Running and Testing Code
  • Contributing Code
  • License

Codebase Structure

flight/  # Physical motor control, movement, path planning, and other flight related algorithms
vision/  # Mapping, shape & text detection, and other computer vision related algorithms
run.py  # Python program to run the competition code

integration_tests/  # Programs written to test discrete modules of the competition code

Information about files within each directory can be found in /<directory>/README.md

Requirements

To run our competition code, you will need the tools covered in Installation and Setup below: a supported Python version, Poetry, the PX4 Firmware repository, and (if testing without a drone) a supported simulator such as jMAVSim or AirSim.

Installation and Setup

Follow these instructions exactly based on your platform:

  1. Set up the development toolchain (platform dependent).

  2. Install the proper Python version (see Requirements) using pyenv on Linux and macOS. For Windows, get the executable from the Python website.

  3. Install Poetry

  4. Clone PX4 Firmware repository (tutorial here)

  5. If testing without a drone, install a supported simulator

    • Currently, we primarily do simple development with jMAVSim, and complex development and testing in AirSim, so start with jMAVSim
    • Run the make command from inside the PX4 Firmware repository
  6. Clone the repository with git clone --recursive https://github.com/MissouriMRR/SUAS-2022.git

  7. In the root of the repository, run poetry install

Running and Testing Code

  • Follow the steps in the installation instructions above
  • If you are working only on computer vision code, you may skip steps 1, 4, and 5
  • Initialize your virtual environment for running and testing code with
poetry shell
  • You may now run and test your modules at will inside this shell
  • To run the competition code, execute the following from the root directory
./run.py
  • When you're done, deactivate and exit the virtual env with
exit

Contributing Code

  1. Clone the repository with git clone --recursive https://github.com/MissouriMRR/SUAS-2022.git
  2. Make sure you are on the most up-to-date develop branch with git switch develop then git pull
  3. Create a branch with the proper naming format (see Creating A Branch)
  4. Make changes in your branch and commit regularly (see Committing Code)
  5. Once changes are complete, push your code, go to GitHub, and submit a "Pull Request". Fill in the necessary information. Any issues your PR solves should be denoted in the description, along with a note that the PR "Closes #XX", where XX is the issue number you are closing. Request a review from one of the software leaders on the upper right-hand side of the PR.
  6. Once it has been reviewed by the leads, it can be accepted and you can merge your branch into the develop branch.
  7. Once this is done, you may delete your branch, and the cycle continues...

Creating A Branch

  • To contribute, you must create a new branch for your code
  • If you've made changes in develop and want to add your changes to a new branch, use
git switch -c "branch_name"
  • Otherwise, create & switch to a new branch with
git checkout -b "branch_name" 
# or both of 
git branch "branch_name"
git checkout "branch_name"

branch_name should follow the convention feature/{feature_name} or hotfix/{fix_name}

Committing Code

  • When programming with a VCS like Git, you should make changes and commit regularly with clear commit messages.
  • This repo uses git hooks for the automated execution of scripts that reformat & analyze your code when a commit is made.
  • In order for a pull request to be considered, commits must be made on a repo clone with pre-commit properly installed

Setting Up Pre-Commit

  • Here are the commands for the first-time setup & install of pre-commit for the repo
pip install pre-commit
pre-commit install
  • You can then use pre-commit run to run the pre-commit hooks on all staged files or you can let it automatically trigger when you make a commit
  • Note that if you skip pre-commit run before committing and the hooks reformat your code (while it passes all of the analysis hooks), you must make a second commit containing the reformatted files, with a message that either mirrors the previous commit message or states that the code was reformatted.

Committing Unfinished Code

  • When committing code, our pre-commit hooks will scan your files and look for errors, type inconsistencies, bad practices, and non-conformities to our coding style.
  • This means that your commit will be rejected if it fails any one of the pre-commit hooks.
  • Oftentimes, one may need to commit unfinished code to save their place or preserve the current version, bypassing the pre-commit hooks
Bypassing All Pre-Commit Hooks
  • To bypass all pre-commit hooks, add the --no-verify flag to your git commit execution.
Bypassing Individual Pre-Commit Hooks
  • To bypass specific hooks on a module, place the following comments at the beginning of your file(s) for each respective hook.
    • Black: # fmt: off
    • Mypy: # type: ignore
    • Pylint: # pylint: disable=all
  • Pull requests made that bypass pre-commit hooks without prior approval will be rejected.

License

We adopt the MIT License for our projects. Please read the LICENSE file for more info.


Issues

Vision Pipeline

Make a pipeline that runs vision code. Code will be executed in post-processing.
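
As a rough illustration of what such a post-processing pipeline could look like (the stage names and structure below are assumptions, not the repo's actual design), each stage can be a callable that transforms a shared data dict:

from typing import Callable

Stage = Callable[[dict], dict]

def run_pipeline(data: dict, stages: list[Stage]) -> dict:
    """Run each vision stage in order, threading shared data through."""
    for stage in stages:
        data = stage(data)
    return data

# Hypothetical usage with assumed stage functions:
# run_pipeline({"image": frame}, [detect_odlc, classify_shape, parse_text])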

Standard Object Characteristics

standard_characteristics.py

Characteristics needed to be recognized:

  • shape
  • shape color
  • alphanumeric character
  • alphanumeric color
  • alphanumeric orientation (position relative to a bird's-eye view?)

These characteristics need to be transmitted back through the interop system. Most likely returns an object with characteristics.

ex.

{
  "id": 1,
  "mission": 1,
  "type": "STANDARD",
  "latitude": 38,
  "longitude": -76,
  "orientation": "N",
  "shape": "RECTANGLE",
  "shapeColor": "RED",
  "autonomous": false
}

Emergent Object Detection

emergent_in_frame.py

Detect a person engaged in an "activity" of interest:
  • motion detection
  • object detection (to find potential humans)
  • human detection from objects found (shape-based, texture-based, or motion-based features)

Object should be in the search area. Returns a bool.

Note: maybe look into datasets that could help train a model to distinguish humans (score-based)

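As one possible starting point (not necessarily the approach the team will choose), OpenCV ships a stock HOG + linear SVM pedestrian detector. It is trained on upright, ground-level pedestrians, so aerial imagery would likely need a custom-trained model, which is where the dataset note above comes in:

import cv2

def emergent_in_frame(frame) -> bool:
    """Return True if a person-like object is detected in a BGR frame,
    using OpenCV's built-in HOG pedestrian detector."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects) > 0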

Standard Object Text Character Parsing

Summary

Detect text on the standard ODLC objects and return the bounding area and alphabetic characters found

Extended Summary

Objective: This task involves building a function of a text_detection module that will run in the vision pipeline that is capable of processing an image containing at least one ODLC standard object, and detecting the single-character, 1-inch-thick, alphabetic lettering on the Standard Object(s).

Output: The output required for this detection is the exact character of the English alphabet that is on the ODLC Standard Object as well as the bounding box of that character.

Off-Axis ODLC Flight Plan

Given the GPS location of the Off-Axis object, fly as close to the passed coordinates as possible within the flight bounds, wait a set amount of time to simulate vision working, and continue with other flight tasks.

Waypoint Flight Path

Receive waypoint locations as GPS coordinates, and autonomously maneuver the drone to take the quickest path between waypoints. Must fly the waypoints in sequential order as they are received from the Mission Plan downloaded from the Interoperability System. Utilize provided test data or create our own waypoints corresponding to our flight test location to test flight logic.
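
A minimal MAVSDK-Python sketch of sequential waypoint flight; the connection address, takeoff delay, and fixed-sleep "arrival" handling are placeholder assumptions (real code should poll telemetry to confirm arrival):

import asyncio
from mavsdk import System

async def fly_waypoints(waypoints: list[tuple[float, float, float]]) -> None:
    """Fly to each (lat_deg, lon_deg, absolute_alt_m) waypoint in order."""
    drone = System()
    # Connection address assumes a local PX4 SITL instance
    await drone.connect(system_address="udp://:14540")
    async for health in drone.telemetry.health():
        if health.is_global_position_ok:
            break
    await drone.action.arm()
    await drone.action.takeoff()
    await asyncio.sleep(10)  # crude wait for takeoff to finish
    for lat, lon, alt in waypoints:
        # goto_location takes absolute altitude (AMSL) and a yaw in degrees
        await drone.action.goto_location(lat, lon, alt, 0.0)
        await asyncio.sleep(15)  # placeholder; real code should poll position
    await drone.action.return_to_launch()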

Standard Object Match Percentage

standard_match.py

Compare a captured image of an object with the base image of the object and return an accuracy rating as an integer.
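
One way this could be approximated (an illustrative sketch, not necessarily the intended method) is ORB feature matching with Lowe's ratio test, scoring the fraction of good matches as a 0-100 integer:

import cv2

def match_percentage(captured, reference) -> int:
    """Score how well a captured object image matches a reference, 0-100."""
    orb = cv2.ORB_create()
    _, des1 = orb.detectAndCompute(captured, None)
    _, des2 = orb.detectAndCompute(reference, None)
    if des1 is None or des2 is None:
        return 0  # no features found in one of the images
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    # Lowe's ratio test: keep matches clearly better than the runner-up
    good = [p for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return round(100 * len(good) / max(len(matches), 1))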

Interop Data Processing

Write a file/functions to receive ODLC data from vision and send to interop server using pre-made interop library and framework.

Mapping

Stitch multiple images into WGS 84 Web Mercator Projection.

  • 16:9 aspect ratio
  • GPS position given for the center of the image
  • Height given in feet for distance
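
For reference, the forward Web Mercator projection (EPSG:3857) maps WGS 84 latitude/longitude to plane coordinates as sketched below; this is standard map math, independent of the stitching itself:

import math

EARTH_RADIUS_M = 6378137.0  # WGS 84 equatorial radius used by Web Mercator

def wgs84_to_web_mercator(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    """Project WGS 84 lat/lon (degrees) to Web Mercator (EPSG:3857) meters."""
    x = EARTH_RADIUS_M * math.radians(lon_deg)
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y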

Exclude uncommon letters from text detection

Some letters can be mirrors of more common letters. For example, 'W' can be similar to an 'M' rotated 180 degrees. Uncommon letters should be excluded/mapped to more common letters.
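
A simple way to implement this (a sketch; the actual alias table would be chosen by the team after testing the detector) is a lookup that collapses confusable letters onto their common counterparts:

# Confusable/uncommon letters collapsed onto more common ones.
# Only the 'W' -> 'M' example from this issue is shown; the real table
# would be filled in based on detector testing.
LETTER_ALIASES = {"W": "M"}

def normalize_letter(letter: str) -> str:
    """Map a detected letter to its common counterpart, if it has one."""
    return LETTER_ALIASES.get(letter.upper(), letter.upper())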

Unit Tests

Make unit tests for every function in every algorithm.
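
For example, with pytest (runnable via poetry run pytest if added as a dev dependency), a test module is just functions whose names start with test_; the clamp helper below is a stand-in for any pure function from the vision/ or flight/ algorithms:

# test_example.py -- minimal pytest sketch; clamp() is an illustrative helper
def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def test_clamp_bounds():
    assert clamp(5.0, 0.0, 1.0) == 1.0
    assert clamp(-5.0, 0.0, 1.0) == 0.0
    assert clamp(0.5, 0.0, 1.0) == 0.5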

Bounding Box object

This issue details this project's need for a bounding box object stored in the "common" submodule of the "vision" module.

This bounding box object should utilize an efficient and appropriate data structure to store the four vertices of a non-rotated (upright) rectangle. In practice, this rectangle corresponds to a region of interest in a processed image during the competition.
An example of one implementation of a similar use case can be found in the IARC-2020 repo's vision.bounding_box module.
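
A minimal sketch of such a structure (the names and layout are illustrative, not the repo's final design), storing the two extreme corners and deriving the four vertices:

from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned bounding box stored as top-left and bottom-right corners."""
    x_min: int
    y_min: int
    x_max: int
    y_max: int

    @property
    def vertices(self) -> tuple[tuple[int, int], ...]:
        """The four corners in clockwise order from the top-left."""
        return (
            (self.x_min, self.y_min),
            (self.x_max, self.y_min),
            (self.x_max, self.y_max),
            (self.x_min, self.y_max),
        )

    def crop(self, image):
        """Return the region of interest this box covers in a NumPy image."""
        return image[self.y_min:self.y_max, self.x_min:self.x_max]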

Object Bounding Box

object_bounding_box.py

Create a bounding box around the object to get an optimized and accurate image of the object with minimal unimportant detail, making it easier to calculate characteristics. Returns a cropped image of the object.

Parameters: Frame with object, Contour array of object

Object Image

object_image.py

After detecting an object, we need to save an image of the object:
  • get the most ideal (clear and visible) image of the object
  • crop the image so the object takes up a minimum of 25% of the image
  • visibly recognizable to someone looking at the image

Might be useless...

UGV Control Through MAVSDK

Given the GPS coordinates of the final location, maneuver the UGV to the desired location as quickly as possible. Utilize MAVSDK commands and functions for ease of use to control the UGV's speed and direction.

Stationary Obstacle Avoidance Algorithm

Summary

Write code to avoid the stationary obstacles - virtual cylinders - shown in the mission data, using the given altitude, radius, latitude, and longitude for each cylinder.

Extended Summary

This task involves the development and implementation of an efficient algorithm that, given points A and B of an intended flight path that intersects a stationary obstacle, detours around the obstacle without "crashing" into it. These stationary obstacles are virtual cylinders up to 750 feet in height and between 30 and 300 feet in radius. The implementation should take into consideration the approximate size of the drone and keep a reasonable distance from the obstacle when detouring, both to accommodate potential GPS inaccuracies and to avoid inadvertent intersection between the body of the drone and the obstacle.
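
The core geometric test is whether the segment from A to B passes within the cylinder's radius plus a safety margin. A flat-plane sketch in local meters (the margin value is an assumed buffer, and the altitude/height check is omitted):

import math

def segment_hits_cylinder(ax, ay, bx, by, cx, cy, radius_m, margin_m=10.0):
    """Check whether the straight path A->B passes within radius + margin of
    a cylinder centered at (cx, cy). All coordinates in meters on a plane."""
    abx, aby = bx - ax, by - ay
    acx, acy = cx - ax, cy - ay
    ab_len_sq = abx * abx + aby * aby
    # Parameter t of the closest point on segment AB to the cylinder center
    t = 0.0 if ab_len_sq == 0 else max(0.0, min(1.0, (acx * abx + acy * aby) / ab_len_sq))
    closest_x, closest_y = ax + t * abx, ay + t * aby
    return math.hypot(cx - closest_x, cy - closest_y) < radius_m + margin_m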

Standard Object Text Orientation

Summary

Objective: Given the bounds of the detected text (a single alphanumeric character), find the orientation of the text in the image. Orientation includes cardinal and intermediate directions.

Extended Summary

Main Objective: This task involves building a function of a text detection module that will run in the vision pipeline that is capable of processing an image with a bounding area within the image and detecting orientation of the single-character, 1-inch-thick, alphanumeric character on the Standard Object(s).

Output: The output required for this detection is the orientation of the text as one of the 8 principal winds (cardinal/intermediate directions: N, NE, E, SE, S, SW, W, NW) relative to North (as seen in the SUAS Enumeration of Standard Objects).
Vehicle orientation data should be retrieved using the MAVSDK telemetry module as an Euler angle in degrees (see MAVSDK-Python Telemetry Docs)
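
Once the text's absolute heading (the image-relative angle combined with the vehicle's yaw from telemetry) is known in degrees clockwise from North, quantizing it to the 8 principal winds is straightforward; a small sketch of that final step:

# Principal winds in clockwise order, 45 degrees apart
WINDS = ("N", "NE", "E", "SE", "S", "SW", "W", "NW")

def heading_to_wind(angle_deg: float) -> str:
    """Map an angle in degrees clockwise from North to the nearest wind."""
    return WINDS[round((angle_deg % 360) / 45) % 8]

assert heading_to_wind(10) == "N"
assert heading_to_wind(100) == "E"
assert heading_to_wind(-30) == "NW"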

Mapping Survey Path

Provided the GPS coordinates for the center point, as well as the height of the map allowed, generate an efficient path for the drone to travel in order to create a map of the desired mapping boundary using a system of waypoints. Must fly at a height to create a 16:9 aspect ratio image, as well as fly at a speed to allow for accurate image capturing with overlap.

AirDrop GPS Flight

Provided the GPS location of the center drop point as well as the boundaries for the drop area, fly to the location of the air drop & prepare to release the UGV. Current system should simply pause at desired location and wait for a short period of time to simulate the release process, and continue to next flight state.

Template Matching

Matches the center image (image with the center coordinate) to the stitched panoramic image and gets the pixel location of the center.
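
A sketch with OpenCV's matchTemplate, assuming the center image appears in the panorama at roughly the same scale and rotation (stitching warps can break this assumption):

import cv2

def locate_center_image(panorama, center_image):
    """Find where the center image appears in the stitched panorama and
    return the pixel coordinates of its center."""
    result = cv2.matchTemplate(panorama, center_image, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)  # best-match top-left corner
    h, w = center_image.shape[:2]
    return top_left[0] + w // 2, top_left[1] + h // 2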

ODLC Detection

Need ability to detect the ODLC objects within an image. Should return contours of the objects.
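
A classical OpenCV sketch of this step (the blur kernel, Canny thresholds, and minimum area are illustrative tuning assumptions): edge detection followed by external contour extraction.

import cv2

def find_odlc_contours(image, min_area: float = 500.0):
    """Return contours of candidate ODLC objects in a BGR image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Discard tiny contours that are unlikely to be objects
    return [c for c in contours if cv2.contourArea(c) >= min_area]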

Emergent Object ODLC Flight Mission

Given the latitude and longitude of the last known position of the Emergent object, scan the flight boundaries starting from this point for the "person engaged in an activity of interest". Flight path should efficiently scan the search grid, the boundaries of which are located in the Mission Plan in latitude and longitude format.

Generic Object Localization from Image

Summary

object_location.py

Determine the GPS location of an object in an image.
This involves being able to, when given an image and a single pixel in that image matrix (and the retrievable vehicle orientation data and camera optic data), return the coordinates of the real-world GPS location of the pixel in the image.
This module should also be overloaded with the capability of parsing an image with a given bounding area (a set of pixels), plus vehicle orientation data and camera optic data, to return the GPS coordinates that correspond to its real-world bounding area.
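
A heavily simplified sketch of the single-pixel case, assuming a nadir-pointing camera, zero roll/pitch/yaw, flat ground, and a known horizontal field of view; real code must additionally rotate by the vehicle's Euler angles from telemetry:

import math

EARTH_RADIUS_M = 6378137.0

def pixel_to_gps(px, py, width, height, fov_deg, alt_m, drone_lat, drone_lon):
    """Rough flat-ground estimate of the GPS position under an image pixel.
    Assumes a nadir camera with zero roll/pitch/yaw; fov_deg is horizontal."""
    # Angle offsets of the pixel from the image center
    theta_x = math.radians(fov_deg) * (px - width / 2) / width
    theta_y = math.radians(fov_deg * height / width) * (py - height / 2) / height
    # Ground-plane offsets in meters (east, north); image y grows downward
    east = alt_m * math.tan(theta_x)
    north = -alt_m * math.tan(theta_y)
    # Convert meter offsets to degree offsets (small-offset approximation)
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(drone_lat))))
    return drone_lat + dlat, drone_lon + dlon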

Handle text detection at different orientations

  • Text detection with PyTesseract is not reliable when the text is at an angle. This might be resolvable with pytesseract.image_to_osd() (see the sketch below).

  • Also need to consider the orientation in which the shape is passed within the bounding box.
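
A sketch of the image_to_osd() approach mentioned above; note that Tesseract's orientation detection needs enough legible text to work, so it may be unreliable on a single character (the concern this issue raises):

import re
import cv2
import pytesseract

def upright_for_ocr(image):
    """Rotate an image upright using Tesseract's orientation detection."""
    osd = pytesseract.image_to_osd(image)  # returns a text report
    match = re.search(r"Rotate: (\d+)", osd)
    rotate = int(match.group(1)) if match else 0
    if rotate == 90:
        return cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    if rotate == 180:
        return cv2.rotate(image, cv2.ROTATE_180)
    if rotate == 270:
        return cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
    return image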

Greatest Contour Image

Finds the largest contour in the stitched image. Needed to remove the black areas the stitching function produces and create a uniform panoramic image. This should also crop the image to a 16:9 aspect ratio around the center coordinate/center pixel.

MAVProxy Setup & Integration

Using the tools provided by the SUAS Interoperability System, construct the framework necessary to forward GPS data of the drone's position during flight. Data must be uploaded at an average rate of 1Hz while airborne. The commands necessary to forward positional data are located within the MAVLink System, which can be initialized from the terminal, and the tools needed to send this data to the interop system are located within the Client image provided in the Interoperability GitHub page.
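
The issue describes forwarding via MAVProxy and the interop client; purely as an illustration of the 1 Hz requirement, here is how the drone's position could be polled with MAVSDK-Python (the post_telemetry callback is a placeholder for the interop upload call):

import asyncio
from mavsdk import System

async def forward_positions(post_telemetry) -> None:
    """Stream the drone's GPS position at roughly 1 Hz.
    post_telemetry is a placeholder for the interop client's upload call."""
    drone = System()
    await drone.connect(system_address="udp://:14540")  # assumed SITL address
    async for position in drone.telemetry.position():
        post_telemetry(position.latitude_deg, position.longitude_deg,
                       position.absolute_altitude_m)
        await asyncio.sleep(1.0)  # throttle to an average of 1 Hz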

Emergent Object Characteristics

emergent_characteristics.py

Characteristics needed to be recognized:

  • color of clothes
  • size of person
  • skin color
  • surrounding scene

These characteristics need to be transmitted back through the interop system. Most likely returns an object with characteristics.

ex.
{
  "id": 1,
  "mission": 1,
  "type": "STANDARD",
  "latitude": 38,
  "longitude": -76,
  "orientation": "N",
  "shape": "RECTANGLE",
  "shapeColor": "RED",
  "autonomous": false
}

Standard Object Text Color Parsing

Summary

Objective: Given the bounds of the detected text (a single alphanumeric character), find the color of the text. Need to determine the color of the text and map it to the enumeration of colors provided by the interop system.

Extended Summary

Main Objective: This task involves building a function as part of a text_detection module that will run in the vision pipeline that when given an image and a bounding area within that image, can detect the COLOR of the single-character, 1-inch-thick, alphanumeric character on the Standard Object(s).

Output: The output required for this detection is a color matching one of WHITE, BLACK, GRAY, RED, BLUE, GREEN, YELLOW, PURPLE, BROWN, and ORANGE (as seen in the SUAS Enumeration of Standard Objects).
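
The mapping step could be a nearest-color lookup in RGB space (a sketch; the representative RGB values are assumptions, and a perceptual space such as Lab may work better in practice):

# Interop color names mapped to representative RGB values (approximate)
INTEROP_COLORS = {
    "WHITE": (255, 255, 255), "BLACK": (0, 0, 0), "GRAY": (128, 128, 128),
    "RED": (255, 0, 0), "BLUE": (0, 0, 255), "GREEN": (0, 255, 0),
    "YELLOW": (255, 255, 0), "PURPLE": (128, 0, 128),
    "BROWN": (139, 69, 19), "ORANGE": (255, 165, 0),
}

def nearest_interop_color(rgb: tuple[int, int, int]) -> str:
    """Map an average text pixel color to the closest interop color name
    by squared Euclidean distance in RGB space."""
    return min(
        INTEROP_COLORS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, INTEROP_COLORS[name])),
    )

print(nearest_interop_color((250, 160, 20)))  # -> "ORANGE"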

Standard Object Classification

Summary

standard_in_frame.py

Classify the features of a shape in an image; the shape is colored and at minimum 1 foot wide.

Extended Summary

Objective: This task involves building a function of the odlc package that processes a section of an image containing an unknown standard object and resolves the standard object shape and standard object color of that object.

Output: The output required for this detection is the shape type and the shape color of the standard object. Note that the shape will have a differently colored alphanumeric character in 1-inch-thick lettering, which should be ignored in this module.
Possible object shapes are: CIRCLE, SEMICIRCLE, QUARTER_CIRCLE, TRIANGLE, SQUARE, RECTANGLE, TRAPEZOID, PENTAGON, HEXAGON, HEPTAGON, OCTAGON, STAR, and CROSS (as seen in the SUAS enumeration of standard object shapes).
Possible object colors are: WHITE, BLACK, GRAY, RED, BLUE, GREEN, YELLOW, PURPLE, BROWN, and ORANGE (as seen in the SUAS enumeration of standard object colors).

[Image: example of an ODLC standard object]
More examples can be found here.
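
A classical sketch of the shape-resolution step using polygon approximation (the epsilon factor, aspect-ratio tolerance, and vertex-count table are illustrative assumptions; the circle family needs extra logic to separate full, semi-, and quarter-circles):

import cv2

# Vertex counts of approximated polygons mapped to interop shape names.
# STAR and CROSS have concave outlines (10 and 12 vertices respectively).
VERTEX_SHAPES = {3: "TRIANGLE", 5: "PENTAGON", 6: "HEXAGON",
                 7: "HEPTAGON", 8: "OCTAGON", 10: "STAR", 12: "CROSS"}

def classify_shape(contour) -> str:
    """Guess the interop shape name from a contour's approximated polygon."""
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
    n = len(approx)
    if n == 4:
        # Distinguish SQUARE from RECTANGLE by the bounding box aspect ratio
        _, _, w, h = cv2.boundingRect(approx)
        return "SQUARE" if 0.9 <= w / h <= 1.1 else "RECTANGLE"
    if n > 12:
        return "CIRCLE"  # many vertices: circle family; needs refinement
    return VERTEX_SHAPES.get(n, "UNKNOWN")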

Standard Object Color Parsing

Summary

Objective: This task involves building a function of an odlc module that will run in the vision pipeline that is capable of processing an image containing at least one ODLC standard object, and detecting the outer color of the background around the text (not the color of the text)

Output: The output required for this detection is a color matching one of WHITE, BLACK, GRAY, RED, BLUE, GREEN, YELLOW, PURPLE, BROWN, and ORANGE (as seen in the SUAS Enumeration of Standard Objects).

Emergent Object Detection

Summary

Objective: Given an image, find the bounds of the emergent object. The bounds of the emergent object will be used to crop the image.
