
suas-2022's Issues

UGV Control Through MAVSDK

Given the GPS location of the final location, maneuver the UGV to the desired location in the quickest possible manner. Utilize MAVSDK commands and functions for ease of use to control the UGV's speed and directional commands.
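Before issuing MAVSDK goto or velocity commands, the UGV needs the distance and heading to the target. A minimal sketch of that geometry, assuming a spherical-earth (haversine) model; the function name is illustrative, not part of MAVSDK:

```python
# Sketch: distance and initial bearing from the UGV's current GPS fix to the
# target point, computed before issuing MAVSDK goto/velocity commands.
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in meters, initial bearing in degrees 0-360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine great-circle distance.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing (forward azimuth), normalized to [0, 360).
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

The bearing can then drive the UGV's directional command while the distance sets a stop condition.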

Greatest Contour Image

Finds the greatest contour needed from stitching image. Needed for the resulting black areas from stitching function to create a uniform panoramic image. This should also crop the image into a 16:9 aspect ratio from the center coordinate/center pixel.
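The 16:9 center crop reduces to computing the largest 16:9 window that fits the stitched image, re-centered on the given pixel and clamped to the bounds. A sketch (pure arithmetic; the greatest-contour step itself would use `cv2.findContours`/`cv2.contourArea`):

```python
# Sketch: largest 16:9 crop window centered near (cx, cy), clamped so the
# window stays inside the stitched image.
def crop_rect_16_9(img_w, img_h, cx, cy):
    """Return (x, y, w, h) of the crop rectangle."""
    # Largest 16:9 box that fits in the image.
    w = min(img_w, (img_h * 16) // 9)
    h = (w * 9) // 16
    # Center on (cx, cy), then clamp to the image bounds.
    x = min(max(cx - w // 2, 0), img_w - w)
    y = min(max(cy - h // 2, 0), img_h - h)
    return x, y, w, h
```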

Standard Object Color Parsing

Summary

Objective: This task involves building a function of an odlc module that will run in the vision pipeline that is capable of processing an image containing at least one ODLC standard object, and detecting the outer color of the background around the text (not the color of the text)

Output: The output required for this detection is a color matching one of WHITE, BLACK, GRAY, RED, BLUE, GREEN, YELLOW, PURPLE, BROWN, and ORANGE (as seen in the SUAS Enumeration of Standard Objects).
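One simple way to map a sampled pixel color onto the enumeration is nearest-neighbor matching against reference anchors. A sketch — the RGB anchor values below are rough assumptions, not values from the SUAS rules, and matching in a perceptual space such as Lab would likely be more robust:

```python
# Sketch: classify an (r, g, b) sample as the nearest SUAS color enum by
# squared Euclidean distance. Anchor values are illustrative assumptions.
SUAS_COLORS = {
    "WHITE": (255, 255, 255), "BLACK": (0, 0, 0), "GRAY": (128, 128, 128),
    "RED": (255, 0, 0), "BLUE": (0, 0, 255), "GREEN": (0, 255, 0),
    "YELLOW": (255, 255, 0), "PURPLE": (128, 0, 128),
    "BROWN": (139, 69, 19), "ORANGE": (255, 165, 0),
}

def classify_color(rgb):
    """Return the SUAS color name closest to the given (r, g, b) tuple."""
    return min(
        SUAS_COLORS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, SUAS_COLORS[name])),
    )
```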

Standard Object Text Character Parsing

Summary

Detect text on the standard ODLC objects and return the bounding area and alphabetic characters found

Extended Summary

Objective: This task involves building a function of a text_detection module that will run in the vision pipeline that is capable of processing an image containing at least one ODLC standard object, and detecting the single-character, 1-inch-thick, alphabetic lettering on the Standard Object(s).

Output: The output required for this detection is the exact character of the English alphabet that is on the ODLC Standard Object as well as the bounding box of that character.

Standard Object Classification

Summary

standard_in_frame.py

Classify the features of a shape in an image that is minimum 1 foot wide and colored.

Extended Summary

Objective: This task involves building a function of the odlc package that processes a section of an image containing an unknown standard object, and resolving the standard object shape and standard object color of that standard object.

Output: The output required for this detection is the shape type and the shape color of the standard object. Note that the shape will have a differently colored alphanumeric character of 1-inch thick lettering which should be ignored in this module.
Possible object shapes are: CIRCLE, SEMICIRCLE, QUARTER_CIRCLE, TRIANGLE, SQUARE, RECTANGLE, TRAPEZOID, PENTAGON, HEXAGON, HEPTAGON, OCTAGON, STAR, and CROSS (as seen in the SUAS enumeration of standard object shapes).
Possible object colors are: WHITE, BLACK, GRAY, RED, BLUE, GREEN, YELLOW, PURPLE, BROWN, ORANGE (as seen in the SUAS enumeration of standard object colors).

Here is an example of an ODLC standard object:
image
More examples can be found here.
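A common approach to resolving the shape is counting the vertices of a polygon approximation of the contour (e.g. from `cv2.approxPolyDP`) and mapping the count onto the enumeration. A sketch under that assumption — the vertex counts and the `is_round` flag are illustrative, and the round shapes need a separate roundness/arc check in practice:

```python
# Sketch: map a polygon-approximation vertex count to the SUAS shape enum.
# Counts assume typical cv2.approxPolyDP output; SQUARE/RECTANGLE/TRAPEZOID
# need an extra side-ratio/angle check, and the circle family needs a
# roundness test (represented crudely by `is_round` here).
POLYGON_SHAPES = {
    3: "TRIANGLE",
    4: "SQUARE",      # refine to RECTANGLE/TRAPEZOID via side ratios and angles
    5: "PENTAGON",
    6: "HEXAGON",
    7: "HEPTAGON",
    8: "OCTAGON",
    10: "STAR",
    12: "CROSS",
}

def classify_shape(vertex_count, is_round=False):
    if is_round:
        return "CIRCLE"  # SEMICIRCLE/QUARTER_CIRCLE require arc-angle analysis
    return POLYGON_SHAPES.get(vertex_count, "UNKNOWN")
```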

Standard Object Text Color Parsing

Summary

Objective: Given the bounds of the detected text (a single alphanumeric character), find the color of the text and map it to the enumeration of colors provided by the interop system.

Extended Summary

Main Objective: This task involves building a function as part of a text_detection module that will run in the vision pipeline that when given an image and a bounding area within that image, can detect the COLOR of the single-character, 1-inch-thick, alphanumeric character on the Standard Object(s).

Output: The output required for this detection is a color matching one of WHITE, BLACK, GRAY, RED, BLUE, GREEN, YELLOW, PURPLE, BROWN, and ORANGE (as seen in the SUAS Enumeration of Standard Objects).
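Once the character's pixels are isolated inside the bounding area (e.g. by thresholding), the color to classify is just the mean of those pixels. A minimal NumPy sketch, assuming the mask marks the character's pixels:

```python
# Sketch: mean color of the masked (character) pixels; the result would then
# be matched against the SUAS color enumeration.
import numpy as np

def mean_masked_color(image, mask):
    """image: HxWx3 uint8 array; mask: HxW bool array of text pixels."""
    pixels = image[mask]                # N x 3 array of the character's pixels
    return tuple(pixels.mean(axis=0).round().astype(int))
```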

Template Matching

Matches the center image (image with the center coordinate) to the stitched panoramic image and gets the pixel location of the center.
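In practice this is `cv2.matchTemplate` (e.g. with `TM_SQDIFF`) plus `cv2.minMaxLoc`; the idea can be sketched in plain NumPy as a sliding-window sum of squared differences — far too slow for real images, but it shows what the match computes:

```python
# Sketch: locate the template (center image) inside the panorama by brute-force
# sum of squared differences; use cv2.matchTemplate in production.
import numpy as np

def match_template(panorama, template):
    """Return (row, col) of the top-left corner of the best match."""
    ph, pw = panorama.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ph - th + 1):
        for c in range(pw - tw + 1):
            window = panorama[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((window - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The center pixel of the matched window then gives the panorama coordinates of the center image's center coordinate.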

Emergent Object Characteristics

emergent_characteristics.py

Characteristics needed to be recognized:
-color of clothes
-size of person
-skin color
-surrounding scene

These characteristics need to be transmitted back through the interop system. Most likely returns an object with characteristics.

ex.
{
  "id": 1,
  "mission": 1,
  "type": "EMERGENT",
  "latitude": 38,
  "longitude": -76,
  "description": "Person wearing a red shirt",
  "autonomous": false
}

Handle text detection at different orientations

  • Text detection with PyTesseract is not reliable when the text is at an angle. This might be resolvable with pytesseract.image_to_osd().

  • Also need to consider the orientation of how the shape is passed in the bounding box.

Object Image

object_image.py

After detecting an object, we need to save an image of the object:
-get most ideal (clear and visible) image of the object
-crop image so object takes up minimum 25% of image
-visibly recognizable to someone looking at the image

Might be useless...

ODLC Detection

Need ability to detect the ODLC objects within an image. Should return contours of the objects.

Emergent Object Detection

Summary

Objective: Given an image, find the bounds of the emergent object. The bounds of the emergent object will be used to crop the image.

Standard Object Match Percentage

standard_match.py

Compare the captured image of the object with the base image of the object and return an accuracy rating as an integer.

MAVProxy Setup & Integration

Using the tools provided by the SUAS Interoperability System, construct the framework necessary to forward GPS data of the drone's position during flight. Data must be uploaded at an average rate of 1Hz while airborne. The commands necessary to forward positional data are located within the MAVLink System, which can be initialized from the terminal, and the tools needed to send this data to the interop system are located within the Client image provided in the Interoperability GitHub page.

Mapping

Stitch multiple images into WGS 84 Web Mercator Projection.

16:9 Aspect Ratio
GPS position given for center of image
Given height in feet for distance

Generic Object Localization from Image

Summary

object_location.py

Determine the GPS location of an object in an image.
Given an image, a single pixel in that image, and the retrievable vehicle orientation data and camera optic data, return the real-world GPS coordinates of that pixel.
This module should also be overloaded with the capability of parsing an image with a given bounding area (set of pixels), plus the vehicle orientation and camera optic data, to return the GPS coordinates that correspond to its real-world bounding area.
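The geometry core of the single-pixel case can be sketched under strong simplifying assumptions: a nadir-pointing camera with zero roll/pitch, top of the image aligned north, known field of view, and a flat-earth small-offset conversion. Real code must fold in the MAVSDK attitude telemetry; everything here is illustrative:

```python
# Sketch: project an image pixel to a GPS coordinate. Assumes a nadir camera,
# zero roll/pitch, image top aligned north, and a flat-earth approximation.
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate

def pixel_to_gps(lat, lon, alt_m, px, py, img_w, img_h, hfov_deg, vfov_deg):
    # Angular offset of the pixel from the image center.
    ax = math.radians((px - img_w / 2) / img_w * hfov_deg)
    ay = math.radians((py - img_h / 2) / img_h * vfov_deg)
    # Ground-plane offsets from the point directly below the vehicle.
    east_m = alt_m * math.tan(ax)
    south_m = alt_m * math.tan(ay)  # +y in the image points down, i.e. south here
    # Meters to degrees (fine for small offsets).
    dlat = -south_m / METERS_PER_DEG_LAT
    dlon = east_m / (METERS_PER_DEG_LAT * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

The bounding-area overload would apply the same projection to each corner pixel of the area.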

Off-Axis ODLC Flight Plan

Given the GPS location of the Off-Axis object, fly as close to the passed coordinates as possible within the flight bounds, wait a set amount of time to simulate vision working, and continue with other flight tasks.

Vision Pipeline

Make a pipeline that runs the vision code. Code will be executed in post-processing.

Unit Tests

Make unit tests for every function in every algorithm.

Stationary Obstacle Avoidance Algorithm

Summary

Write code to avoid the stationary obstacles - virtual cylinders - shown in the mission data, using the given GPS coordinates for the cylinders' altitude, radius, latitude and longitude.

Extended Summary

This task involves the development and implementation of an efficient algorithm to, when given point A and point B of an intended flight path that intersect a stationary obstacle, detour around the stationary obstacle without "crashing" into it. These stationary obstacles are virtual cylinders of up to 750 feet in height and between 30 and 300 feet in radius. The implementation should take into consideration the approximate size of the drone and thus keep a reasonable distance from the stationary obstacle when detouring. This is to accommodate for potential GPS inaccuracies as well as to avoid inadvertent intersection between the body of the drone and the stationary obstacle.
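One simple detour strategy is to insert a waypoint offset perpendicular to the A→B segment at the cylinder's center, pushed out past the radius plus a safety margin. A 2-D sketch in local meter coordinates (real code would work in lat/lon and may need several detour points for large obstacles):

```python
# Sketch: compute one detour waypoint that clears a cylindrical obstacle on
# the segment A -> B. Coordinates are local 2-D meters for illustration.
import math

def detour_waypoint(ax, ay, bx, by, cx, cy, radius_m, margin_m=10.0):
    # Unit vector along the path and its left-hand perpendicular.
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    px, py = -uy, ux
    # Choose the perpendicular side that keeps the detour nearer the
    # original path (opposite the side the cylinder center sits on).
    side = 1.0 if (cx - ax) * px + (cy - ay) * py <= 0 else -1.0
    d = radius_m + margin_m  # clearance: radius plus airframe/GPS margin
    return cx + side * px * d, cy + side * py * d
```

The margin accounts for the drone's size and GPS inaccuracy, as described above.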

Emergent Object Detection

emergent_in_frame.py

Detect a person engaged in an "activity" of interest:
-motion detection
-object detection (to find potential humans)
-human detection from objects found (shape-based, texture-based or motion-based features)

Object should be in the search area. Returns a bool.

*Maybe look into datasets that could help train to distinguish humans (score based)


Waypoint Flight Path

Receive waypoint locations as GPS coordinates, and autonomously maneuver the drone to take quickest path between waypoints. Must fly the waypoints in sequential order as they are received from the Mission Plan downloaded from the Interoperability System. Utilize provided test data or create own waypoints corresponding to our flight test location to test flight logic.

Standard Object Characteristics

standard_characteristics.py

Characteristics needed to be recognized:
-shape
-shape color
-alphanumeric character
-alphanumeric color
-alphanumeric orientation (position relative to birds eye view?)

These characteristics need to be transmitted back through the interop system. Most likely returns an object with characteristics.

ex.

{
  "id": 1,
  "mission": 1,
  "type": "STANDARD",
  "latitude": 38,
  "longitude": -76,
  "orientation": "N",
  "shape": "RECTANGLE",
  "shapeColor": "RED",
  "autonomous": false
}

Bounding Box object

This issue details this project's need for a bounding box object stored in the "common" submodule of the "vision" module.

This bounding box object should utilize an efficient and appropriate data structure to store the four vertices of a non-rotated (up-right) rectangle. In practice, this rectangle corresponds to a region of interest on a processed image during the competition.
An example of one implementation of a similar use case can be found in the IARC-2020 repo's vision.bounding_box module.
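A minimal sketch of such an object — a frozen dataclass holding the top-left corner plus width/height, deriving the four vertices on demand (names and fields are illustrative, not the IARC-2020 API):

```python
# Sketch: axis-aligned (up-right) bounding box for vision.common.
from dataclasses import dataclass

@dataclass(frozen=True)
class BoundingBox:
    x: int       # left edge, pixels
    y: int       # top edge, pixels
    width: int
    height: int

    @property
    def vertices(self):
        """Four corners, clockwise from top-left."""
        return (
            (self.x, self.y),
            (self.x + self.width, self.y),
            (self.x + self.width, self.y + self.height),
            (self.x, self.y + self.height),
        )

    def crop(self, image):
        """Slice the region of interest out of a (rows, cols, ...) array."""
        return image[self.y:self.y + self.height, self.x:self.x + self.width]
```

Storing corner-plus-size rather than four explicit vertices keeps the up-right invariant trivially true and makes cropping a single slice.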

Standard Object Text Orientation

Summary

Objective: Given the bounds of the detected text (a single alphanumeric character), find the orientation of the text in the image. Orientation includes cardinal and intermediate directions.

Extended Summary

Main Objective: This task involves building a function of a text detection module that will run in the vision pipeline that is capable of processing an image with a bounding area within the image and detecting orientation of the single-character, 1-inch-thick, alphanumeric character on the Standard Object(s).

Output: The output required for this detection is the orientation of the text as one of the 8 principal winds (cardinal/intermediate directions: N, NE, E, SE, S, SW, W, NW) relative to North (as seen in the SUAS Enumeration of Standard Objects).
Vehicle orientation data should be retrieved using the MAVSDK telemetry module as an Euler angle in degrees (see MAVSDK-Python Telemetry Docs)
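After the character's up-direction angle has been combined with the vehicle yaw from MAVSDK telemetry, snapping the resulting heading to the 8 principal winds is a one-liner. A sketch (the upstream angle computation is assumed, not shown):

```python
# Sketch: snap a heading in degrees relative to true north to the nearest of
# the 8 principal winds.
WINDS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def heading_to_wind(deg):
    return WINDS[round((deg % 360) / 45) % 8]
```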

Mapping Survey Path

Provided the GPS coordinates for the center point, as well as the height of the map allowed, generate an efficient path for the drone to travel in order to create a map of the desired mapping boundary using a system of waypoints. Must fly at a height to create a 16:9 aspect ratio image, as well as fly at a speed to allow for accurate image capturing with overlap.

Emergent Object ODLC Flight Mission

Given the latitude and longitude of the last known position of the Emergent object, scan the flight boundaries starting from this point for the "person engaged in an activity of interest". Flight path should efficiently scan the search grid, the boundaries of which are located in the Mission Plan in latitude and longitude format.

Object Bounding Box

object_bounding_box.py

Create a bounding box around the object to get an optimized and accurate image of an object with minimal non important detail to make calculating characteristics easier. Returns a cropped image of the object.

Parameters: Frame with object, Contour array of object
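Given the contour as an N×2 array of (x, y) points, the crop is the contour's bounding rectangle plus a little padding — the plain-NumPy equivalent of `cv2.boundingRect`. A sketch, with the padding value as an assumption:

```python
# Sketch: crop the frame to the contour's bounding rectangle, padded slightly
# and clamped to the frame bounds. contour: N x 2 array of (x, y) points.
import numpy as np

def crop_to_contour(frame, contour, pad=5):
    xs, ys = contour[:, 0], contour[:, 1]
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, frame.shape[1])
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, frame.shape[0])
    return frame[y0:y1, x0:x1]
```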

Interop Data Processing

Write a file/functions to receive ODLC data from vision and send to interop server using pre-made interop library and framework.

AirDrop GPS Flight

Provided the GPS location of the center drop point as well as the boundaries for the drop area, fly to the location of the air drop & prepare to release the UGV. Current system should simply pause at desired location and wait for a short period of time to simulate the release process, and continue to next flight state.

Exclude uncommon letters from text detection

Some letters can be mirrors of more common letters. For example, 'W' can be similar to an 'M' rotated 180 degrees. Uncommon letters should be excluded/mapped to more common letters.
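The exclusion can be a simple canonicalization map applied to the detector's output. A sketch — the pairs below are illustrative guesses, and the real list should come from testing on SUAS-style targets:

```python
# Sketch: collapse letters that are rotations/mirrors of more common ones, so
# a 180-degree-rotated 'M' read as 'W' does not produce a spurious class.
# The mapping entries are illustrative assumptions.
AMBIGUOUS = {"W": "M", "Z": "N"}

def canonical_letter(ch):
    return AMBIGUOUS.get(ch, ch)
```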
