
DroneAid's Introduction


DroneAid logo

DroneAid uses machine learning to detect calls for help placed on the ground by those in need. At the heart of DroneAid is a Symbol Language that is used to train a visual recognition model. That model analyzes video from a drone to detect and count specific symbols. A dashboard can then plot those locations on a map and help initiate a response.

An aerial scout for first responders

DroneAid consists of several components:

  1. The DroneAid Symbol Language that represents need and quantities
  2. A mechanism for rendering the symbols in virtual reality to train a model
  3. The trained model that can be applied to drone livestream video
  4. A dashboard that renders the location of needs captured by a drone

The current implementation can be extended beyond a particular drone to other drones, airplanes, and satellites. The Symbol Language can be used to train additional visual recognition implementations.

The original version of DroneAid was created by Pedro Cruz in August 2018. A refactored version was released as a Call for Code® with The Linux Foundation open source project in October 2019. DroneAid is currently hosted at The Linux Foundation.

Get started

The DroneAid origin story

Pedro Cruz explains his inspiration for DroneAid, based on his experience in Puerto Rico after Hurricane Maria. He flew his drone around his neighborhood, saw handwritten messages indicating what people needed, and realized he could standardize a solution to provide a response.

DroneAid

DroneAid Symbol Language

The DroneAid Symbol Language provides a way for those affected by natural disasters to express their needs and make them visible to drones, planes, and satellites when traditional communications are not available.

Victims can use a pre-packaged symbol kit that has been manufactured and distributed to them, or recreate the symbols manually with whatever materials they have available.

These symbols include those below, which represent a subset of the icons provided by The United Nations Office for the Coordination of Humanitarian Affairs (OCHA). These can be complemented with numbers to quantify need, such as the number of people who need water.

Symbol     Meaning
---------  ----------------------------------------------------------
SOS        Immediate Help Needed (orange; downward triangle over SOS)
Shelter    Shelter Needed (cyan; person standing in structure)
OK         No Help Needed (green; upward triangle over OK)
FirstAid   First Aid Kit Needed (yellow; case with first aid cross)
Water      Water Needed (blue; water droplet)
Children   Area with Children in Need (lilac; baby with diaper)
Food       Food Needed (red; pan with wheat)
Elderly    Area with Elderly in Need (purple; person with cane)

See it in action

Dashboard Screenshot

A demonstration implementation takes the video stream of a DJI Tello drone and analyzes the frames to find and count symbols. See tello-demo for instructions on how to get it running.
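For orientation, here is a minimal sketch (not the repo's actual code) of how a Node.js process talks to the Tello: per DJI's published Tello SDK, commands go over UDP to 192.168.10.1:8889, and the H.264 video stream arrives on UDP port 11111 once "streamon" is sent.

    // Minimal sketch, not the repo's actual code. Address and ports are
    // from DJI's published Tello SDK.
    const dgram = require('dgram');

    const TELLO_ADDR = '192.168.10.1';
    const TELLO_CMD_PORT = 8889;
    const cmdSocket = dgram.createSocket('udp4');

    function send(cmd) {
      cmdSocket.send(cmd, TELLO_CMD_PORT, TELLO_ADDR, (err) => {
        if (err) console.error(`Failed to send "${cmd}":`, err);
      });
    }

    send('command');                          // enter SDK mode
    setTimeout(() => send('streamon'), 1000); // start the video stream

    // Raw H.264 arrives on UDP port 11111; the demo pipes it through
    // ffmpeg before frames reach the browser for symbol detection.
    const videoSocket = dgram.createSocket('udp4');
    videoSocket.on('message', (chunk) => {
      // e.g., write `chunk` to an ffmpeg child process's stdin
    });
    videoSocket.bind(11111);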

Use the pre-trained visual recognition model on the Symbol Language

See the TensorFlow.js example.

See the TensorFlow.js example deployed to Code Engine.
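As a hedged sketch of what running the model in the browser looks like, assuming the load/detect API that the Cloud Annotations examples use for @cloud-annotations/object-detection (the library the demo loads from jsDelivr); the model path is illustrative:

    // Hedged sketch: assumes the objectDetection.load()/detect() API from
    // the Cloud Annotations examples; 'model_web' is an illustrative path.
    const image = document.getElementById('frame'); // <img>, <canvas>, or <video>

    objectDetection
      .load('model_web')
      .then((model) => model.detect(image))
      .then((predictions) => {
        // Each prediction is roughly { class, score, bbox: [x, y, width, height] }.
        predictions.forEach((p) => {
          console.log(`${p.class} (${(p.score * 100).toFixed(1)}%)`, p.bbox);
        });
      });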

Setting up and training the model

In order to train the model, we must place the symbols into simulated environments so that the system learns to detect them in a variety of conditions (e.g., when they are distorted, faded, or in low-light conditions).

See SETUP.md

Frequently asked questions

See FAQ.md

Project roadmap

See ROADMAP.md

Technical charter

See DroneAid-Technical-Charter.pdf

Built with

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting DroneAid pull requests.

Authors

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

DroneAid's People

Contributors

bourdakos1 · dependabot[bot] · derekteay · johnwalicki · krook · pedrocruzio · vabarbosa


DroneAid's Issues

Unloadable CSS / JavaScript dependencies when connected to Tello

If your laptop is connected to the Tello drone's hotspot and you try to load the http://127.0.0.1:3000 webpage, index.html tries (unsuccessfully) to load the remote stylesheet
<link rel="stylesheet" href="https://codepen.io/ibmcodait/pen/gVMdwm.css">
and scripts

    <script src="https://cdn.jsdelivr.net/npm/@cloud-annotations/object-detection"></script>
    <script src="https://codepen.io/ibmcodait/pen/gVMdwm.js"></script>

There are additional transitive dependencies within those files, too.

Since the laptop is not connected to the internet (at least not over Wi-Fi), the rendered page is a mess. It would be better to vendor these dependencies locally in this repo so that a Ctrl-R / page reload doesn't wreck the drone demo.
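For example, index.html could reference vendored copies instead (filenames and paths here are hypothetical):

    <!-- Hypothetical local copies so the page loads without internet access -->
    <link rel="stylesheet" href="vendor/gVMdwm.css">
    <script src="vendor/object-detection.js"></script>
    <script src="vendor/gVMdwm.js"></script>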

For now, the workaround is to:

  • Connect to your mobile smartphone hotspot
  • npm start
  • Open a browser tab to http://127.0.0.1:3000
  • Power on the Tello Drone
  • Connect to the Tello hotspot
  • Don't reload the browser page.
  • Fly the drone with object detection.

Encapsulate the trained model in a Docker image

Is your feature request related to a problem? Please describe.
The IBM Model Asset Exchange provides a way to reuse models instead of having to train them manually.

Describe the solution you'd like
The trained model materials need to be packaged and shared through MAX.

Describe alternatives you've considered
Training the model each time for each platform.

Additional context
This could be related to any other Dockerization steps.

Anki flashcards deck for learning and recall

Is your feature request related to a problem? Please describe.
An Anki deck of flashcards would make learning and recalling the DroneAid symbols even easier.

Describe the solution you'd like

  • Deck
    • Card(s)
      • Image + Visual Description
      • Meaning


Document steps for PowerAI and Watson Visual Recognition

Is your feature request related to a problem? Please describe.
Beyond the current open source, TensorFlow-based implementation, we need to add steps for alternative visual recognition systems, including PowerAI and Watson Visual Recognition.

Describe the solution you'd like
DroneAid in general needs a flexible architecture that allows different components to implement its four core features: (1) multiple device-type inputs, (2) multiple training systems, (3) multiple realtime processing systems, and (4) multiple downstream reporting/messaging outputs.

Additional context
The original system was based on PowerAI. We need to recover those steps if possible.

DOC: Visual description of each symbol

A written description of each symbol in the README table could be helpful for training and memory purposes.

I wrote this; please feel free to reuse any or none of it without attribution:

Each of the symbols is drawn within a triangle pointing up:

  • Immediate Help Needed (orange; downward triangle over SOS),
  • Shelter Needed (cyan; like a guy standing in a tall pentagon without a floor),
  • OK: No Help Needed (green; upward triangle over OK),
  • First Aid Kit Needed (yellow; briefcase with a first aid cross),
  • Water Needed (blue; rain droplet),
  • Area with Children in Need (lilac; baby looking thing with a diaper on),
  • Food Needed (red; pan with wheat drawn above it),
  • Area with Elderly in Need (purple; person with a cane)

Create a model to detect hand-drawn "SOS"

Is your feature request related to a problem? Please describe.
We assume that the person in need will have a kit with the printed symbols available. We should improve the system to demonstrate how a person could hand-recreate the symbols, and in turn, make the recognition more sensitive to those symbols.

droneaid-counter.js doesn't actually count

Describe the bug
The model detects one of the classified images and draws a bounding box, but the counter on the web page does not increment.

To Reproduce
Steps to reproduce the behavior:

  1. View some of the symbol images.
  2. Observe that the model draws bounding boxes.
  3. Observe that the counter on the web page does not increment.
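A sketch of the kind of per-label tally one would expect here (element ids and names are hypothetical, and a real fix would also need to de-duplicate detections across video frames rather than count every frame):

    // Hypothetical sketch of the missing counting step: tally detections
    // per symbol label and write the totals into the page.
    const counts = {};

    function updateCounts(predictions) {
      predictions.forEach((p) => {
        counts[p.class] = (counts[p.class] || 0) + 1;
      });
      Object.entries(counts).forEach(([label, n]) => {
        const el = document.getElementById(`count-${label}`); // id scheme assumed
        if (el) el.textContent = String(n);
      });
    }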

Encapsulate the Tello demo in a Docker image

Is your feature request related to a problem? Please describe.
Users currently have to download Homebrew, ffmpeg, Node.js, and other tools. We should package all of this in Docker, which would also help support Windows users.

Describe the solution you'd like
Create a Dockerfile that captures all the steps.

Describe alternatives you've considered
Platform-specific instructions

Additional context
The trained visual recognition model should also be shared through the IBM Model Asset Exchange as a Docker image.
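A hypothetical sketch of such a Dockerfile (base image, package names, and layout are assumptions, not the repo's actual structure):

    # Hypothetical sketch; not the repo's actual build. ffmpeg decodes the
    # Tello's H.264 UDP stream; Node.js runs the demo server.
    FROM node:18-slim
    RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    EXPOSE 3000
    CMD ["npm", "start"]

Note that the container would also need to reach the drone's UDP ports, e.g. by running with the host network (docker run --network host).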

Reimplement the mapping dashboard

Is your feature request related to a problem? Please describe.
We need to replace the lost code from the PowerAI version that plotted the located needs by type and coordinates on a map.

Describe the solution you'd like
This is the fourth and final component needed for an end-to-end version of DroneAid.

Additional context
The initial version of DroneAid had this dashboard as part of the solution. It was lost when the PowerAI implementation was deleted.
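Since the original code is lost, here is only an illustrative sketch of the plotting step, using Leaflet as a stand-in mapping library (all names and coordinates are assumptions):

    // Illustrative only; the original PowerAI dashboard code is lost.
    const map = L.map('map').setView([18.2208, -66.5901], 9); // Puerto Rico
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '&copy; OpenStreetMap contributors',
    }).addTo(map);

    // One marker per detected need, combining model output with drone telemetry.
    function plotNeed({ type, lat, lon, count }) {
      L.marker([lat, lon]).addTo(map).bindPopup(`${type}: ${count} detected`);
    }

    plotNeed({ type: 'Water Needed', lat: 18.4655, lon: -66.1057, count: 3 });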


Extend to consumer DJI drones like the Mavic, Phantom, and Spark

Is your feature request related to a problem? Please describe.
The input stream to the visual recognition model should be source agnostic, whether from drones, civil aviation, or satellite feeds.

Describe the solution you'd like
We need to provide working code that demonstrates this.
