
Acme-Robotics-Human-Detector-Tracker

Build Status Coverage Status License: MIT

Detects and Tracks Human in Robot Reference Frame

Project contributors

  1. Naitri Rajyaguru, Robotics Graduate Student at University of Maryland, College Park.
  2. Mayank Joshi, Robotics Graduate Student at University of Maryland, College Park.

Overview

Human detection is a subset of object detection in which the primary class “human/person” is detected in images or video frames. In robotics, human detection is essential because humans are dynamic obstacles and a robot must never collide with one. For navigation, therefore, a robot needs to detect and identify the humans in its field of view.

We propose a human detection and tracking module that uses computer vision algorithms to perceive the environment. The module takes frames from a monocular camera, detects humans in each frame with a deep neural network (DNN) model, and then tracks them for as long as they remain in view. A lot of research has been done in object detection, and in human detection in particular. Classical algorithms such as HOG (Histogram of Oriented Gradients) are quite efficient, but they are not as robust as a deep neural network. Today's processing power can run a DNN model in real time, and the accuracy of the model can be increased by training it on more data, so there is always room for improvement. Our module can also collect training data and evaluate the DNN model's performance on test data provided by the user.

We plan to use YOLOv4-tiny, a pre-trained model based on the Darknet framework.

image

Image taken from this article
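The core detection step can be illustrated with a short sketch (this is not the module's actual code; the model file names, input size, and thresholds are assumptions for illustration). It loads the pre-trained YOLOv4-tiny Darknet model through OpenCV's DNN module, reads frames from a monocular camera, and keeps only detections of MS COCO class 0, "person":

// Minimal sketch (not the module's actual code): detect people in camera frames
// with OpenCV's DNN module and a pre-trained YOLOv4-tiny Darknet model.
// File names, input size, and thresholds are assumptions for illustration.
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <vector>

int main() {
  // Load the Darknet config and weights (paths are placeholders).
  cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov4-tiny.cfg",
                                                 "yolov4-tiny.weights");
  cv::VideoCapture cap(0);                       // monocular camera
  cv::Mat frame;
  while (cap.read(frame)) {
    // YOLO expects a square, normalized blob (416x416 is a common choice).
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0,
                                          cv::Size(416, 416),
                                          cv::Scalar(), true, false);
    net.setInput(blob);
    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());

    // Each output row: [cx, cy, w, h, objectness, class scores...].
    for (const auto& out : outs) {
      for (int i = 0; i < out.rows; ++i) {
        const float* row = out.ptr<float>(i);
        float person_score = row[5];             // class 0 = "person" in MS COCO
        if (person_score > 0.5f) {
          int cx = static_cast<int>(row[0] * frame.cols);
          int cy = static_cast<int>(row[1] * frame.rows);
          int w  = static_cast<int>(row[2] * frame.cols);
          int h  = static_cast<int>(row[3] * frame.rows);
          cv::rectangle(frame, cv::Rect(cx - w / 2, cy - h / 2, w, h),
                        cv::Scalar(0, 255, 0), 2);
        }
      }
    }
    cv::imshow("human-tracker sketch", frame);
    if (cv::waitKey(1) == 27) break;             // ESC to quit
  }
  return 0;
}

The tracking stage, which associates these detections across frames, is not shown here.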

Deliverables

  • Project: Acme Robotics Human(s) obstacle detector and tracker in robot reference frame
  • Overview of proposed work, including risks, mitigation, and timeline
  • UML and activity diagrams
  • Travis code coverage setup with Coveralls
  • Developer-level documentation
  • Phase 1 - implementation of the first version of the whole module

Potential Risks and Mitigation

  • The model is trained on the MS COCO dataset, which contains RGB images, so it would not be able to work with infrared cameras or in low light. Training on additional low-light and infrared-camera data can make the model more robust
  • Since YOLOv4-tiny is a compressed version of YOLOv4, it trades accuracy for speed: YOLOv4 achieves an average precision of 64.9% while YOLOv4-tiny achieves 40.2%. See the full comparison here
  • Running the system on the robot can be memory-expensive because of the deep learning model. One possible mitigation is to quantize the model weights so that it takes less memory and runs faster (see the sketch below)
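As a hedged illustration of that last mitigation (not part of the current module), OpenCV's DNN module can at least be asked to run the network in half precision on OpenCL-capable hardware, which roughly halves the memory taken by the weights; full INT8 quantization would require re-exporting the model with a toolchain that supports it:

// Continuing the detection sketch above: request FP16 inference.
// DNN_BACKEND_OPENCV and DNN_TARGET_OPENCL_FP16 are standard OpenCV enums;
// whether this actually helps depends on the robot's hardware.
net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL_FP16);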

UML Diagram

  • The class dependency diagram of the proposed design:

image

  • Activity diagram image

  • The quad chart and the two-page proposal can be found here

Dependencies

Install OpenCV 4.5.0 and other dependencies by running the following command from the Acme-Robotics-Human-Tracker directory:

sh requirements.sh

Build Instructions

With the following steps you can clone this repository to your local machine and build it:

git clone --recursive https://github.com/mjoshi07/Acme-Robotics-Human-Tracker
cd Acme-Robotics-Human-Tracker
mkdir build
cd build
cmake ..
make

To run the tests for this module, use the following command:

 ./tests/human-tracker-test

To use a webcam, uncomment line #216 in Autobot.cpp and comment out line #219. Run the application with:

./app/human-tracker
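For reference, the toggle boils down to the source passed to cv::VideoCapture; a hypothetical illustration (the actual code around those lines in Autobot.cpp may differ):

// Hypothetical illustration of the webcam/video toggle in Autobot.cpp;
// the file path below is a placeholder, not the repository's actual asset.
cv::VideoCapture cap(0);                          // webcam: default device index
// cv::VideoCapture cap("data/test_video.mp4");   // pre-recorded video instead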

Demo-Phase1

The result of Phase 1, the first version of the implementation for human (N >= 1) detection and tracking, can be found here: image

Technology

  • We follow AIP (Agile Iterative Process) and implement the software using TDD (Test-Driven Development)
  • Quality is ensured by unit testing and cppcheck, adhering to the Google C++ style guide (a minimal unit-test sketch follows this list)
  • Operating System: Ubuntu 18.04 LTS
  • Programming Language: Modern C++
  • Build System: CMake >= 3.2.1
  • External Library: OpenCV >= 4.5.0
  • Pre-trained model: YOLOv4-tiny
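A minimal sketch of the kind of unit test the test target contains, in GoogleTest style (the HumanDetector class, its header, and the test asset path are hypothetical placeholders, not the repository's actual API):

#include <gtest/gtest.h>
#include <opencv2/opencv.hpp>
#include "HumanDetector.h"   // hypothetical header name

// Hypothetical test: a frame known to contain a person should yield at least
// one bounding box from the detector.
TEST(HumanDetectorTest, DetectsPersonInSampleFrame) {
  HumanDetector detector("yolov4-tiny.cfg", "yolov4-tiny.weights");
  cv::Mat frame = cv::imread("data/person.jpg");  // placeholder test image
  ASSERT_FALSE(frame.empty());
  std::vector<cv::Rect> boxes = detector.detect(frame);
  EXPECT_GE(boxes.size(), 1u);
}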

Development Process

Following the Agile Iterative Process for development, we switch the roles of driver and navigator. The product backlog, iteration backlog, and work log can be found here, and the sprint planning and review notes can be found here.

Generate Doxygen Documentation

To install Doxygen, run the following command:

sudo apt-get install doxygen

Now from the cloned directory run:

cd docs
doxygen Doxyfile

The generated Doxygen files are in HTML format and can be found in the ./docs folder. Open them with the following commands:

cd docs
cd html
google-chrome index.html

Running Cppcheck and Cpplint

Run cppcheck: Results are stored in ./results/cppcheck.txt

sh run_cppcheck.sh

Run cpplint: Results are stored in ./results/cpplint.txt

sh run_cpplint.sh
