
Occlusion-Avoidance-and-DL-for-Tracking

Math Model for Time of Occlusion and Tracking by Deep Learning (Multi-Object Tracking Problem)

Contents

Introduction

This repository contains the internship work on "Vision-based Multi-robot Tracking of a Target with Aerial and Ground Robots". The work was carried out at LAAS-CNRS, Toulouse, France as part of my 2020 Master's thesis internship, under the supervision of Prof. Antonio Franchi (Researcher, RIS Team) in coordination with Mr. Martin Jacquet (PhD student), within the ANR project MuroPhen. This project is licensed under the terms of the BSD 3-Clause license.

Abstract

Recent research in aerial robotics has produced visual tracking methods deployed on quadrotors. The idea is to study existing state-of-the-art real-time tracking algorithms and freely adapt them to the problem of occlusion avoidance. Occlusions are mostly unpredictable and remain an open research problem in tracking scenarios. The inability to track the target's features when an obstacle comes between the camera's field of view and the moving target is usually termed the 'phenomenon of occlusion'; the physical-world obstacles causing this phenomenon are called 'occlusions'. Occlusions fall into two broad categories: spatio-temporal (no occlusion, partial occlusion and full occlusion) and time-sequence based (regular time occlusions and irregular time occlusions).

Deploying a tracking algorithm that accounts for occlusion avoidance is a challenging task. It requires exploiting many factors: geometric and photometric parameters, motion, time, the camera, feedback, control loops, real-time stability, sensors, UAV dynamics, and more. One has to build a custom environment and dataset in the available indoor or outdoor arena. This work presents a novel solution for occlusion avoidance based on the Time of Occlusion concept, used to navigate and control the UAV in the 3D environment; a tracking algorithm is then deployed to track the moving target. Deep learning tools prove to be effective methods here, enabling efficient optimization.

After a rigorous study of 10 standard robotics papers and over 20 papers on state-of-the-art deep neural networks, recent approaches such as YOLO, RetinaNet and Joint Monocular 3D Tracking appear to be strong candidates for deployment on aerial vehicles. Each of them is expected to perform well, so these algorithms were tested and are reported with qualitative results.

Highlights:

  1. A novel math model: refer to Chapter 3 of the thesis document or slides 8 and 9 of the thesis presentation
  2. Tracking by Deep Learning Models: YOLOv3, YOLOv4 and Detectron2
  3. Use of GPU with CUDA, CuDNN
  4. Tuning YOLO and Detectron2 to Depth based Tracking system

Occlusion Avoidance: Time of Occlusion Computation (glimpse)

Case-1:

CASE-1: Target moving in a straight line with constant velocity

Case-2:

CASE-2: Moving target with time-varying speed (constant acceleration)
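The two cases above reduce to elementary kinematics. As a rough sketch only (the full model is in Chapter 3 of the thesis; the distance-to-occlusion variable `d` and the function names here are illustrative, not the repository's code), the time until the target enters the occluded region can be computed as:

```python
import math

def time_of_occlusion_const_velocity(d, v):
    """Case 1: a target moving in a straight line at constant speed v
    reaches the occluded region, a distance d away, after t = d / v."""
    return d / v

def time_of_occlusion_const_accel(d, v0, a):
    """Case 2: time-varying speed with constant acceleration a.
    Solve d = v0*t + 0.5*a*t^2 for the positive root t."""
    if a == 0:
        return d / v0
    disc = v0 * v0 + 2.0 * a * d
    return (-v0 + math.sqrt(disc)) / a

print(time_of_occlusion_const_velocity(10.0, 2.0))  # 5.0
print(time_of_occlusion_const_accel(5.0, 3.0, 2.0))
```

Knowing this time lets the UAV controller plan a viewpoint change before the target actually disappears behind the obstacle.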

Prerequisites

YOLOv3:

  • Transfer learning: pre-trained network, pre-trained weights, predefined annotations
  • TensorFlow 1.15, Keras 2.2.4, NumPy 1.16, OpenCV 3.2.4.17, Python 3.6.9
  • Batch size: 8 (can be 8 or 32), batch-norm epsilon = 1e-5, leaky ReLU slope = 0.1, 9 anchor boxes; epochs run on CPU
  • Backbone: Darknet-53 with residual connections, 72 convolutional layers, upsampling layers, non-max suppression
  • COCO dataset: 80 classes
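Of the components listed above, non-max suppression is the one most often re-implemented by hand. A minimal illustrative sketch, not the repository's actual implementation (the `[x1, y1, x2, y2]` box format and the 0.5 IoU threshold are assumptions):

```python
def iou(a, b):
    """Intersection over union of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For example, two nearly identical detections of the same object collapse to the single higher-scoring one, while a distant detection survives.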

YOLOv4:

  • OpenCV >= 2.4, Intel i7, CMake >= 3.8 (3.10.2 used), OpenMP, git, g++
  • Darknet backbone; for GPU usage: CUDA 10.0, cuDNN >= 7.0

Detectron2:

  • Includes RetinaNet and Mask R-CNN, among other models
  • Written in Python, powered by PyTorch
  • Uses the COCO dataset
  • Groups multiple networks in one framework
  • Python >= 3.6, PyTorch >= 1.4, torchvision, pycocotools
  • opencv-python, OpenCV >= 4, gcc >= 5, CUDA 10.1, cuDNN >= 7.0

Working

a. Run YOLOv3 webcam-based tracking (`image_detect.py` is the main file)

  1. Clone this repo. Download the YOLOv3 weights from the YOLO website, or use: `wget https://pjreddie.com/media/files/yolov3.weights`
  2. Copy the downloaded weights file to the `model_data` folder.
  3. Convert the Darknet YOLO model to Keras: `python convert.py model_data/yolov3.cfg model_data/yolov3.weights model_data/yolo_weights.h5`
  4. Test YOLOv3 with `image_detect.py` or `realtime_detect.py` (adjust the model and class files to your needs)
    1. output of YOLOv3 on CPU (my PC): Click here-1 (1m57s)
    2. output of YOLOv3 on CPU (LAAS PC): Click here-2 (1m06s)

b. Run YOLOv4 and Detectron2 webcam-based tracking (all tested on the LAAS PC)

  1. For YOLOv4 webcam object tracking, follow the commands in the 'history linux20' file
    1. output of YOLOv4 on CPU: Click here-3 (1m24s)
    2. output of YOLOv4 on GPU: Click here-4 (25s)
  2. For Detectron2 webcam object tracking, follow the commands in the 'history linux20' file
    1. output of Detectron2 on CPU: Click here-5 (47s)
    2. output of Detectron2 on GPU: Click here-6 (58s)

Thanks to the YOLOv4 and Detectron2 authors; their software is cloned as-is. The installations were adapted to our PC configurations, after which everything worked smoothly.

Results

  1. Please refer to our Time of Occlusion computation (Case 1 and Case 2) in Chapter 3 of the MSc thesis report
  2. YOLOv3, YOLOv4 and Detectron2 were tested on various PC configurations using webcam trajectories

MSC Thesis Defended on 06 July 2020

Future Work

  1. Tuning YOLO and Detectron2 into a depth-based tracking system using the "triangular similarity technique"
  2. Implementing the software on robotic platforms (under construction)
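The triangular similarity technique mentioned above estimates depth from a single camera: an object of known real width W that appears p pixels wide, seen by a camera with focal length f (in pixels), lies at distance D = (W * f) / p. A hypothetical sketch of this planned extension (the calibration numbers below are placeholder values, not measurements from this project):

```python
def focal_length_from_calibration(known_distance, known_width, pixel_width):
    """Calibrate once from a reference shot: f = (p * D) / W."""
    return (pixel_width * known_distance) / known_width

def distance_to_object(known_width, focal_length, pixel_width):
    """Triangular similarity: D = (W * f) / p."""
    return (known_width * focal_length) / pixel_width

# Hypothetical calibration: a 0.5 m wide target at 2.0 m appears 200 px wide.
f = focal_length_from_calibration(2.0, 0.5, 200.0)  # 800.0 px
print(distance_to_object(0.5, f, 100.0))            # 4.0 (metres)
```

In a tracking loop, `pixel_width` would come from the width of the YOLO or Detectron2 bounding box around the target.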

Obstacle Avoidance Testing (Part-2)

  1. The drone simulation uses the ROS, Gazebo, ArduPilot, MAVROS, MAVLink and darknet-ros packages
  2. Output: the drone flight simulation with ROS webcam video "Sim4.mp4" is Click here-7

Object detection work in progress ... (in the ROS-Gazebo drone flight simulation)

Contributors

  • vamshikodipaka
