deepstream-retail-analytics

Description

This is a sample application that performs real-time Intelligent Video Analytics (IVA) in a brick-and-mortar retail environment using NVIDIA DeepStream, TAO, and pre-trained models. DeepStream runs DL inference on a video feed inside a store to detect and track customers and to identify whether the detected persons are carrying shopping baskets. The inference output of this Computer Vision (CV) pipeline is streamed, via Kafka, to a Time-Series Database (TSDB) for archival and further processing. A Django app serves a RESTful API for querying insights based on the inference data. We also include a sample front-end dashboard to quickly visualize the various Key Performance Indicators (KPIs) available through the Django app.

This application is based on the deepstream-test4 and deepstream-test5 sample applications included with DeepStream. The architecture diagram in the Application Architecture section below shows how all the components are connected.

What is this DeepStream pipeline made of?

  • Primary Detector: PeopleNet Pre-Trained Model (PTM) from NGC
  • Secondary Detector: Custom classification model trained using TAO toolkit to classify people with and without shopping baskets
  • Object Tracker: NvDCF tracker
  • Message Converter: Custom message converter that generates a custom payload from the inference data
  • Message Broker: Relays the inference data to a Kafka server
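
For orientation, the conceptual element order is sketched below. This follows the standard deepstream-test4/test5 layout and uses stock DeepStream plugin names; it is an illustration of how the components above connect, not the app's exact configuration:

# Conceptual pipeline order (illustrative sketch, not runnable as-is):
# source(s) -> nvstreammux -> nvinfer (PeopleNet, primary detector)
#   -> nvtracker (NvDCF) -> nvinfer (basket classifier, secondary)
#   -> nvmsgconv (custom payload) -> nvmsgbroker (Kafka)
#   -> sink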

Application Architecture

[Architecture diagram: camera feed → DeepStream pipeline (PeopleNet detector → NvDCF tracker → basket classifier → message converter → message broker) → Kafka → kSQL time-series store → Django REST API → dashboard]

Quick Start

Prerequisites

  1. Install the latest NVIDIA drivers for your operating system and GPU.

  2. Install Docker and the NVIDIA Container Toolkit - refer to the NVIDIA Container Toolkit installation guide.

  3. OPTIONAL: Install Python and pip. These are required only for the front-end dashboard and can be omitted otherwise.

  4. Install DeepStream 6.1 (the steps below use the official Docker image; see the DeepStream installation instructions for other options)

    • Pull the docker image for DeepStream development
    docker pull nvcr.io/nvidia/deepstream:6.1-devel
    • Allow external applications to connect to the host's X display
    xhost +

    Note: If you are using a remote machine, the above command will not work from an SSH session. It has to be executed from a VNC/RDP connection.

    • Run the container
    docker run -it --entrypoint /bin/bash --gpus all --rm --network=host -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix --privileged -v /var/run/docker.sock:/var/run/docker.sock nvcr.io/nvidia/deepstream:6.1-devel

    This command will:

    • Start the container
    • Provide access to all GPUs
    • Attach the container to the host's network
    • Forward the host's display to the container, along with some other volumes
    • Open an interactive terminal to run commands from within the container
  5. Install git-lfs inside the container

    apt install git-lfs
  6. We need a Kafka message broker and a kSQL database. For this project, we use the Confluent Platform to set up these services:

    Note: Bash commands in this section should be run from a separate terminal window, not from within the DeepStream container.

    wget https://raw.githubusercontent.com/confluentinc/cp-all-in-one/7.2.1-post/cp-all-in-one/docker-compose.yml
    docker-compose up -d
    • Verify that all the containers started successfully by running docker ps
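
    For a more readable check, docker ps supports custom output formatting (a standard Docker flag, nothing project-specific):

    docker ps --format 'table {{.Names}}\t{{.Status}}'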

  7. Create a Kafka topic that will be used to receive messages sent by the DeepStream app

    docker exec -it broker /bin/bash
    # Within the container
    kafka-topics --bootstrap-server "localhost:9092" --topic "detections" --create
    • You can also create the Kafka topic by navigating to the Confluent Control Center > cluster > Topics > Add Topic.
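
    To confirm the topic was created, you can list or describe it from the same broker container (standard Kafka CLI commands):

    kafka-topics --bootstrap-server "localhost:9092" --list
    kafka-topics --bootstrap-server "localhost:9092" --topic "detections" --describe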
  8. Set up a kSQL stream based on the detections topic:

    a) If you used the docker-compose file mentioned above, you can access the kSQL CLI by running

    docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
    

    b) Once the CLI is active, copy-paste the content from confluent-platform/stream_creation.sql into the CLI to create the stream

    Note: You don't have to explicitly create the topic in Confluent Kafka; the broker automatically creates a topic once DeepStream sends messages to a new topic.
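
    Once the stream is created, a quick sanity check from the kSQL CLI confirms events are flowing. The stream name below is a placeholder; use the name defined in confluent-platform/stream_creation.sql:

    SELECT * FROM <stream_name> EMIT CHANGES LIMIT 5;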

Getting Started

  1. If you are using DeepStream via a Docker container as described above, open a terminal inside the DeepStream container if one is not already open; otherwise, skip this step.

    docker exec -it <container_id> /bin/bash

    You can locate the container id by running the following command:

    docker container ps

  2. Clone the repo into $DS_SDK_ROOT/sources/apps/sample_apps/

    cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps
    git clone https://github.com/NVIDIA-AI-IOT/deepstream-retail-analytics.git
    cd deepstream-retail-analytics
    git lfs pull
    • Although not necessary, it is recommended to verify the checksums of the model and input files to confirm their integrity
    cd files/
    sha512sum -c checksum.txt
  3. Download the PeopleNet model (the .etlt and labels.txt files) to the files/ folder in the project.

    wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/files/resnet34_peoplenet_pruned.etlt'
    wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/pruned_v2.0/files/labels.txt'
  4. Update the paths below in configs/pgie_config_peoplenet.yml to match where you stored the model (a sketch of these entries follows this list)

    • tlt-encoded-model
    • labelfile-path
  • We use a custom message converter to generate the payload from the inference data; see the NvMsgConv README under Advanced for details on the shared library.
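
A minimal sketch of the relevant entries in configs/pgie_config_peoplenet.yml, assuming the model files were downloaded into files/ as in step 3 (the exact relative paths and the surrounding structure of the config file may differ):

property:
  tlt-encoded-model: ../files/resnet34_peoplenet_pruned.etlt
  labelfile-path: ../files/labels.txt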

Build for x86 dGPU system

Run the following commands from the project root.

Set CUDA_VER in the command below to the CUDA version installed in the Docker container; you can check it by running nvcc --version inside the container.

export CUDA_VER=<cuda_version>
make -B
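
For example, if nvcc --version reports release 11.7 (the version shown here is illustrative; substitute whatever your container reports):

export CUDA_VER=11.7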

Run the application

Running the DeepStream Application

./ds-retail-iva configs/retail_iva.yml --no-display

The --no-display flag in the above command is optional. Use it if the application is running from within a Docker container without a display attached.
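
To verify that inference messages are reaching Kafka, you can tail the detections topic with the standard Kafka console consumer (run from the host; nothing here is project-specific):

docker exec -it broker kafka-console-consumer --bootstrap-server localhost:9092 --topic detections --from-beginning --max-messages 5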

Running the front-end

cd ds-retail-iva-frontend
pip install -r requirements.txt
python3 manage.py runserver 0.0.0.0:8000

Open a browser and go to http://localhost:8000 to visualize the dashboard
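
On a headless machine, a plain HTTP request is enough to confirm the server is up before opening the dashboard from another device:

curl -I http://localhost:8000/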

Output

Dashboard

[Dashboard screenshot: the front-end visualizes store KPIs computed from the inference data]

Advanced

TAO README - Follow the instructions in this file to create a dataset and train a classification model using TAO toolkit

NvMsgConv README - Follow this README to build a custom library to modify message payload generated by DeepStream
