
deepstream-video-pipeline's Introduction

deepstream-video-pipeline

deepstream-video-pipeline's People

Contributors

pbridger


deepstream-video-pipeline's Issues

GLib.Error: gst_parse_error: no element "nvstreammux" (1)

Hello, I pulled the Dockerfile you provided, but encountered a GLib.Error when running ds_pipeline.py:

root@c4762d0ad73c:/home/deepstream_video_pipeline# python3 ds_pipeline.py
nvstreammux name=mux0 gpu-id=0 enable-padding=1 width=300 height=300 batch-size=8 batched-push-timeout=1000000 ! 
nvinfer config-file-path=detector_default.config gpu-id=0 batch-size=8 ! fakesink enable-last-sample=0
filesrc location=media/in.mp4 num-buffers=512 ! qtdemux ! h264parse ! nvv4l2decoder gpu-id=0 ! mux0.sink_0 
filesrc location=media/in.mp4 num-buffers=512 ! qtdemux ! h264parse ! nvv4l2decoder gpu-id=0 ! mux0.sink_1 
filesrc location=media/in.mp4 num-buffers=512 ! qtdemux ! h264parse ! nvv4l2decoder gpu-id=0 ! mux0.sink_2 
filesrc location=media/in.mp4 num-buffers=512 ! qtdemux ! h264parse ! nvv4l2decoder gpu-id=0 ! mux0.sink_3 

Traceback (most recent call last):
  File "ds_pipeline.py", line 31, in <module>
    pipeline = Gst.parse_launch(pipeline_cmd)
GLib.Error: gst_parse_error: no element "nvstreammux" (1)

I also tried to run it in the Miniconda environment, but a different error occurred:

(base) root@c4762d0ad73c:/home/deepstream_video_pipeline# python3 ds_pipeline.py 
Traceback (most recent call last):
  File "ds_pipeline.py", line 4, in <module>
    gi.require_version('Gst', '1.0')
  File "/opt/conda/lib/python3.8/site-packages/gi/__init__.py", line 126, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gst not available

Any ideas on how to solve this? Many thanks.
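
For reference, a minimal diagnostic sketch (not code from this repo; the DeepStream install path mentioned in the comments is an assumption) that separates the two failure modes seen above: the Gst typelib not being visible to Python in the conda environment, versus GStreamer not finding the DeepStream plugin that provides nvstreammux:

    # Diagnostic sketch: check PyGObject bindings and DeepStream plugin visibility.
    import os

    import gi
    gi.require_version('Gst', '1.0')   # raises "Namespace Gst not available" if the
    from gi.repository import Gst      # GObject introspection typelibs are missing

    Gst.init(None)

    # nvstreammux comes from the DeepStream GStreamer plugins (by default installed
    # under /opt/nvidia/deepstream); if GStreamer cannot locate them, parse_launch
    # fails with: no element "nvstreammux".
    factory = Gst.ElementFactory.find('nvstreammux')
    print('nvstreammux found:', factory is not None)
    print('GST_PLUGIN_PATH =', os.environ.get('GST_PLUGIN_PATH'))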

Running a Multi-Stream Application

Hey Paul,

It seems your apps work amazingly well for number-sources = 1, but I wish to run multiple streams in parallel. Unfortunately, your ghetto_nvds script doesn't work well with multiple streams and I cannot get a batched GPU buffer; I am able to use it with only one stream. Can you suggest anything that could be done about this?
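
For context, a minimal sketch (not code from this repo; stream count and batch size in the comments are taken from the pipeline printed in the first issue above) of how that multi-stream launch string is assembled: each source feeds its own nvstreammux sink pad, nvstreammux forms the batched GPU buffer, and nvinfer's batch-size is kept consistent with it:

    # Sketch of a multi-stream batched launch string, mirroring the printed pipeline.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    num_streams = 4   # four copies of the same clip, as in the pipeline above
    batch_size = 8    # matches the batch-size used in the printed pipeline

    sources = ' '.join(
        f'filesrc location=media/in.mp4 num-buffers=512 ! qtdemux ! h264parse ! '
        f'nvv4l2decoder gpu-id=0 ! mux0.sink_{i}'
        for i in range(num_streams)
    )
    pipeline_cmd = (
        f'nvstreammux name=mux0 gpu-id=0 enable-padding=1 width=300 height=300 '
        f'batch-size={batch_size} batched-push-timeout=1000000 ! '
        f'nvinfer config-file-path=detector_default.config gpu-id=0 batch-size={batch_size} ! '
        f'fakesink enable-last-sample=0 ' + sources
    )
    pipeline = Gst.parse_launch(pipeline_cmd)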

Traceback (most recent call last): File "export_trt_engine.py", line 57, in <module>

root@3c52020d6f01:/app# python export_trt_engine.py --ssd-module-name ds_ssd300_4 --trt-module-name ds_trt_4

Using cache found in /root/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448234945/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # trtexec --onnx=checkpoints/ds_trt_4.onnx --saveEngine=checkpoints/ds_trt_4.engine --fp16 --explicitBatch --minShapes=image_nchw:1x3x300x300 --optShapes=image_nchw:8x3x300x300 --maxShapes=image_nchw:8x3x300x300 --buildOnly
[07/14/2021-07:20:26] [I] === Model Options ===
[07/14/2021-07:20:26] [I] Format: ONNX
[07/14/2021-07:20:26] [I] Model: checkpoints/ds_trt_4.onnx
[07/14/2021-07:20:26] [I] Output:
[07/14/2021-07:20:26] [I] === Build Options ===
[07/14/2021-07:20:26] [I] Max batch: explicit
[07/14/2021-07:20:26] [I] Workspace: 16 MiB
[07/14/2021-07:20:26] [I] minTiming: 1
[07/14/2021-07:20:26] [I] avgTiming: 8
[07/14/2021-07:20:26] [I] Precision: FP32+FP16
[07/14/2021-07:20:26] [I] Calibration:
[07/14/2021-07:20:26] [I] Refit: Disabled
[07/14/2021-07:20:26] [I] Sparsity: Disabled
[07/14/2021-07:20:26] [I] Safe mode: Disabled
[07/14/2021-07:20:26] [I] Save engine: checkpoints/ds_trt_4.engine
[07/14/2021-07:20:26] [I] Load engine:
[07/14/2021-07:20:26] [I] NVTX verbosity: 0
[07/14/2021-07:20:26] [I] Tactic sources: Using default tactic sources
[07/14/2021-07:20:26] [I] timingCacheMode: local
[07/14/2021-07:20:26] [I] timingCacheFile:
[07/14/2021-07:20:26] [I] Input(s)s format: fp32:CHW
[07/14/2021-07:20:26] [I] Output(s)s format: fp32:CHW
[07/14/2021-07:20:26] [I] Input build shape: image_nchw=1x3x300x300+8x3x300x300+8x3x300x300
[07/14/2021-07:20:26] [I] Input calibration shapes: model
[07/14/2021-07:20:26] [I] === System Options ===
[07/14/2021-07:20:26] [I] Device: 0
[07/14/2021-07:20:26] [I] DLACore:
[07/14/2021-07:20:26] [I] Plugins:
[07/14/2021-07:20:26] [I] === Inference Options ===
[07/14/2021-07:20:26] [I] Batch: Explicit
[07/14/2021-07:20:26] [I] Input inference shape: image_nchw=8x3x300x300
[07/14/2021-07:20:26] [I] Iterations: 10
[07/14/2021-07:20:26] [I] Duration: 3s (+ 200ms warm up)
[07/14/2021-07:20:26] [I] Sleep time: 0ms
[07/14/2021-07:20:26] [I] Streams: 1
[07/14/2021-07:20:26] [I] ExposeDMA: Disabled
[07/14/2021-07:20:26] [I] Data transfers: Enabled
[07/14/2021-07:20:26] [I] Spin-wait: Disabled
[07/14/2021-07:20:26] [I] Multithreading: Disabled
[07/14/2021-07:20:26] [I] CUDA Graph: Disabled
[07/14/2021-07:20:26] [I] Separate profiling: Disabled
[07/14/2021-07:20:26] [I] Time Deserialize: Disabled
[07/14/2021-07:20:26] [I] Time Refit: Disabled
[07/14/2021-07:20:26] [I] Skip inference: Enabled
[07/14/2021-07:20:26] [I] Inputs:
[07/14/2021-07:20:26] [I] === Reporting Options ===
[07/14/2021-07:20:26] [I] Verbose: Disabled
[07/14/2021-07:20:26] [I] Averages: 10 inferences
[07/14/2021-07:20:26] [I] Percentile: 99
[07/14/2021-07:20:26] [I] Dump refittable layers:Disabled
[07/14/2021-07:20:26] [I] Dump output: Disabled
[07/14/2021-07:20:26] [I] Profile: Disabled
[07/14/2021-07:20:26] [I] Export timing to JSON file:
[07/14/2021-07:20:26] [I] Export output to JSON file:
[07/14/2021-07:20:26] [I] Export profile to JSON file:
[07/14/2021-07:20:26] [I]
[07/14/2021-07:20:26] [I] === Device Information ===
[07/14/2021-07:20:26] [I] Selected Device: GeForce RTX 2060
[07/14/2021-07:20:26] [I] Compute Capability: 7.5
[07/14/2021-07:20:26] [I] SMs: 30
[07/14/2021-07:20:26] [I] Compute Clock Rate: 1.755 GHz
[07/14/2021-07:20:26] [I] Device Global Memory: 5931 MiB
[07/14/2021-07:20:26] [I] Shared Memory per SM: 64 KiB
[07/14/2021-07:20:26] [I] Memory Bus Width: 192 bits (ECC disabled)
[07/14/2021-07:20:26] [I] Memory Clock Rate: 7.001 GHz
[07/14/2021-07:20:26] [I]
[07/14/2021-07:20:26] [I] TensorRT version: 7103
Traceback (most recent call last):
  File "export_trt_engine.py", line 57, in <module>
    trt_output = subprocess.run([
  File "/opt/conda/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['trtexec', '--onnx=checkpoints/ds_trt_4.onnx', '--saveEngine=checkpoints/ds_trt_4.engine', '--fp16', '--explicitBatch', '--minShapes=image_nchw:1x3x300x300', '--optShapes=image_nchw:8x3x300x300', '--maxShapes=image_nchw:8x3x300x300', '--buildOnly']' died with <Signals.SIGSEGV: 11>.
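
For clarity, a hedged reconstruction (an assumption based only on the traceback and the command shown above, not the repo's actual source) of the call at line 57 that raised this error: export_trt_engine.py shells out to trtexec with check=True, so the SIGSEGV inside trtexec surfaces as a subprocess.CalledProcessError rather than a Python-side failure:

    # Reconstruction of the failing trtexec invocation (paths/flags from the log above).
    import subprocess

    trt_output = subprocess.run([
        'trtexec',
        '--onnx=checkpoints/ds_trt_4.onnx',
        '--saveEngine=checkpoints/ds_trt_4.engine',
        '--fp16',
        '--explicitBatch',
        '--minShapes=image_nchw:1x3x300x300',
        '--optShapes=image_nchw:8x3x300x300',
        '--maxShapes=image_nchw:8x3x300x300',
        '--buildOnly',
    ], check=True)  # check=True is what turns the trtexec crash into CalledProcessError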
