
deepstream-yolo-face's Issues

Unable to get track ID when using yoloface as secondary GIE

I'm trying to implement yoloface as my secondary GIE, but I'm getting random track IDs for it.
[Screenshot from 2024-08-28 15-43-24]

Below is the config file:
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2018-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
################################################################################

application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 5
  #gie-kitti-output-dir: streamscl

tiled-display:
  enable: 0
  rows: 1
  columns: 1
  width: 1280
  height: 720
  gpu-id: 0
  nvbuf-memory-type: 0

source:
  csv-file-path: sources_4.csv

sink0:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File
  type: 2
  sync: 0
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0

sink1:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
  type: 6
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(2): PAYLOAD_DEEPSTREAM_PROTOBUF - Deepstream schema protobuf encoded payload
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM - Custom schema payload
  msg-conv-payload-type: 0
  #(0): Create payload using NvdsEventMsgMeta
  #(1): New Api to create payload using NvDsFrameMeta
  msg-conv-msg2p-new-api: 0
  #Frame interval at which payload is generated
  msg-conv-frame-interval: 30
  #To enable dummy payload
  #msg-conv-dummy-payload=1
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  #Provide your msg-broker-conn-str here
  msg-broker-conn-str: localhost;9092;
  topic: quickstart-events
  #Optional:
  #msg-broker-config: /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt
  #new-api: 0
  #(0) Use message adapter library api's
  #(1) Use new msgbroker library api's

sink2:
  enable: 1
  type: 2
  #1=mp4 2=mkv
  container: 1
  #1=h264 2=h265 3=mpeg4
  ## only SW mpeg4 is supported right now.
  codec: 3
  sync: 1
  bitrate: 2000000
  output-file: out.mp4
  source-id: 0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv : 1
message-converter:
  enable: 0
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(2): PAYLOAD_DEEPSTREAM_PROTOBUF - Deepstream schema protobuf encoded payload
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM - Custom schema payload
  msg-conv-payload-type: 0
  # Name of library having custom implementation.
  #msg-conv-msg2p-lib:
  # Id of component in case only selected message to parse.
  #msg-conv-comp-id:

# Configure this group to enable cloud message consumer.
message-consumer0:
  enable: 0
  proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  conn-str: ;
  config-file: /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt
  subscribe-topic-list: ;;
  # Use this option if message has sensor name as id instead of index (0,1,2 etc.).
  #sensor-list-file: dstest5_msgconv_sample_config.txt

osd:
  enable: 1
  gpu-id: 0
  border-width: 1
  text-size: 15
  text-color: 1;1;1;1
  text-bg-color: 0.3;0.3;0.3;1
  font: Arial
  show-clock: 0
  clock-x-offset: 800
  clock-y-offset: 820
  clock-text-size: 12
  clock-color: 1;0;0;0
  nvbuf-memory-type: 0

streammux:
  gpu-id: 0
  ##Boolean property to inform muxer that sources are live
  live-source: 0
  batch-size: 1
  ##time out in usec, to wait after the first buffer is available
  ##to push the batch even if the complete batch is not formed
  batched-push-timeout: 40000
  ## Set muxer output width and height
  width: 1920
  height: 1080
  ##Enable to maintain aspect ratio wrt source, and allow black borders, works
  ##along with width, height properties
  enable-padding: 0
  nvbuf-memory-type: 0
  ## If set to TRUE, system timestamp will be attached as ntp timestamp
  ## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
  # attach-sys-ts-as-ntp: 1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
primary-gie:
  enable: 1
  gpu-id: 0
  #Required to display the PGIE labels, should be added even when using config-file
  #property
  batch-size: 1
  #Required by the app for OSD, not a plugin property
  bbox-border-color0: 1;0;0;1 # Red for class 0
  bbox-border-color1: 0;1;0;1 # Green for class 1
  bbox-border-color2: 0;0;1;1 # Blue for class 2
  bbox-border-color3: 1;1;0;1 # Yellow for class 3
  bbox-border-color4: 1;0;1;1 # Magenta for class 4
  bbox-border-color5: 0;1;1;1 # Cyan for class 5
  interval: 0
  #Required by the app for SGIE, when used along with config-file property
  gie-unique-id: 1
  nvbuf-memory-type: 0
  #model-engine-file: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/kadada_cpp/people_classification_final.onnx_b1_gpu0_fp16.engine
  #labelfile-path: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/kadada_cpp/labels.txt
  config-file: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/kadada_cpp/config_infer_primary.yml
  #infer-raw-output-dir: ../../../../../samples/primary_detector_raw_output/

tracker:
  enable: 0
  # For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
  tracker-width: 960
  tracker-height: 544
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  # ll-config-file required to set different tracker types
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
  gpu-id: 0
  display-tracking-id: 1

secondary-gie0:
  enable: 0
  gpu-id: 0
  gie-unique-id: 4
  operate-on-gie-id: 1
  operate-on-class-ids: 0
  batch-size: 16
  config-file: ../../../../../samples/configs/deepstream-app/config_infer_secondary_vehicletypes.yml
  labelfile-path: ../../../../../samples/models/Secondary_VehicleTypes/labels.txt
  model-engine-file: ../../../../../samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine

secondary-gie1:
  enable: 0
  gpu-id: 0
  gie-unique-id: 5
  operate-on-gie-id: 1
  operate-on-class-ids: 0
  batch-size: 16
  config-file: ../../../../../samples/configs/deepstream-app/config_infer_secondary_vehiclemake.yml
  labelfile-path: ../../../../../samples/models/Secondary_VehicleMake/labels.txt
  model-engine-file: ../../../../../samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine

tests:
  file-loop: 0
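One thing worth checking in this config: the tracker group has enable: 0. In deepstream-app, track IDs are assigned by the nvtracker element, and downstream elements only see stable NvDsObjectMeta.object_id values when a tracker actually runs; with it disabled, object_id is left at the untracked sentinel value and can look random. A minimal sketch of an enabled tracker group (the NvDCF_perf path is taken from the ll-config-file entries listed in the tracker group above and is an assumption to adapt to your setup):

```yaml
tracker:
  enable: 1
  tracker-width: 960
  tracker-height: 544
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  gpu-id: 0
  display-tracking-id: 1
```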

Error while running the Python code

Hello.
I am trying to run the Python file from this repository using the given command: python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ubuntu@ubuntu-Blade-15-2022-RZ09-0421:~/Documents/DeepStream-Yolo-Face$ python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt

SOURCE: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
CONFIG_INFER: config_infer_primary_yoloV8_face.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: FALSE

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:03.153337617 31142 0x2ae9860 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT landmarks 8400x0

0:00:03.208580298 31142 0x2ae9860 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
python3: nvdsinfer_context_impl.cpp:1377: NvDsInferStatus nvdsinfer::NvDsInferContextImpl::resizeOutputBufferpool(uint32_t): Assertion `bindingDims.numElements > 0' failed.
Aborted (core dumped)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Please, could anyone help me with this issue? How can I fix it?

DeepStream Version = 6.3
YOLO MODEL = Yolov8n-face
CUDA = 12.3
TensorRT = 8.5.3.1
NVIDIA GRAPHIC = GeForce RTX 3080 Ti
32GB RAM and 1TB SSD
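The engine log above already hints at the cause: the landmarks output is reported as 8400x0, i.e. a binding with zero elements, which is exactly the condition the failed assertion `bindingDims.numElements > 0` checks. Re-exporting the ONNX with the repo's export script so that landmarks gets a non-zero last dimension would be the first thing to try. As an illustration of what nvdsinfer is asserting (the helper name is mine; the shapes are taken from the log):

```python
def find_empty_bindings(output_shapes):
    """Return names of output bindings whose dimensions multiply to zero
    elements -- the condition behind nvdsinfer's failed assertion
    `bindingDims.numElements > 0`."""
    empty = []
    for name, dims in output_shapes.items():
        count = 1
        for d in dims:
            count *= d
        if count == 0:
            empty.append(name)
    return empty

# Shapes as reported by the log above:
shapes = {"boxes": [8400, 4], "scores": [8400, 1], "landmarks": [8400, 0]}
print(find_empty_bindings(shapes))  # the zero-width landmarks binding trips the assert
```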

Segmentation fault error

Hello.
I am trying to run the Python file from this repository using the given command: python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt

Every time I run the command I get a segmentation fault with a black screen. Below is the log from running the file:

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ubuntu@ubuntu-Blade-15-2022-RZ09-0421:~/Documents/DeepStream-Yolo-Face$ python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt

SOURCE: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
CONFIG_INFER: config_infer_primary_yoloV8_face.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: FALSE

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:03.122840037 24409 0x3042a60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 5x8400

0:00:03.173926305 24409 0x3042a60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:03.175549029 24409 0x3042a60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully

Segmentation fault (core dumped)

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Please, could anyone help me with this issue? How can I fix it?

DeepStream Version = 6.3
YOLO MODEL = Yolov8n-face
CUDA = 12.3
TensorRT = 8.5.3.1
NVIDIA GRAPHIC = GeForce RTX 3080 Ti
32GB RAM and 1TB SSD
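Comparing this log with the one in the previous issue suggests a mismatched engine rather than a DeepStream bug: the engine here exposes a single output0 tensor of shape 5x8400, which is what a plain YOLOv8 detection export produces, while the face pipeline's log earlier shows the separate boxes/scores/landmarks outputs its parser works with. A small illustrative sanity check (output names taken from the two logs; the helper is hypothetical):

```python
# Output names from the working face-model log earlier in this page:
FACE_OUTPUTS = {"boxes", "scores", "landmarks"}

def looks_like_face_engine(output_names):
    """Heuristic: a face export exposes boxes/scores/landmarks, while a
    plain YOLOv8 detection export exposes a single `output0` tensor."""
    return FACE_OUTPUTS.issubset(set(output_names))

print(looks_like_face_engine(["output0"]))                       # plain detection export
print(looks_like_face_engine(["boxes", "scores", "landmarks"]))  # face export
```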

Unable to Run INT8 Engine for YOLOv8 Model

Issue Description:
I have tried to create an INT8 engine for my YOLOv8 model but have been unable to do so. I followed all the steps provided in the deepstream-yolo GitHub repository, but I am still encountering issues.

Steps to Reproduce:

1. Follow the instructions in the deepstream-yolo repository to set up the environment.
2. Attempt to create an INT8 engine for the YOLOv8 model.

Expected Behavior:
The INT8 engine should be successfully created and functional.

Actual Behavior:
The process fails, and the INT8 engine is not created.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1208 INT8 calibration file not specified. Trying FP16 mode.

Environment:

YOLOv8 Model
DeepStream version 6.2
TensorRT version 8.5.1.7
CUDA version 11.8
GPU NVIDIA GeForce GTX 1080
OS Ubuntu 20.04.6 LTS
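The warning line quoted above is the key detail: when no calibration table is found, nvinfer falls back to FP16 instead of building INT8, so selecting the INT8 network mode alone is not enough. As far as I recall the DeepStream-Yolo docs, the infer config must name a calibration file, which is generated on the first engine build from calibration images supplied via environment variables; treat the exact keys and file names in this sketch as assumptions to verify against that repo's README:

```ini
[property]
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
# Calibration table; generated on the first run if absent, using the
# images configured through the INT8_CALIB_IMG_PATH and
# INT8_CALIB_BATCH_SIZE environment variables (per the repo docs)
int8-calib-file=calib.table
```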

ModuleNotFoundError: No module named 'ultralytics.yolo'

I got this error message while trying to run the export_yoloV8_face.py script:

Traceback (most recent call last):
  File "export_yoloV8_face.py", line 10, in <module>
    from ultralytics.yolo.utils.torch_utils import select_device
ModuleNotFoundError: No module named 'ultralytics.yolo'
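Newer ultralytics releases dropped the ultralytics.yolo package path (its contents moved under ultralytics.utils), so the options are to pin an older ultralytics release or make the export script try both layouts. A generic fallback-import helper, sketched with the stdlib (the two dotted paths in the comment are the old and new ultralytics layouts; the helper name is mine):

```python
import importlib

def import_first(module_paths, name):
    """Return attribute `name` from the first module path that imports
    successfully and exposes it; raise ImportError if none do."""
    for path in module_paths:
        try:
            return getattr(importlib.import_module(path), name)
        except (ModuleNotFoundError, AttributeError):
            continue
    raise ImportError(f"{name!r} not found in any of {module_paths}")

# In export_yoloV8_face.py, the failing import could become:
# select_device = import_first(
#     ["ultralytics.yolo.utils.torch_utils",  # older ultralytics layout
#      "ultralytics.utils.torch_utils"],      # newer layout
#     "select_device")
```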

Does DeepStream have face recognition?

I want to do a project with face recognition. I want to use one-shot learning with a Siamese network.
Can I use this face detection (with DeepStream) together with a pretrained model in DeepStream for feature extraction and comparison?
Thank you
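On the feasibility question: in principle yes — DeepStream can run the face detector as the PGIE and an embedding model as an SGIE on the detected faces, and the one-shot comparison then happens on the extracted embeddings. The comparison step itself is simple; a minimal pure-Python sketch (the threshold value is illustrative and dataset-dependent):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_identity(emb_a, emb_b, threshold=0.6):
    """One-shot decision: embeddings of the same face should score
    above a threshold tuned on a validation set."""
    return cosine_similarity(emb_a, emb_b) >= threshold

print(same_identity([0.9, 0.1, 0.0], [0.8, 0.2, 0.1]))  # near-identical vectors
```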
