marcoslucianops / DeepStream-Yolo-Face
NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Face models
License: MIT License
I'm trying to implement YOLO-Face as my secondary GIE, but I'm getting random track IDs for it.
Below is my config file:
################################################################################
application:
enable-perf-measurement: 1
perf-measurement-interval-sec: 5
#gie-kitti-output-dir: streamscl
tiled-display:
enable: 0
rows: 1
columns: 1
width: 1280
height: 720
gpu-id: 0
nvbuf-memory-type: 0
source:
csv-file-path: sources_4.csv
sink0:
enable: 0
#Type - 1=FakeSink 2=EglSink 3=File
type: 2
sync: 0
source-id: 0
gpu-id: 0
nvbuf-memory-type: 0
sink1:
enable: 0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type: 6
msg-conv-config: dstest5_msgconv_sample_config.yml
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(2): PAYLOAD_DEEPSTREAM_PROTOBUF - Deepstream schema protobuf encoded payload
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type: 0
#(0): Create payload using NvdsEventMsgMeta
#(1): New Api to create payload using NvDsFrameMeta
msg-conv-msg2p-new-api: 0
#Frame interval at which payload is generated
msg-conv-frame-interval: 30
#To enable dummy payload
#msg-conv-dummy-payload=1
msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str: localhost;9092;
topic: quickstart-events
#Optional:
#msg-broker-config: /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt
#new-api: 0
#(0) Use message adapter library api's
#(1) Use new msgbroker library api's
sink2:
enable: 1
type: 2
#1=mp4 2=mkv
container: 1
#1=h264 2=h265 3=mpeg4
codec: 3
sync: 1
bitrate: 2000000
output-file: out.mp4
source-id: 0
message-converter:
enable: 0
msg-conv-config: dstest5_msgconv_sample_config.yml
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(2): PAYLOAD_DEEPSTREAM_PROTOBUF - Deepstream schema protobuf encoded payload
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type: 0
#msg-conv-msg2p-lib:
#msg-conv-comp-id:
message-consumer0:
enable: 0
proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str: ;
config-file: /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt
subscribe-topic-list: ;;
#sensor-list-file: dstest5_msgconv_sample_config.txt
osd:
enable: 1
gpu-id: 0
border-width: 1
text-size: 15
text-color: 1;1;1;1
text-bg-color: 0.3;0.3;0.3;1
font: Arial
show-clock: 0
clock-x-offset: 800
clock-y-offset: 820
clock-text-size: 12
clock-color: 1;0;0;0
nvbuf-memory-type: 0
streammux:
gpu-id: 0
##Boolean property to inform muxer that sources are live
live-source: 0
batch-size: 1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout: 40000
width: 1920
height: 1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding: 0
nvbuf-memory-type: 0
primary-gie:
enable: 1
gpu-id: 0
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size: 1
#Required by the app for OSD, not a plugin property
bbox-border-color0: 1;0;0;1 # Red for class 0
bbox-border-color1: 0;1;0;1 # Green for class 1
bbox-border-color2: 0;0;1;1 # Blue for class 2
bbox-border-color3: 1;1;0;1 # Yellow for class 3
bbox-border-color4: 1;0;1;1 # Magenta for class 4
bbox-border-color5: 0;1;1;1 # Cyan for class 5
interval: 0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id: 1
nvbuf-memory-type: 0
#model-engine-file: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/kadada_cpp/people_classification_final.onnx_b1_gpu0_fp16.engine
#labelfile-path: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/kadada_cpp/labels.txt
config-file: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/kadada_cpp/config_infer_primary.yml
#infer-raw-output-dir: ../../../../../samples/primary_detector_raw_output/
tracker:
enable: 0
tracker-width: 960
tracker-height: 544
ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
gpu-id: 0
display-tracking-id: 1
secondary-gie0:
enable: 0
gpu-id: 0
gie-unique-id: 4
operate-on-gie-id: 1
operate-on-class-ids: 0
batch-size: 16
config-file: ../../../../../samples/configs/deepstream-app/config_infer_secondary_vehicletypes.yml
labelfile-path: ../../../../../samples/models/Secondary_VehicleTypes/labels.txt
model-engine-file: ../../../../../samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
secondary-gie1:
enable: 0
gpu-id: 0
gie-unique-id: 5
operate-on-gie-id: 1
operate-on-class-ids: 0
batch-size: 16
config-file: ../../../../../samples/configs/deepstream-app/config_infer_secondary_vehiclemake.yml
labelfile-path: ../../../../../samples/models/Secondary_VehicleMake/labels.txt
model-engine-file: ../../../../../samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
tests:
file-loop: 0
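One thing worth checking in the config above: the `tracker` section is disabled (`enable: 0`). Without a tracker, object IDs are not maintained across frames, which would explain the random track IDs reaching the SGIE. A minimal sketch of the sections involved, reusing the paths already present in the config (whether these exact values fit your setup is an assumption):

```yaml
tracker:
  enable: 1          # IDs stay stable across frames only when a tracker runs
  tracker-width: 960
  tracker-height: 544
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  display-tracking-id: 1

secondary-gie0:
  enable: 1
  operate-on-gie-id: 1   # must match the primary-gie's gie-unique-id
```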
Hello.
I am trying to run the Python file from this repository using the given command: python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ubuntu@ubuntu-Blade-15-2022-RZ09-0421:~/Documents/DeepStream-Yolo-Face$ python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt
SOURCE: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
CONFIG_INFER: config_infer_primary_yoloV8_face.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: FALSE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:03.153337617 31142 0x2ae9860 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT landmarks 8400x0
0:00:03.208580298 31142 0x2ae9860 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
python3: nvdsinfer_context_impl.cpp:1377: NvDsInferStatus nvdsinfer::NvDsInferContextImpl::resizeOutputBufferpool(uint32_t): Assertion `bindingDims.numElements > 0' failed.
Aborted (core dumped)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Could anyone please help me with this issue? How can I fix it?
DeepStream Version = 6.3
YOLO MODEL = Yolov8n-face
CUDA = 12.3
TensorRT = 8.5.3.1
NVIDIA GRAPHIC = GeForce RTX 3080 Ti
32GB RAM and 1TB SSD
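The assertion `bindingDims.numElements > 0` matches the engine info printed above: the `landmarks` output is reported as `8400x0`, i.e. a tensor with a zero dimension, so nvinfer cannot size the output buffer pool for it. A small sketch of that check (the shape dict simply mirrors the log above):

```python
def find_empty_outputs(bindings):
    """Return the names of tensors that have a zero dimension,
    which nvinfer's resizeOutputBufferpool() cannot allocate for."""
    return [name for name, shape in bindings.items() if 0 in shape]

# Shapes copied from the [Implicit Engine Info] lines in the log above.
bindings = {
    "boxes": (8400, 4),
    "scores": (8400, 1),
    "landmarks": (8400, 0),  # zero last dim -> the failing binding
}
print(find_empty_outputs(bindings))  # -> ['landmarks']
```

If `landmarks` is the empty binding, the ONNX export likely went wrong; re-exporting with the repo's face export script and deleting the cached `.engine` file so it is rebuilt is a reasonable next step.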
deepstream.py", line 13, in <module>
import pyds
ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
Can you please help me solve this issue? I have Python 3.10 installed, and I don't know why it is trying to find Python 3.8.
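The `pyds` extension is a compiled CPython module, so it links against the exact `libpython` it was built with; an ImportError for `libpython3.8.so.1.0` under Python 3.10 suggests the bindings were built (or packaged as a wheel) for Python 3.8. A quick way to confirm, sketched below (the pyds install location is found dynamically; everything else is standard tooling):

```shell
# Show the interpreter actually running your scripts.
python3 --version

# Locate the installed pyds extension without importing it
# (importing would fail with the same libpython error).
PYDS_SO=$(python3 - <<'EOF'
import importlib.util
spec = importlib.util.find_spec("pyds")
print(spec.origin if spec and spec.origin else "")
EOF
)

if [ -n "$PYDS_SO" ]; then
  # A libpython3.8 line here while running 3.10 confirms the mismatch;
  # rebuilding the bindings (deepstream_python_apps) against 3.10 fixes it.
  ldd "$PYDS_SO" | grep libpython
else
  echo "pyds not found on this interpreter's path"
fi
```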
Hello.
I am trying to run the Python file from this repository using the given command: python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt
Every time I run the command I get a segmentation fault and a black screen. Below is the log from the run:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ubuntu@ubuntu-Blade-15-2022-RZ09-0421:~/Documents/DeepStream-Yolo-Face$ python3 deepstream.py -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_face.txt
SOURCE: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
CONFIG_INFER: config_infer_primary_yoloV8_face.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: FALSE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:03.122840037 24409 0x3042a60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 5x8400
0:00:03.173926305 24409 0x3042a60 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/ubuntu/Documents/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:03.175549029 24409 0x3042a60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
Segmentation fault (core dumped)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Could anyone please help me with this issue? How can I fix it?
DeepStream Version = 6.3
YOLO MODEL = Yolov8n-face
CUDA = 12.3
TensorRT = 8.5.3.1
NVIDIA GRAPHIC = GeForce RTX 3080 Ti
32GB RAM and 1TB SSD
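Comparing this log with the one in the issue above is informative: the working face export produces separate `boxes`, `scores`, and `landmarks` outputs, while this engine has a single `output0` of 5x8400, which looks like a plain detection export rather than the repo's face export. If the face parser receives outputs it does not expect, a crash at runtime is plausible. A small sanity check on layer names (the expected set is taken from the engine info printed in the earlier issue):

```python
# Output names the face pipeline's engine reported in the working log above.
EXPECTED_OUTPUTS = {"boxes", "scores", "landmarks"}

def missing_face_outputs(layer_names):
    """Return which expected face-model outputs are absent, sorted for stable output."""
    return sorted(EXPECTED_OUTPUTS - set(layer_names))

# Layer names from this issue's engine info: a single fused detection head.
print(missing_face_outputs(["images", "output0"]))  # -> ['boxes', 'landmarks', 'scores']
```

If outputs are missing, re-export the ONNX with this repo's `export_yoloV8_face.py` and delete the cached `.engine` file so it is rebuilt.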
AttributeError: 'pyds.NvOSD_MaskParams' object has no attribute 'get_mask_array'
Issue Description:
I have tried to create an INT8 engine for my YOLOv8 model but have been unable to do so. I have followed all the steps provided in the deepstream-yolo GitHub repository. Despite this, I am still encountering issues.
Steps to Reproduce:
Follow the instructions in the deepstream-yolo repository to set up the environment.
Attempt to create an INT8 engine for the YOLOv8 model.
Expected Behavior:
The INT8 engine should be successfully created and functional.
Actual Behavior:
The process fails, and the INT8 engine is not created.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1208 INT8 calibration file not specified. Trying FP16 mode.
Environment:
YOLOv8 Model
DeepStream version 6.2
TensorRT version 8.5.1.7
CUDA version 11.8
GPU NVIDIA GeForce GTX 1080
OS Ubuntu 20.04.6 LTS
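The warning line is the key clue: nvinfer only builds an INT8 engine when a calibration table is provided; otherwise it falls back to FP16 exactly as logged. A sketch of the nvinfer config keys involved (the calibration file name is a placeholder for a table you would generate yourself, e.g. following the calibration steps in the DeepStream-Yolo docs):

```ini
[property]
# 0=FP32, 1=INT8, 2=FP16
network-mode=1
# INT8 requires a calibration table; without this key nvinfer logs
# "INT8 calibration file not specified. Trying FP16 mode." and falls back.
int8-calib-file=calib.table
```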
I got this error message while trying to run the export_yoloV8_face.py script:
Traceback (most recent call last):
File "export_yoloV8_face.py", line 10, in <module>
from ultralytics.yolo.utils.torch_utils import select_device
ModuleNotFoundError: No module named 'ultralytics.yolo'
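The `ultralytics.yolo` package path was removed in newer Ultralytics releases (utilities such as `select_device` moved under `ultralytics.utils`), so either pinning an older `ultralytics` version or adapting the import should resolve this. A generic fallback-import helper, sketched with stdlib names so it is self-contained (the exact new module path for your installed version is an assumption to verify):

```python
import importlib

def resolve(*candidate_paths):
    """Return the first importable attribute among dotted paths, trying each in order."""
    last_err = None
    for path in candidate_paths:
        module_name, _, attr = path.rpartition(".")
        try:
            return getattr(importlib.import_module(module_name), attr)
        except (ModuleNotFoundError, AttributeError) as err:
            last_err = err
    raise last_err

# Demonstrated with stdlib names; for the export script the candidates would be
# "ultralytics.yolo.utils.torch_utils.select_device" (older releases) and
# "ultralytics.utils.torch_utils.select_device" (newer releases).
dumps = resolve("nonexistent.module.dumps", "json.dumps")
print(dumps({"ok": True}))  # -> {"ok": true}
```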
I want to do a face recognition project using one-shot learning with a Siamese network.
Can I use this face detection (with DeepStream) together with a pretrained model in DeepStream for feature extraction, and then compare the features?
Thank you.
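On the comparison step: one-shot face recognition typically embeds each detected face with the feature extractor and compares embeddings with a distance metric. A minimal cosine-similarity sketch (the 4-d vectors are toy values; real face embeddings are usually 128- or 512-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for feature-extractor outputs.
enrolled = [0.10, 0.90, 0.20, 0.40]
probe = [0.12, 0.88, 0.19, 0.41]

# Accept the match if similarity clears a threshold tuned on your data.
print(cosine_similarity(enrolled, probe) > 0.8)  # -> True
```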