Comments (6)
How can I change this code to use OpenCV instead of GStreamer?
What is the difference between GStreamer and OpenCV in CPU usage and speed?
My goal for this project is to process a large number of cameras at high FPS.
For example, 20 cameras.
But the problem is that with more than 7 cameras, my CPU usage goes over 98% and it is not possible to add a new one.
How can I run this project with 20 cameras at a time?
The model loaded on the GPU takes 490 MB for a single camera, so I probably won't run out of GPU resources. I think my problem is the CPU.
GPU: 1080 Ti (11 GB), CPU: Core i7-9700K
Or at least, how should I choose the hardware I need for this project?
Decoding an H.264 (or other format) stream on the CPU can be expensive.
I'd suggest using your NVIDIA GPU for decode acceleration.
By the way, OpenCV's default decoding implementation is slower than using GStreamer or FFmpeg.
Here is a GPU decoding example using FFmpeg on an NVIDIA P40 (20 cameras are a piece of cake; open 20 scripts, or whatever you like):
import cv2
import numpy as np
import subprocess as sp

V_W, V_H, V_C = 1920, 1080, 3
BUFFER_SIZE = V_W * V_H * V_C  # bytes in one raw BGR frame

cmd_in = ['ffmpeg', '-hwaccel_device', '0', '-hwaccel', 'cuvid',
          '-c:v', 'h264_cuvid', '-resize', f'{V_W}x{V_H}', '-vsync', '1',
          '-i', 'RTSP ADDRESS',  # replace with your rtsp:// URL
          '-vf', 'scale_npp=format=yuv420p,hwdownload,format=yuv420p',
          '-f', 'image2pipe', '-pix_fmt', 'bgr24', '-vcodec', 'rawvideo', '-']
pipe_in = sp.Popen(cmd_in, stdout=sp.PIPE)

# cmd_out = ['ffmpeg', '-y', '-f', 'rawvideo', '-vcodec', 'rawvideo',
#            '-pix_fmt', 'bgr24', '-s', f'{V_W}x{V_H}', '-i', '-', '-c:v',
#            'h264_nvenc', '-pix_fmt', 'yuv420p', '-preset', 'fast',
#            '-f', 'flv', f'rtmp://127.0.0.1:1935/hk{index}/livestream']
# pipe_out = sp.Popen(cmd_out, stdin=sp.PIPE)

while True:
    src = pipe_in.stdout.read(BUFFER_SIZE)
    if len(src) < BUFFER_SIZE:  # stream ended or ffmpeg exited
        break
    frame = np.frombuffer(src, dtype=np.uint8).reshape(V_H, V_W, V_C)
    # pipe_out.stdin.write(frame.tobytes())
    cv2.imshow('res', frame)
    if cv2.waitKey(10) == ord('q'):
        break
from faster-mobile-retinaface.
Same question :D
Faster-mobile-retinaface can't detect landmarks, right?
from faster-mobile-retinaface.
lol, I removed the landmark branch.
from faster-mobile-retinaface.
Thank you
I run each camera in its own single-core process (multiprocessing),
and for each camera I load the RetinaFace model once.
So with 20 cameras I have to load the model 20 times, once per camera.
Since each model takes about 1500 MB of GPU memory, 20 cameras would need about 30 GB of GPU memory, which is not economical at all.
Am I going about this the right way? How do I handle multiple cameras at the same time as efficiently as possible?
I agree that FFmpeg can be much more efficient than OpenCV, but with the system I mentioned above, how should I implement this?
Do we need multiple separate Python files (scripts) running at the same time, or is there a better way to do this?
from faster-mobile-retinaface.
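One way to avoid loading the model once per camera is to decouple decoding from inference: N lightweight reader processes feed a single inference process through a queue, so the model lives on the GPU exactly once. This is a hypothetical sketch (the frames and the detect call are placeholders, and it assumes a Linux host so the 'fork' start method is available):

```python
import multiprocessing as mp

# Assumes Linux; 'fork' keeps this self-contained example simple.
ctx = mp.get_context('fork')

def camera_reader(cam_id, frame_q, n_frames):
    # Stand-in for one camera's real ffmpeg/OpenCV decode loop.
    for _ in range(n_frames):
        frame = b'raw-bgr-frame'          # placeholder for a decoded frame
        frame_q.put((cam_id, frame))

def inference_worker(frame_q, result_q, n_expected):
    # Load the RetinaFace model ONCE here, e.g. model = load_model(...)
    for _ in range(n_expected):
        cam_id, frame = frame_q.get()
        # detections = model.detect(frame)   # real inference would go here
        result_q.put((cam_id, len(frame)))   # placeholder "detection result"

def run(n_cameras, frames_per_cam):
    frame_q, result_q = ctx.Queue(), ctx.Queue()
    total = n_cameras * frames_per_cam
    infer = ctx.Process(target=inference_worker,
                        args=(frame_q, result_q, total))
    infer.start()
    readers = [ctx.Process(target=camera_reader,
                           args=(c, frame_q, frames_per_cam))
               for c in range(n_cameras)]
    for r in readers:
        r.start()
    results = [result_q.get() for _ in range(total)]
    for p in readers + [infer]:
        p.join()
    return results

if __name__ == '__main__':
    print(len(run(4, 3)))  # 4 cameras x 3 frames each -> 12 results
```

A real version would batch frames inside inference_worker to keep the GPU busy, and bound the queue size so slow inference drops frames instead of buffering them forever.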
@roiksail Did you successfully run this repo with OpenCV? If yes, how is the FPS compared with GStreamer? Please share your work with me if you have already done it.
from faster-mobile-retinaface.
Great work, thanks for sharing.
Did you remove the landmark branch by retraining the network, or can the params and json files be edited directly with scripts?
from faster-mobile-retinaface.