Comments (8)
Thanks,
I will wait for such updates if it intersects with your interests.
I'm definitely interested in such updates, though it might take a while )
from insightface-rest.
I have quickly read your .ipynb; I can't say anything right now, I'll check my local Triton branch later.
Though I have noticed you are using strange image preprocessing in preprocess_image:
```python
# HWC to CHW format:
image -= (104, 117, 123)
```
Such preprocessing isn't required for RetinaFace.
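For reference, this is roughly what the layout conversion looks like without the mean subtraction. This is a minimal sketch, not the repo's actual `preprocess_image`; the function signature and dummy input are assumptions for illustration:

```python
import numpy as np

def preprocess_image(image: np.ndarray) -> np.ndarray:
    """Prepare an HWC uint8 image for a RetinaFace-style ONNX model.

    No mean subtraction here -- per the comment above, only the
    layout (HWC -> NCHW) and dtype need to change.
    """
    blob = image.astype(np.float32)        # uint8 -> float32
    blob = np.transpose(blob, (2, 0, 1))   # HWC -> CHW
    return np.expand_dims(blob, axis=0)    # add batch dim -> NCHW

# Example with a dummy 640x640 3-channel frame
frame = np.zeros((640, 640, 3), dtype=np.uint8)
print(preprocess_image(frame).shape)  # (1, 3, 640, 640)
```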
> Though I have noticed you are using strange image preprocessing in preprocess_image:
> image -= (104, 117, 123)
Delete this row, I forgot to remove it.
I use https://netron.app to debug converted ONNX models.
I'll upload an up-to-date Triton backend in a day or two, which you can use as a reference for your experiments.
In your Triton config the outputs seem to be ok, though I'd check that Triton returns outputs in exactly the same order as required for RetinaFace.
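As an illustration, the output order in a Triton `config.pbtxt` is fixed by the order of the `output` entries; the tensor names and dims below are placeholders, not the actual RetinaFace output names:

```
# config.pbtxt (fragment) -- all names/dims are hypothetical
output [
  {
    name: "cls_scores"
    data_type: TYPE_FP32
    dims: [ -1, 2 ]
  },
  {
    name: "bbox_preds"
    data_type: TYPE_FP32
    dims: [ -1, 4 ]
  },
  {
    name: "landmark_preds"
    data_type: TYPE_FP32
    dims: [ -1, 10 ]
  }
]
```

Note that the Python client can also request outputs explicitly by name (e.g. via `InferRequestedOutput`), which removes any dependence on ordering.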
I have updated the Triton backend. Keep in mind that communication with the Triton server from Python introduces large delays and greatly reduces performance.
You may notice commented lines related to using CUDA shared memory; it may help you gain some performance back, though use it at your own risk )
thank you
Potential delays can be attributed to several things:
- FP32: I was unable to get Triton to treat the converted model as FP16. I think INT8 should speed up inference further, but the model needs to be correctly converted and calibrated first.
- All processing needs to be moved into Triton. For image preprocessing I found DALI. Post-processing is more complicated, because it requires moving NMS and some other RetinaFace-specific steps into the server. It seems I found a plugin from NVIDIA somewhere, but embedding it into the pipeline is not a trivial task for me at the moment )
- Following from the previous point, everything should run as a Triton ensemble right away, so that data isn't shuttled back and forth.
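For context, the NMS step mentioned above is only a few lines of NumPy when done client-side; here is a minimal greedy-NMS sketch (not the repo's implementation, and the threshold value is just a common default):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.4) -> list:
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,).
    Returns indices of kept boxes, highest score first.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # descending by confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with each remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # first two boxes overlap heavily -> [0, 2]
```

Moving exactly this logic into a Triton Python-backend model is what makes the ensemble approach below possible.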
This is what a performance study looks like to me, but it is a rather complicated process.
Have you tried any of these?
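A Triton ensemble wires those stages together server-side; a hypothetical `config.pbtxt` skeleton (all model and tensor names here are invented for illustration):

```
# ensemble config.pbtxt (sketch) -- all names are hypothetical
name: "retinaface_ensemble"
platform: "ensemble"
input  [ { name: "RAW_IMAGE"  data_type: TYPE_UINT8 dims: [ -1 ] } ]
output [ { name: "DETECTIONS" data_type: TYPE_FP32  dims: [ -1, 15 ] } ]
ensemble_scheduling {
  step [
    {
      model_name: "dali_preprocess"
      model_version: -1
      input_map  { key: "DALI_INPUT"  value: "RAW_IMAGE" }
      output_map { key: "DALI_OUTPUT" value: "preprocessed" }
    },
    {
      model_name: "retinaface_trt"
      model_version: -1
      input_map  { key: "data"    value: "preprocessed" }
      output_map { key: "outputs" value: "raw_outputs" }
    },
    {
      model_name: "postprocess_python"
      model_version: -1
      input_map  { key: "raw_outputs" value: "raw_outputs" }
      output_map { key: "dets"        value: "DETECTIONS" }
    }
  ]
}
```

Intermediate tensors (`preprocessed`, `raw_outputs`) stay inside the server, which is exactly what avoids driving data back and forth.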
- I have noticed that Triton for some reason states that the model is FP32, but if you compare the actual performance of FP32 and FP16 models with the Triton perf client, the difference is obvious.
- I haven't tested it yet, since it requires a lot of changes to the source code, but now that Triton supports a Python backend and DALI preprocessing, it's really worth a try.
- It won't be trivial to put the whole face detection/recognition pipeline into Triton, but it's very promising, especially considering that some parts of the pipeline could be replaced with C++.
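For the FP32/FP16 comparison, Triton's perf tool can be pointed at each model variant in turn; a typical invocation looks like this (the model names and endpoint are placeholders, not from this repo):

```shell
# Compare throughput/latency of the two variants (names hypothetical)
perf_analyzer -m retinaface_fp32 -u localhost:8001 -i grpc -b 1 --concurrency-range 1:4
perf_analyzer -m retinaface_fp16 -u localhost:8001 -i grpc -b 1 --concurrency-range 1:4
```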