Comments (12)
From reading a few blogs, I found that this error has something to do with the CUDA context, and that it can be solved by saving the CUDA context.
The issue here is that the CUDA context gets refreshed and mixed up with other applications.
Can you help me out?
from insightface-rest.
Could you please provide more info on how the image is sent and received?
I am connected to a server.
The server sends me an image in byte-array format.
I then convert that byte array into a numpy array and process it further.
I am only receiving the image, not sending it!
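For reference, a minimal sketch of the byte-array-to-numpy step described above. The wire format and image shape here are assumptions; for an encoded JPEG/PNG payload you would pass the buffer to cv2.imdecode instead of reshaping:

```python
import numpy as np

# Simulated payload: raw bytes received from the server (assumption:
# 12 bytes standing in for a tiny raw BGR frame).
raw = bytes(range(12))

# Zero-copy view of the byte buffer as uint8 pixel values.
arr = np.frombuffer(raw, dtype=np.uint8)

# For an *encoded* image you would instead do:
#   img = cv2.imdecode(arr, cv2.IMREAD_COLOR)
# For a raw frame, reshape to the known dimensions (shape is an assumption):
img = arr.reshape(2, 2, 3)
print(img.shape)  # -> (2, 2, 3)
```

If the decoded array's shape or dtype does not match what the model expects, broadcast errors like the one above can appear later in the pipeline.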
It's hard to tell which reason might cause this exception without additional info, but I think it's most likely connected to the way you read the image before feeding it for inference.
I have tested this code reading images from a RabbitMQ queue and from the FastAPI endpoint, which is the default behavior of this project, and haven't hit such exceptions.
If you could provide a minimal reproducible example, I could try checking it.
Could you also provide a minimal server example, for testing the full cycle?
EDIT: Have you tried saving the image decoded by OpenCV to disk with cv.imwrite?
Yes, the image is exactly the way it should be!
There are no issues with the input image, that's for sure! I cross-checked by saving the image to local storage and inspecting it; it is exactly how it should be.
Found the solution!
Closing this.
Great news, @bhargavravat! Could you please post your solution? I'll apply it to the master branch if it doesn't conflict with FastAPI.
I have modified trt_loader.py (/app/modules/model_zoo/exec_backends/trt_loader.py).
Attaching it in .txt format.
I have tested your solution with /src/converters/pipeline_tester.py, but it fails with the following error:
PyCUDA ERROR: The context stack was not empty upon module cleanup.
I was able to fix it by replacing line 84:
self.cuda_ctx = cuda_ctx
with:
device = cuda.Device(0)  # enter your GPU id here
ctx = device.make_context()
self.cuda_ctx = ctx
This worked, but caused excessive GPU RAM usage, about 1 GB.
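A sketch of the context handling implied by the fix above, hedged because it assumes PyCUDA and a CUDA-capable GPU (it cannot be run without one). The class and method names are hypothetical stand-ins for trt_loader.py, not the file's actual code. make_context() both creates and pushes a new context, and each context reserves its own chunk of GPU memory, which would explain the extra ~1 GB; popping after creation and detaching on cleanup avoids leaving the context on the stack:

```python
import pycuda.driver as cuda


class TrtModel:  # hypothetical stand-in for the loader class
    def __init__(self, device_id=0):
        cuda.init()
        device = cuda.Device(device_id)  # enter your GPU id here
        # make_context() creates *and pushes* a new context onto this
        # thread's stack; each context also allocates its own GPU memory.
        self.cuda_ctx = device.make_context()
        # Pop it so it is not left dangling on the stack; push it
        # again only around actual inference calls.
        self.cuda_ctx.pop()

    def infer(self, batch):
        self.cuda_ctx.push()
        try:
            pass  # ... run the TensorRT execution context here ...
        finally:
            self.cuda_ctx.pop()

    def destroy(self):
        # Detaching releases the context, so PyCUDA's module cleanup
        # does not raise "context stack was not empty".
        self.cuda_ctx.detach()
```

If the extra memory is a problem, sharing one context across models on the same GPU (instead of one per model) is the usual way to keep the footprint down.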
Hello @SthPhoenix, how can I create one engine (e.g. face detection) to serve more than two cameras using threading?
I tried, but I got confused outputs between these threads.
Hi @ThiagoMateo! As I have said before, this use case is not tested; for now you can use the provided REST API in your application.
In future versions I'm planning to add full support for Triton Inference Server, which will give you the ability to serve a single model for multiple threads.
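Until proper multi-threaded serving is available, one common workaround for the mixed-up outputs described above is to serialize access to the single engine with a lock, so only one camera thread runs inference at a time. This is a sketch under the assumption that the engine is not safe for concurrent calls; `engine_infer` and `SharedEngine` are hypothetical names, not part of insightface-rest:

```python
import threading


class SharedEngine:
    """Wraps a single (non-thread-safe) inference callable with a lock."""

    def __init__(self, engine_infer):
        self._infer = engine_infer
        self._lock = threading.Lock()

    def infer(self, frame):
        # Only one thread executes inference at a time, so outputs
        # cannot get interleaved between camera threads.
        with self._lock:
            return self._infer(frame)


def engine_infer(frame):
    # Hypothetical stand-in for the real TensorRT detection call.
    return [f"face@{frame}"]


shared = SharedEngine(engine_infer)
results = {}


def worker(cam_id):
    # Each camera thread submits its own frames through the shared engine.
    for i in range(3):
        results.setdefault(cam_id, []).append(shared.infer(f"cam{cam_id}-{i}"))


threads = [threading.Thread(target=worker, args=(c,)) for c in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # -> [1, 2]
```

The lock trades throughput for correctness; a per-thread CUDA context (or Triton, as mentioned above) is the scalable alternative.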