Comments (9)
@deadeyegoodwin, @dzier,
Do we have a simple example that demonstrates how to use multiple different backends to run inference on models?
from paddlepaddle_backend.
Hello @TANKIANAUN,
Please refer to this Dockerfile for using the paddle backend. You should copy libtriton_paddle.so into /opt/tritonserver/backends instead of copying paddle-lib.
The following is an example Dockerfile for adding the paddle backend to the tritonserver image (which includes the other backends). You can override the default Dockerfile with this example.
FROM nvcr.io/nvidia/tritonserver:21.04-py3 as full
# Runtime dependencies required by the paddle backend
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt update \
&& apt install -y --no-install-recommends libre2-5 libb64-0d
# Paddle runtime libraries and the Triton paddle backend shared object
COPY paddle-lib/paddle /opt/paddle
COPY build/libtriton_paddle.so /opt/tritonserver/backends/paddle/
# Make the paddle libraries visible to the dynamic loader
ENV LD_LIBRARY_PATH="/opt/paddle/lib:$LD_LIBRARY_PATH"
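Once built, the image can be launched like any stock Triton image. A minimal sketch, assuming the image is tagged tritonserver_custom and the model repository sits next to the Dockerfile:

```shell
# Build the custom image from the Dockerfile above (tag name is an assumption)
docker build -t tritonserver_custom .

# Bind-mount the model repository with an absolute host path and start Triton
docker run --gpus all --rm \
  -p8000:8000 -p8001:8001 -p8002:8002 \
  -v "$(pwd)/model-repo:/models" \
  tritonserver_custom:latest \
  tritonserver --model-repository=/models
```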
from paddlepaddle_backend.
@zlsh80826
It seems the paddle backend doesn't load into Triton:
docker run --gpus all --rm -p8000:8000 -p8001:8001 -p8002:8002 -v ./model-repo:/models tritonserver_custom:latest tritonserver --model-repository=/models --model-control-mode=POLL
I'm using Triton 21.02 and built the image with "docker build -t tritonserver_custom ." using the Dockerfile. Is there anything specific I need to do when running it to get the backend loaded?
I also tried the 21.04 version, and the paddle backend still didn't seem to load.
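Two things worth ruling out in that command: older Docker releases reject relative host paths in -v, and Triton's --model-control-mode values are documented as lowercase. A variant with both adjusted (image tag and repository path are taken from the comment, not verified):

```shell
# Use an absolute host path for the bind mount and lowercase "poll"
docker run --gpus all --rm \
  -p8000:8000 -p8001:8001 -p8002:8002 \
  -v "$(pwd)/model-repo:/models" \
  tritonserver_custom:latest \
  tritonserver --model-repository=/models --model-control-mode=poll
```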
from paddlepaddle_backend.
@TANKIANAUN,
Did you see any warning or error messages? It looks like a usage issue; we need more information to know what happened.
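For diagnosis, running the server with verbose logging and filtering for the backend usually shows whether it was found and why it failed. A sketch, with image and repository names taken from the thread:

```shell
# Verbose logs show backend discovery and any shared-library load errors
docker run --gpus all --rm \
  -v "$(pwd)/model-repo:/models" \
  tritonserver_custom:latest \
  tritonserver --model-repository=/models --log-verbose=1 2>&1 \
  | grep -i paddle
```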
from paddlepaddle_backend.
@jeng1220
I successfully ran the newly created tritonserver_custom image without any errors; it's just that the paddle backend isn't loaded into Triton. Perhaps you could try a simple demo of this and add it to the README.
from paddlepaddle_backend.
@TANKIANAUN,
We already provide two simple examples, ERNIE and ResNet50, in this repository.
If you follow https://github.com/triton-inference-server/paddlepaddle_backend/blob/main/README.md step by step, can you successfully run the examples?
from paddlepaddle_backend.
@jeng1220
Yeah, I can run those examples. Now I'm trying to merge this backend into tritonserver and run it as described in the server repo, with "docker run --gpus all --rm -p8000:8000 -p8001:8001 -p8002:8002 -v ./model-repo:/models tritonserver_custom:latest tritonserver --model-repository=/models --model-control-mode=POLL", so I can also use the other backends.
from paddlepaddle_backend.
@TANKIANAUN Did you try my comment? I can load paddle along with other backends. Custom backend usage is documented here: besides copying libtriton_paddle.so to /opt/tritonserver/backends/paddle/, you should also copy the paddle libraries into the container and set up LD_LIBRARY_PATH. The simplest way to do this is to replace the Dockerfile with the following code and then follow the README step by step to launch the demo; you will end up with an image containing the paddlepaddle_backend plus the other default Triton backends.
FROM nvcr.io/nvidia/tritonserver:21.04-py3 as full
# Runtime dependencies required by the paddle backend
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt update \
&& apt install -y --no-install-recommends libre2-5 libb64-0d
# Paddle runtime libraries and the Triton paddle backend shared object
COPY paddle-lib/paddle /opt/paddle
COPY build/libtriton_paddle.so /opt/tritonserver/backends/paddle/
# Make the paddle libraries visible to the dynamic loader
ENV LD_LIBRARY_PATH="/opt/paddle/lib:$LD_LIBRARY_PATH"
BTW, you should use tritonserver:21.04; the paddlepaddle backend was tested on 21.04.
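A quick way to check the result without starting the server is to confirm the backend file is in place and that its shared-library dependencies resolve (the image tag is an assumption):

```shell
# The backend shared object should be present in the backends directory
docker run --rm tritonserver_custom:latest \
  ls /opt/tritonserver/backends/paddle/

# Any "not found" line here points to a missing paddle library
# or a wrong LD_LIBRARY_PATH in the image
docker run --rm tritonserver_custom:latest \
  ldd /opt/tritonserver/backends/paddle/libtriton_paddle.so
```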
from paddlepaddle_backend.
@zlsh80826 I think it is perhaps a CUDA or driver version problem, as I get the error below. I'll try updating it and see how it goes.
from paddlepaddle_backend.