Comments (6)
I hit the same error. I tried compiling Paddle with TensorRT; the build succeeded, but the error still occurs.
config.enable_tensorrt_engine(
    precision_mode=AnalysisConfig.Precision.Half
    if args.use_fp16 else AnalysisConfig.Precision.Float32,
    max_batch_size=args.batch_size)
from paddleclas.
Hi, after compiling Paddle with TRT and installing it successfully, you also need to add the TensorRT path to LD_LIBRARY_PATH, for example:
export LD_LIBRARY_PATH=/paddle/libs/TensorRT-5.1.2.2/lib:$LD_LIBRARY_PATH
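To avoid adding the same entry twice across shell sessions, the export can be guarded. A small POSIX-sh sketch (the TensorRT path is the example from this thread; substitute your own install directory):

```shell
# Example TensorRT location -- replace with your actual install path.
TRT_LIB=/paddle/libs/TensorRT-5.1.2.2/lib

# Start from whatever is already set (may be empty).
new_path="$LD_LIBRARY_PATH"
case ":$new_path:" in
  *":$TRT_LIB:"*) ;;                                # already present
  *) new_path="$TRT_LIB${new_path:+:$new_path}" ;;  # prepend once
esac
export LD_LIBRARY_PATH="$new_path"

# TensorRT's libs are now searched first.
echo "$LD_LIBRARY_PATH" | cut -d: -f1
```

The `${new_path:+:...}` expansion avoids a dangling `:` when LD_LIBRARY_PATH starts out empty.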
I followed this page: https://paddle-inference.readthedocs.io/en/latest/user_guides/source_compile.html
The build was done on AI Studio, where the pre-installed Paddle was version 1.7.2:
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
git checkout v1.7.2
mkdir build && cd build
export C_INCLUDE_PATH=/opt/conda/envs/python35-paddle120-env/include/python3.7m/:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=/opt/conda/envs/python35-paddle120-env/include/python3.7m/:$CPLUS_INCLUDE_PATH
cmake -DPY_VERSION=3 -DWITH_TESTING=OFF -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DCMAKE_BUILD_TYPE=Release -DPYTHON_INCLUDE_DIR=/opt/conda/envs/python35-paddle120-env/include/ -DPYTHON_LIBRARY=/opt/conda/envs/python35-paddle120-env/lib/ ..
This step produced two warnings:
###############################################
-- Found Paddle host system: ubuntu, version: 16.04.6
-- Found Paddle host system's CPU: 24 cores
-- CXX compiler: /usr/bin/c++, version: GNU 5.4.0
-- C compiler: /usr/bin/cc, version: GNU 5.4.0
-- AR tools: /usr/bin/ar
-- BOOST_VERSION: 1.41.0, BOOST_URL: http://paddlepaddledeps.bj.bcebos.com/boost_1_41_0.tar.gz
-- warp-ctc library: /home/aistudio/work/paddle/build/third_party/install/warpctc/lib/libwarpctc.so
-- MKLML_VER: csrmm2_mklml_lnx_2019.0.2, MKLML_URL: http://paddlepaddledeps.bj.bcebos.com/csrmm2_mklml_lnx_2019.0.2.tgz
-- Found cblas and lapack in MKLML (include: /home/aistudio/work/paddle/build/third_party/install/mklml/include, library: /home/aistudio/work/paddle/build/third_party/install/mklml/lib/libmklml_intel.so)
-- Set /home/aistudio/work/paddle/build/third_party/install/mkldnn/lib to runtime path
-- Build MKLDNN with MKLML /home/aistudio/work/paddle/build/third_party/install/mklml
-- MKLDNN library: /home/aistudio/work/paddle/build/third_party/install/mkldnn/lib/libdnnl.so
-- Protobuf protoc executable: /home/aistudio/work/paddle/build/third_party/install/protobuf/bin/protoc
-- Protobuf-lite library: /home/aistudio/work/paddle/build/third_party/install/protobuf/lib/libprotobuf-lite.a
-- Protobuf library: /home/aistudio/work/paddle/build/third_party/install/protobuf/lib/libprotobuf.a
-- Protoc library: /home/aistudio/work/paddle/build/third_party/install/protobuf/lib/libprotoc.a
-- Protobuf version: 3.1.0
-- GLOO_NAME: gloo, GLOO_URL: https://pslib.bj.bcebos.com/gloo.tar.gz
-- Current cuDNN header is /usr/include/cudnn.h. Current cuDNN version is v7.3.
-- CUDA detected: 9.2
-- WARNING: This is just a warning for publishing release.
You are building GPU version without supporting different architectures.
So the wheel package may fail on other GPU architectures.
You can add -DCUDA_ARCH_NAME=All in cmake command
to get a full wheel package to resolve this warning.
While, this version will still work on local GPU architecture.
-- Added CUDA NVCC flags for: sm_70
CMake Warning at cmake/tensorrt.cmake:46 (message):
TensorRT is NOT found when WITH_DSO is ON.
Call Stack (most recent call first):
CMakeLists.txt:179 (include)
-- Paddle version is 1.7.2
-- Enable Intel OpenMP with /home/aistudio/work/paddle/build/third_party/install/mklml/lib/libiomp5.so
-- On inference mode, will take place some specific optimization.
-- add pass graph_to_program_pass base
-- add pass graph_viz_pass base
-- add pass lock_free_optimize_pass base
-- add pass fc_fuse_pass inference
-- add pass attention_lstm_fuse_pass inference
-- add pass fc_lstm_fuse_pass inference
-- add pass embedding_fc_lstm_fuse_pass inference
-- add pass fc_gru_fuse_pass inference
-- add pass seq_concat_fc_fuse_pass inference
-- add pass multi_batch_merge_pass base
-- add pass conv_bn_fuse_pass inference
-- add pass seqconv_eltadd_relu_fuse_pass inference
-- add pass seqpool_concat_fuse_pass inference
-- add pass seqpool_cvm_concat_fuse_pass inference
-- add pass repeated_fc_relu_fuse_pass inference
-- add pass squared_mat_sub_fuse_pass inference
-- add pass is_test_pass base
-- add pass conv_elementwise_add_act_fuse_pass inference
-- add pass conv_elementwise_add2_act_fuse_pass inference
-- add pass conv_elementwise_add_fuse_pass inference
-- add pass conv_affine_channel_fuse_pass inference
-- add pass transpose_flatten_concat_fuse_pass inference
-- add pass identity_scale_op_clean_pass base
-- add pass sync_batch_norm_pass base
-- add pass runtime_context_cache_pass base
-- add pass quant_conv2d_dequant_fuse_pass inference
-- add pass shuffle_channel_detect_pass inference
-- add pass delete_quant_dequant_op_pass inference
-- add pass simplify_with_basic_ops_pass base
-- add pass fc_elementwise_layernorm_fuse_pass base
-- add pass multihead_matmul_fuse_pass inference
-- add pass cudnn_placement_pass base
-- add pass mkldnn_placement_pass base
-- add pass depthwise_conv_mkldnn_pass base
-- add pass conv_bias_mkldnn_fuse_pass inference
-- add pass conv_activation_mkldnn_fuse_pass inference
-- add pass conv_concat_relu_mkldnn_fuse_pass inference
-- add pass conv_elementwise_add_mkldnn_fuse_pass inference
-- add pass fc_mkldnn_pass inference
-- add pass cpu_quantize_placement_pass base
-- add pass cpu_quantize_pass inference
-- add pass cpu_quantize_squash_pass inference
-- commit: 92cc33c
-- branch: HEAD
-- WARNING: This is just a warning for publishing release.
You are building AVX version without NOAVX core.
So the wheel package may fail on NOAVX machine.
You can add -DFLUID_CORE_NAME=/path/to/your/core_noavx.* in cmake command
to get a full wheel package to resolve this warning.
While, this version will still work on local machine.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/aistudio/work/paddle/build
###############################################
make -j6
pip install ./python/dist/paddlepaddle_gpu-1.7.2-cp37-cp37m-linux_x86_64.whl -t /home/aistudio/external-libraries/
make inference_lib_dist -j6
export PATH=/home/aistudio/external-libraries/:$PATH
export LD_LIBRARY_PATH=/home/aistudio/work/paddle/build/fluid_inference_c_install_dir/paddle/lib:$LD_LIBRARY_PATH
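One caveat about the pip command above: packages installed with `pip install -t` are located through Python's module search path, not the shell's PATH, so the export that makes the wheel importable is typically:

```shell
export PYTHONPATH=/home/aistudio/external-libraries/:$PYTHONPATH
```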
The paddle/lib directory contains the two files libpaddle_fluid.a and libpaddle_fluid.so.
Besides the steps above, what else needs to be done?
You need to install TensorRT and pass -DTENSORRT_ROOT when running cmake.
The make step may then report errors; see PaddlePaddle/Paddle#25209 for reference.
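Concretely, the earlier cmake invocation can be rerun with the TensorRT root added; a sketch (paths are the examples mentioned in this thread, substitute your own install locations):

```shell
cmake -DPY_VERSION=3 -DWITH_TESTING=OFF -DWITH_MKL=ON -DWITH_GPU=ON \
      -DON_INFER=ON -DCMAKE_BUILD_TYPE=Release \
      -DTENSORRT_ROOT=/paddle/libs/TensorRT-5.1.2.2 \
      -DPYTHON_INCLUDE_DIR=/opt/conda/envs/python35-paddle120-env/include/ \
      ..
```

With TENSORRT_ROOT set, the "TensorRT is NOT found when WITH_DSO is ON" warning from the log above should go away.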
1.7.2. After the build succeeds, there is a whl package under build/python/dist; just pip install it.
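After pip installing the wheel, a quick sanity check (install_check ships with Paddle 1.x; it needs the GPU runtime to be visible):

```shell
python -c "import paddle; print(paddle.__version__)"
python -c "import paddle.fluid as fluid; fluid.install_check.run_check()"
```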
Feel free to reopen it~