
Comments (6)

greatyang commented on May 25, 2024

I'm hitting the same error. I tried compiling Paddle with TensorRT; the build succeeded, but the error is still reported.

config.enable_tensorrt_engine(
    precision_mode=AnalysisConfig.Precision.Half
    if args.use_fp16 else AnalysisConfig.Precision.Float32,
    max_batch_size=args.batch_size)

from paddleclas.

littletomatodonkey commented on May 25, 2024

Hi, after compiling Paddle with TRT and installing it successfully, you still need to add the TensorRT lib path to LD_LIBRARY_PATH, for example:

export LD_LIBRARY_PATH=/paddle/libs/TensorRT-5.1.2.2/lib:$LD_LIBRARY_PATH
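A minimal sketch of what that export does and how to confirm the ordering (the TensorRT path is just the example quoted above; substitute your own install directory):

```shell
# Prepend the TensorRT lib dir so the dynamic loader searches it first.
# The path below is only the example from this thread; adjust to your install.
TRT_LIB=/paddle/libs/TensorRT-5.1.2.2/lib
export LD_LIBRARY_PATH="${TRT_LIB}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# The first entry on the search path should now be the TensorRT dir.
FIRST_ENTRY=${LD_LIBRARY_PATH%%:*}
echo "$FIRST_ENTRY"
```

Prepending (rather than appending) matters when another TensorRT version is already on the path, since the loader takes the first match.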


greatyang commented on May 25, 2024

I followed this page: https://paddle-inference.readthedocs.io/en/latest/user_guides/source_compile.html
I built on AI Studio; the preinstalled Paddle there was version 1.7.2.
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
git checkout v1.7.2

mkdir build && cd build

export C_INCLUDE_PATH=/opt/conda/envs/python35-paddle120-env/include/python3.7m/:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=/opt/conda/envs/python35-paddle120-env/include/python3.7m/:$CPLUS_INCLUDE_PATH

cmake -DPY_VERSION=3 -DWITH_TESTING=OFF -DWITH_MKL=ON -DWITH_GPU=ON -DON_INFER=ON -DCMAKE_BUILD_TYPE=Release -DPYTHON_INCLUDE_DIR=/opt/conda/envs/python35-paddle120-env/include/ -DPYTHON_LIBRARY=/opt/conda/envs/python35-paddle120-env/lib/ ..
This step produced two warnings:
###############################################
-- Found Paddle host system: ubuntu, version: 16.04.6
-- Found Paddle host system's CPU: 24 cores
-- CXX compiler: /usr/bin/c++, version: GNU 5.4.0
-- C compiler: /usr/bin/cc, version: GNU 5.4.0
-- AR tools: /usr/bin/ar
-- BOOST_VERSION: 1.41.0, BOOST_URL: http://paddlepaddledeps.bj.bcebos.com/boost_1_41_0.tar.gz
-- warp-ctc library: /home/aistudio/work/paddle/build/third_party/install/warpctc/lib/libwarpctc.so
-- MKLML_VER: csrmm2_mklml_lnx_2019.0.2, MKLML_URL: http://paddlepaddledeps.bj.bcebos.com/csrmm2_mklml_lnx_2019.0.2.tgz
-- Found cblas and lapack in MKLML (include: /home/aistudio/work/paddle/build/third_party/install/mklml/include, library: /home/aistudio/work/paddle/build/third_party/install/mklml/lib/libmklml_intel.so)
-- Set /home/aistudio/work/paddle/build/third_party/install/mkldnn/llib to runtime path
-- Build MKLDNN with MKLML /home/aistudio/work/paddle/build/third_party/install/mklml
-- MKLDNN library: /home/aistudio/work/paddle/build/third_party/install/mkldnn/lib/libdnnl.so
-- Protobuf protoc executable: /home/aistudio/work/paddle/build/third_party/install/protobuf/bin/protoc
-- Protobuf-lite library: /home/aistudio/work/paddle/build/third_party/install/protobuf/lib/libprotobuf-lite.a
-- Protobuf library: /home/aistudio/work/paddle/build/third_party/install/protobuf/lib/libprotobuf.a
-- Protoc library: /home/aistudio/work/paddle/build/third_party/install/protobuf/lib/libprotoc.a
-- Protobuf version: 3.1.0
-- GLOO_NAME: gloo, GLOO_URL: https://pslib.bj.bcebos.com/gloo.tar.gz
-- Current cuDNN header is /usr/include/cudnn.h. Current cuDNN version is v7.3.
-- CUDA detected: 9.2
-- WARNING: This is just a warning for publishing release.
You are building GPU version without supporting different architectures.
So the wheel package may fail on other GPU architectures.
You can add -DCUDA_ARCH_NAME=All in cmake command
to get a full wheel package to resolve this warning.
While, this version will still work on local GPU architecture.
-- Added CUDA NVCC flags for: sm_70
CMake Warning at cmake/tensorrt.cmake:46 (message):
TensorRT is NOT found when WITH_DSO is ON.
Call Stack (most recent call first):
CMakeLists.txt:179 (include)

-- Paddle version is 1.7.2
-- Enable Intel OpenMP with /home/aistudio/work/paddle/build/third_party/install/mklml/lib/libiomp5.so
-- On inference mode, will take place some specific optimization.
-- add pass graph_to_program_pass base
-- add pass graph_viz_pass base
-- add pass lock_free_optimize_pass base
-- add pass fc_fuse_pass inference
-- add pass attention_lstm_fuse_pass inference
-- add pass fc_lstm_fuse_pass inference
-- add pass embedding_fc_lstm_fuse_pass inference
-- add pass fc_gru_fuse_pass inference
-- add pass seq_concat_fc_fuse_pass inference
-- add pass multi_batch_merge_pass base
-- add pass conv_bn_fuse_pass inference
-- add pass seqconv_eltadd_relu_fuse_pass inference
-- add pass seqpool_concat_fuse_pass inference
-- add pass seqpool_cvm_concat_fuse_pass inference
-- add pass repeated_fc_relu_fuse_pass inference
-- add pass squared_mat_sub_fuse_pass inference
-- add pass is_test_pass base
-- add pass conv_elementwise_add_act_fuse_pass inference
-- add pass conv_elementwise_add2_act_fuse_pass inference
-- add pass conv_elementwise_add_fuse_pass inference
-- add pass conv_affine_channel_fuse_pass inference
-- add pass transpose_flatten_concat_fuse_pass inference
-- add pass identity_scale_op_clean_pass base
-- add pass sync_batch_norm_pass base
-- add pass runtime_context_cache_pass base
-- add pass quant_conv2d_dequant_fuse_pass inference
-- add pass shuffle_channel_detect_pass inference
-- add pass delete_quant_dequant_op_pass inference
-- add pass simplify_with_basic_ops_pass base
-- add pass fc_elementwise_layernorm_fuse_pass base
-- add pass multihead_matmul_fuse_pass inference
-- add pass cudnn_placement_pass base
-- add pass mkldnn_placement_pass base
-- add pass depthwise_conv_mkldnn_pass base
-- add pass conv_bias_mkldnn_fuse_pass inference
-- add pass conv_activation_mkldnn_fuse_pass inference
-- add pass conv_concat_relu_mkldnn_fuse_pass inference
-- add pass conv_elementwise_add_mkldnn_fuse_pass inference
-- add pass fc_mkldnn_pass inference
-- add pass cpu_quantize_placement_pass base
-- add pass cpu_quantize_pass inference
-- add pass cpu_quantize_squash_pass inference
-- commit: 92cc33c
-- branch: HEAD
-- WARNING: This is just a warning for publishing release.
You are building AVX version without NOAVX core.
So the wheel package may fail on NOAVX machine.
You can add -DFLUID_CORE_NAME=/path/to/your/core_noavx.* in cmake command
to get a full wheel package to resolve this warning.
While, this version will still work on local machine.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/aistudio/work/paddle/build
###############################################

make -j6

pip install ./python/dist/paddlepaddle_gpu-1.7.2-cp37-cp37m-linux_x86_64.whl -t /home/aistudio/external-libraries/

make inference_lib_dist -j6

export PATH=/home/aistudio/external-libraries/:$PATH

export LD_LIBRARY_PATH=/home/aistudio/work/paddle/build/fluid_inference_c_install_dir/paddle/lib:$LD_LIBRARY_PATH
Under paddle/lib there are the two files libpaddle_fluid.a and libpaddle_fluid.so.
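A quick sketch for checking those build artifacts are actually in place (LIB_DIR is the path quoted in this thread and must be adjusted to your own tree):

```shell
# Verify both inference libraries from the build exist.
# LIB_DIR is an assumption taken from the paths quoted in this thread.
LIB_DIR=/home/aistudio/work/paddle/build/fluid_inference_c_install_dir/paddle/lib
MISSING=0
for f in libpaddle_fluid.a libpaddle_fluid.so; do
  if [ -f "$LIB_DIR/$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
    MISSING=$((MISSING + 1))
  fi
done
echo "missing count: $MISSING"
```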

Besides the steps above, what else needs to be done?


greatyang commented on May 25, 2024

TensorRT needs to be installed, and -DTENSORRT_ROOT must be passed at cmake time.
make will then fail with an error; see PaddlePaddle/Paddle#25209 for the workaround.
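Putting that together with the cmake command from earlier in the thread, the invocation might look like the following sketch (the TensorRT path is a placeholder, not from this thread; point it at a directory containing include/NvInfer.h and lib/libnvinfer.so):

```shell
# Sketch only: same flags as the cmake command above, plus -DTENSORRT_ROOT.
# /path/to/TensorRT is a placeholder for your actual TensorRT install dir.
cmake -DPY_VERSION=3 -DWITH_TESTING=OFF -DWITH_MKL=ON -DWITH_GPU=ON \
      -DON_INFER=ON -DCMAKE_BUILD_TYPE=Release \
      -DPYTHON_INCLUDE_DIR=/opt/conda/envs/python35-paddle120-env/include/ \
      -DPYTHON_LIBRARY=/opt/conda/envs/python35-paddle120-env/lib/ \
      -DTENSORRT_ROOT=/path/to/TensorRT ..
```

With TENSORRT_ROOT set, the earlier "TensorRT is NOT found when WITH_DSO is ON" warning from cmake/tensorrt.cmake should no longer appear in the configure log.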


littletomatodonkey commented on May 25, 2024

1.7.2

After the build succeeds, there is a whl package under build/python/dist; just pip install it and you're done.


shippingwang commented on May 25, 2024

Feel free to reopen it~

