We ran into the following issues and would like direction from the paddle-onnx developers.
1. Assertion error: dims.nbDims == 3
I can convert PaddlePaddle models to ONNX using the fluid_to_onnx.py script, e.g.
(venv) root@c33e4b787188:/paddle-onnx# python fluid_to_onnx.py --fluid_model extras/fit_a_line.inference.model --onnx_model extras/fit_a_line.onnx.inference.model --to_print_model > extras/fit_a_line_onnx.inference.model.out
(venv) root@c33e4b787188:/paddle-onnx# cd extras
(venv) root@c33e4b787188:~/paddle-onnx/extras# ls
fit_a_line.inference.model fit_a_line.onnx.inference.model fit_a_line_onnx.inference.model.out
but when I try to validate the resulting ONNX model with the validate.py script using the tensorrt backend, I get the following error:
(venv) root@c33e4b787188:/paddle-onnx# deactivate
root@c33e4b787188:/paddle-onnx# python validate.py --fluid_model extras/fit_a_line.inference.model --onnx_model extras/fit_a_line.onnx.inference.model --backend tensorrt
----------- Configuration Arguments -----------
a: 0.0
b: 1.0
backend: tensorrt
batch_size: 10
expected_decimal: 5
fluid_model: extras/fit_a_line.inference.model
onnx_model: extras/fit_a_line.onnx.inference.model
Inference results for fluid model:
[array([[15.874767],
[17.174097],
[14.951813],
[14.069194],
[13.003316],
[17.782452],
[16.204231],
[13.260891],
[15.537827],
[13.720056]], dtype=float32)]
Traceback (most recent call last):
File "validate.py", line 129, in
validate(args)
File "validate.py", line 113, in validate
rep = backend.prepare(onnx_model, device='CUDA:0')
File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 166, in prepare
File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 71, in init
RuntimeError: While parsing node number 1:
/root/onnx-tensorrt/builtin_op_importers.cpp:539 In function importFlatten:
[8] Assertion failed: dims.nbDims == 3
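My current understanding of this assertion (an assumption on my part, not confirmed against the onnx-tensorrt source): the implicit-batch importer strips the leading batch dimension, and importFlatten then requires the remaining tensor to be 3-D (CHW). A 2-D input like x with shape [batch, 13] would therefore fail the check. A minimal sketch of that condition:

```python
def tensorrt_can_import_flatten(onnx_dims):
    """Sketch of the dims.nbDims == 3 check in importFlatten, assuming the
    implicit-batch importer strips the leading (batch) dimension first.

    onnx_dims: full ONNX tensor shape, batch dimension included.
    """
    dims_without_batch = onnx_dims[1:]
    return len(dims_without_batch) == 3
```

If this reading is right, the fit_a_line input x arrives at the importer as a 1-D tensor after batch stripping, which is why the assertion fires.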
2. dim_value of 0 in the generated model
We see some dims with a dim_value of 0 in the generated model. Is this expected, and how should those values be interpreted?
When I look at the human-readable ONNX model, i.e. fit_a_line_onnx.inference.model.out generated by the fluid_to_onnx.py script, there are inputs and outputs with a dim_value of 0:
(venv) root@c33e4b787188:~/paddle-onnx/extras# cat fit_a_line_onnx.inference.model.out | more
----------- Configuration Arguments -----------
fluid_model: extras/fit_a_line.inference.model
onnx_model: extras/fit_a_line.onnx.inference.model
to_print_model: True
The converted model is:
ir_version: 3
producer_name: "PaddlePaddle"
graph {
node {
input: "x"
output: "x@flatten_0"
op_type: "Flatten"
attribute {
name: "axis"
i: 1
type: INT
}
}
node {
input: "fc_0.w_0"
output: "fc_0.w_0@flatten_0"
op_type: "Flatten"
attribute {
name: "axis"
i: 1
type: INT
}
}
node {
input: "x@flatten_0"
input: "fc_0.w_0@flatten_0"
output: "fc_0.tmp_0@matmul_0"
op_type: "MatMul"
}
node {
output: "fc_0.tmp_0@shape_0"
op_type: "Constant"
attribute {
name: "value"
t {
dims: 2
data_type: INT64
int64_data: 0
int64_data: 1
name: "fc_0.tmp_0@shape_0"
}
type: TENSOR
}
}
node {
input: "fc_0.tmp_0@matmul_0"
input: "fc_0.tmp_0@shape_0"
output: "fc_0.tmp_0"
op_type: "Reshape"
}
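For what it's worth, the Constant shape [0, 1] fed into the Reshape above looks legal to me per the ONNX Reshape spec: a 0 in the target shape means "copy that dimension from the input", and a -1 means "infer the remaining size". A sketch of that resolution rule:

```python
from math import prod


def onnx_reshape(input_shape, target_shape):
    """Resolve an ONNX Reshape target shape: a 0 entry copies the
    corresponding input dimension; a -1 entry is inferred so the total
    element count is preserved."""
    out = [input_shape[i] if d == 0 else d
           for i, d in enumerate(target_shape)]
    if -1 in out:
        i = out.index(-1)
        known = prod(d for j, d in enumerate(out) if j != i)
        out[i] = prod(input_shape) // known
    return out
```

So the Reshape here, given a [batch, 1] MatMul result and target [0, 1], keeps the batch dimension and produces [batch, 1].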
node {
input: "fc_0.tmp_0"
input: "fc_0.b_0"
output: "fc_0.tmp_1"
op_type: "Add"
attribute {
name: "axis"
i: 1
type: INT
}
attribute {
name: "broadcast"
i: 1
type: INT
}
}
name: "fit_a_line"
initializer {
dims: 1
data_type: FLOAT
float_data: 19.3055801392
name: "fc_0.b_0"
}
initializer {
dims: 13
dims: 1
data_type: FLOAT
float_data: -0.235716566443
float_data: 1.50793659687
float_data: -1.37839913368
float_data: 0.587660908699
float_data: -1.62691628933
float_data: 1.94002008438
float_data: -1.5584435463
float_data: 1.01809895039
float_data: -2.47688126564
float_data: -2.48663592339
float_data: -2.72155380249
float_data: 1.01887917519
float_data: -2.73560881615
name: "fc_0.w_0"
}
input {
name: "x"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 0
}
dim {
dim_value: 13
}
}
}
}
}
input {
name: "fc_0.b_0"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 1
}
}
}
}
}
input {
name: "fc_0.w_0"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 13
}
dim {
dim_value: 1
}
}
}
}
}
output {
name: "fc_0.tmp_1"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 0
}
dim {
dim_value: 1
}
}
}
}
}
}
opset_import {
version: 7
}
Saved converted model to path: extras/fit_a_line.onnx.inference.model
I get the same error after converting the recognize_digits_mlp.inference.model from the PaddlePaddle repo’s /path-to-repo/paddle-paddle/python/paddle/fluid/tests/book directory…
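If the dim_value of 0 on x and fc_0.tmp_1 is meant to mark the unknown batch dimension (my assumption — please confirm), then a consumer of the model would need to substitute the runtime batch size before allocating buffers, e.g.:

```python
def resolve_batch_dims(dims, batch_size):
    """Replace dim_value 0 (assumed here to mean 'unknown batch size')
    with the concrete runtime batch size; leave other dims untouched."""
    return [batch_size if d == 0 else d for d in dims]
```

With validate.py's default batch_size of 10, the input x would then resolve from [0, 13] to [10, 13].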