
onnx_tflite_yolov3's Introduction

Introduction

A conversion tool that turns YOLO v3 Darknet weights into a TF Lite model (YOLO v3 PyTorch > ONNX > TensorFlow > TF Lite), or into a TensorRT model (dynamic_axes branch).

Prerequisites

  • python3
  • torch==1.3.1
  • torchvision==0.4.2
  • onnx==1.6.0
  • onnx-tf==1.5.0
  • onnxruntime-gpu==1.0.0
  • tensorflow-gpu==1.15.0

Docker

docker pull zldrobit/onnx:10.0-cudnn7-devel

Usage

  • 1. Download pretrained Darknet weights:
cd weights
wget https://pjreddie.com/media/files/yolov3.weights 
  • 2. Convert the YOLO v3 model from Darknet weights to an ONNX model: change ONNX_EXPORT to True in models.py, then run
python3 detect.py --cfg cfg/yolov3.cfg --weights weights/yolov3.weights

The output ONNX file is weights/export.onnx.

  • 3. Convert ONNX model to TensorFlow model:
python3 onnx2tf.py

The output file is weights/yolov3.pb.

  • 4. Preprocess the pb file to remove NCHW convolutions, 5-D ops, and Int64 ops, which TF Lite does not handle:
python3 prep.py

The output file is weights/yolov3_prep.pb.

  • 5. Use TOCO to convert pb -> tflite:
toco --graph_def_file weights/yolov3_prep.pb \
    --output_file weights/yolov3.tflite \
    --output_format TFLITE \
    --inference_type FLOAT \
    --inference_input_type FLOAT \
    --input_arrays input.1 \
    --output_arrays concat_84

The output file is weights/yolov3.tflite. Now, you can run python3 tflite_detect.py --weights weights/yolov3.tflite to detect objects in an image.
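The concat_84 tensor holds the raw detections before NMS: for full YOLO v3 at 416x416 it has shape (1, 10647, 85), each row being [x, y, w, h, objectness, 80 class scores]. A hedged numpy sketch of the confidence filtering that tflite_detect.py presumably applies after invoking the interpreter (the threshold and the fake tensor are illustrative):

```python
import numpy as np

# Fake model output standing in for the concat_84 tensor:
# (batch, boxes, 4 box coords + 1 objectness + 80 class scores).
pred = np.zeros((1, 10647, 85), dtype=np.float32)
pred[0, 0] = [200, 200, 50, 80, 0.9] + [0.0] * 79 + [0.95]  # one confident box

conf_thres = 0.3
obj = pred[0, :, 4]
keep = pred[0][obj > conf_thres]      # rows above the objectness threshold
classes = keep[:, 5:].argmax(axis=1)  # best class index per remaining box
print(keep.shape, classes)
```

The surviving boxes would then go through NMS and be scaled back to the original image size before drawing.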

Quantization

  • 1. Install flatbuffers: see the FlatBuffers project for instructions on building the flatc compiler.

  • 2. Download TFLite schema:

wget https://github.com/tensorflow/tensorflow/raw/r1.15/tensorflow/lite/schema/schema.fbs
  • 3. Run TOCO to convert and quantize pb -> tflite:
toco --graph_def_file weights/yolov3_prep.pb \
    --output_file weights/yolov3_quant.tflite \
    --output_format TFLITE  \
    --input_arrays input.1 \
    --output_arrays concat_84 \
    --post_training_quantize

The output file is weights/yolov3_quant.tflite.
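The --post_training_quantize flag stores weights as 8-bit integers plus a float scale and dequantizes them at load time, shrinking the file roughly 4x. A hedged numpy sketch of a symmetric per-tensor int8 scheme of this kind (TOCO's actual scheme may differ in zero-point and per-axis details):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(255, 256)).astype(np.float32)  # a conv weight

# Symmetric int8 quantization: one float scale for the whole tensor.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale  # dequantized weights used at runtime

err = np.abs(w - w_hat).max()
print("max abs error:", err)
```

Rounding bounds the per-weight error by scale/2, which is why accuracy usually drops only slightly after weight-only quantization.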

  • 4. Convert tflite -> json:
flatc -t --strict-json --defaults-json -o weights schema.fbs  -- weights/yolov3_quant.tflite

The output file is weights/yolov3_quant.json.

  • 5. Fix ReshapeOptions:
python3 fix_reshape.py

The output file is weights/yolov3_quant_fix_reshape.json.
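In the TFLite schema, a RESHAPE operator carries its target shape in ReshapeOptions.new_shape, which can end up empty in the exported JSON; fix_reshape.py patches it from the operator's output tensor. A stdlib sketch of that kind of patch on a trimmed stand-in document (the field names follow the flatc JSON for schema.fbs, but the exact repair logic in fix_reshape.py may differ):

```python
import json

# Trimmed stand-in for weights/yolov3_quant.json as emitted by flatc.
doc = {
    "subgraphs": [{
        "tensors": [{"shape": [1, 3, 85, 169], "name": "reshape_out"}],
        "operators": [{
            "opcode_index": 0,
            "outputs": [0],
            "builtin_options_type": "ReshapeOptions",
            "builtin_options": {"new_shape": []},  # missing target shape
        }],
    }],
}

for sg in doc["subgraphs"]:
    for op in sg["operators"]:
        if op.get("builtin_options_type") == "ReshapeOptions":
            # Copy the output tensor's shape into new_shape.
            out_shape = sg["tensors"][op["outputs"][0]]["shape"]
            op["builtin_options"]["new_shape"] = out_shape

print(json.dumps(doc["subgraphs"][0]["operators"][0]["builtin_options"]))
```

After patching, flatc can rebuild a valid .tflite from the JSON in the next step.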

  • 6. Convert json -> tflite:
flatc -b -o weights schema.fbs weights/yolov3_quant_fix_reshape.json

The output file is weights/yolov3_quant_fix_reshape.tflite. Now, you can run

python3 tflite_detect.py --weights weights/yolov3_quant_fix_reshape.tflite

to detect objects in an image.

Auxiliary Files

  • ONNX inference and detection: onnx_infer.py and onnx_detect.py.
  • TensorFlow inference and detection: tf_infer.py and tf_detect.py.
  • TF Lite inference, detection and debug: tflite_infer.py, tflite_detect.py and tflite_debug.py.

Known Issues

  • The conversion code does not work with tensorflow==1.14.0: running prep.py causes a protobuf error (channel-order issue in Conv2D).
  • fix_reshape.py does not fix shape attributes in TFLite tensors, which may cause unknown side effects.

TODO

  • support tflite quantized model
  • use dynamic_axes for ONNX export to support dynamic batching and TensorRT conversion (dynamic_axes branch)
  • add TensorRT NMS support (trt_nms branch)

Acknowledgement

We borrow PyTorch code from ultralytics/yolov3, and TensorFlow low-level API conversion code from paulbauriegel/tensorflow-tools.


onnx_tflite_yolov3's Issues

Converting unsupported operation: PyFunc

2020-02-06 12:30:26.836297: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: PyFunc
2020-02-06 12:30:26.836358: F tensorflow/lite/toco/import_tensorflow.cc:114] Check failed: attr.value_case() == AttrValue::kType (1 vs. 6)
Fatal Python error: Aborted

Has anyone encountered this problem?

Segmentation fault (core dumped) in detect.py

🐛 Bug

Segmentation fault when running python3 detect.py --cfg yolov3-tiny.cfg --weights yolov3-tiny.weights in the provided docker container with Alexey's .cfg and weights files.

Namespace(cfg='yolov3-tiny.cfg', conf_thres=0.3, data='data/coco.data', device='', fourcc='mp4v', half=False, img_size=416, nms_thres=0.5, output='output', source='data/samples', view_img=False, weights='yolov3-tiny.weights')
Using CPU

/home/onnx_tflite_yolov3/models.py:260: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  print("_io.shape", _io.shape)
_io.shape torch.Size([1, 507, 85])
_io.shape torch.Size([1, 2028, 85])
/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py:198: UserWarning: You are trying to export the model with onnx:Upsample for ONNX opset version 9. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator. 
  "" + str(_export_onnx_opset_version) + ". "
graph(%input.1 : Float(1, 3, 416, 416),
      %module_list.0.Conv2d.weight : Float(16, 3, 3, 3),
      %module_list.0.BatchNorm2d.weight : Float(16),
      %module_list.0.BatchNorm2d.bias : Float(16),
      %module_list.0.BatchNorm2d.running_mean : Float(16),
      %module_list.0.BatchNorm2d.running_var : Float(16),
      %module_list.0.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.2.Conv2d.weight : Float(32, 16, 3, 3),
      %module_list.2.BatchNorm2d.weight : Float(32),
      %module_list.2.BatchNorm2d.bias : Float(32),
      %module_list.2.BatchNorm2d.running_mean : Float(32),
      %module_list.2.BatchNorm2d.running_var : Float(32),
      %module_list.2.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.4.Conv2d.weight : Float(64, 32, 3, 3),
      %module_list.4.BatchNorm2d.weight : Float(64),
      %module_list.4.BatchNorm2d.bias : Float(64),
      %module_list.4.BatchNorm2d.running_mean : Float(64),
      %module_list.4.BatchNorm2d.running_var : Float(64),
      %module_list.4.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.6.Conv2d.weight : Float(128, 64, 3, 3),
      %module_list.6.BatchNorm2d.weight : Float(128),
      %module_list.6.BatchNorm2d.bias : Float(128),
      %module_list.6.BatchNorm2d.running_mean : Float(128),
      %module_list.6.BatchNorm2d.running_var : Float(128),
      %module_list.6.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.8.Conv2d.weight : Float(256, 128, 3, 3),
      %module_list.8.BatchNorm2d.weight : Float(256),
      %module_list.8.BatchNorm2d.bias : Float(256),
      %module_list.8.BatchNorm2d.running_mean : Float(256),
      %module_list.8.BatchNorm2d.running_var : Float(256),
      %module_list.8.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.10.Conv2d.weight : Float(512, 256, 3, 3),
      %module_list.10.BatchNorm2d.weight : Float(512),
      %module_list.10.BatchNorm2d.bias : Float(512),
      %module_list.10.BatchNorm2d.running_mean : Float(512),
      %module_list.10.BatchNorm2d.running_var : Float(512),
      %module_list.10.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.12.Conv2d.weight : Float(1024, 512, 3, 3),
      %module_list.12.BatchNorm2d.weight : Float(1024),
      %module_list.12.BatchNorm2d.bias : Float(1024),
      %module_list.12.BatchNorm2d.running_mean : Float(1024),
      %module_list.12.BatchNorm2d.running_var : Float(1024),
      %module_list.12.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.13.Conv2d.weight : Float(256, 1024, 1, 1),
      %module_list.13.BatchNorm2d.weight : Float(256),
      %module_list.13.BatchNorm2d.bias : Float(256),
      %module_list.13.BatchNorm2d.running_mean : Float(256),
      %module_list.13.BatchNorm2d.running_var : Float(256),
      %module_list.13.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.14.Conv2d.weight : Float(512, 256, 3, 3),
      %module_list.14.BatchNorm2d.weight : Float(512),
      %module_list.14.BatchNorm2d.bias : Float(512),
      %module_list.14.BatchNorm2d.running_mean : Float(512),
      %module_list.14.BatchNorm2d.running_var : Float(512),
      %module_list.14.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.15.Conv2d.weight : Float(255, 512, 1, 1),
      %module_list.15.Conv2d.bias : Float(255),
      %module_list.18.Conv2d.weight : Float(128, 256, 1, 1),
      %module_list.18.BatchNorm2d.weight : Float(128),
      %module_list.18.BatchNorm2d.bias : Float(128),
      %module_list.18.BatchNorm2d.running_mean : Float(128),
      %module_list.18.BatchNorm2d.running_var : Float(128),
      %module_list.18.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.21.Conv2d.weight : Float(256, 384, 3, 3),
      %module_list.21.BatchNorm2d.weight : Float(256),
      %module_list.21.BatchNorm2d.bias : Float(256),
      %module_list.21.BatchNorm2d.running_mean : Float(256),
      %module_list.21.BatchNorm2d.running_var : Float(256),
      %module_list.21.BatchNorm2d.num_batches_tracked : Long(),
      %module_list.22.Conv2d.weight : Float(255, 256, 1, 1),
      %module_list.22.Conv2d.bias : Float(255)):
  %71 : Float(1, 16, 416, 416) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.1, %module_list.0.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %72 : Float(1, 16, 416, 416) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%71, %module_list.0.BatchNorm2d.weight, %module_list.0.BatchNorm2d.bias, %module_list.0.BatchNorm2d.running_mean, %module_list.0.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %73 : Float(1, 16, 416, 416) = onnx::LeakyRelu[alpha=0.1](%72), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %74 : Float(1, 16, 208, 208) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%73), scope: Darknet/MaxPool2d # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:488:0
  %75 : Float(1, 32, 208, 208) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%74, %module_list.2.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %76 : Float(1, 32, 208, 208) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%75, %module_list.2.BatchNorm2d.weight, %module_list.2.BatchNorm2d.bias, %module_list.2.BatchNorm2d.running_mean, %module_list.2.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %77 : Float(1, 32, 208, 208) = onnx::LeakyRelu[alpha=0.1](%76), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %78 : Float(1, 32, 104, 104) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%77), scope: Darknet/MaxPool2d # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:488:0
  %79 : Float(1, 64, 104, 104) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%78, %module_list.4.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %80 : Float(1, 64, 104, 104) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%79, %module_list.4.BatchNorm2d.weight, %module_list.4.BatchNorm2d.bias, %module_list.4.BatchNorm2d.running_mean, %module_list.4.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %81 : Float(1, 64, 104, 104) = onnx::LeakyRelu[alpha=0.1](%80), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %82 : Float(1, 64, 52, 52) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%81), scope: Darknet/MaxPool2d # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:488:0
  %83 : Float(1, 128, 52, 52) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%82, %module_list.6.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %84 : Float(1, 128, 52, 52) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%83, %module_list.6.BatchNorm2d.weight, %module_list.6.BatchNorm2d.bias, %module_list.6.BatchNorm2d.running_mean, %module_list.6.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %85 : Float(1, 128, 52, 52) = onnx::LeakyRelu[alpha=0.1](%84), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %86 : Float(1, 128, 26, 26) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%85), scope: Darknet/MaxPool2d # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:488:0
  %87 : Float(1, 256, 26, 26) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%86, %module_list.8.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %88 : Float(1, 256, 26, 26) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%87, %module_list.8.BatchNorm2d.weight, %module_list.8.BatchNorm2d.bias, %module_list.8.BatchNorm2d.running_mean, %module_list.8.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %89 : Float(1, 256, 26, 26) = onnx::LeakyRelu[alpha=0.1](%88), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %90 : Float(1, 256, 13, 13) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%89), scope: Darknet/MaxPool2d # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:488:0
  %91 : Float(1, 512, 13, 13) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%90, %module_list.10.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %92 : Float(1, 512, 13, 13) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%91, %module_list.10.BatchNorm2d.weight, %module_list.10.BatchNorm2d.bias, %module_list.10.BatchNorm2d.running_mean, %module_list.10.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %93 : Float(1, 512, 13, 13) = onnx::LeakyRelu[alpha=0.1](%92), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %94 : Float(1, 512, 14, 14) = onnx::Pad[mode="constant", pads=[0, 0, 0, 0, 0, 0, 1, 1], value=0](%93), scope: Darknet/Sequential/ZeroPad2d[ZeroPad2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2848:0
  %95 : Float(1, 512, 13, 13) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[1, 1]](%94), scope: Darknet/Sequential/MaxPool2d[MaxPool2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:488:0
  %96 : Float(1, 1024, 13, 13) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%95, %module_list.12.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %97 : Float(1, 1024, 13, 13) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%96, %module_list.12.BatchNorm2d.weight, %module_list.12.BatchNorm2d.bias, %module_list.12.BatchNorm2d.running_mean, %module_list.12.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %98 : Float(1, 1024, 13, 13) = onnx::LeakyRelu[alpha=0.1](%97), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %99 : Float(1, 256, 13, 13) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%98, %module_list.13.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %100 : Float(1, 256, 13, 13) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%99, %module_list.13.BatchNorm2d.weight, %module_list.13.BatchNorm2d.bias, %module_list.13.BatchNorm2d.running_mean, %module_list.13.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %101 : Float(1, 256, 13, 13) = onnx::LeakyRelu[alpha=0.1](%100), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %102 : Float(1, 512, 13, 13) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%101, %module_list.14.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %103 : Float(1, 512, 13, 13) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%102, %module_list.14.BatchNorm2d.weight, %module_list.14.BatchNorm2d.bias, %module_list.14.BatchNorm2d.running_mean, %module_list.14.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %104 : Float(1, 512, 13, 13) = onnx::LeakyRelu[alpha=0.1](%103), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %105 : Float(1, 255, 13, 13) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%104, %module_list.15.Conv2d.weight, %module_list.15.Conv2d.bias), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %106 : Tensor = onnx::Constant[value=   1    3   85  169 [ Variable[CPULongType]{4} ]](), scope: Darknet/YOLOLayer
  %107 : Float(1, 3, 85, 169) = onnx::Reshape(%105, %106), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:170:0
  %108 : Float(1, 3, 169, 85) = onnx::Transpose[perm=[0, 1, 3, 2]](%107), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:170:0
  %109 : Tensor = onnx::Constant[value=   1    3  169   85 [ Variable[CPULongType]{4} ]](), scope: Darknet/YOLOLayer
  %110 : Float(1, 3, 169, 85) = onnx::Reshape(%108, %109), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:176:0
  %111 : Float(1, 3, 169, 2) = onnx::Slice[axes=[3], ends=[2], starts=[0]](%110), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:177:0
  %112 : Float(1, 3, 169, 2) = onnx::Sigmoid(%111), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:177:0
  %113 : Float(1, 1, 169, 2) = onnx::Constant[value=<Tensor>]()
  %114 : Float(1, 3, 169, 2) = onnx::Add(%112, %113), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:177:0
  %115 : Float() = onnx::Constant[value={32}]()
  %116 : Float(1, 3, 169, 2) = onnx::Mul(%114, %115)
  %117 : Float(1, 3, 169, 2) = onnx::Slice[axes=[3], ends=[4], starts=[2]](%110), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:178:0
  %118 : Float(1, 3, 169, 2) = onnx::Exp(%117), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:178:0
  %119 : Float(1, 3, 1, 2) = onnx::Constant[value=(1,1,.,.) =    2.5312  2.5625  (1,2,.,.) =    4.2188  5.2812  (1,3,.,.) =    10.7500   9.9688 [ Variable[CPUFloatType]{1,3,1,2} ]]()
  %120 : Float(1, 3, 169, 2) = onnx::Mul(%118, %119), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:178:0
  %121 : Float() = onnx::Constant[value={32}]()
  %122 : Float(1, 3, 169, 2) = onnx::Mul(%120, %121)
  %123 : Float(1, 3, 169, 1) = onnx::Slice[axes=[3], ends=[5], starts=[4]](%110), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:182:0
  %124 : Float(1, 3, 169, 1) = onnx::Sigmoid(%123), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:182:0
  %125 : Float(1, 3, 169, 80) = onnx::Slice[axes=[3], ends=[9223372036854775807], starts=[5]](%110), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:183:0
  %126 : Float(1, 3, 169, 80) = onnx::Sigmoid(%125), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:183:0
  %127 : Float(1, 3, 169, 85) = onnx::Concat[axis=-1](%116, %122, %124, %126), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:208:0
  %128 : Tensor = onnx::Constant[value=  1  -1  85 [ Variable[CPULongType]{3} ]](), scope: Darknet/YOLOLayer
  %129 : Float(1, 507, 85) = onnx::Reshape(%127, %128), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:209:0
  %130 : Tensor = onnx::Constant[value=  1   3  13  13  85 [ Variable[CPULongType]{5} ]](), scope: Darknet/YOLOLayer
  %131 : Float(1, 3, 13, 13, 85) = onnx::Reshape(%110, %130), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:209:0
  %132 : Float(1, 128, 13, 13) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%101, %module_list.18.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %133 : Float(1, 128, 13, 13) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%132, %module_list.18.BatchNorm2d.weight, %module_list.18.BatchNorm2d.bias, %module_list.18.BatchNorm2d.running_mean, %module_list.18.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %134 : Float(1, 128, 13, 13) = onnx::LeakyRelu[alpha=0.1](%133), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %135 : Long() = onnx::Constant[value={2}](), scope: Darknet/Upsample
  %136 : Tensor = onnx::Shape(%134), scope: Darknet/Upsample
  %137 : Long() = onnx::Gather[axis=0](%136, %135), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %138 : Float() = onnx::Cast[to=1](%137), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %139 : Float() = onnx::Constant[value={2}]()
  %140 : Float() = onnx::Mul(%138, %139)
  %141 : Float() = onnx::Cast[to=1](%140), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %142 : Float() = onnx::Floor(%141), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %143 : Long() = onnx::Constant[value={3}](), scope: Darknet/Upsample
  %144 : Tensor = onnx::Shape(%134), scope: Darknet/Upsample
  %145 : Long() = onnx::Gather[axis=0](%144, %143), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %146 : Float() = onnx::Cast[to=1](%145), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %147 : Float() = onnx::Constant[value={2}]()
  %148 : Float() = onnx::Mul(%146, %147)
  %149 : Float() = onnx::Cast[to=1](%148), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %150 : Float() = onnx::Floor(%149), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2481:0
  %151 : Tensor = onnx::Unsqueeze[axes=[0]](%142)
  %152 : Tensor = onnx::Unsqueeze[axes=[0]](%150)
  %153 : Tensor = onnx::Concat[axis=0](%151, %152)
  %154 : Tensor = onnx::Constant[value= 1  1 [ Variable[CPUFloatType]{2} ]](), scope: Darknet/Upsample
  %155 : Tensor = onnx::Cast[to=1](%153), scope: Darknet/Upsample
  %156 : Tensor = onnx::Shape(%134), scope: Darknet/Upsample
  %157 : Tensor = onnx::Slice[axes=[0], ends=[4], starts=[2]](%156), scope: Darknet/Upsample
  %158 : Tensor = onnx::Cast[to=1](%157), scope: Darknet/Upsample
  %159 : Tensor = onnx::Div(%155, %158), scope: Darknet/Upsample
  %160 : Tensor = onnx::Concat[axis=0](%154, %159), scope: Darknet/Upsample
  %161 : Float(1, 128, 26, 26) = onnx::Upsample[mode="nearest"](%134, %160), scope: Darknet/Upsample # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2500:0
  %162 : Float(1, 384, 26, 26) = onnx::Concat[axis=1](%161, %89), scope: Darknet # /home/onnx_tflite_yolov3/models.py:241:0
  %163 : Float(1, 256, 26, 26) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%162, %module_list.21.Conv2d.weight), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %164 : Float(1, 256, 26, 26) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%163, %module_list.21.BatchNorm2d.weight, %module_list.21.BatchNorm2d.bias, %module_list.21.BatchNorm2d.running_mean, %module_list.21.BatchNorm2d.running_var), scope: Darknet/Sequential/BatchNorm2d[BatchNorm2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1670:0
  %165 : Float(1, 256, 26, 26) = onnx::LeakyRelu[alpha=0.1](%164), scope: Darknet/Sequential/LeakyReLU[activation] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1061:0
  %166 : Float(1, 255, 26, 26) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%165, %module_list.22.Conv2d.weight, %module_list.22.Conv2d.bias), scope: Darknet/Sequential/Conv2d[Conv2d] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:342:0
  %167 : Tensor = onnx::Constant[value=   1    3   85  676 [ Variable[CPULongType]{4} ]](), scope: Darknet/YOLOLayer
  %168 : Float(1, 3, 85, 676) = onnx::Reshape(%166, %167), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:170:0
  %169 : Float(1, 3, 676, 85) = onnx::Transpose[perm=[0, 1, 3, 2]](%168), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:170:0
  %170 : Tensor = onnx::Constant[value=   1    3  676   85 [ Variable[CPULongType]{4} ]](), scope: Darknet/YOLOLayer
  %171 : Float(1, 3, 676, 85) = onnx::Reshape(%169, %170), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:176:0
  %172 : Float(1, 3, 676, 2) = onnx::Slice[axes=[3], ends=[2], starts=[0]](%171), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:177:0
  %173 : Float(1, 3, 676, 2) = onnx::Sigmoid(%172), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:177:0
  %174 : Float(1, 1, 676, 2) = onnx::Constant[value=<Tensor>]()
  %175 : Float(1, 3, 676, 2) = onnx::Add(%173, %174), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:177:0
  %176 : Float() = onnx::Constant[value={16}]()
  %177 : Float(1, 3, 676, 2) = onnx::Mul(%175, %176)
  %178 : Float(1, 3, 676, 2) = onnx::Slice[axes=[3], ends=[4], starts=[2]](%171), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:178:0
  %179 : Float(1, 3, 676, 2) = onnx::Exp(%178), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:178:0
  %180 : Float(1, 3, 1, 2) = onnx::Constant[value=(1,1,.,.) =    0.6250  0.8750  (1,2,.,.) =    1.4375  1.6875  (1,3,.,.) =    2.3125  3.6250 [ Variable[CPUFloatType]{1,3,1,2} ]]()
  %181 : Float(1, 3, 676, 2) = onnx::Mul(%179, %180), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:178:0
  %182 : Float() = onnx::Constant[value={16}]()
  %183 : Float(1, 3, 676, 2) = onnx::Mul(%181, %182)
  %184 : Float(1, 3, 676, 1) = onnx::Slice[axes=[3], ends=[5], starts=[4]](%171), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:182:0
  %185 : Float(1, 3, 676, 1) = onnx::Sigmoid(%184), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:182:0
  %186 : Float(1, 3, 676, 80) = onnx::Slice[axes=[3], ends=[9223372036854775807], starts=[5]](%171), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:183:0
  %187 : Float(1, 3, 676, 80) = onnx::Sigmoid(%186), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:183:0
  %188 : Float(1, 3, 676, 85) = onnx::Concat[axis=-1](%177, %183, %185, %187), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:208:0
  %189 : Tensor = onnx::Constant[value=  1  -1  85 [ Variable[CPULongType]{3} ]](), scope: Darknet/YOLOLayer
  %190 : Float(1, 2028, 85) = onnx::Reshape(%188, %189), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:209:0
  %191 : Tensor = onnx::Constant[value=  1   3  26  26  85 [ Variable[CPULongType]{5} ]](), scope: Darknet/YOLOLayer
  %192 : Float(1, 3, 26, 26, 85) = onnx::Reshape(%171, %191), scope: Darknet/YOLOLayer # /home/onnx_tflite_yolov3/models.py:209:0
  %193 : Float(1, 2535, 85) = onnx::Concat[axis=1](%129, %190), scope: Darknet # /home/onnx_tflite_yolov3/models.py:262:0
  return (%193, %131, %192)

Segmentation fault (core dumped)

To Reproduce

Steps to reproduce the behavior:

  1. Start zldrobit/onnx:10.0-cudnn7-devel container
  2. Download https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-tiny.cfg and https://pjreddie.com/media/files/yolov3-tiny.weights
  3. Run python3 detect.py --cfg yolov3-tiny.cfg --weights yolov3-tiny.weights

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version 18.04

cuDNN conflict between PyTorch and TensorFlow?

After converting the .pt model to an ONNX model, the tests pass. After converting the ONNX model to a pb model, inference with tf_infer.py runs without errors, but tf_detect.py fails. The error is as follows: `File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node convolution}}]]
[[815/_27]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node convolution}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:/ccpd_dataset/onnx_tflite_yolov3-master/tf_detect.py", line 213, in
detect()
File "D:/ccpd_dataset/onnx_tflite_yolov3-master/tf_detect.py", line 117, in detect
pred = sess.run("815:0", feed_dict={'input.1:0': img})
File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "D:\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node convolution (defined at \Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
[[815/_27]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node convolution (defined at \Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'convolution':
File "/ccpd_dataset/onnx_tflite_yolov3-master/tf_detect.py", line 213, in <module>
detect()
File "/ccpd_dataset/onnx_tflite_yolov3-master/tf_detect.py", line 43, in detect
name="")
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\importer.py", line 517, in _import_graph_def_internal
_ProcessNewOps(graph)
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\importer.py", line 243, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3561, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3561, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3451, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "\Anaconda3\envs\py365\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
I set up the environment following requirements.txt. Have you run into this problem before? Any advice would be appreciated, thanks!
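The "Failed to get convolution algorithm" error above is typically a cuDNN/GPU-memory initialization failure rather than a graph bug; it often happens when another process (e.g. PyTorch) already holds GPU memory. A commonly suggested workaround, sketched here as an assumption rather than a confirmed fix for this repo, is to enable memory growth on the TF 1.x session that tf_detect.py creates:

```python
import tensorflow as tf

# Hypothetical session setup: let TensorFlow allocate GPU memory on demand
# instead of reserving it all up front, which can otherwise make cuDNN
# fail to initialize.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```

Running with CUDA_VISIBLE_DEVICES="" is another quick way to check whether the failure is GPU-related.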

Use of Reshape feature

Hi, first of all KUDOS to your awesome work.
I can't understand the purpose of the Reshape option feature. What does it do?

Doesn't work with anything remotely recent.

Doesn't work.

Older TensorFlow versions won't work with the newer CUDA 10.1, which I need for my own project.
Newer TensorFlow lacks tf.contrib.slim and ridiculously expects everyone to just rewrite everything, despite the fact that I'm following a guide.
Now this requires tensorflow-addons, and that complains about tensorflow being too new.

To think I almost made it this time... What a sad state of affairs. I'm terribly glad I live in C++ land.

Traceback (most recent call last):
File "onnx2tf.py", line 13, in <module>
file.write(output.graph.as_graph_def().SerializeToString())
AttributeError: 'NoneType' object has no attribute 'as_graph_def'

Error when running in the Docker image environment

Hello, after I pulled your image and ran it locally, I got the following error:
WARNING:tensorflow:From prep.py:19: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.

Traceback (most recent call last):
File "prep.py", line 21, in <module>
graph_def.ParseFromString(data2read)
google.protobuf.message.DecodeError: Error parsing message
How should I solve this?

Facing problem while converting Darknet weights to ONNX model for YOLOv3

!python3 detect.py --cfg cfg/yolov3_counter_detection.cfg --weights weights/yolov3_counter_detection.weights

I ran this code on Google Colab.

The error which I got is:

Namespace(cfg='cfg/yolov3_counter_detection.cfg', conf_thres=0.3, data='data/coco.data', device='', fourcc='mp4v', half=False, nms_thres=0.5, output='output', source='data/samples', view_img=False, weights='weights/yolov3_counter_detection.weights')
Using CPU

Traceback (most recent call last):
File "detect.py", line 174, in <module>
detect()
File "detect.py", line 48, in detect
torch.onnx.export(model, img, 'weights/export.onnx', verbose=True, opset_version=9)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 230, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 91, in export
use_external_data_format=use_external_data_format)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 639, in _export
dynamic_axes=dynamic_axes)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 411, in _model_to_graph
use_new_jit_passes)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 379, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 342, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py", line 1148, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py", line 130, in forward
self._force_outplace,
File "/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py", line 116, in wrapper
outs.append(self.inner(*trace_inputs))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "/content/gdrive/My Drive/Internship_SecureMeters/TFlite/onnx_tflite_yolov3/models.py", line 249, in forward
x = module(x, img_size)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "/content/gdrive/My Drive/Internship_SecureMeters/TFlite/onnx_tflite_yolov3/models.py", line 203, in forward
p_cls = torch.cat(p_cls0, io[..., 6:], -1)
TypeError: cat() received an invalid combination of arguments - got (int, Tensor, int), but expected one of:

  • (tuple of Tensors tensors, name dim, *, Tensor out)
  • (tuple of Tensors tensors, int dim, *, Tensor out)
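The failing call at models.py:203 passes the tensors to torch.cat positionally, but torch.cat expects a tuple (or list) of tensors as its first argument. A minimal sketch of the fix, using stand-in tensors in place of the repo's p_cls0 and io:

```python
import torch

# Stand-in tensors with shapes like the YOLO layer outputs (hypothetical sizes)
p_cls0 = torch.zeros(1, 3, 13, 13, 1)
io = torch.zeros(1, 3, 13, 13, 85)

# Wrong: torch.cat(p_cls0, io[..., 6:], -1)
#   -> TypeError: cat() received an invalid combination of arguments
# Right: wrap the tensors in a tuple and pass the dimension explicitly.
p_cls = torch.cat((p_cls0, io[..., 6:]), dim=-1)
print(p_cls.shape)  # torch.Size([1, 3, 13, 13, 80])
```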

Error while trying to convert custom built yolov3 model to tflite model

D:\TFlite_model_conversion\onnx_tflite_yolov3-master>python prep.py
2020-07-09 10:58:21.547627: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-07-09 10:58:21.557802: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From prep.py:8: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

WARNING:tensorflow:From prep.py:9: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

WARNING:tensorflow:From prep.py:11: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-07-09 10:58:25.651074: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2020-07-09 10:58:25.664006: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
2020-07-09 10:58:25.694386: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-3GEOLB6
2020-07-09 10:58:25.707493: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-3GEOLB6
2020-07-09 10:58:25.722747: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From prep.py:13: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.

WARNING:tensorflow:From prep.py:42: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Traceback (most recent call last):
File "C:\Users\LKB-L-097\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1607, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Depth of input (418) is not a multiple of input depth of filter (3) for 'convolution_new' (op: 'Conv2D') with input shapes: [1,418,3,418], [3,3,3,32].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "prep.py", line 45, in <module>
op = sess.graph.create_op(op_type=n_org.type, inputs=op_inputs, name=n_org.name+'_new', attrs=atts)
File "C:\Users\LKB-L-097\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\LKB-L-097\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "C:\Users\LKB-L-097\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "C:\Users\LKB-L-097\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1770, in __init__
control_input_ops)
File "C:\Users\LKB-L-097\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1610, in _create_c_op
raise ValueError(str(e))
ValueError: Depth of input (418) is not a multiple of input depth of filter (3) for 'convolution_new' (op: 'Conv2D') with input shapes: [1,418,3,418], [3,3,3,32].

The model conversion to .pb was successful, but this next step shows the error above. I would like to know if anyone else has encountered this issue and how to fix it.
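The input shape [1,418,3,418] in the error is suspicious: the channel axis appears to be in the wrong position, and 418 is not a multiple of the network stride 32 (the default input size is 416). As a hedged guess at the cause, here is a numpy sketch of building an NCHW input of a stride-compatible size; the 416x416 size and layout are assumptions, not confirmed from this repo:

```python
import numpy as np

# Hypothetical preprocessing: YOLOv3 expects a square input whose side is a
# multiple of 32 (e.g. 416), and the converted graph here expects NCHW layout.
img_hwc = np.zeros((416, 416, 3), dtype=np.float32)   # H x W x C, already resized
img_nchw = np.transpose(img_hwc, (2, 0, 1))[None]     # add batch dim -> NCHW
print(img_nchw.shape)  # (1, 3, 416, 416)
```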

Segmentation fault (core dumped)

🐛 Bug

I ran the Docker image that the author provided.
Then I ran python3 detect.py --cfg cfg/yolov3.cfg --weights weights/yolov3.weights
and got the error shown below.
------------error message-----------------
Segmentation fault (core dumped)
(screenshot of the error omitted)


I am not sure whether the Docker image itself has a problem.
Does anyone know what the exact cause is?
Thanks!

An Issue when loading OpenGL backend TFLite GPU delegate in Python

🐛 Bug

I want to use the TFLite GPU runtime on an aarch64 platform,
so I built an OpenGL backend TFLite GPU delegate on an Ubuntu 18.04 x86_64 PC.
I then tried to use the built delegate in Python on RK3399 Ubuntu 18.04 LTS aarch64,
but I ran into the following issue when loading the delegate.

(screenshot of the first error omitted)

So I set the environment variable LD_PRELOAD as follows:
export LD_PRELOAD="/usr/lib/aarch64-linux-gnu/libEGL.so /usr/lib/aarch64-linux-gnu/libGLESv2.so"
The first issue disappeared, but then a second issue appeared:

(screenshot of the second error omitted)

Environment


Desktop (please complete the following information):

  • OS: Ubuntu 18.04 LTS x86_64
  • OpenGL version string: 3.3 (Compatibility Profile) Mesa 19.2.8
  • native gcc(g++): 7.5.0
  • cross-compiler aarch64-linux-gnu-gcc (aarch64-linux-gnu-g++): 8.3.0, downloaded automatically by Bazel when building
  • bazel: 3.7.2
  • python: virtual env python 3.7

Smartphone (please complete the following information):

  • Device: RK3399
  • OS: Ubuntu 18.04 LTS aarch64
  • CPU: armv8
  • GPU: Mali T860
  • OpenGL version string: 3.1 Mesa 19.2.8
  • python: 3.7 (virtual env)

To Reproduce

Steps to reproduce the behavior:
I followed these steps on my host PC:

sudo apt update
sudo apt-get install software-properties-common
sudo apt update
sudo apt install git curl
sudo apt install python3.7 python3.7-dev python3.7-venv python3.7-distutils
sudo apt install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev

cd ~
python3.7 -m venv py37
source ~/py37/bin/activate
pip install cython
pip install wheel
pip install numpy

git clone -b r2.4 https://github.com/tensorflow/tensorflow.git tensorflow_r2.4
cd tensorflow_r2.4
./configure

(screenshot of the ./configure output omitted)

bazel build -s -c opt --config=elinux_aarch64 --copt="-DMESA_EGL_NO_X11_HEADERS" --copt="-DEGL_NO_X11" tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_gl.so

(screenshot of the bazel build output omitted)

Can you provide some details about these lines in yolov5_tflite

Hello sir

I am working on an object detection project and came across your code, but I don't understand the lines quoted below.

grid = [torch.zeros(1)] * nl  # one placeholder grid per detection layer
a = torch.tensor(anchors).float().view(nl, -1, 2).to(device)  # (nl, na, 2) anchor w/h
anchor_grid = a.clone().view(nl, 1, -1, 1, 2)  # shape (nl, 1, na, 1, 2), broadcastable against y
z = []  # inference output
for i in range(nl):
    # x[i] is flattened to (bs, na, ny*nx, no); recover ny and nx from the
    # flattened spatial size using the input aspect ratio r = H / W
    _, _, ny_nx, _ = x[i].shape
    r = imgsz[0] / imgsz[1]
    nx = int(np.sqrt(ny_nx / r))
    ny = int(r * nx)
    grid[i] = _make_grid(nx, ny).to(x[i].device)  # per-cell (x, y) offsets
    stride = imgsz[0] // ny  # downsampling factor of this detection layer
    y = x[i].sigmoid()
    # YOLOv5 box decoding: centers are sigmoid outputs shifted by the cell
    # offset and scaled to pixels; w/h are scaled by this layer's anchors
    y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + grid[i].to(x[i].device)) * stride  # xy
    y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * anchor_grid[i]  # wh
    z.append(y.view(-1, no))  # flatten to (bs*na*ny*nx, no) predictions

I could not open an issue in the yolov5 repo that you forked from the original, so I opened this one here.

The detailed commands for building TFLite GPU delegate so files

🚀 Feature

I want the correct steps & commands for building the OpenCL backend TFLite GPU delegate .so file for the aarch64 platform, either on a host PC or on the aarch64 platform itself.

Motivation

I am going to use the OpenCL backend TFLite runtime on RK3399 Ubuntu 18.04 LTS aarch64,
so I tried to build the OpenCL backend TFLite GPU delegate .so file with Bazel.
I found that you built both the OpenCL and OpenGL backend GPU delegates for x86_64.
Could you provide the correct steps & commands to build the OpenCL and OpenGL backend TFLite GPU delegate .so files?
Thanks!
