
Home Page: https://qiita.com/PINTO

License: MIT License


tflite2tensorflow's Introduction

tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Supports building the environment with Docker, with direct access to the host PC's GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported. Inverse quantization of INT8-quantized models is also supported.

Special custom TensorFlow binaries and special custom TensorFlow Lite binaries are used.


1. Supported Layers


No. TFLite Layer TF Layer Remarks
1 CONV_2D tf.nn.conv2d
2 DEPTHWISE_CONV_2D tf.nn.depthwise_conv2d
3 MAX_POOL_2D tf.nn.max_pool
4 PAD tf.pad
5 MIRROR_PAD tf.raw_ops.MirrorPad
6 RELU tf.nn.relu
7 PRELU tf.keras.layers.PReLU
8 RELU6 tf.nn.relu6
9 RESHAPE tf.reshape
10 ADD tf.add
11 SUB tf.math.subtract
12 CONCATENATION tf.concat
13 LOGISTIC tf.math.sigmoid
14 TRANSPOSE_CONV tf.nn.conv2d_transpose
15 MUL tf.multiply
16 HARD_SWISH x*tf.nn.relu6(x+3)*0.16666667 Or x*tf.nn.relu6(x+3)*0.16666666
17 AVERAGE_POOL_2D tf.keras.layers.AveragePooling2D
18 FULLY_CONNECTED tf.keras.layers.Dense
19 RESIZE_BILINEAR tf.image.resize Or tf.image.resize_bilinear The behavior differs depending on the optimization options of openvino and edgetpu.
20 RESIZE_NEAREST_NEIGHBOR tf.image.resize Or tf.image.resize_nearest_neighbor The behavior differs depending on the optimization options of openvino and edgetpu.
21 MEAN tf.math.reduce_mean
22 SQUARED_DIFFERENCE tf.math.squared_difference
23 RSQRT tf.math.rsqrt
24 DEQUANTIZE (const)
25 FLOOR tf.math.floor
26 TANH tf.math.tanh
27 DIV tf.math.divide
28 FLOOR_DIV tf.math.floordiv
29 SUM tf.math.reduce_sum
30 POW tf.math.pow
31 SPLIT tf.split
32 SOFTMAX tf.nn.softmax
33 STRIDED_SLICE tf.strided_slice
34 TRANSPOSE tf.transpose
35 SPACE_TO_DEPTH tf.nn.space_to_depth
36 DEPTH_TO_SPACE tf.nn.depth_to_space
37 REDUCE_MAX tf.math.reduce_max
38 Convolution2DTransposeBias tf.nn.conv2d_transpose, tf.math.add CUSTOM, MediaPipe
39 LEAKY_RELU tf.keras.layers.LeakyReLU
40 MAXIMUM tf.math.maximum
41 MINIMUM tf.math.minimum
42 MaxPoolingWithArgmax2D tf.raw_ops.MaxPoolWithArgmax CUSTOM, MediaPipe
43 MaxUnpooling2D tf.cast, tf.shape, tf.math.floordiv, tf.math.floormod, tf.ones_like, tf.shape, tf.concat, tf.reshape, tf.transpose, tf.scatter_nd CUSTOM, MediaPipe
44 GATHER tf.gather
45 CAST tf.cast
46 SLICE tf.slice
47 PACK tf.stack
48 UNPACK tf.unstack
49 ARG_MAX tf.math.argmax Or tf.math.reduce_max, tf.subtract, tf.math.minimum, tf.multiply The behavior differs depending on the optimization options of edgetpu.
50 EXP tf.exp
51 TOPK_V2 tf.math.top_k
52 LOG_SOFTMAX tf.nn.log_softmax
53 L2_NORMALIZATION tf.math.l2_normalize
54 LESS tf.math.less
55 LESS_EQUAL tf.math.less_equal
56 GREATER tf.math.greater
57 GREATER_EQUAL tf.math.greater_equal
58 NEG tf.math.negative
59 WHERE tf.where
60 SELECT tf.where
61 SELECT_V2 tf.where
62 PADV2 tf.raw_ops.PadV2
63 SIN tf.math.sin
64 TILE tf.tile
65 EQUAL tf.math.equal
66 NOT_EQUAL tf.math.not_equal
67 LOG tf.math.log
68 SQRT tf.math.sqrt
69 ARG_MIN tf.math.argmin or tf.math.negative,tf.math.argmax
70 REDUCE_PROD tf.math.reduce_prod
71 LOGICAL_OR tf.math.logical_or
72 LOGICAL_AND tf.math.logical_and
73 LOGICAL_NOT tf.math.logical_not
74 REDUCE_MIN tf.math.reduce_min or tf.math.negative,tf.math.reduce_max
75 REDUCE_ANY tf.math.reduce_any
76 SQUARE tf.math.square
77 ZEROS_LIKE tf.zeros_like
78 FILL tf.fill
79 FLOOR_MOD tf.math.floormod
80 RANGE tf.range
81 ABS tf.math.abs
82 UNIQUE tf.unique
83 CEIL tf.math.ceil
84 REVERSE_V2 tf.reverse
85 ADD_N tf.math.add_n
86 GATHER_ND tf.gather_nd
87 COS tf.math.cos
88 RANK tf.math.rank
89 ELU tf.nn.elu
90 WHILE tf.while_loop
91 REVERSE_SEQUENCE tf.reverse_sequence
92 MATRIX_DIAG tf.linalg.diag
93 ROUND tf.math.round
94 NON_MAX_SUPPRESSION_V4 tf.raw_ops.NonMaxSuppressionV4
95 NON_MAX_SUPPRESSION_V5 tf.raw_ops.NonMaxSuppressionV5, tf.raw_ops.NonMaxSuppressionV4, tf.raw_ops.NonMaxSuppressionV3
96 SCATTER_ND tf.scatter_nd
97 SEGMENT_SUM tf.math.segment_sum
98 CUMSUM tf.math.cumsum
99 BROADCAST_TO tf.broadcast_to
100 RFFT2D tf.signal.rfft2d
101 L2_POOL_2D tf.square, tf.keras.layers.AveragePooling2D, tf.sqrt
102 LOCAL_RESPONSE_NORMALIZATION tf.nn.local_response_normalization
103 RELU_N1_TO_1 tf.minimum, tf.maximum
104 SPLIT_V tf.raw_ops.SplitV
105 MATRIX_SET_DIAG tf.linalg.set_diag
106 SHAPE tf.shape
107 EXPAND_DIMS tf.expand_dims
108 SQUEEZE tf.squeeze
109 FlexRFFT tf.signal.rfft Flex OP
110 FlexImag tf.math.imag Flex OP
111 FlexReal tf.math.real Flex OP
112 FlexRFFT2D tf.signal.rfft2d Flex OP
113 FlexComplexAbs tf.raw_ops.ComplexAbs Flex OP
114 IMAG tf.math.imag
115 REAL tf.math.real
116 COMPLEX_ABS tf.raw_ops.ComplexAbs
117 TFLite_Detection_PostProcess tf.divide, tf.strided_slice, tf.math.argmax, tf.math.reduce_max, tf.math.multiply, tf.math.add, tf.math.exp, tf.math.subtract, tf.expand_dims, tf.gather, tf.reshape, tf.identity, tf.raw_ops.NonMaxSuppressionV5 CUSTOM
118 ONE_HOT tf.one_hot
119 FlexMultinomial tf.random.categorical Flex OP
120 FlexAll tf.math.reduce_all Flex OP
121 FlexErf tf.math.erf Flex OP
122 FlexRoll tf.roll Flex OP
123 CONV_3D tf.keras.layers.Conv3D
124 CONV_3D_TRANSPOSE tf.nn.conv3d_transpose
125 Densify (const)
126 SPACE_TO_BATCH_ND tf.space_to_batch_nd
127 BATCH_TO_SPACE_ND tf.compat.v1.batch_to_space_nd
128 TransformLandmarks tf.reshape, tf.linalg.matmul, tf.math.add CUSTOM, MediaPipe
129 TransformTensorBilinear tf.reshape, tf.linalg.matmul, tf.math.add, tf.tile, tf.math.floor, tf.math.subtract, tf.math.multiply, tf.math.reduce_prod, tf.cast, tf.math.maximum, tf.math.maximum, tf.concat, tf.gather_nd CUSTOM, MediaPipe
130 Landmarks2TransformMatrix tf.constant, tf.math.subtract, tf.math.norm, tf.math.divide, tf.linalg.matmul, tf.concat, tf.transpose, tf.gather, tf.math.reduce_min, tf.math.reduce_max, tf.math.multiply, tf.zeros, tf.math.add, tf.tile CUSTOM, MediaPipe
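
Most TFLite ops map one-to-one onto a TF op, while ops without a direct counterpart are decomposed into primitives, as shown in the TF Layer column. As an illustration, a minimal sketch of the HARD_SWISH decomposition from row 16, written as plain TF ops:

import tensorflow as tf

# A minimal sketch of the HARD_SWISH decomposition listed in row 16 of the
# table above: hard-swish built from standard TF ops only.
def hard_swish(x):
    return x * tf.nn.relu6(x + 3) * 0.16666667  # i.e. x * relu6(x + 3) / 6

x = tf.constant([-4.0, -1.0, 0.0, 1.0, 4.0])
print(hard_swish(x).numpy())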

2. Environment

  • Python 3.8+
  • TensorFlow v2.9.0+
  • TensorFlow Lite v2.9.0 with MediaPipe Custom OP, FlexDelegate and XNNPACK enabled
  • flatc v2.0.8
  • PyTorch v1.12.0 (with grid_sample)
  • TorchVision
  • TorchAudio
  • OpenVINO 2021.4.582+
  • TensorRT 8.4+
  • trtexec
  • pycuda 2021.1
  • tensorflowjs
  • coremltools
  • paddle2onnx
  • onnx
  • onnxruntime-gpu (CUDA, TensorRT, OpenVINO)
  • onnxruntime-extensions
  • onnx_graphsurgeon
  • onnx-simplifier
  • onnxconverter-common
  • onnxmltools
  • onnx-tensorrt
  • tf2onnx
  • torch2trt
  • onnx-tf
  • tensorflow-datasets
  • tf_slim
  • edgetpu_compiler
  • tflite2tensorflow
  • openvino2tensorflow
  • simple-onnx-processing-tools
  • gdown
  • pandas
  • matplotlib
  • paddlepaddle
  • paddle2onnx
  • pycocotools
  • scipy
  • Intel-Media-SDK
  • Intel iHD GPU (iGPU) support
  • OpenCL
  • gluoncv
  • LLVM
  • NNPACK
  • WSL2 OpenCL

3. Setup

3-1. [Environment construction pattern 1] Execution by Docker (strongly recommended)

You do not need to install any packages other than Docker. It consumes about 26.7GB of host storage.

$ docker pull ghcr.io/pinto0309/tflite2tensorflow:latest
or
$ docker build -t ghcr.io/pinto0309/tflite2tensorflow:latest .

# If you don't need to access the GUI of the HostPC and the USB camera.
$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

# If conversion to TF-TRT is not required. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/tflite2tensorflow:latest

# If you need to convert to TF-TRT. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run --gpus all -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/tflite2tensorflow:latest

# If you are using iGPU (OpenCL). And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e LIBVA_DRIVER_NAME=iHD \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/tflite2tensorflow:latest

3-2. [Environment construction pattern 2] Execution by Host machine

To install using the Python Package Index (PyPI), use the following command.

$ pip3 install --user --upgrade tflite2tensorflow

Or, to install the latest source code from the main branch, use the following command.

$ pip3 install --user --upgrade git+https://github.com/PINTO0309/tflite2tensorflow

This installs a customized TensorFlow Lite runtime with support for MediaPipe custom OPs, FlexDelegate, and XNNPACK. If tflite_runtime does not install properly, follow the instructions in the article "Add a custom OP to the TFLite runtime to build the whl installer (for Python)" to build it for your own environment. The custom OPs are MaxPoolingWithArgmax2D, MaxUnpooling2D, Convolution2DTransposeBias, TransformLandmarks, TransformTensorBilinear, and Landmarks2TransformMatrix.

$ sudo pip3 uninstall -y \
    tensorboard-plugin-wit \
    tb-nightly \
    tensorboard \
    tf-estimator-nightly \
    tensorflow-gpu \
    tensorflow \
    tf-nightly \
    tensorflow_estimator \
    tflite_runtime

$ APPVER=v1.20.7
$ TENSORFLOWVER=2.8.0

### Customized version of TensorFlow Lite installation
$ wget https://github.com/PINTO0309/tflite2tensorflow/releases/download/${APPVER}/tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && sudo chmod +x tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && pip3 install --user --force-reinstall tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && rm tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl

### Install the Customized Full TensorFlow package
### (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
$ wget https://github.com/PINTO0309/tflite2tensorflow/releases/download/${APPVER}/tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && sudo chmod +x tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && pip3 install --user --force-reinstall tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && rm tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl

 or

### Install the Non-customized TensorFlow package
$ pip3 install --user tf-nightly

### Download schema.fbs
$ wget https://github.com/PINTO0309/tflite2tensorflow/raw/main/schema/schema.fbs
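
To confirm that the customized packages were picked up, a quick check like the following can be run (a minimal sketch; the reported version should match TENSORFLOWVER above):

# A minimal sketch: verify that the customized packages import cleanly.
# With the customized build, models containing the MediaPipe custom OPs
# listed above load without an "unresolved custom op" error.
import tensorflow as tf
from tflite_runtime.interpreter import Interpreter  # customized runtime

print(tf.__version__)  # should match TENSORFLOWVER above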

Build flatc

$ git clone -b v2.0.8 https://github.com/google/flatbuffers.git
$ cd flatbuffers && mkdir build && cd build
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release ..
$ make -j$(nproc)


The Windows version of flatc v2.0.8 can be downloaded from here: https://github.com/google/flatbuffers/releases/download/v2.0.8/Windows.flatc.binary.zip
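
For reference, flatc and schema.fbs are what tflite2tensorflow uses internally to dump the .tflite flatbuffer to JSON before building the TF graph. A sketch of that step, shelling out the way the tool does (the paths below are examples):

import subprocess

# A sketch of the internal flatc step: dump a .tflite flatbuffer to JSON
# using schema.fbs. The paths below are examples.
subprocess.run([
    '../flatc', '-t',                    # -t: emit text (JSON)
    '--strict-json', '--defaults-json',
    '-o', '.',                           # output directory
    '../schema.fbs', '--', 'segm_full_v679.tflite',
], check=True)
# Produces ./segm_full_v679.json, which the converter then parses. If flatc
# cannot be found or fails, the converter later stops with
# "No such file or directory: './<model>.json'".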

4. Usage / Execution sample

4-1. Command line options

usage: tflite2tensorflow
  [-h]
  --model_path MODEL_PATH
  --flatc_path FLATC_PATH
  --schema_path SCHEMA_PATH
  [--model_output_path MODEL_OUTPUT_PATH]
  [--output_pb]
  [--output_no_quant_float32_tflite]
  [--output_dynamic_range_quant_tflite]
  [--output_weight_quant_tflite]
  [--output_float16_quant_tflite]
  [--output_integer_quant_tflite]
  [--output_full_integer_quant_tflite]
  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
  [--calib_ds_type CALIB_DS_TYPE]
  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
  [--tfds_download_flg]
  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
  [--output_tfjs]
  [--output_tftrt_float32]
  [--output_tftrt_float16]
  [--output_coreml]
  [--optimizing_for_coreml]
  [--output_edgetpu]
  [--edgetpu_compiler_timeout EDGETPU_COMPILER_TIMEOUT]
  [--edgetpu_num_segments EDGETPU_NUM_SEGMENTS]
  [--output_onnx]
  [--onnx_opset ONNX_OPSET]
  [--onnx_extra_opset ONNX_EXTRA_OPSET]
  [--disable_onnx_nchw_conversion]
  [--disable_onnx_optimization]
  [--output_openvino_and_myriad]
  [--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES]
  [--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES]
  [--optimizing_for_openvino_and_myriad]
  [--rigorous_optimization_for_myriad]
  [--replace_swish_and_hardswish]
  [--optimizing_for_edgetpu]
  [--replace_prelu_and_minmax]
  [--disable_experimental_new_quantizer]
  [--disable_per_channel]
  [--optimizing_barracuda]
  [--locationids_of_the_terminating_output]

optional arguments:
  -h, --help
          show this help message and exit
  --model_path MODEL_PATH
          input tflite model path (*.tflite)
  --flatc_path FLATC_PATH
          flatc file path (flatc)
  --schema_path SCHEMA_PATH
          schema.fbs path (schema.fbs)
  --model_output_path MODEL_OUTPUT_PATH
          The output folder path of the converted model file
  --output_pb
          .pb output switch
  --output_no_quant_float32_tflite
          float32 tflite output switch
  --output_dynamic_range_quant_tflite
          dynamic range quant tflite output switch
  --output_weight_quant_tflite
          weight quant tflite output switch
  --output_float16_quant_tflite
          float16 quant tflite output switch
  --output_integer_quant_tflite
          integer quant tflite output switch
  --output_full_integer_quant_tflite
          full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
          Input and output types when doing Integer Quantization
          ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
          String formulas for normalization. It is evaluated by
          Python's eval() function. Default: '(data -
          [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
          Types of data sets for calibration. tfds or numpy
          Default: numpy
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
          Dataset name for TensorFlow Datasets for calibration.
          https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
          Split name for TensorFlow Datasets for calibration.
          https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
          Download destination folder path for the calibration
          dataset. Default: $HOME/TFDS
  --tfds_download_flg
          True to automatically download datasets from
          TensorFlow Datasets. True or False
  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
          The path from which to load the .npy file containing
          the numpy binary version of the calibration data.
          Default: sample_npy/calibration_data_img_sample.npy
          [20, 513, 513, 3] -> [Number of images, h, w, c]
  --output_tfjs
          tfjs model output switch
  --output_tftrt_float32
          tftrt float32 model output switch
  --output_tftrt_float16
          tftrt float16 model output switch
  --output_coreml
          coreml model output switch
  --optimizing_for_coreml
          Optimizing graph for coreml
  --output_edgetpu
          edgetpu model output switch
  --edgetpu_compiler_timeout
          edgetpu_compiler timeout for one compilation process in seconds.
          Default: 3600
  --edgetpu_num_segments
          Partition the model into 'num_segments' segments.
          Default: 1 (no partition)
  --output_onnx
          onnx model output switch
  --onnx_opset ONNX_OPSET
          onnx opset version number
  --onnx_extra_opset ONNX_EXTRA_OPSET
          The name of the onnx 'extra_opset' to enable.
          Default: ''
          'com.microsoft:1' or 'ai.onnx.contrib:1' or 'ai.onnx.ml:1'
  --disable_onnx_nchw_conversion
          Disable onnx NCHW conversion
  --disable_onnx_optimization
          Disable onnx optimization
  --output_openvino_and_myriad
          openvino model and myriad inference engine blob output switch
  --vpu_number_of_shaves VPU_NUMBER_OF_SHAVES
          vpu number of shaves. Default: 4
  --vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES
          vpu number of cmx slices. Default: 4
  --optimizing_for_openvino_and_myriad
          Optimizing graph for openvino/myriad
  --rigorous_optimization_for_myriad
          Replace operations that are not supported by myriad with operations
          that are as feasible as possible.
          e.g. 'Abs' -> 'Square' + 'Sqrt'
  --replace_swish_and_hardswish
          Replace swish and hard-swish with each other
  --optimizing_for_edgetpu
          Optimizing for edgetpu
  --replace_prelu_and_minmax
          Replace prelu and minimum/maximum with each other
  --disable_experimental_new_quantizer
          Disable MLIR's new quantization feature during INT8 quantization
          in TensorFlow Lite.
  --disable_per_channel
          Disable per-channel quantization for tflite.
  --optimizing_barracuda
          Generates ONNX by replacing Barracuda unsupported layers
          with standard layers. For example, GatherND.
  --locationids_of_the_terminating_output
          A comma-separated list of LocationIDs to be used as output layers.
          e.g. --locationids_of_the_terminating_output 100,201,560
          Default: ''
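
Note that --string_formulas_for_normalization is evaluated with Python's eval(), with each calibration batch bound to the name data. A minimal sketch of the mechanism, using the documented default formula:

import numpy as np

# A minimal sketch of how the normalization formula string is applied to
# calibration data via eval(). 'data' is one image batch; the formula is
# the tool's documented default.
formula = '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'

data = np.random.randint(0, 256, (1, 513, 513, 3)).astype(np.float32)
normalized = eval(formula)  # the string references the local name 'data'
print(normalized.min(), normalized.max())  # roughly within -1.0 .. 1.0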

4-2. Step 1 : Generating saved_model and FreezeGraph (.pb)

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad \
  --rigorous_optimization_for_myriad

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_edgetpu

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_coreml

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_barracuda

4-3. Step 2 : Generation of quantized tflite, TFJS, TF-TRT, EdgeTPU, CoreML and ONNX

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_no_quant_float32_tflite \
  --output_dynamic_range_quant_tflite \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_integer_quant_tflite \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs \
  --output_coreml \
  --output_tftrt_float32 \
  --output_tftrt_float16 \
  --output_onnx \
  --onnx_opset 11 \
  --output_openvino_and_myriad

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_no_quant_float32_tflite \
  --output_dynamic_range_quant_tflite \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_integer_quant_tflite \
  --output_edgetpu \
  --output_integer_quant_type 'uint8' \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs \
  --output_coreml \
  --output_tftrt_float32 \
  --output_tftrt_float16 \
  --output_onnx \
  --onnx_opset 11
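
After conversion, a quick parity check between the source .tflite and the regenerated Float32 tflite can catch silent breakage. A minimal sketch, assuming a single Float32 input and the customized tflite_runtime from section 3 (the source model may contain MediaPipe custom OPs):

import numpy as np
from tflite_runtime.interpreter import Interpreter

# A minimal sketch: run the source .tflite and the regenerated Float32
# tflite on the same random input and compare outputs. File names follow
# the tool's default output convention.
def run_tflite(path, x=None):
    interp = Interpreter(model_path=path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    if x is None:
        x = np.random.rand(*inp['shape']).astype(np.float32)
    interp.set_tensor(inp['index'], x)
    interp.invoke()
    return x, interp.get_tensor(interp.get_output_details()[0]['index'])

x, y_ref = run_tflite('segm_full_v679.tflite')
_, y_new = run_tflite('saved_model/model_float32.tflite', x)
print(np.abs(y_ref - y_new).max())  # should be close to 0.0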

4-4. Check the contents of the .npy file, which is a binary version of the image file

$ view_npy --npy_file_path calibration_data_img_sample.npy

Press the Q button to display the next image. calibration_data_img_sample.npy contains 20 images extracted from the MS-COCO dataset.
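
The same file can also be inspected directly with numpy (a minimal sketch):

import numpy as np

# A minimal sketch: inspect the calibration .npy file directly instead of
# using the bundled view_npy helper.
imgs = np.load('calibration_data_img_sample.npy')
print(imgs.shape)  # expected: (20, 513, 513, 3) -> [Number of images, h, w, c]
print(imgs.dtype)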


5. Sample image

This is the result of converting MediaPipe's Meet Segmentation model (segm_full_v679.tflite / Float16 / Google Meet) to saved_model and then reconverting it to Float32 tflite. The GPU-optimized Convolution2DTransposeBias layer is replaced with the standard TransposeConv and BiasAdd layers fully automatically. The weights and biases of the Float16 Dequantize layer are automatically inverse-quantized to Float32 precision. The generated Float32 saved_model can then be easily converted to Float16, INT8, EdgeTPU, TFJS, TF-TRT, CoreML, ONNX, OpenVINO, and Myriad Inference Engine blob.
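
A minimal sketch of that replacement, expressing the custom op as the standard TF pair from row 38 of the layer table (shapes below are example values):

import tensorflow as tf

# A minimal sketch of the automatic replacement described above: the
# MediaPipe custom Convolution2DTransposeBias op expressed as standard
# TransposeConv + BiasAdd (tf.nn.conv2d_transpose + tf.math.add).
def conv2d_transpose_bias(x, weights, bias, output_shape, strides):
    y = tf.nn.conv2d_transpose(x, weights, output_shape=output_shape,
                               strides=strides, padding='SAME')
    return tf.math.add(y, bias)

x = tf.random.uniform([1, 8, 8, 16])   # NHWC input
w = tf.random.uniform([2, 2, 8, 16])   # [h, w, out_ch, in_ch]
b = tf.zeros([8])
y = conv2d_transpose_bias(x, w, b, output_shape=[1, 16, 16, 8],
                          strides=[1, 2, 2, 1])
print(y.shape)  # (1, 16, 16, 8)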

Before: segm_full_v679 tflite / After: model_float32 tflite

tflite2tensorflow's People

Contributors

kenjiasaba, pinto0309


tflite2tensorflow's Issues

Add option to optimize tf.math.reduce_prod to Myriad (OAK)

Issue Type

Feature Request

OS

Other

OS architecture

Other

Programming Language

Other

Framework

OpenVINO, Myriad Inference Engine

Download URL for tflite file

None

Convert Script

None

Description

Replace tf.math.reduce_prod with tf.math.multiply
https://www.tensorflow.org/api_docs/python/tf/math/reduce_prod

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/282_face_landmark_with_attention

https://github.com/iwatake2222/play_with_tflite/tree/master/pj_tflite_face_landmark_with_attention
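
A sketch of the requested replacement, assuming the reduced axis has a static size:

import tensorflow as tf

# A sketch of the requested optimization: replace reduce_prod over a
# statically-sized axis with a chain of tf.math.multiply ops, which the
# Myriad (OAK) plugin supports.
def reduce_prod_as_multiply(x, axis):
    parts = tf.unstack(x, axis=axis)
    result = parts[0]
    for p in parts[1:]:
        result = tf.math.multiply(result, p)
    return result

x = tf.random.uniform([2, 3, 4])
print(reduce_prod_as_multiply(x, axis=1).shape)  # (2, 4)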


Relevant Log Output

None

Source code for simple inference testing code

No response

Error converting pose_detection.tflite [ No such file or directory: './pose_detection.json' ]

Issue Type

Support

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_detection/pose_detection.tflite

Description

I want to convert this tflite model file to TensorFlow, TF-TRT and also ONNX. While trying to convert to TensorFlow, I receive the following error: "No such file or directory: './pose_detection.json'". I tried both the Docker and the host OS installation.

Relevant Log Output

sh: ../flatc: No such file or directory
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6429, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5724, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 246, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './pose_detection.json'

Source code for simple inference testing code

No response

ValueError: The name 'serving_default_input_1:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlow

Download URL for tflite file

nms_yolact_edge.zip

Convert Script

python convert_onnx.py

onnxsim yolact_resnet50_62_17262.onnx yolact_resnet50_62_17262.onnx
onnxsim yolact_resnet50_62_17262.onnx yolact_resnet50_62_17262.onnx
onnxsim yolact_resnet50_62_17262.onnx yolact_resnet50_62_17262.onnx
onnxsim yolact_resnet50_62_17262.onnx yolact_resnet50_62_17262.onnx

python nms_yolact_edge.py

tflite2tensorflow \
  --model_path nms_yolact_edge.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad

Description

Hi @PINTO0309,
I was trying to convert yolact-edge with backbone resnet50.

Then I got error while trying the following steps.

The error message shows that 'serving_default_input_1:0:0' is invalid. Perhaps it should be 'serving_default_input_1:0'?

Would you please help check it?
Thank you.
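
A sketch of the normalization the suggestion implies, trimming anything after the first ':<index>' so the name matches TF's '<op_name>:<output_index>' form:

import re

# A sketch: TF tensor names must be '<op_name>:<output_index>'. A name like
# 'serving_default_input_1:0:0' carries a stray second index; trimming
# everything after the first ':<digits>' restores a valid name.
def normalize_tensor_name(name: str) -> str:
    m = re.match(r'^(.*?:\d+)', name)
    return m.group(1) if m else name

print(normalize_tensor_name('serving_default_input_1:0:0'))  # serving_default_input_1:0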

Relevant Log Output

op_types: ['RESHAPE', 'STRIDED_SLICE', 'CONCATENATION', 'NON_MAX_SUPPRESSION_V4', 'GATHER', 'EXPAND_DIMS', 'CAST', 'TRANSPOSE']
num of ops: 29
[{'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [0, 20],
  'opcode_index': 0,
  'outputs': [23]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [3, 21],
  'opcode_index': 0,
  'outputs': [24]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [2, 21],
  'opcode_index': 0,
  'outputs': [25]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [1, 22],
  'opcode_index': 0,
  'outputs': [26]},
 {'builtin_options': {'begin_mask': 1,
                      'ellipsis_mask': 0,
                      'end_mask': 1,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 2},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [23, 11, 15, 10],
  'opcode_index': 1,
  'outputs': [27]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [27, 16],
  'opcode_index': 0,
  'outputs': [28]},
 {'builtin_options': {'begin_mask': 3,
                      'ellipsis_mask': 0,
                      'end_mask': 3,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 0},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [28, 11, 11, 10],
  'opcode_index': 1,
  'outputs': [29]},
 {'builtin_options': {'begin_mask': 1,
                      'ellipsis_mask': 0,
                      'end_mask': 1,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 2},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [23, 15, 14, 10],
  'opcode_index': 1,
  'outputs': [30]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [30, 16],
  'opcode_index': 0,
  'outputs': [31]},
 {'builtin_options': {'begin_mask': 3,
                      'ellipsis_mask': 0,
                      'end_mask': 3,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 0},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [31, 11, 11, 10],
  'opcode_index': 1,
  'outputs': [32]},
 {'builtin_options': {'begin_mask': 1,
                      'ellipsis_mask': 0,
                      'end_mask': 1,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 2},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [23, 14, 13, 10],
  'opcode_index': 1,
  'outputs': [33]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [33, 16],
  'opcode_index': 0,
  'outputs': [34]},
 {'builtin_options': {'begin_mask': 3,
                      'ellipsis_mask': 0,
                      'end_mask': 3,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 0},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [34, 11, 11, 10],
  'opcode_index': 1,
  'outputs': [35]},
 {'builtin_options': {'begin_mask': 1,
                      'ellipsis_mask': 0,
                      'end_mask': 1,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 2},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [23, 13, 12, 10],
  'opcode_index': 1,
  'outputs': [36]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [36, 16],
  'opcode_index': 0,
  'outputs': [37]},
 {'builtin_options': {'begin_mask': 3,
                      'ellipsis_mask': 0,
                      'end_mask': 3,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 0},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [37, 11, 11, 10],
  'opcode_index': 1,
  'outputs': [38]},
 {'builtin_options': {'begin_mask': 14,
                      'ellipsis_mask': 0,
                      'end_mask': 14,
                      'new_axis_mask': 0,
                      'shrink_axis_mask': 1},
  'builtin_options_type': 'StridedSliceOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [4, 17, 18, 19],
  'opcode_index': 1,
  'outputs': [39]},
 {'builtin_options': {'axis': 1, 'fused_activation_function': 'NONE'},
  'builtin_options_type': 'ConcatenationOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [32, 29, 38, 35],
  'opcode_index': 2,
  'outputs': [40]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [40, 24, 7, 6, 5],
  'opcode_index': 3,
  'outputs': [41, 42]},
 {'builtin_options': {'axis': 0, 'batch_dims': 0},
  'builtin_options_type': 'GatherOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [23, 41],
  'opcode_index': 4,
  'outputs': [43]},
 {'builtin_options': {'axis': 0, 'batch_dims': 0},
  'builtin_options_type': 'GatherOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [24, 41],
  'opcode_index': 4,
  'outputs': [44]},
 {'builtin_options': {},
  'builtin_options_type': 'ExpandDimsOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [44, 8],
  'opcode_index': 5,
  'outputs': [45]},
 {'builtin_options': {'axis': 0, 'batch_dims': 0},
  'builtin_options_type': 'GatherOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [25, 41],
  'opcode_index': 4,
  'outputs': [46]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [46],
  'opcode_index': 6,
  'outputs': [47]},
 {'builtin_options': {},
  'builtin_options_type': 'ExpandDimsOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [47, 8],
  'opcode_index': 5,
  'outputs': [48]},
 {'builtin_options': {'axis': 0, 'batch_dims': 0},
  'builtin_options_type': 'GatherOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [26, 41],
  'opcode_index': 4,
  'outputs': [49]},
 {'builtin_options': {'axis': 1, 'fused_activation_function': 'NONE'},
  'builtin_options_type': 'ConcatenationOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [43, 45, 48, 49],
  'opcode_index': 2,
  'outputs': [50]},
 {'builtin_options': {'axis': 2, 'batch_dims': 0},
  'builtin_options_type': 'GatherOptions',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [39, 41],
  'opcode_index': 4,
  'outputs': [51]},
 {'builtin_options_type': 'NONE',
  'custom_options_format': 'FLEXBUFFERS',
  'inputs': [51, 9],
  'opcode_index': 7,
  'outputs': [52]}]
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
inputs:
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'serving_default_input_1:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248,     4], dtype=int32),
 'shape_signature': array([    1, 19248,     4], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 1,
 'name': 'serving_default_input_4:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248,    32], dtype=int32),
 'shape_signature': array([    1, 19248,    32], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.int64'>,
 'index': 2,
 'name': 'serving_default_input_3:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248], dtype=int32),
 'shape_signature': array([    1, 19248], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 3,
 'name': 'serving_default_input_2:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248], dtype=int32),
 'shape_signature': array([    1, 19248], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 4,
 'name': 'serving_default_input_5:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1,   138,   138, 19248], dtype=int32),
 'shape_signature': array([    1,   138,   138, 19248], dtype=int32),
 'sparsity_parameters': {}}
TensorFlow/Keras model building process starts ======================================
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: Placeholder
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'serving_default_input_1:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248,     4], dtype=int32),
 'shape_signature': array([    1, 19248,     4], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 1,
 'name': 'serving_default_input_4:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248,    32], dtype=int32),
 'shape_signature': array([    1, 19248,    32], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.int64'>,
 'index': 2,
 'name': 'serving_default_input_3:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248], dtype=int32),
 'shape_signature': array([    1, 19248], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 3,
 'name': 'serving_default_input_2:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1, 19248], dtype=int32),
 'shape_signature': array([    1, 19248], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 4,
 'name': 'serving_default_input_5:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([    1,   138,   138, 19248], dtype=int32),
 'shape_signature': array([    1,   138,   138, 19248], dtype=int32),
 'sparsity_parameters': {}}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [0, 20],
 'opcode_index': 0,
 'outputs': [23]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [3, 21],
 'opcode_index': 0,
 'outputs': [24]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [2, 21],
 'opcode_index': 0,
 'outputs': [25]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [1, 22],
 'opcode_index': 0,
 'outputs': [26]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 1,
                     'ellipsis_mask': 0,
                     'end_mask': 1,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 2},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [23, 11, 15, 10],
 'opcode_index': 1,
 'outputs': [27]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [27, 16],
 'opcode_index': 0,
 'outputs': [28]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 3,
                     'ellipsis_mask': 0,
                     'end_mask': 3,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 0},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [28, 11, 11, 10],
 'opcode_index': 1,
 'outputs': [29]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 1,
                     'ellipsis_mask': 0,
                     'end_mask': 1,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 2},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [23, 15, 14, 10],
 'opcode_index': 1,
 'outputs': [30]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [30, 16],
 'opcode_index': 0,
 'outputs': [31]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 3,
                     'ellipsis_mask': 0,
                     'end_mask': 3,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 0},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [31, 11, 11, 10],
 'opcode_index': 1,
 'outputs': [32]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 1,
                     'ellipsis_mask': 0,
                     'end_mask': 1,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 2},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [23, 14, 13, 10],
 'opcode_index': 1,
 'outputs': [33]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [33, 16],
 'opcode_index': 0,
 'outputs': [34]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 3,
                     'ellipsis_mask': 0,
                     'end_mask': 3,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 0},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [34, 11, 11, 10],
 'opcode_index': 1,
 'outputs': [35]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 1,
                     'ellipsis_mask': 0,
                     'end_mask': 1,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 2},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [23, 13, 12, 10],
 'opcode_index': 1,
 'outputs': [36]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [36, 16],
 'opcode_index': 0,
 'outputs': [37]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 3,
                     'ellipsis_mask': 0,
                     'end_mask': 3,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 0},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [37, 11, 11, 10],
 'opcode_index': 1,
 'outputs': [38]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: STRIDED_SLICE
{'builtin_options': {'begin_mask': 14,
                     'ellipsis_mask': 0,
                     'end_mask': 14,
                     'new_axis_mask': 0,
                     'shrink_axis_mask': 1},
 'builtin_options_type': 'StridedSliceOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [4, 17, 18, 19],
 'opcode_index': 1,
 'outputs': [39]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: CONCATENATION
{'builtin_options': {'axis': 1, 'fused_activation_function': 'NONE'},
 'builtin_options_type': 'ConcatenationOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [32, 29, 38, 35],
 'opcode_index': 2,
 'outputs': [40]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: NON_MAX_SUPPRESSION_V4
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [40, 24, 7, 6, 5],
 'opcode_index': 3,
 'outputs': [41, 42]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: GATHER
{'builtin_options': {'axis': 0, 'batch_dims': 0},
 'builtin_options_type': 'GatherOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [23, 41],
 'opcode_index': 4,
 'outputs': [43]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: GATHER
{'builtin_options': {'axis': 0, 'batch_dims': 0},
 'builtin_options_type': 'GatherOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [24, 41],
 'opcode_index': 4,
 'outputs': [44]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: EXPAND_DIMS
{'builtin_options': {},
 'builtin_options_type': 'ExpandDimsOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [44, 8],
 'opcode_index': 5,
 'outputs': [45]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: GATHER
{'builtin_options': {'axis': 0, 'batch_dims': 0},
 'builtin_options_type': 'GatherOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [25, 41],
 'opcode_index': 4,
 'outputs': [46]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: CAST
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [46],
 'opcode_index': 6,
 'outputs': [47]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: EXPAND_DIMS
{'builtin_options': {},
 'builtin_options_type': 'ExpandDimsOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [47, 8],
 'opcode_index': 5,
 'outputs': [48]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: GATHER
{'builtin_options': {'axis': 0, 'batch_dims': 0},
 'builtin_options_type': 'GatherOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [26, 41],
 'opcode_index': 4,
 'outputs': [49]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: CONCATENATION
{'builtin_options': {'axis': 1, 'fused_activation_function': 'NONE'},
 'builtin_options_type': 'ConcatenationOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [43, 45, 48, 49],
 'opcode_index': 2,
 'outputs': [50]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: GATHER
{'builtin_options': {'axis': 2, 'batch_dims': 0},
 'builtin_options_type': 'GatherOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [39, 41],
 'opcode_index': 4,
 'outputs': [51]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: TRANSPOSE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [51, 9],
 'opcode_index': 7,
 'outputs': [52]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 50,
 'name': 'PartitionedCall:1',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([100,  38], dtype=int32),
 'shape_signature': array([-1, 38], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 52,
 'name': 'PartitionedCall:0',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([100, 138, 138], dtype=int32),
 'shape_signature': array([ -1, 138, 138], dtype=int32),
 'sparsity_parameters': {}}
TensorFlow/Keras model building process complete!
saved_model / .pb output started ====================================================
ERROR: The name 'serving_default_input_1:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".
Traceback (most recent call last):
  File "/home/yc/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 4011, in _as_graph_element_locked
    op_name, out_n = name.split(":")
ValueError: too many values to unpack (expected 2)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yc/.local/bin/tflite2tensorflow", line 5974, in main
    inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
  File "/home/yc/.local/bin/tflite2tensorflow", line 5974, in <dictcomp>
    inputs= {re.sub(':0*', '', t): graph.get_tensor_by_name(t) for t in input_node_names},
  File "/home/yc/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 4156, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/home/yc/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3974, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/home/yc/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 4014, in _as_graph_element_locked
    raise ValueError("The name %s looks a like a Tensor name, but is "
ValueError: The name 'serving_default_input_1:0:0' looks a like a Tensor name, but is not a valid one. Tensor names must be of the form "<op_name>:<output_index>".

Source code for simple inference testing code

No response

Hand landmark model: AssertionError: Identity is not in graph

1. OS Ubuntu 18.04

2. OS Architecture x86_64

4. Version of TensorFlow v2.4.1

9. Download URL for .tflite https://github.com/google/mediapipe/blob/master/mediapipe/modules/hand_landmark/hand_landmark.tflite

12. Issue Details

Hi PINTO,
My goal is to convert the Mediapipe Hand landmark tflite model (the version of Nov 2020 which takes
224x224x3 frames as input) into a DepthAI blob file, so my first step is to use your tflite2tensorflow tool to convert the tflite into a pb file.

Here is the command line I use: tflite2tensorflow --model_path hand_landmark.tflite --flatc_path ./flatc --schema_path schema.fbs --output_pb True

Here is the beginning and end of the output:
op_types: ['ADD', 'ADD', 'ADD', 'ADD', 'ADD', 'ADD']
INFO: Replace the model generated by the old FlatBuffer with the new operator code.
op_new_types: ['CONV_2D', 'DEPTHWISE_CONV_2D', 'ADD', 'RESHAPE', 'FULLY_CONNECTED', 'LOGISTIC']
num of ops: 64
[{'builtin_options': {'dilation_h_factor': 1,
'dilation_w_factor': 1,
'fused_activation_function': 'RELU6',
'padding': 'SAME',
'stride_h': 2,
'stride_w': 2},
'builtin_options_type': 'Conv2DOptions',
'custom_options_format': 'FLEXBUFFERS',
'inputs': [0, 41, 5],
'opcode_index': 0,
'outputs': [106]},
....
TensorFlow/Keras model building process complete!
saved_model / .pb output started ====================================================
WARNING:tensorflow:From /home/gx/env_tflite2tf/bin/tflite2tensorflow:2567: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py:856: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
ERROR: Identity is not in graph
Traceback (most recent call last):
File "/home/gx/env_tflite2tf/bin/tflite2tensorflow", line 2567, in main
graph_def = tf.graph_util.convert_variables_to_constants(
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 340, in new_func
return func(*args, **kwargs)
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/framework/graph_util_impl.py", line 275, in convert_variables_to_constants
ret = convert_to_constants.convert_variables_to_constants_from_session_graph(
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1143, in convert_variables_to_constants_from_session_graph
converter_data=_SessionConverterData(
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 856, in init
graph_def = graph_util.extract_sub_graph(graph_def, output_node_names)
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 340, in new_func
return func(*args, **kwargs)
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/framework/graph_util_impl.py", line 208, in extract_sub_graph
_assert_nodes_are_present(name_to_node, dest_nodes)
File "/home/gx/env_tflite2tf/lib/python3.8/site-packages/tensorflow/python/framework/graph_util_impl.py", line 163, in _assert_nodes_are_present
assert d in name_to_node, "%s is not in graph" % d
AssertionError: Identity is not in graph

Do you have an idea? Identity is the name of one of the 3 model outputs.

Note that I have installed TensorFlow Lite and TensorFlow as described in the README:
pip install gdown
gdown --id 1RWZmfFgtxm3muunv6BSf4yU29SKKFXIh
chmod +x tflite_runtime-2.4.1-py3-none-any.whl
pip install tflite_runtime-2.4.1-py3-none-any.whl
pip install tensorflow==2.4.1
And I was able to successfully convert another model (the palm detection model).
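
For reference, a quick way to debug "X is not in graph" is to list the node names actually present in the frozen GraphDef (a minimal sketch; the .pb path below is a hypothetical example):

import tensorflow as tf

# A minimal sketch: list node names in a frozen GraphDef to find the real
# output names when "Identity is not in graph" is raised. The path below
# is a hypothetical example.
gd = tf.compat.v1.GraphDef()
with open('saved_model/model_float32.pb', 'rb') as f:
    gd.ParseFromString(f.read())
print([n.name for n in gd.node][-10:])  # the last few nodes are usually the outputs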

Got error while converting magenta_arbitrary-image-stylization-v1-256_fp16_transfer_1.tflite to .h5 format

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlow

Download URL for tflite file

https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/transfer/1

Convert Script

sudo docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

tflite2tensorflow --model_path ./magenta_arbitrary-image-stylization-v1-256_fp16_transfer_1.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_no_quant_float32_tflite --output_dynamic_range_quant_tflite --output_weight_quant_tflite --output_float16_quant_tflite --output_integer_quant_tflite --string_formulas_for_normalization 'data / 255.0' --output_tfjs --output_coreml --output_onnx --onnx_opset 11 --output_openvino_and_myriad

openvino2tensorflow --model_path saved_model/openvino/FP32/saved_model.xml --output_h5

Description

I was trying to convert the saved_model or tflite model into a Keras model. magenta_arbitrary-image-stylization-v1-256_fp16_prediction_1.tflite converted fine with your great help.

I got an error again while trying magenta_arbitrary-image-stylization-v1-256_fp16_transfer_1.tflite following similar steps.
Would you please help check it?

Thank you so much!

Relevant Log Output

Error log for openvino2tensorflow --model_path saved_model/openvino/FP32/saved_model.xml --output_h5

====================================================================================
layer_type: Sigmoid
layer_id: 473
input_layer0: layer_id=472: KerasTensor(type_spec=TensorSpec(shape=(1, 384, 384, 3), dtype=tf.float32, name=None), name='tf.math.add_72/Add:0', description="created by layer 'tf.math.add_72'")
tf_layers_dict: KerasTensor(type_spec=TensorSpec(shape=(1, 384, 384, 3), dtype=tf.float32, name=None), name='tf.math.sigmoid/Sigmoid:0', description="created by layer 'tf.math.sigmoid'")
====================================================================================
layer_type: Result
layer_id: 474
input_layer0: layer_id=473: KerasTensor(type_spec=TensorSpec(shape=(1, 384, 384, 3), dtype=tf.float32, name=None), name='tf.math.sigmoid/Sigmoid:0', description="created by layer 'tf.math.sigmoid'")
tf_layers_dict: KerasTensor(type_spec=TensorSpec(shape=(1, 384, 384, 3), dtype=tf.float32, name=None), name='tf.identity/Identity:0', description="created by layer 'tf.identity'")
====================================================================================
TensorFlow/Keras model building process complete!
.h5 output started ==================================================================
ERROR: cannot pickle 'module' object
Traceback (most recent call last):
  File "/usr/local/bin/openvino2tensorflow", line 7022, in convert
    model.save(f'{model_output_path}/model_float32.h5', include_optimizer=False, save_format='h5')
  File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 205, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 210, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.8/copy.py", line 210, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 210, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.8/copy.py", line 210, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.8/copy.py", line 161, in deepcopy
    rv = reductor(4)
TypeError: cannot pickle 'module' object
All the conversion process is finished! =============================================

Source code for simple inference testing code

No response
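Not a confirmed fix, but a sketch of a possible workaround: the traceback shows the crash inside deepcopy while the .h5 writer copies layer configs, a path the TensorFlow SavedModel format does not take. Assuming the Keras model object itself builds (the log says the building process completed) and a SavedModel was emitted, re-saving with save_format='tf' may get past the "cannot pickle 'module' object" error:

import tensorflow as tf

# Hypothetical recovery path: load the converter's SavedModel output
# (if one was produced) and re-save in TF format instead of HDF5.
model = tf.keras.models.load_model('saved_model', compile=False)

# save_format='tf' serializes via SavedModel and skips the deepcopy of
# layer configs that raises "cannot pickle 'module' object" for .h5.
model.save('saved_model/model_float32_tf', save_format='tf')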

Error converting face_landmark_with_attention.tflite

Issue Type

Bug

OS

Mac OS

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_landmark/face_landmark_with_attention.tflite

Description

Converting with tflite2tensorflow --model_path face_landmark_with_attention.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb gives the error:

RuntimeError: Encountered unresolved custom op: Landmarks2TransformMatrix.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom
Node number 192 (Landmarks2TransformMatrix) failed to prepare.
Failed to apply the default TensorFlow Lite delegate indexed at 0.

Relevant Log Output

RuntimeError: Encountered unresolved custom op: Landmarks2TransformMatrix.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom
Node number 192 (Landmarks2TransformMatrix) failed to prepare.
Failed to apply the default TensorFlow Lite delegate indexed at 0.

Source code for simple inference testing code

No response
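For what it's worth, a quick way to list which custom ops a .tflite file contains, using the same flatc/schema.fbs pair the converter itself uses (the paths below are the ones from the command above and are assumptions for your setup):

import json
import subprocess

# Dump the flatbuffer to JSON, exactly as tflite2tensorflow does internally.
subprocess.run(
    ['../flatc', '-t', '--strict-json', '--defaults-json', '-o', '.',
     '../schema.fbs', '--', 'face_landmark_with_attention.tflite'],
    check=True)

with open('face_landmark_with_attention.json') as f:
    model = json.load(f)

# Custom ops such as Landmarks2TransformMatrix appear via 'custom_code'.
for code in model['operator_codes']:
    print(code.get('builtin_code'), code.get('custom_code', ''))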

converted mediapipe model pose_detection not work

Hi, thanks for your great work.

Recently, I have been working on pose estimation with mediapipe. I converted the pose_detection.tflite model to ONNX with your tflite2tensorflow; the conversion process ran fine, and the log shows that the conversion succeeded.
But when I use the converted .onnx model, the output values seem incorrect and differ from what the original .tflite model produces.

In the original tflite model, the max confidence value of a bounding box is 0.9, but with the converted model the max value is only 0.078, which is not correct. I also tried the model you've already converted, and the result is also wrong.
Is there something wrong in my steps or code?

1. Windows 10

2. x86_64

3. Version of OpenVINO : none

4. Version of TensorFlow e.g. v2.6.0

5. Version of TensorRT : none

6. Version of TFJS : none

7. Version of coremltools : none

8. Version of ONNX : 1.10.1

9. Download URL for .tflite IR model

10. URL of the repository from which the transformed model was taken : https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection

11. URL or source code for simple inference testing code

import cv2
import numpy as np
import onnxruntime
import tensorflow as tf

def image_preprocess(img):
    img = cv2.resize(img, dsize=(224, 224), interpolation=cv2.INTER_AREA)
    img = img.astype("float32")
    img /= 255.0
    print(img.shape)
    img = img.reshape((1, 224, 224, 3))  # add the batch dimension
    return img

def inference(img):
    # Test the model on random input data.
    input_shape = input_details[0]['shape']
    input_data = np.array(img, dtype=np.float32)
    interpreter.set_tensor(input_details[0]['index'], input_data)

    interpreter.invoke()

    # The function `get_tensor()` returns a copy of the tensor data.
    # Use `tensor()` in order to get a pointer to the tensor.
    output_data = interpreter.get_tensor(output_details[0]['index'])
    output_data1 = interpreter.get_tensor(output_details[1]['index'])

    result = [output_data, output_data1]

    print("result[0] shape:", result[0].shape)  # (1, 2254, 12)
    print("result[1] shape:", result[1].shape)  # (1, 2254, 1)
    return result

# TFLite reference model (setup added so the snippet runs end to end)
interpreter = tf.lite.Interpreter(model_path="pose_detection.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Converted ONNX model
model = "model_float32.onnx"
session = onnxruntime.InferenceSession(model, None)
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
print(input_name)
print(output_name)
# (see the ONNX-side sketch after this issue for actually running the session)

12. Issue Details
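The snippet in item 11 above sets up the ONNX session but never runs it. For a like-for-like comparison with the tflite outputs, something along these lines is needed (a sketch; "test.jpg" is a placeholder, and the transpose guard covers the case, reported in another issue on this page, where the exported ONNX expects NCHW input):

img = image_preprocess(cv2.imread("test.jpg"))  # placeholder image path

# If the ONNX input is NCHW (shape like [1, 3, 224, 224]), transpose first.
if session.get_inputs()[0].shape[1] == 3:
    img = np.transpose(img, (0, 3, 1, 2))

onnx_result = session.run(None, {input_name: img})
print("onnx result[0] shape:", onnx_result[0].shape)
print("onnx result[1] shape:", onnx_result[1].shape)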

Densify?

1. Ubuntu 18.04

2. OS Architecture x86_64

3. OpenVINO e.g. 2021.4.582

9. Download URL for .tflite IR model https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_detection/pose_detection.tflite

Hi @PINTO0309!
New mediapipe version 0.8.6 comes with new models for Blazepose (that's a never-ending story :-)
The size of the pose detection model (link above) has been significantly reduced (from ~7.5MB to ~3MB), but unfortunately the model uses a layer named Densify that is not implemented in tflite2tensorflow. I guess it is a relatively new layer. When trying to visualize its data in Netron, I get an "Invalid tensor data size" message.
[screenshot: Netron view of the Densify layer]

Do you think Densify could be easily implemented in your tools? Note that it is not something I am eagerly waiting for, since I can do without it by using the previous version of the pose detection model.

The shape of the output layer is different from the result of the tensorflow2onnx transformation.

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

ONNX

Download URL for tflite file

VertexAI Auto ML classification TFlite Format model

Convert Script

โ—tflite2tensorflow

docker run -it --rm --gpus all -v pwd:/home/user/workdir ghcr.io/pinto0309/tflite2tensorflow:latest

tflite2tensorflow \
  --model_path model.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad \
  --rigorous_optimization_for_myriad

tflite2tensorflow \
  --model_path model.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_onnx \
  --onnx_opset 13 \
  --output_openvino_and_myriad

โ—tensorflow2onnx
pip install git+https://github.com/onnx/tensorflow-onnx

python -m tf2onnx.convert --opset 13 --tflite model.tflite --output model.onnx --dequantize

Description

I'm trying to convert the attached file from tflite to onnx. (Eventually I want to run it on a Myriad.)
In the source tflite, the input to softmax is [1×8], but after tflite2tensorflow it is [7×7×8].
If I try tensorflow2onnx, the input to softmax is [1×8], as in the tflite.
I think [1×8] is correct because it is an 8-class classification model.

[tflite]
[screenshot: Netron view of the tflite model]

[tflite2tensorflow (saved_model and onnx)]
[screenshot: Netron view of the tflite2tensorflow ONNX output]

[tensorflow2onnx]
[screenshot: Netron view of the tensorflow2onnx ONNX output]

Relevant Log Output

None.

Source code for simple inference testing code

No response

The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.

OS you are using: MacOS 11.4

Version of TensorFlow: v2.5.0

Environment: Docker

Under tf 2.5.0, I converted my pre-trained model from saved_model to tflite.

Afterwards, in the Docker container, when I was converting this tflite model to pb format using tflite2tensorflow, the following error occurred:

ERROR: The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.

(In this experiment, I did not perform quantization/optimization, but later on I do plan to use tflite to quantize my model that is to be saved as .tflite, which is why I did not directly convert saved_model to pb)
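Not a fix for the missing layer, but since the original SavedModel still exists in this workflow, one possible detour (a sketch using the standard TFLiteConverter, independent of tflite2tensorflow) is to quantize directly from the SavedModel, so the .tflite-to-.pb round trip through the unsupported UNIDIRECTIONAL_SEQUENCE_LSTM layer is never needed:

import tensorflow as tf

# Post-training dynamic-range quantization straight from the SavedModel.
# 'saved_model' is a placeholder for your export directory.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant = converter.convert()

with open('model_dynamic_range_quant.tflite', 'wb') as f:
    f.write(tflite_quant)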

How to add custom operators to tflite runtime?

Issue Type

Others

OS

Ubuntu

OS architecture

x86_64

Programming Language

C++

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/charlesq34/pointnet2

https://github.com/charlesq34/pointnet2/tree/master/tf_ops

Convert Script

None

Description

Hi,

I'm doing some tests on pointnet2; see https://github.com/charlesq34/pointnet2.
There are some custom operators; see https://github.com/charlesq34/pointnet2/tree/master/tf_ops.
I want to run inference with tflite_runtime, but I always get the error "RuntimeError: Encountered unresolved custom op: FarthestPointSample.".
It seems these custom operators need to be added to TensorFlow Lite.
Do you know how to convert these TF custom operators for tflite_runtime?

Relevant Log Output

None

Source code for simple inference testing code

One of the tf custom operators.

REGISTER_OP("FarthestPointSample")
    .Attr("npoint: int")  // <<< the Attr in question
    .Input("inp: float32")
    .Output("out: int32")
    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
      ::tensorflow::shape_inference::ShapeHandle dims1;  // batch_size * npoint * 3
      TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 3, &dims1));
      int npoint;
      TF_RETURN_IF_ERROR(c->GetAttr("npoint", &npoint));
      ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), npoint});
      c->set_output(0, output);
      return OkStatus();
    });

MacOS environment installation error (xhost: command not found)

OS you are using: MacOS 11.4

Hi, first of all, thank you so much for this toolkit! It is exactly what I've been looking for.
However, I couldn't successfully use tflite2tensorflow to do a conversion in my Docker environment.
I was able to run

docker pull pinto0309/tflite2tensorflow

and downloaded the image, but I encountered an error upon the command

xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  pinto0309/tflite2tensorflow:latest

the error reads:

-bash: xhost: command not found

I suppose this is a command specific to the Linux environment, and I was wondering if there is an alternative, equivalent command that is executable on macOS?

Side note:
after the error, I tried directly running

docker run -it --rm \
 pinto0309/tflite2tensorflow:latest

and it did open a TensorRT/OpenVINO container environment, but I do not need my model to run on TensorRT for further inference.
In addition, I couldn't access my model file stored on my host machine.
My understanding is that the command I was unable to run should somehow mount my host machine's file system into the container, so without the xhost command, perhaps I can use some other way, such as ssh or ftp, to upload my file into the environment?
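For reference, xhost is an X11 utility and does not exist on stock macOS (it ships with XQuartz). If GUI and camera access are not needed, the X11-related options can simply be dropped while keeping the volume mount, roughly:

docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  pinto0309/tflite2tensorflow:latest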

Model quantized slower than not quantized on Windows x64

Issue Type

Performance

OS

Windows

OS architecture

Other

Programming Language

Python

Framework

TensorFlow, TensorFlowLite

Download URL for tflite file

https://storage.googleapis.com/mediapipe-assets/face_detection_short_range.tflite
https://storage.googleapis.com/mediapipe-assets/face_detection_full_range.tflite

Convert Script

tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_no_quant_float32_tflite --output_dynamic_range_quant_tflite --output_weight_quant_tflite --output_float16_quant_tflite --output_integer_quant_tflite

Description

Hello, as the title says: I converted the model successfully, but the quantized model is slower than the original float32 model, and integer quant is slower than weight quant on Windows x64. Do you know the reason? Looking forward to your reply.

Relevant Log Output

inference time:
original float32   15ms
weight quant      150ms
integer quant     200ms

Source code for simple inference testing code

No response

I was trying to convert hair_segmentation.tflite in Google Colab and got the following error.

output json command = /content/flatbuffers/build/flatc -t --strict-json --defaults-json -o . /content/flatbuffers/build/schema.fbs -- /content/hair_segmentation.tflite
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 2882, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 2556, in main
    ops, op_types = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './/content/hair_segmentation.json'
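A likely cause, judging from the doubled path in the error: the tool writes the flatc JSON dump to the current directory and then opens './' plus the given model path, so an absolute --model_path yields the bogus .//content/... name. Running from the model's directory with a relative path should sidestep it (paths below assume the Colab layout from the log):

cd /content
tflite2tensorflow --model_path hair_segmentation.tflite --flatc_path /content/flatbuffers/build/flatc --schema_path /content/flatbuffers/build/schema.fbs --output_pb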

Set batch_size during conversion

How do I set the batch size when converting the model to TensorRT using

tflite2tensorflow \
--model_path tflite_from_saved_model/model_float32.tflite \
--flatc_path ../../flatc \
--schema_path ../../schema.fbs \
--string_formulas_for_normalization 'data / 255.0' \
--output_tftrt

Problem converting pose_landmark (full)

Issue Type

Performance

OS

Mac OS

OS architecture

Other

Programming Language

Python

Framework

TensorFlowLite

Download URL for tflite file

https://developers.google.com/mediapipe/solutions/vision/pose_landmarker

Convert Script

user@docker-desktop:~/workdir$ tflite2tensorflow --model_path /Users/kingsize/Documents/binedgeml/pose_detector.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

Description

I am trying to use a mediapipe file that you can download from the mediapipe website https://developers.google.com/mediapipe/solutions/vision/pose_landmarker. I downloaded the full file and unzipped it, and now I have two models. I am trying to convert them from tflite, but it doesn't seem to be working for some reason and I am getting this error. Can somebody help me with this operation?

Relevant Log Output

Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6614, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5861, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './/Users/kingsize/Documents/binedgeml/pose_detector.json'

Source code for simple inference testing code

No response

The same generated IR gives different results on CPU and MYRIAD

1. OS Ubuntu 18.04

2. OS Architecture x86_64

3. Version of OpenVINO 2021.2.185

4. Version of tflite2tensorflow 1.8.0

11. URL or source code for simple inference testing code: https://github.com/geaxgx/openvino_blazepose

12. Issue Details

Hi Katsuya,
I would like to have your opinion on the following problem:
I have used tflite2tensorflow 1.8.0 to convert the new version of the mediapipe Blazepose models (mediapipe version 0.8.4). In this new version, we now have 3 pose landmark models (full, lite and heavy) and one common pose detection model.
Then I tested my code BlazeposeOpenvino.py with each model, first on CPU, then on MYRIAD (OAK-D). Everything seems to work great except when I run the heavy model on the MYRIAD.
For instance, below is the output of the heavy model running on the CPU (which looks accurate):
[screenshot: heavy model output on CPU]
And below is the output of the same heavy model running on the MYRIAD X:
[screenshot: heavy model output on MYRIAD X]
We can see the skeleton is kind of distorted. Actually, in many cases, with other images, we don't even get a skeleton drawn because the score given by the model is too low.

Do you have an idea of what the problem is? The heavy model is much bigger (27MB for the .bin file) than the other models. It takes about 30s to load on the Myriad X. But I guess the size is still acceptable, otherwise I would get some error messages.

In case, you want to reproduce the problem:

# Clone my repo
git clone https://github.com/geaxgx/openvino_blazepose.git
cd openvino_blazepose
# Start your tflite2tensorflow docker
./docker_tflite2tensorflow.sh
cd workdir/models
# In my repo, there are only the FP32 IR version of the models, so we need to download the original tflite file 
# and then convert it using tflite2tensorflow. All is done with the following command:
./get_and_convert_heavy.sh
# Exit docker container
exit

# To test with the heavy model running on the CPU :
python BlazeposeOpenvino.py --lm_xml models/pose_landmark_heavy_FP16.xml -i img/yoga.jpg
# To test on the MYRIAD
python BlazeposeOpenvino.py --lm_xml models/pose_landmark_heavy_FP16.xml -i img/yoga.jpg --lm_device MYRIAD 

You can test also with the image img/yoga2.jpg (no skeleton detected).

Thank you.

Latest docker image: error: unrecognized arguments

Hello Pinto,
I am following the tutorial from here:
https://github.com/geaxgx/openvino_hand_tracker
Can it be that in the latest docker version the API has changed?

This is what I do:
marco@CHROTZ:/tmp/depthai_hand_tracker$ docker run --gpus all -it --rm \
  -v `pwd`:/workspace/resources \
  -e LOCAL_UID=$(id -u $USER) \
  -e LOCAL_GID=$(id -g $USER) \
  pinto0309/tflite2tensorflow:latest bash

NVIDIA Release 20.09 (build 15985252)

NVIDIA TensorRT 7.1.3 (c) 2016-2020, NVIDIA CORPORATION. All rights reserved.
Container image (c) 2020, NVIDIA CORPORATION. All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install open source parsers, plugins, and samples, run /opt/tensorrt/install_opensource.sh. See https://github.com/NVIDIA/TensorRT for more information.

error: XDG_RUNTIME_DIR not set in the environment.
[setupvars.sh] OpenVINO environment initialized

wget https://github.com/google/mediapipe/blob/master/mediapipe/modules/hand_landmark/hand_landmark.tflite
wget https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection.tflite

tflite2tensorflow --model_path ./hand_landmark.tflite --model_output_path palm_detection --flatc_path /home/user/flatc --schema_path /home/user/schema.fbs --output_pb True
tflite2tensorflow: error: unrecognized arguments: True

tflite2tensorflow --model_path ./hand_landmark.tflite --model_output_path palm_detection --flatc_path /home/user/flatc --schema_path /home/user/schema.fbs --output_pb
output json command = /home/user/flatc -t --strict-json --defaults-json -o . /home/user/schema.fbs -- ./hand_landmark.tflite
/home/user/flatc: error: binary "./hand_landmark.tflite" does not have expected file_identifier "TFL3", use --raw-binary to read this file anyway.

Thanks
Marco
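Two things stand out in this log for anyone hitting the same errors: --output_pb is a flag and takes no value, so the trailing True is rejected; and the wget URLs above point at the GitHub HTML pages rather than the raw flatbuffers, which is why flatc complains about the missing "TFL3" file_identifier. Fetching the raw files looks like:

wget https://github.com/google/mediapipe/raw/master/mediapipe/modules/hand_landmark/hand_landmark.tflite
wget https://github.com/google/mediapipe/raw/master/mediapipe/modules/palm_detection/palm_detection.tflite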

Objectron 3D tfjs models: chair and cup are not correct

1. OS you are using: Mac OS

2. OS Architecture: Mac OS

3. Version of OpenVINO: same as your Docker

4. Version of TensorFlow: same as your Docker

5. Version of TensorRT: same as your Docker

6. Version of TFJS: 2.7.0

7. Version of coremltools: same as your Docker

8. Version of ONNX: same as your Docker

9. Download URL for .tflite IR model: directly from Mediapipe repo: https://github.com/google/mediapipe/tree/master/mediapipe/models

10. URL of the repository from which the transformed model was taken: use your Docker command

11. URL or source code for simple inference testing code

let image_tf = tf.browser.fromPixels(inputCanvasElement).asType('float32');  
let image_tf_resized = tf.image.resizeBilinear(image_tf, [224, 224]).expandDims(0).div(255.0);
prediction = model.predict(image_tf_resized);
    
[prediction_keypoints] = prediction[0].arraySync();
[prediction_prob] = prediction[1].arraySync();

12. Issue Details

I can convert the tflite models to tfjs, but they seem to predict wrong results when tested with tfjs. The 9 points are kind of random and cannot be detected correctly. My preprocessing is the same as https://github.com/sfsrd/objectron. Not sure what's wrong in the conversion pipeline.

IndexError: list index (0) out of range

1. Ubuntu 18.04

2. OS Architecture x86_64

4. Version of TensorFlow e.g. v2.6.0

9. segm_full_sparse_v1008.tflite

10. Converted model

Converted by command (from Docker):
tflite2tensorflow --model_path segm_full_sparse_v1008.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

12. Issue Details

This code:
temp = tf.keras.models.load_model("saved_model/", compile=False)
produces the error:

WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named "keras_metadata.pb" in the SavedModel directory.

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/tmp/ipykernel_2258/3543548130.py in <module>
----> 1 temp = tf.keras.models.load_model("saved_model/", compile=False)

~/anaconda3/envs/tfmot/lib/python3.8/site-packages/keras/saving/save.py in load_model(filepath, custom_objects, compile, options)
    203         filepath = path_to_string(filepath)
    204         if isinstance(filepath, str):
--> 205           return saved_model_load.load(filepath, compile, options)
    206 
    207   raise IOError(

~/anaconda3/envs/tfmot/lib/python3.8/site-packages/keras/saving/saved_model/load.py in load(path, compile, options)
    123                     'tf.saved_model.save(). To confirm, there should be a file '
    124                     'named "keras_metadata.pb" in the SavedModel directory.')
--> 125     _read_legacy_metadata(object_graph_def, metadata)
    126 
    127   if not metadata.nodes:

~/anaconda3/envs/tfmot/lib/python3.8/site-packages/keras/saving/saved_model/load.py in _read_legacy_metadata(object_graph_def, metadata)
    194   # Older SavedModels store the metadata directly in the proto instead of the
    195   # separate pb file.
--> 196   node_paths = _generate_object_paths(object_graph_def)
    197   for node_id, proto in enumerate(object_graph_def.nodes):
    198     if (proto.WhichOneof('kind') == 'user_object' and

~/anaconda3/envs/tfmot/lib/python3.8/site-packages/keras/saving/saved_model/load.py in _generate_object_paths(object_graph_def)
    221     current_node = nodes_to_visit.pop()
    222     current_path = paths[current_node]
--> 223     for reference in object_graph_def.nodes[current_node].children:
    224       if reference.node_id in paths:
    225         continue

IndexError: list index (0) out of range

How can I convert this model to Keras (h5)?
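The warning in the log already points at the answer: the converter's saved_model is a plain SavedModel without keras_metadata.pb, so tf.keras.models.load_model cannot rebuild a Keras object (and hence an .h5) from it. A sketch of loading it for inference instead (the 'serving_default' signature name is an assumption; list loaded.signatures to check):

import tensorflow as tf

# Load as a generic SavedModel rather than a Keras model.
loaded = tf.saved_model.load('saved_model')
infer = loaded.signatures['serving_default']
print(infer.structured_outputs)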

segm_full_v679.tflite and segm_full_v679_opt.tflite

I ran the following sample test.

python tflite2tensorflow.py --model_path segm_full_v679.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

And I got the following error.

Traceback (most recent call last):
  File "tflite2tensorflow.py", line 4816, in <module>
    main()
  File "tflite2tensorflow.py", line 4279, in main
    interpreter.allocate_tensors()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py", line 408, in allocate_tensors
    return self._interpreter.AllocateTensors()
RuntimeError: Encountered unresolved custom op: Convolution2DTransposeBias.
Node number 240 (Convolution2DTransposeBias) failed to prepare

However, it works if the input model is changed to segm_full_v679_opt.tflite, which is from PINTO_model_zoo/082_MediaPipe_Meet.

Could you add the conversion from segm_full_v679.tflite to segm_full_v679_opt.tflite into the script?

Thanks!

Error converting face_landmark.tflite

Issue Type

Bug, Support

OS

Mac OS

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_landmark/face_landmark.tflite

Description

I get AssertionError: conv2d_21 is not in graph when running tflite2tensorflow --model_path face_landmark.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

Relevant Log Output

AssertionError: conv2d_21 is not in graph

Source code for simple inference testing code

No response

Use docker but no *.json error

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

C++, Python

Framework

ONNX, TensorFlow, TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/hand_landmark/hand_landmark_full.tflite

Convert Script

tflite2tensorflow --model_path hand_landmark_full.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

Description

Hi, I saw this issue: #33, and I use docker. I want to convert the model from mediapipe tflite to pb and quantize it, but it shows the error FileNotFoundError: [Errno 2] No such file or directory: './hand_landmark_full.json'. What's the problem?

Relevant Log Output

user@90fbb8a901d3:~/workdir$ tflite2tensorflow --model_path hand_landmark_full.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb
output json command = ../flatc -t --strict-json --defaults-json -o . ../schema.fbs -- hand_landmark_full.tflite
../flatc: error: Unable to generate text for hand_landmark_full
Usage: ../flatc [OPTION]... FILE... [-- FILE...]



FILEs may be schemas (must end in .fbs), binary schemas (must end in .bfbs),
or JSON files (conforming to preceding schema). FILEs after the -- must be
binary flatbuffer format files.
Output files are named using the base file name of the input,
and written to the current directory or the path given by -o.
example: ../flatc -c -b schema1.fbs schema2.fbs data.json
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6614, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5861, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './hand_landmark_full.json'

Source code for simple inference testing code

No response

No json file

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlow, TensorFlowLite

Download URL for tflite file

I got the model from mediapipe PoseEstimate

Convert Script

tflite2tensorflow --model_path pose_landmark_heavy.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

Description

When I try to convert a model from mediapipe (PoseEstimate) TensorFlow Lite, I get an error that the json file does not exist. I only have the TensorFlow Lite model. I run on my local machine, not docker.

Relevant Log Output

output json command = ../flatc -t --strict-json --defaults-json -o . ../schema.fbs -- pose_landmark_heavy.tflite
sh: 1: ../flatc: not found
Traceback (most recent call last):
  File "tflite2tf/tflite2tf/bin/tflite2tensorflow", line 6614, in <module>
    main()
  File "tflite2tf/tflite2tf/bin/tflite2tensorflow", line 5861, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "tflite2tf/tflite2tf/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './pose_landmark_heavy.json'

Source code for simple inference testing code

No response

CoreML conversions of face_landmark_with_attention.tflite fails

Issue Type

Bug

OS

Mac OS

OS architecture

aarch64

Programming Language

Python

Framework

CoreML

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_landmark/face_landmark_with_attention.tflite

Convert Script

tflite2tensorflow --model_path face_landmark_with_attention.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_coreml

Description

Errors out with ValueError: axes don't match array

Relevant Log Output

CoreML convertion started
ERROR: axes don't match array

Source code for simple inference testing code

No response

ValueError: Dimension size, given by scalar input 1 must be in range [-1, 1)

I am trying to convert these tflite models: https://github.com/breizhn/DTLN-aec/tree/main/pretrained_models
The command I ran was: tflite2tensorflow --model_path dtln_aec_128_2.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb
But I ran into this error:

ValueError: Dimension size, given by scalar input 1 must be in range [-1, 1) for '{{node split_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/split}} = Split[T=DT_FLOAT, num_split=4](split_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/split/split_dim, BiasAdd_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/BiasAdd)' with input shapes: [], [512] and with computed input tensors: input[0] = <1>.

I tried with both the pip installation and the docker version.
Is it a bug or something that is not supported?

Converted mediapipe palm detection coreml model cannot be deployed in an iOS app

Issue Type

Bug

OS

Mac OS

OS architecture

x86_64

Programming Language

Python

Framework

CoreML

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection_full.tflite

Convert Script

docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

tflite2tensorflow \
  --model_path palm_detection_full.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb

tflite2tensorflow \
  --model_path palm_detection_full.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_no_quant_float32_tflite \
  --output_dynamic_range_quant_tflite \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_integer_quant_tflite \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs \
  --output_coreml \
  --output_tftrt_float32 \
  --output_tftrt_float16 \
  --output_onnx \
  --onnx_opset 13 \
  --output_openvino_and_myriad

Description

When I deployed the converted mediapipe palm detection CoreML model in an iOS app, I got an error like:

2022-02-15 16:31:50.577312+0000 MLModelCamera[12602:4637828] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": [Exception from layer 240: model_1/model/add_24/add]Espresso exception: "Invalid blob shape": elementwise_kernel_cpu: Cannot broadcast [12, 2, 1, 256, 1] and [12, 1, 1, 256, 1] status=-1

It seems like the resizeBilinear op causes this problem. Is there any way to fix this issue? Thanks in advance.

The related code in tflite2tensorflow.py is:

        elif op_type == 'RESIZE_BILINEAR':
            input_tensor = tensors[op['inputs'][0]]
            size_detail = interpreter._get_tensor_details(op['inputs'][1])
            size = interpreter.get_tensor(size_detail['index'])
            size_height = size[0]
            size_width  = size[1]

            options = op['builtin_options']
            align_corners = options['align_corners']
            half_pixel_centers = options['half_pixel_centers']

            def upsampling2d_bilinear(x, size_height, size_width, align_corners, half_pixel_centers):
                if optimizing_for_edgetpu_flg:
                    return tf.image.resize_bilinear(x, (size_height, size_width))
                else:
                    if optimizing_for_openvino_and_myriad:
                        if half_pixel_centers:
                            return tf.image.resize_bilinear(
                                x,
                                (size_height, size_width),
                                align_corners=False,
                                half_pixel_centers=half_pixel_centers
                            )
                        else:
                            return tf.image.resize_bilinear(
                                x,
                                (size_height, size_width),
                                align_corners=True,
                                half_pixel_centers=half_pixel_centers
                            )
                    else:
                        return tfv2.image.resize(
                            x,
                            [size_height, size_width],
                            method='bilinear'
                        )

            output_detail = interpreter._get_tensor_details(op['outputs'][0])
            lambda_name = get_op_name(output_detail['name']) + '_lambda'
            output_tensor_lambda = tf.keras.layers.Lambda(
                upsampling2d_bilinear,
                arguments={'size_height': size_height,
                            'size_width': size_width,
                            'align_corners': align_corners,
                            'half_pixel_centers': half_pixel_centers},
                name=lambda_name
            )(input_tensor)
            output_tensor = tf.identity(
                output_tensor_lambda,
                name=get_op_name(output_detail['name'])
            )
            tensors[output_detail['index']] = output_tensor

Relevant Log Output

2022-02-15 16:31:50.577312+0000 MLModelCamera[12602:4637828] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": [Exception from layer 240: model_1/model/add_24/add]Espresso exception: "Invalid blob shape": elementwise_kernel_cpu: Cannot broadcast [12, 2, 1, 256, 1] and [12, 1, 1, 256, 1] status=-1

Source code for simple inference testing code

No response

fail to convert [KNIFT]

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlow, TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#knift

Convert Script

tar="KNIFT"
option="--flatc_path ../flatc --schema_path ../schema.fbs"

./../flatc -t --strict-json --defaults-json -o ./${tar}/ ../schema.fbs -- ${tar}/knift_float.tflite

tflite2tensorflow --model_path ${tar}/knift_float.tflite ${option} --output_pb

Description

I tried to convert the mediapipe KNIFT model, but it failed due to a channel size mismatch between input and filter.
Is there a mistake in my procedure? If not, could you fix the bug?

Relevant Log Output

Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6614, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5881, in main
    TFLite_Detection_PostProcess_flg = make_graph(
  File "/usr/local/bin/tflite2tensorflow", line 720, in make_graph
    output_tensor = tf.nn.depthwise_conv2d(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1963, in _create_c_op
    raise ValueError(e.message)
ValueError: Dimensions must be equal, but are 1 and 16 for '{{node depthwise}} = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1]](rgb_to_grayscale_1, depthwise/filter_in)' with input shapes: [200,32,32,1], [3,3,16,1].

Source code for simple inference testing code

No response

Order of input channels switched on ONNX

Issue Type

Others

OS

Mac OS

OS architecture

aarch64

Programming Language

C++

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_detection/face_detection_short_range.tflite

Convert Script

tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb
tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_onnx --onnx_opset 9

Description

The input to the TFLite model is 1x128x128x3, but it is switched to 1x3x128x128 in the ONNX output.

Relevant Log Output

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
inputs:
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'input',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 128, 128,   3], dtype=int32),
 'shape_signature': array([  1, 128, 128,   3], dtype=int32),
 'sparsity_parameters': {}}
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 175,
 'name': 'regressors',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 896,  16], dtype=int32),
 'shape_signature': array([  1, 896,  16], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 174,
 'name': 'classificators',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 896,   1], dtype=int32),
 'shape_signature': array([  1, 896,   1], dtype=int32),
 'sparsity_parameters': {}}
ONNX convertion started =============================================================

ONNX convertion complete! - saved_model/model_float32.onnx

Source code for simple inference testing code

No response
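A quick way to confirm which layout the exported ONNX actually expects, and to adapt NHWC frames to it (a sketch; the file name is taken from the log above and the zeros tensor is a placeholder input):

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession('saved_model/model_float32.onnx', None)
inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # e.g. [1, 3, 128, 128] means NCHW

# If it is NCHW, transpose NHWC input before running:
nhwc = np.zeros((1, 128, 128, 3), dtype=np.float32)  # placeholder input
outputs = session.run(None, {inp.name: np.transpose(nhwc, (0, 3, 1, 2))})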

KeyError: 'operator_codes'

Running tflite2tensorflow in a docker container, against a tflite model generated by the AutoML Vision service in Google Cloud, produces:

sh-5.0$ tflite2tensorflow \
  --model_path model.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6201, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5592, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 265, in parse_json
    op_types = [v['builtin_code'] for v in j['operator_codes']]
KeyError: 'operator_codes'

The tflite output from AutoML Vision was three files: model.tflite, tflite_metadata.json, and dict.txt (which contains the list of labels).

I first received an error saying it could not find model.json, so I renamed tflite_metadata.json to model.json, and thus the error above.

Here are the contents of model.json (tflite_metadata.json) - obviously it does not include operator codes.

{
  "inferenceType": "QUANTIZED_UINT8",
  "inputShape": [
    1,
    320,
    320,
    3
  ],
  "inputTensor": "normalized_input_image_tensor",
  "maxDetections": 40,
  "outputTensorRepresentation": [
    "bounding_boxes",
    "class_labels",
    "class_confidences",
    "num_of_boxes"
  ],
  "outputTensors": [
    "TFLite_Detection_PostProcess",
    "TFLite_Detection_PostProcess:1",
    "TFLite_Detection_PostProcess:2",
    "TFLite_Detection_PostProcess:3"
  ]
}

Is tflite2tensorflow expecting a different set of files?
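For what it's worth: the .json the tool looks for is not the AutoML metadata file but flatc's JSON dump of the .tflite itself, which tflite2tensorflow generates with the same invocation seen elsewhere on this page:

../flatc -t --strict-json --defaults-json -o . ../schema.fbs -- model.tflite

Renaming tflite_metadata.json to model.json therefore hands the parser a file with no operator_codes, hence the KeyError; the question to chase is why flatc's dump was not produced in the first place.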

OSError: SavedModel file does not exist at: saved_model/{saved_model.pbtxt|saved_model.pb}

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/0.8.0/mediapipe/modules/face_landmark/face_landmark.tflite

https://github.com/google/mediapipe/blob/0.8.0/mediapipe/modules/face_detection/face_detection_front.tflite

https://github.com/google/mediapipe/blob/0.8.0/mediapipe/modules/hand_landmark/hand_landmark.tflite

Convert Script

~/workdir $ tflite2tensorflow --model_path face_landmark.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_no_quant_float32_tflite

Description

docker pull ghcr.io/pinto0309/tflite2tensorflow:latest

docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

Relevant Log Output

user@4cd1e8cc1577:~/workdir$ tflite2tensorflow --model_path tflite2tensorflow/resources/face_landmark_with_attention.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_no_quant_float32_tflite 
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
inputs:
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'input_1',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 192, 192,   3], dtype=int32),
 'shape_signature': array([  1, 192, 192,   3], dtype=int32),
 'sparsity_parameters': {}}
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 719,
 'name': 'output_mesh_identity',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([   1,    1,    1, 1404], dtype=int32),
 'shape_signature': array([   1,    1,    1, 1404], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 720,
 'name': 'output_lips',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1,   1,   1, 160], dtype=int32),
 'shape_signature': array([  1,   1,   1, 160], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 721,
 'name': 'output_left_eye',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1,   1,   1, 142], dtype=int32),
 'shape_signature': array([  1,   1,   1, 142], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 723,
 'name': 'output_right_eye',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1,   1,   1, 142], dtype=int32),
 'shape_signature': array([  1,   1,   1, 142], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 722,
 'name': 'output_left_iris',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([ 1,  1,  1, 10], dtype=int32),
 'shape_signature': array([ 1,  1,  1, 10], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 724,
 'name': 'output_right_iris',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([ 1,  1,  1, 10], dtype=int32),
 'shape_signature': array([ 1,  1,  1, 10], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 710,
 'name': 'conv_faceflag',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([1, 1, 1, 1], dtype=int32),
 'shape_signature': array([1, 1, 1, 1], dtype=int32),
 'sparsity_parameters': {}}
tflite Float32 convertion started ===================================================
ERROR: SavedModel file does not exist at: saved_model/{saved_model.pbtxt|saved_model.pb}
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6046, in main
    converter = tf.lite.TFLiteConverter.from_saved_model(model_output_path)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 1786, in from_saved_model
    saved_model = _load(saved_model_dir, tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 782, in load
    result = load_partial(export_dir, None, tags, options)["root"]
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 887, in load_partial
    loader_impl.parse_saved_model_with_debug_info(export_dir))
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 57, in parse_saved_model_with_debug_info
    saved_model = parse_saved_model(export_dir)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 115, in parse_saved_model
    raise IOError(
OSError: SavedModel file does not exist at: saved_model/{saved_model.pbtxt|saved_model.pb}

Source code for simple inference testing code

No response
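For anyone hitting this: the log shows the float32 tflite pass reading from saved_model/, which does not exist yet, because the quantization options re-convert from the SavedModel that --output_pb produces. Running the conversion in two steps, mirroring the face_detection_short_range example earlier on this page, should avoid the OSError (not verified for this exact model):

tflite2tensorflow --model_path face_landmark_with_attention.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

tflite2tensorflow --model_path face_landmark_with_attention.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_no_quant_float32_tflite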

Palm detection model: tflite and openvino IR model give different outputs

1. OS Ubuntu 18.04

2. OS Architecture x86_64

3. Version of OpenVINO 2021.2.185 (the one from your dockerfile)

4. Version of TensorFlow e.g. v2.4.1 (the one from your dockerfile)

9. Download URL for .tflite : https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection.tflite

Hi Pinto! First of all, I want to thank you for the latest version of tflite2tensorflow. The dockerfile will surely make users' lives much easier!

I have installed and run the docker image of tflite2tensorflow to convert the Mediapipe palm detection model (see link above) into Openvino IR format. This model takes 128x128 images as input, whereas the previous model took 256x256 images.
When running the FP32 model on my CPU, I noticed that sometimes the palm bounding box seemed a bit off.
When comparing with the output from the original tflite model, we can see the bounding boxes are not the same:
Below is the output from the FP32 openvino model:
[screenshot: OpenVINO FP32 model output, 128x128 input]

Below is the output from the tflite model:
[screenshot: tflite model output, 128x128 input]

Note that if I compare the outputs of the older 256x256 models, there are no differences between the tflite and Openvino versions.

Do you have an idea of what could explain the different outputs?
Using Netron, I can see that the new tflite model now uses Prelu and ResizeBilinear, which were not used in the older model. I don't see how Prelu could cause differences in the conversion process, but ResizeBilinear may be trickier (it is converted into Interpolate). Do you have any thoughts about that?

Thanks for your help! I would like to use the new model, which is much faster than the previous version.

I can send you the code to reproduce the problem if you want.

tflite2tensorflow Docker has flatc error

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

C++

Framework

TensorFlowLite

Download URL for tflite file

https://storage.googleapis.com/mediapipe-assets/pose_detection.tflite

Convert Script

tflite2tensorflow

Description

I assume this is just a user error, but I don't know how to debug it or what I'm doing differently from the instructions. I've downloaded the current Dockerfile and I'm attempting to run tflite2tensorflow with this command:

sudo docker run a670fceb0fe1 tflite2tensorflow --model_path pose_detection.tflite --flatc_path ./flatc --schema_path ./schema.fbs --output_pb

And I get this error:

...
FILEs may be schemas (must end in .fbs), binary schemas (must end in .bfbs),
or JSON files (conforming to preceding schema). FILEs after the -- must be
binary flatbuffer format files.
Output files are named using the base file name of the input,
and written to the current directory or the path given by -o.
example: ./flatc -c -b schema1.fbs schema2.fbs data.json
output json command = ./flatc -t --strict-json --defaults-json -o . ./schema.fbs -- pose_detection.tflite

which makes the script unable to do its thing. This flatc file is built on v1.12.0 of the current repository.

When I copy paste that output command into another docker command

sudo docker run a670fceb0fe1 ./flatc -t --strict-json --defaults-json -o . ./schema.fbs -- pose_detection.tflite

I get the same error. However, when I run it directly in my terminal, the command creates a json file. With all of that in mind, am I doing something wrong? Btw, I'm incredibly grateful that you have published all of these conversion programs! It's amazing 😄

Relevant Log Output

sudo docker run a670fceb0fe1 tflite2tensorflow --model_path pose_detection.tflite --flatc_path ./flatc --schema_path ./schema.fbs --output_pb
./flatc: error: unable to load file: pose_detection.tflite
Usage: ./flatc [OPTION]... FILE... [-- FILE...]
  --binary         -b    Generate wire format binaries for any data definitions.
  --json           -t    Generate text output for any data definitions.
  --cpp            -c    Generate C++ headers for tables/structs.
  --go             -g    Generate Go files for tables/structs.
  --java           -j    Generate Java classes for tables/structs.
  --js             -s    Generate JavaScript code for tables/structs.
  --dart           -d    Generate Dart classes for tables/structs.
  --ts             -T    Generate TypeScript code for tables/structs.
  --csharp         -n    Generate C# classes for tables/structs.
  --python         -p    Generate Python files for tables/structs.
  --lobster              Generate Lobster files for tables/structs.
  --lua            -l    Generate Lua files for tables/structs.
  --rust           -r    Generate Rust files for tables/structs.
  --php                  Generate PHP files for tables/structs.
  --kotlin               Generate Kotlin classes for tables/structs.
  --jsonschema           Generate Json schema.
  --swift                Generate Swift files for tables/structs.
  -o PATH                Prefix PATH to all generated files.
  -I PATH                Search for includes in the specified path.
  -M                     Print make rules for generated files.
  --version              Print the version number of flatc and exit.
  --strict-json          Strict JSON: field names must be / will be quoted,
                         no trailing commas in tables/vectors.
  --allow-non-utf8       Pass non-UTF-8 input through parser and emit nonstandard
                         \x escapes in JSON. (Default is to raise parse error on
                         non-UTF-8 input.)
  --natural-utf8         Output strings with UTF-8 as human-readable strings.
                         By default, UTF-8 characters are printed as \uXXXX escapes.
  --defaults-json        Output fields whose value is the default when
                         writing JSON
  --unknown-json         Allow fields in JSON that are not defined in the
                         schema. These fields will be discared when generating
                         binaries.
  --no-prefix            Don't prefix enum values with the enum type in C++.
  --scoped-enums         Use C++11 style scoped and strongly typed enums.
                         also implies --no-prefix.
  --gen-includes         (deprecated), this is the default behavior.
                         If the original behavior is required (no include
                         statements) use --no-includes.
  --no-includes          Don't generate include statements for included
                         schemas the generated file depends on (C++ / Python).
  --gen-mutable          Generate accessors that can mutate buffers in-place.
  --gen-onefile          Generate single output file for C# and Go.
  --gen-name-strings     Generate type name functions for C++ and Rust.
  --gen-object-api       Generate an additional object-based API.
  --gen-compare          Generate operator== for object-based API types.
  --gen-nullable         Add Clang _Nullable for C++ pointer. or @Nullable for Java
  --java-checkerframe    work Add @Pure for Java.
  --gen-generated        Add @Generated annotation for Java
  --gen-all              Generate not just code for the current schema files,
                         but for all files it includes as well.
                         If the language uses a single file for output (by default
                         the case for C++ and JS), all code will end up in this one
                         file.
  --cpp-include          Adds an #include in generated file.
  --cpp-ptr-type T       Set object API pointer type (default std::unique_ptr).
  --cpp-str-type T       Set object API string type (default std::string).
                         T::c_str(), T::length() and T::empty() must be supported.
                         The custom type also needs to be constructible from std::string
                         (see the --cpp-str-flex-ctor option to change this behavior).
  --cpp-str-flex-ctor    Don't construct custom string types by passing std::string
                         from Flatbuffers, but (char* + length).
  --cpp-std CPP_STD      Generate a C++ code using features of selected C++ standard.
                         Supported CPP_STD values:
                          * 'c++0x' - generate code compatible with old compilers;
                          * 'c++11' - use C++11 code generator (default);
                          * 'c++17' - use C++17 features in generated code (experimental).
  --object-prefix        Customise class prefix for C++ object-based API.
  --object-suffix        Customise class suffix for C++ object-based API.
                         Default value is "T".
  --no-js-exports        Removes Node.js style export lines in JS.
  --goog-js-export       Uses goog.exports* for closure compiler exporting in JS.
  --es6-js-export        Uses ECMAScript 6 export style lines in JS.
  --go-namespace         Generate the overrided namespace in Golang.
  --go-import            Generate the overrided import for flatbuffers in Golang
                         (default is "github.com/google/flatbuffers/go").
  --raw-binary           Allow binaries without file_indentifier to be read.
                         This may crash flatc given a mismatched schema.
  --size-prefixed        Input binaries are size prefixed buffers.
  --proto                Input is a .proto, translate to .fbs.
  --proto-namespace-suffix Add this namespace to any flatbuffers generated
    SUFFIX                 from protobufs.
  --oneof-union          Translate .proto oneofs to flatbuffer unions.
  --grpc                 Generate GRPC interfaces for the specified languages.
  --schema               Serialize schemas instead of JSON (use with -b).
  --bfbs-comments        Add doc comments to the binary schema files.
  --bfbs-builtins        Add builtin attributes to the binary schema files.
  --bfbs-gen-embed       Generate code to embed the bfbs schema to the source.
  --conform FILE         Specify a schema the following schemas should be
                         an evolution of. Gives errors if not.
  --conform-includes     Include path for the schema given with --conform PATH
  --filename-suffix      The suffix appended to the generated file names.
                         Default is '_generated'.
  --filename-ext         The extension appended to the generated file names.
                         Default is language-specific (e.g., '.h' for C++)
  --include-prefix       Prefix this path to any generated include statements.
    PATH
  --keep-prefix          Keep original prefix of schema include statement.
  --no-fb-import         Don't include flatbuffers import statement for TypeScript.
  --no-ts-reexport       Don't re-export imported dependencies for TypeScript.
  --short-names          Use short function names for JS and TypeScript.
  --reflect-types        Add minimal type reflection to code generation.
  --reflect-names        Add minimal type/name reflection.
  --root-type T          Select or override the default root_type
  --force-defaults       Emit default values in binary output from JSON
  --force-empty          When serializing from object API representation,
                         force strings and vectors to empty rather than null.
  --force-empty-vectors  When serializing from object API representation,
                         force vectors to empty rather than null.
  --flexbuffers          Used with "binary" and "json" options, it generates
                         data using schema-less FlexBuffers.
FILEs may be schemas (must end in .fbs), binary schemas (must end in .bfbs),
or JSON files (conforming to preceding schema). FILEs after the -- must be
binary flatbuffer format files.
Output files are named using the base file name of the input,
and written to the current directory or the path given by -o.
example: ./flatc -c -b schema1.fbs schema2.fbs data.json
output json command = ./flatc -t --strict-json --defaults-json -o . ./schema.fbs -- pose_detection.tflite
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6608, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5859, in main
    ops, json_tensor_details, op_types, full_json = parse_json(jsonfile_path)
  File "/usr/local/bin/tflite2tensorflow", line 247, in parse_json
    j = json.load(open(jsonfile_path))
FileNotFoundError: [Errno 2] No such file or directory: './pose_detection.json'

Source code for simple inference testing code

No response
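A plausible explanation, given that the same command works outside Docker: docker run without a volume mount cannot see files on the host, so flatc inside the container finds no pose_detection.tflite ("unable to load file"). Mounting the working directory, as the repository's other examples do, should make the file visible (flatc/schema paths follow the container layout used elsewhere on this page):

sudo docker run -it --rm -v `pwd`:/home/user/workdir a670fceb0fe1 tflite2tensorflow --model_path pose_detection.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb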

Convert palm_detection_lite.tflite : ValueError: Tensor data is null. Run allocate_tensors() first

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

OpenVINO

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection_lite.tflite
https://github.com/google/mediapipe/blob/master/mediapipe/modules/palm_detection/palm_detection_full.tflite

Description

Hi @PINTO0309!
I have just noticed that Google updated the palm detection models a few weeks ago (I missed them because they do not appear in the release notes). Now there are 2 models (lite and full). Naturally I would like to test them :-)
So after running docker pull ghcr.io/pinto0309/tflite2tensorflow:latest,
when running:
tflite2tensorflow --model_path palm_detection_lite.tflite --model_output_path palm_detection_lite --flatc_path ../../flatc --schema_path ../../schema.fbs --output_pb --optimizing_for_openvino_and_myriad --rigorous_optimization_for_myriad
I get: ValueError: Tensor data is null. Run allocate_tensors() first

Relevant Log Output

...
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: DEQUANTIZE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [68],
 'opcode_index': 9,
 'outputs': [319]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: PRELU
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [121, 319],
 'opcode_index': 1,
 'outputs': [122]}
Traceback (most recent call last):
  File "/usr/local/bin/tflite2tensorflow", line 6402, in <module>
    main()
  File "/usr/local/bin/tflite2tensorflow", line 5759, in main
    TFLite_Detection_PostProcess_flg = make_graph(
  File "/usr/local/bin/tflite2tensorflow", line 877, in make_graph
    alpha_array = interpreter.get_tensor(alpha_detail['index'])
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/interpreter.py", line 858, in get_tensor
    return self._interpreter.GetTensor(tensor_index)
ValueError: Tensor data is null. Run allocate_tensors() first

Source code for simple inference testing code

No response
