
adventures-in-tensorflow-lite's Introduction

My name is Sayak Paul! 👾

  • 🔭 I work on 🧨 diffusion models at Hugging Face 🤗
  • 🌱 I'm interested in the area of representation learning (self-supervision, semi-supervision, model robustness).
  • 👯 I'm always open to meaningful collaborations.
  • ⚡ Fun fact: I love watching crime and action thrillers (The Silence of the Lambs being an all-time favorite).
  • 🙃 Recipient of the Google Open Source Peer Bonus Award (2020, 2021, and 2022). Also received the TensorFlow Top Contributor Award 2021.
  • 📫 More details: sayak.dev.

The following are some of my favorite repositories that I have contributed to and/or still contribute to.

adventures-in-tensorflow-lite's People

Contributors

aimbot6120, sayakpaul

adventures-in-tensorflow-lite's Issues

Cannot convert to TFLite

Hi Mr. @sayakpaul
Thank you for the example of converting to a TFLite model.
I have successfully converted the pretrained DeepLabv3 models to TFLite. One suggestion: if you use output_arrays=['ArgMax'] instead of ['ResizeBilinear_2'], the output shape of the TFLite model is 513 instead of 65.
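
For reference, a minimal sketch of that conversion path (assuming the TF 1.x converter API; the tensor names and the 513x513 input shape are taken from this thread and the stock DeepLabv3 export):

import tensorflow as tf

# TF 1.x-style conversion from a frozen DeepLabv3 graph.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_inference_graph.pb",
    input_arrays=["sub_7"],
    output_arrays=["ArgMax"],  # instead of ["ResizeBilinear_2"]
    input_shapes={"sub_7": [1, 513, 513, 3]},
)
tflite_model = converter.convert()
with open("deeplabv3.tflite", "wb") as f:
    f.write(tflite_model)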

I ran the local_test.sh file from the original repo, and frozen_inference_graph.pb cannot be converted to TFLite with your method (the frozen_inference_graph.pb file ran successfully with deeplab_demo.ipynb). When converting to TFLite, an error occurs: "ValueError: Invalid tensors 'sub_7' were found."

Do you have any advice? Thank you.
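
One way to check which tensor names actually exist in a retrained frozen graph (a sketch using the TF 1.x GraphDef API; the .pb path is the one from this thread, and an error like the above usually means the retrained export uses different node names):

import tensorflow as tf

# Load the frozen graph and list its node names to find valid
# input_arrays/output_arrays for the converter.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)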

different color for each instance

Hello.
Thank you for your great DeepLab usage notebooks.
I would like to know if there is any way I can color each instance separately.

Thank you in advance.
Bill
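
A possible direction (not from the thread itself): DeepLab produces a semantic mask, so separate instances of the same class can only be approximated, for example by coloring connected components with scipy.ndimage.label. A sketch, assuming seg_map is the (H, W) array of per-pixel class ids:

import numpy as np
from scipy import ndimage

def color_instances(seg_map, background=0, seed=0):
    # Approximate instances as connected components within each class
    # and assign each component a random color.
    rng = np.random.default_rng(seed)
    colored = np.zeros(seg_map.shape + (3,), dtype=np.uint8)
    for cls in np.unique(seg_map):
        if cls == background:
            continue
        components, count = ndimage.label(seg_map == cls)
        for i in range(1, count + 1):
            colored[components == i] = rng.integers(0, 256, size=3)
    return colored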

No model size reduction in the TFLite model with integer quantization

Hi @sayakpaul

Thanks for the detailed conversion examples.

I have trained a PyTorch model, which I converted to Keras using the pytorch2keras lib.

I am using this Keras model to convert to TFLite, and I want to run the TFLite model on Coral devices.

Things I noticed:

  • Keras model size: 57.6 MB
  • With dynamic range quantization, the generated TFLite model is 15 MB
  • With integer-only quantization, the generated TFLite model is also 15 MB

Ideally, we should be able to reduce the model size even further as we convert fp32 to int8.

Could you point out if I am missing anything here?

I have shared my conversion notebook and Keras model: keras_model

data
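
For context, a sketch of the two conversion paths with the standard TFLiteConverter API (keras_model is the loaded model from the thread; the input shape and calibration data are placeholders). Note that dynamic range quantization already stores weights as int8, so full integer quantization mainly adds quantized activations rather than shrinking the file much further:

import numpy as np
import tensorflow as tf

# Dynamic range quantization: int8 weights, float activations.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
dynamic_range_model = converter.convert()

# Full integer quantization: int8 weights and activations; needs
# a representative dataset for calibration.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]  # placeholder inputs

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # Coral / Edge TPU expects integer I/O
converter.inference_output_type = tf.uint8
integer_only_model = converter.convert()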

Full Integer Deployment

Sorry, but I have taken a look at your example. It is good to see complete examples like this. However, the title seems misleading when the 8-bit integer model is not actually fully 8-bit: when you leave the optimizations at the default, some layers will not be quantized.
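
One way to see which tensors were left in float after conversion (a sketch using the standard Interpreter API; "model.tflite" is a placeholder path):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Print any tensors that are still float32 after conversion.
for detail in interpreter.get_tensor_details():
    if detail["dtype"] == np.float32:
        print(detail["index"], detail["name"], detail["dtype"])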

quantization_strategy=="int8" does not quantize the model

Hi Paul!

I have a question about MobileDet_Conversion_TFLite.ipynb, in the "Quantize and serialize" section.

Using model: model_checkpoint_name = "ssd_mobiledet_cpu_coco"

Is my understanding correct? Using:
original_precision = "regular"
quantization_strategy = "int8"

We expect that the converted model is a quantized model, with input as uint8 (converter.inference_input_type = tf.uint8).
However, the resulting model input is not quantized:
input_type = interpreter.get_input_details()[0]['dtype']
print('input_type: ' + str(input_type))
input_type: <class 'numpy.float32'>

Moreover, checking the resulting tflite file using netron.app shows that the model is missing the Quantize step after normalized_input_image_tensor.

Is quantization_strategy=="int8" missing converter.representative_dataset = representative_dataset_gen?
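
For reference, a minimal sketch of the wiring that makes inference_input_type = tf.uint8 take effect (full integer quantization requires a representative dataset; the generator name follows the one mentioned above, and calibration_images is a placeholder for the converter already set up in the notebook):

import tensorflow as tf

def representative_dataset_gen():
    # Yield a few hundred preprocessed samples shaped like the model input.
    for image in calibration_images:  # placeholder iterable of float32 arrays
        yield [image[tf.newaxis, ...]]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8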
