Comments (59)
@feranick @gasgallo @alexanderfrey
The fix is to just upgrade the tflite_runtime package!!
```
$ pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
Collecting tflite-runtime==2.1.0 from https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
  Downloading https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl (1.9MB)
    100% |████████████████████████████████| 1.9MB 202kB/s
Requirement already satisfied: numpy>=1.12.1 in /usr/lib/python3/dist-packages (from tflite-runtime==2.1.0) (1.16.2)
Installing collected packages: tflite-runtime
  Found existing installation: tflite-runtime 1.15.0
    Uninstalling tflite-runtime-1.15.0:
      Successfully uninstalled tflite-runtime-1.15.0
Successfully installed tflite-runtime-2.1.0
```
```
$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
13.5ms
3.1ms
3.1ms
3.0ms
3.1ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562
```

---

@feranick @gasgallo @alexanderfrey
Update: we'll fix the google-coral/tflite repo to be aligned with the new changes soon. Stay tuned for updates on the new release on our /news page!

---

@feranick I'm aware, we are working to get all of these fixed!
Thanks

---

Sorry for the delayed responses, everyone; we've been getting tons of issues. But if you are planning to build your own tflite_runtime package, or even the tensorflow pip package, please use this commit!

---

I confirm that version 13.0 of the runtime (libedgetpu1) fails on all models that previously worked on runtime version 12.1, regardless of the TF version the initial tflite models were generated with or the edgetpu-compiler used for the conversion.
As a note: the older runtimes are no longer available, so inference is currently broken unless one avoids updating to the new runtime. That is hard to do, as the new versions are currently pushed through the apt upgrade process.

---

I confirm @Namburger's solution works. This should close the issue. Thanks very much!

---

@feranick @Namburger Problem is solved once you install the updated tflite_runtime package 2.1.0 as noted in the news: https://coral.ai/news/updates-01-2020/
That was an obvious one...
Thanks guys for the great work, can't wait to see the new hardware 👍
Alexander

---

Thanks. All seems to be working and this issue can be closed.

---

My fault, problem solved!
I was using:

```python
tf.lite.Interpreter(model_path, experimental_delegates=[tf.lite.experimental.load_delegate("libedgetpu.so.1")])
```

instead of the tflite_runtime equivalent:

```python
from tflite_runtime.interpreter import load_delegate
from tflite_runtime.interpreter import Interpreter

Interpreter(model_path, experimental_delegates=[load_delegate("libedgetpu.so.1")])
```

---

@Namburger is there anything special about the way this tflite_runtime is built? I'm trying to build tflite_runtime from master because I need a recent fix (tensorflow/tensorflow#33691). I'm using the following:

```
tensorflow/lite/tools/pip_package/build_pip_package.sh
```

But the resulting pip package gives me the following error when running inference:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 39898280Node number 7 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

With the tflite_runtime you mentioned, the one from here, it works, but the resizing has the bug, so the output is incorrect. I'm guessing the version you posted is based on branch v2.1.0? How can I make a pip wheel based on master?

---

@gasgallo which model are you using? Could you attach it here?

---

@Namburger it's the model in the classification example: https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite

---

Same here!
This is due to the new edge api version that got released last night. If I'm not mistaken it's a major release (3.0) and it seems to have broken the current models...
Sad thing is that I cannot find anything about that version on the web, but I think we will soon see some announcement.

```
mendel@lime-wasp:~/google-coral/examples-camera/opencv$ sudo dpkg -l | grep edge
ii  edgetpudemo              3-1   all    Edge TPU demo script
ii  libedgetpu1-std:arm64    13.0  arm64  Support library for Edge TPU
ii  python3-edgetpu          13.0  arm64  Edge TPU Python API
ii  python3-edgetpuvision    6-1   arm64  EdgeTPU camera API
```

best

---

> Same here! This is due to the new edge api version that got released last night. […]
So the same model runs fine on the dev board because I didn't update the system, while on my PC (where I did update the system) the USB accelerator fails on the same model?

---

@alexanderfrey thanks for the catch, the release notes are coming soon. This doesn't sound good though, I'll check with the team on this today.
@gasgallo did you recently run an apt upgrade on your PC? For now, could you use this classify_image script to see if it works? Can you also share the output of this command?

```
$ dpkg -l | grep edgetpu
```

---

@Namburger I did upgrade, just today. I'll give the new script a try as soon as I get back to my desk. In the meantime, thanks for the help!

---

@alexanderfrey @gasgallo Just confirmed that the edgetpu API is still working:

```
$ python3 classify_image.py --model ../test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --label ../test_data/inat_bird_labels.txt --image ../test_data/parrot.jpg
---------------------------
Ara macao (Scarlet Macaw)
Score : 0.61328125
---------------------------
Platycercus elegans (Crimson Rosella)
Score : 0.15234375
```

It looks like the tflite_runtime API is broken with the upcoming release; I was able to reproduce the issue. I'll check with the team today.

---

@Namburger Sorry for the bad formatting but here it is again:

```
ii  edgetpudemo              3-1   all    Edge TPU demo script
ii  libedgetpu1-std:arm64    13.0  arm64  Support library for Edge TPU
ii  python3-edgetpu          13.0  arm64  Edge TPU Python API
ii  python3-edgetpuvision    6-1   arm64  EdgeTPU camera API
```

---

@feranick correct! However, inference is still working with the edgetpu API even with the upgrade. Some examples are here.
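For reference, a minimal sketch of inference through that API (paths assumed from the test_data files used elsewhere in this thread; exact result fields may differ between API versions):

```python
from PIL import Image
from edgetpu.classification.engine import ClassificationEngine

# Model and image paths are illustrative, taken from the thread's examples.
engine = ClassificationEngine("../test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite")
image = Image.open("../test_data/parrot.jpg")

# classify_with_image resizes the input internally and returns
# (label_id, score) pairs for the top_k results.
for label_id, score in engine.classify_with_image(image, top_k=3):
    print(label_id, score)
```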

---

@Namburger Yes, sorry, you are correct. I don't use the edgetpu APIs, but the tflite delegate function to call the libedgetpu.so.1 library directly.

---

Thanks, @Namburger.

---

FYI: the default PyPI repositories still have tflite_runtime at version 1.14.0.

---

Thanks to @Namburger for the help and to @alexanderfrey @feranick for participating.
I confirm that using tflite-runtime-2.1.0 fixed the issue.

---

I'm having the same error; I updated to runtime 2.1, but that didn't fix it.
Error:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

Link & details: bitsy-ai/rpi-object-tracking#13

---

@Martin2kid can you share the output of
$ dpkg -l | grep edgetpu

---

Nam Vu,
Here is the terminal output:

```
pi@raspberrypi:~ $ dpkg -l | grep edgetpu
ii  edgetpu-examples        13.0  all    Example code for Edge TPU Python API
ii  libedgetpu1-std:armhf   13.0  armhf  Support library for Edge TPU
ii  python3-edgetpu         13.0  armhf  Edge TPU Python API
pi@raspberrypi:~ $
```

Pi 4, Picam V2, 4GB RAM, 64GB SD
Debian Buster, clean install (Buster, Python 3.7 plus TensorFlow 2.0 & runtime 2.1)

---

I'm getting the same error as well after following the instructions here.
Hardware: Raspberry Pi 4 4GB
```
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
$ dpkg -l | grep edgetpu
rc  libedgetpu1-max:armhf  13.0  armhf  Support library for Edge TPU
ii  libedgetpu1-std:armhf  13.0  armhf  Support library for Edge TPU
$ pip install tflite-runtime==2.1.0
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Requirement already satisfied: tflite-runtime==2.1.0 in ./tflite1-env/lib/python3.7/site-packages (2.1.0)
Requirement already satisfied: numpy>=1.12.1 in ./tflite1-env/lib/python3.7/site-packages (from tflite-runtime==2.1.0) (1.18.1)
```
Exact error message:
```
$ sh run_model_tpu.sh
INFO: Initialized TensorFlow Lite runtime.
/home/pi/Documents/tflite2/tflite1/Sample_TFLite_model/edgetpu.tflite
Traceback (most recent call last):
  File "TFLite_detection_webcam.py", line 140, in <module>
    interpreter.allocate_tensors()
  File "/home/pi/Documents/tflite2/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/Documents/tflite2/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 6488064Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

---

Same error here!
Using a Raspberry Pi 4 4GB.
I already installed version 2.1.0 of tflite-runtime.
```
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
$ dpkg -l | grep edgetpu
ii  libedgetpu1-std:armhf  13.0  armhf  Support library for Edge TPU
ii  python3-edgetpu        13.0  armhf  Edge TPU Python API
```
And I still get the error:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

---

Hi @Martin2kid,
Your issue is similar to @antoniobertob's. That repo is using a tf.lite.Interpreter (demonstrated here) instead of tflite_runtime, and that's why upgrading tflite_runtime didn't help. I've also answered here:
bitsy-ai/rpi-object-tracking#13 (comment)
@wnorris could you share a link to the code? I'm suspecting that your issue is the same as above.
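For reference, the change amounts to something like this minimal sketch (the model path here is a placeholder):

```python
# Use the standalone tflite_runtime package instead of the interpreter
# bundled with the full tensorflow package.
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    "edgetpu.tflite",  # placeholder model path
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
```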

---

@hgaiser in that case, could you use the latest tensorflow pip package for your fix to do training, and use tflite_runtime for inference?
The tflite_runtime builds are all documented here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/pip_package

---

@Namburger training has been done successfully, converting to tflite works, and compiling for the TPU also works. If I use the tflite model and the tflite_runtime that I created, then everything works fine. If I then delegate to the TPU, I get the previously mentioned error. I'm using the script from the directory that you linked to compile tflite_runtime.
For some reason my compiled tflite_runtime doesn't work with the TPU device. I am using the latest tensorflow master to create the pip package. Do you know if they just run the build_pip_package.sh command to build the pip package, or do they do something else? Note that I also had undefined references to libatomic, so I added that in the makefile for tflite.

---

I have tried different versions with the parrot demo:

- The one online works with the demo, but gives incorrect output for my model (as expected).
- Building the pip package on a Raspberry Pi requires a modification to link to `-latomic` and gives the error `RuntimeError: Internal: :159 batches * single_input_size != input->bytes (150528 != 12544)Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.` when running the parrot demo.
- Building the pip package on my laptop using `make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi docker-build` and running it on the Raspberry Pi gives the same error as when compiled on the Raspberry Pi, but strangely enough I don't need to add a link to `-latomic`.

---

I have the same error after running `python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model --edgetpu` from the https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md example.
Error: `RuntimeError: Internal: Unsupported data type in custom op handler: 6488064Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.`

---

Hi @Namburger,
Can you tell me the SHA1 of the commit this tflite_runtime wheel was built from?
Thank you,

---

@Namburger thank you for the information, I managed to (successfully) build my own pip package. Unfortunately it still has my bug, but I will create a new issue for that.

---

@Namburger thank you for the information. I succeeded in building my own pip package using this commit, tensorflow/tensorflow@d855adf, and it worked fine for me.
Though, I have a question: I can see that this commit was pulled from the master branch. Do you have any idea which patches are the most important ones I'll need if I want to base my build on the r2.1 release? By digging into the patches on the master branch, I managed to identify an interesting one (SHA1 = 73bb115c2215d30a8e21565aabd73d98eb4f0b8f). After adding it to r2.1, I can build the pip package but can't run inference because of the same error:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

By adding this patch (SHA1 = c09aa9b167dc477c803a28e10c9083b7e0378c84), I'm getting a different error:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 343146510Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

Thank you for your answer.

---

Hi, I am trying to get the Edge TPU working on a Raspberry Pi 4 and I am still having the same issue as above, despite updating to tflite-runtime 2.1.0.post1 and making sure I use tflite_runtime instead of tf.lite.Interpreter.
Error message:

```
(tflite1) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=mobilenet_quantized_1205 --edgetpu
INFO: Initialized TensorFlow Lite runtime.
/home/pi/tflite1/mobilenet_quantized_1205/edgetpu.tflite
Traceback (most recent call last):
  File "TFLite_detection_webcam.py", line 140, in <module>
    interpreter.allocate_tensors()
  File "/home/pi/.virtualenvs/tflite1/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/.virtualenvs/tflite1/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

Output of `dpkg -l | grep edgetpu`:

```
(tflite1) pi@raspberrypi:~/tflite1 $ dpkg -l | grep edgetpu
ii  libedgetpu1-std:armhf  13.0  armhf  Support library for Edge TPU
```

Any advice welcome; I'm new to this, so it could be I've made a blind error somewhere.
Thanks
*Following this tutorial: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md#step-1e-run-the-tensorflow-lite-model

---

@Siggi1988, @tomarnison figured out the problem here; it's not an issue with the Pi 4 but rather the code. Essentially, this block of code is looking to load tensorflow.lite.interpreter instead of tflite_runtime.interpreter.

---

Hi,
I have the same issue when loading a model using the C++ API.
I built TensorFlow Lite from source on a Raspberry Pi using build_rpi_lib.sh, found in the TensorFlow repo.
The output of `dpkg -l | grep edgetpu` is:

```
ii  libedgetpu1-std:armhf  13.0  armhf  Support library for Edge TPU
ii  python3-edgetpu        13.0  armhf  Edge TPU Python API
```

When I compile and run minimal.cc I get the following error:

```
ERROR: Internal: Unsupported data type in custom op handler: 0
ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.
Failed to allocate tensors.
```

I tried compiling TensorFlow Lite from the commit @Namburger pointed to here, but minimal.cc wouldn't compile due to other errors.

---

Same issue on a Pi 4:

```
  File "/home/thys/.virtualenvs/tpuparty/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 242, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/thys/.virtualenvs/tpuparty/lib/python3.7/site-packages/tflite_runtime/interpreter_wrapper.py", line 115, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type: 21607392Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

---

Any solution for the C++ API, @Namburger?

---

@sthysel could you give more details? This issue has been solved, so please follow that and check all your installed packages. Especially since you are working in a virtualenv, I would check very carefully whether you are importing the correct package.
@yoyomolinas sorry, I have a lot going on and haven't been able to check out the issues with minimal. Could you use the basic engine for now? I'll try to get around to minimal when I can.
The basic engine is a wrapper around basic_engine_native, which wraps around the core tensorflow-lite C++ API, so the code in basic_engine_native must have the changes that make it compatible with the current runtime.
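As a rough, untested sketch of that route (assuming the python3-edgetpu package is installed and a compiled Edge TPU model is at hand; the return values are from my memory of this API version):

```python
import numpy as np
from edgetpu.basic.basic_engine import BasicEngine

engine = BasicEngine("test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite")

# Feed a dummy flattened uint8 tensor of the size the model expects,
# just to confirm that tensors allocate and the delegate prepares.
dummy_input = np.zeros(engine.required_input_array_size(), dtype=np.uint8)
latency_ms, raw_output = engine.run_inference(dummy_input)
print("latency: %.1f ms, %d output values" % (latency_ms, len(raw_output)))
```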

---

@Namburger Yeah, so I repeated the test on my regular dev machine and had the same issue. This is my test code, so yes, same issue as others have reported. Note I wrote that some months back, so bitrot in the ever-evolving tensorflow/tpu world is going to happen - I appreciate that.
So yes, I can 'fix' the issue by uninstalling tensorflow and installing the platform-specific tflite_runtime wheel from the google link. The code fails with tensorflow now though, and sometimes I need that. tflite_runtime is really a fallback for me; it seems proper tensorflow 2.1.0 is broken at this stage?
How is one supposed to write a common setup.py that will include both architectures in the wheel, so the correct runtime is installed when targeting arm or x86? Especially as none of these wheels are available on PyPI. I can only seem to find 'optional' support in setuptools. I understand this is a packaging problem, but is there an obvious way of going about this?

---

@sthysel maybe adding a --tpu flag?
For instance:

```python
import importlib.util

pkg = importlib.util.find_spec('tensorflow')
if pkg is not None:
    if flag.tpu:  # have tensorflow but want to use tflite_runtime instead
        from tflite_runtime.interpreter import Interpreter
        from tflite_runtime.interpreter import load_delegate
    else:  # want to use tensorflow no matter what
        from tensorflow.lite.python.interpreter import Interpreter
        from tensorflow.lite.python.interpreter import load_delegate
else:  # tensorflow isn't installed
    # just use tflite_runtime here, or add some other check
    from tflite_runtime.interpreter import Interpreter
    from tflite_runtime.interpreter import load_delegate
```

---

> Could you use the basic engine for now, I'll try to get around to minimal when I can. […]
Tried; still the same error:

```
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0309 23:55:30.264824 12728 basic_engine.cc:8] Internal: Unsupported data type in custom op handler: 32101768Node number 0 (edgetpu-custom-op) failed to prepare. Failed to allocate tensors.
*** Check failure stack trace: ***
Aborted
```

Is GCC 8.3 a probable cause? I apparently cannot compile abseil-cpp with GCC 9.1, so I had to revert back to 8.3.
The basic_engine was last updated 4 months ago; I don't think it supports recent TensorFlow Lite.

---

Is the Edge TPU shared library (.so file) with runtime version 12 accessible somewhere?

---

Hmm, I'm actually very confused now; it seems to work for me, but I'm running on my x86 machine with an accelerator:

```
./out/k8/examples/minimal test_data/inception_v1_224_quant_edgetpu.tflite test_data/resized_cat.bmp
[Image analysis] max value index: 286 value: 0.414062
./out/k8/examples/classify_image --model_path test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels_path test_data/inat_bird_labels.txt --image_path ./test_data/bird.bmp
scores.size() 965
top_k 3
threshold 0
---------------------------
Poecile atricapillus (Black-capped Chickadee)
Score: 0.535156
---------------------------
Poecile carolinensis (Carolina Chickadee)
Score: 0.121094
---------------------------
Poecile rufescens (Chestnut-backed Chickadee)
Score: 0.0273438
```

@yoyomolinas you can get back runtime version 12 from this repo at an older commit, and use this install script.

---

I'm on an RPi 4; it made no difference reverting back to runtime 12. How did you compile your tflite static library, @Namburger?

---

> @sthysel maybe adding a --tpu flag? For instance: […]
Apologies, I guess my question was not clear. The above implies you already have the platform-specific tflite_runtime installed. My question is how I would automatically install the platform-correct tflite_runtime from here using standard setuptools. Say I have TPUParty (my toy app) packaged up as a tool installable from PyPI. A user installs it on arm, and the TPUParty wheel now installs the arm wheel; installing on her Intel Linux box, the x86 wheel gets installed. I don't care about other platforms, but the same would hold for mac and windows. As I said, it's a wheel-packaging question. At this stage it's not clear to me how I would productionize a tool depending on these platform-specific TPU drivers using a standard `pip install tpuparty` and have it 'just work'.
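The closest thing I've found is a PEP 508 direct-reference sketch like the one below (untested; the wheel URLs follow the pattern of the Coral link earlier in this thread and assume cp37). The catch is that PyPI rejects packages whose dependencies are direct URLs, so it only helps if the package is distributed outside PyPI:

```python
# setup.py - hypothetical packaging sketch for "tpuparty"
from setuptools import setup

CORAL = "https://dl.google.com/coral/python"

setup(
    name="tpuparty",
    version="0.1.0",
    install_requires=[
        # Environment markers pick the platform-specific wheel at install time.
        'tflite-runtime @ %s/tflite_runtime-2.1.0-cp37-cp37m-linux_armv7l.whl ; platform_machine == "armv7l"' % CORAL,
        'tflite-runtime @ %s/tflite_runtime-2.1.0-cp37-cp37m-linux_x86_64.whl ; platform_machine == "x86_64"' % CORAL,
    ],
)
```

Whether pip resolves these correctly depends on the pip version, so a conditional in setup.py keyed on platform.machine() might be an (uglier) fallback.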

---

I can confirm that

```
F0309 23:55:30.264824 12728 basic_engine.cc:8] Internal: Unsupported data type in custom op handler: 32101768Node number 0 (edgetpu-custom-op) failed to prepare. Failed to allocate tensors.
*** Check failure stack trace: ***
Aborted
```

occurs when compiling the most recent TensorFlow Lite version (the one found on the master branch).
My workaround is to revert back to this tensorflow commit, manually patch it to match this, and compile using build_rpi_lib.sh (it's probably cleaner to revert straight to this commit, but I haven't tried that yet).
Thanks @Namburger!

---

Hello,
Where can I find the debian packages for the Edge TPU in version 12.1? Currently the debian repository contains only the 13.0 packages; the 12.1 packages are no longer available. I need both the runtime and the compiler.
Thanks

---

@stephldp the debian package is no longer available, but you can download the tarball here; it comes with an install/uninstall script.
The compiler binary can be found here; just revert to an older commit.

---

@Namburger, not as easy as an `apt install <...>`, but I will try the alternative packages. Thanks for the help!
Note that it may be easier to append to the debian repository definition instead of overriding it when releasing a new version of the packages.

---

Hello everyone,
I am testing on Darwin with TensorFlow 2.0 and the interpreter from tensorflow.lite as tflite.
It still fails for me on the nightly, tf-nightly==2.2.0.dev20200319, with a message similar to the above:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

---

Will this be fixed in an upcoming stable release like 2.2.1 or 2.3.0?
I'm currently using tensorflow/tensorflow@d855adf plus a small patch to make libtensorflow-lite.a build as a workaround, but it would be good to have a stable release.
I need this to use the Edge TPU with C++.

---

@hardsetting yes it will, we'll be building libedgetpu with a newer version of tensorflow.
Here is a C++ build example:
https://github.com/Namburger/edgetpu-minimal-example

---

@lc0 Hope you already figured this out, but basically, please use the tflite_runtime package: https://www.tensorflow.org/lite/guide/python

---

Hey there!
Not sure if I should open a separate issue, but since the error message is the same I thought I should ask here first.
I'm using the Coral Mini PCIe Accelerator with an aarch64 Debian Buster based system. To make things complicated, I'm running everything in an Ubuntu 20.04 LTS docker container.
I managed to install the coral drivers, so `ls /dev/apex_0` works in the docker container, and I've built and installed a tflite_runtime wheel for my system according to the tensorflow repo, like this.
These examples work fine, but when I try to run the classification example from this repo with the mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite model I get the same error:

```
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
```

Running the mobilenet_v2_1.0_224_inat_bird_quant.tflite model works fine.
I tried using the latest tflite_runtime version as well as the 2.1.0 version.
The output of `dpkg -l | grep edgetpu` is:

```
ii  libedgetpu1-std:arm64  14.1  arm64  Support library for Edge TPU
```

I did create and install the wheel file for the Edge TPU Python API, so I'm not sure why it doesn't show up.

---

@ItsMeTheBee
Hi, yes, please open a new issue.
I believe this is all the same problem, but at this point the issue has become too long for other users to look at and reference.
Please show full logs of the failures, plus the output of these (inside the container):

```
uname -a
cat /etc/os-release
dpkg -l | grep edgetpu
python3 -c 'print(__import__("tflite_runtime").__version__)'
```

Also, how you built and ran the docker container would be helpful.