
codelab-mlkit-android's Introduction

Codelabs for ML Kit

This repository contains the code for the ML Kit codelabs.

Introduction

In these codelabs, you will build an Android app that uses various ML Kit features to recognize text and detect facial features. You will learn how to use the built-in on-device Text Recognition API and the face contour API.

Pre-requisites

None.

Getting Started

Open the final/ folder in Android Studio to see the final product. Visit the Google codelabs site to follow along with the guided steps.

Screenshots

Support

If you've found an error in this sample, please file an issue: https://github.com/googlecodelabs/mlkit-android/issues

Patches are encouraged, and may be submitted by forking this project and submitting a pull request through GitHub.

License

Copyright 2018 Google, Inc.

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

codelab-mlkit-android's People

Contributors

aliazaz, calren, dh--, gkaldev, khanhlvg, owahltinez, sheepmaster, ulukaya


codelab-mlkit-android's Issues

Step 8: Classification still not working

Hi, I just want to share my case: I followed all the steps, but classification didn't work and no errors were logged.
Adding this to build.gradle (app), inside the android {} block, is an extra step that is not mentioned in the codelab instructions:
aaptOptions { noCompress "tflite" }
This solved my problem and maybe yours too :)

The image text recognition library does not provide a non-GMS version; please provide one, as more than a billion people in China need it very much.

I have checked all of Google's ML Kit libraries, and only the image text recognition library does not provide a non-GMS version and requires GMS. This is really unfriendly to the millions of developers and more than a billion users in China who also need this library, because Chinese users' phones basically cannot use GMS, so please provide a non-GMS version.

Error while setting up the model

Hey, I am getting this error: "Error while setting up the model".
Should I add some permission to the manifest, or something else? I don't know what to do; please guide me.

Package name com.google.firebase.codelab.mlkit_custommodel caused problem

The package name "com.google.firebase.codelab.mlkit_custommodel" that we are asked to enter at step 4 should just be "com.google.firebase.codelab.mlkit". In the source code we downloaded there is no custommodel directory anymore, so if we enter com.google.firebase.codelab.mlkit_custommodel as the package name, the package name generated in google_services.json causes a compile error during the build because the custommodel directory cannot be found.

Something wrong

I've tried following the codelab steps.
In Step 6 there is a problem (screenshot "Screenshot (28)" attached).

I've also tried the solution suggested by Android Studio, changing from TextRecognizer to LanguageIdentification, but it does not work.

Please, can someone help me solve this problem?

Step 7 - error when defining our FirebaseModelInterpreter

I'm working on getting this step completed, but when copying the first code snippet into the main activity, Android Studio gives me "cannot resolve symbol 'FirebaseModelInterpreter'". I've opened the final directory and the completed app loaded successfully on my phone.

I did notice the final app has more imports, so I tried to copy and paste them into the starter app, without success. Any advice would be appreciated.
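If the symbol still cannot be resolved after copying the imports over, one likely cause (an assumption on my part, not something the codelab states) is that the starter app's build.gradle is missing the model-interpreter dependency that the final app declares, so the import has nothing to resolve against. A minimal sketch of what MainActivity.java would need:

    // Hypothetical check: this class ships in the firebase-ml-model-interpreter
    // artifact, so app/build.gradle must declare something like
    //   implementation 'com.google.firebase:firebase-ml-model-interpreter:<version>'
    // (use the version the codelab specifies) before this import can resolve.
    import com.google.firebase.ml.custom.FirebaseModelInterpreter;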

Pre-Download ML Models

Hi,
Is there any possibility to pre-download the ML models when the app is installed? It is taking a lot of time to download and showing the error "Waiting for text recognition model to be downloaded".
Or is there any way to show a progress dialog while the ML model is downloading?
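There is no step in the codelab for this, but a common workaround is to trigger the download early by running one throwaway recognition at app start and driving a progress dialog off its result. A rough sketch, assuming a recent text-recognition dependency where getClient() takes an options object (older 16.0.0 releases used a no-argument getClient()); the bitmap, listener bodies, and dialog handling are placeholders:

    import android.graphics.Bitmap;
    import com.google.mlkit.vision.common.InputImage;
    import com.google.mlkit.vision.text.TextRecognition;
    import com.google.mlkit.vision.text.TextRecognizer;
    import com.google.mlkit.vision.text.latin.TextRecognizerOptions;

    // Warm-up call at startup: processing any image makes ML Kit start fetching
    // the text recognition model, so it is usually ready by the time the user
    // taps "Find Text". The tiny blank bitmap is only there to kick off the task.
    TextRecognizer recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS);
    Bitmap warmUp = Bitmap.createBitmap(32, 32, Bitmap.Config.ARGB_8888);
    recognizer.process(InputImage.fromBitmap(warmUp, 0))
            .addOnSuccessListener(text -> { /* model is available: hide the progress dialog */ })
            .addOnFailureListener(e -> { /* still downloading or failed: keep the dialog, retry later */ });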

Code issue

All the code is in Kotlin, not Java!
It mentions a .java file, but the code is in Kotlin?!

Repo and codelab drift

Hello Google Codelab team,

This repo has drifted a bit, to the point where it needs a lot of fiddling to get functional. It looks like some of the imports and library versions are dated as well. It would be super helpful for future explorers to make things a bit simpler to get up and running.

Won't build

I downloaded the ML Kit sample and opened the final project, and it won't build because:

Execution failed for task ':app:processDebugGoogleServices'.

File google-services.json is missing. The Google Services Plugin cannot function without it.
Searched Location:
/Users/scott/Downloads/mlkit-android-master/custom-model/final/app/google-services.json
/Users/scott/Downloads/mlkit-android-master/custom-model/final/app/src/nullnull/google-services.json
/Users/scott/Downloads/mlkit-android-master/custom-model/final/app/src/debug/google-services.json
/Users/scott/Downloads/mlkit-android-master/custom-model/final/app/src/nullnullDebug/google-services.json
/Users/scott/Downloads/mlkit-android-master/custom-model/final/app/src/nullnull/debug/google-services.json
/Users/scott/Downloads/mlkit-android-master/custom-model/final/app/src/debug/nullnull/google-services.json

It looks like the docs direct me to Firebase for how to get the .json, and that was a "fail". I think this is a bad start for a newbie...

Step 4 - running on emulator shows multicoloured blocks

Hello - in Step 4 ('Run the starter app'), using Pixel_3a_API_30_x86 as the emulator, the screen displays multicoloured moving blocks (screenshot attached).


I tried wiping data from the emulator but had the same result.

Android Studio version 4.1.1

Any advice much appreciated - thanks!

Final variables

custom-model folder not found in the downloaded folder

Unpack the downloaded zip file. This will unpack a root folder (mlkit-android) with all of the resources you will need. For this codelab, you will only need the resources in the custom-model subdirectory.

custom-model does not exist in the (mlkit-android) folder; it is in mlkit-android-image_labeling, which is from another codelab ^^

text-recognition didn't work

When I click Find Text, Android Studio shows this error:

W/DynamiteModule: Local module descriptor class for com.google.android.gms.vision.dynamite.ocr not found.
I/DynamiteModule: Considering local module com.google.android.gms.vision.dynamite.ocr:0 and remote module com.google.android.gms.vision.dynamite.ocr:0
D/TextNativeHandle: Cannot load feature, fall back to load dynamite module.
W/DynamiteModule: Local module descriptor class for com.google.android.gms.vision.ocr not found.
I/DynamiteModule: Considering local module com.google.android.gms.vision.ocr:0 and remote module com.google.android.gms.vision.ocr:0
E/Vision: Error loading module com.google.android.gms.vision.ocr optional module true: com.google.android.gms.dynamite.DynamiteModule$LoadingException: No acceptable module found. Local version is 0 and remote version is 0.
W/System.err: com.google.mlkit.common.MlKitException: Waiting for the text recognition model to be downloaded. Please wait.
W/System.err: at com.google.mlkit.vision.text.internal.zzb.zza(com.google.android.gms:play-services-mlkit-text-recognition@@16.0.0:20)
at com.google.mlkit.vision.text.internal.zzb.run(com.google.android.gms:play-services-mlkit-text-recognition@@16.0.0:51)
at com.google.mlkit.vision.common.internal.MobileVisionBase.zza(com.google.mlkit:vision-common@@16.0.0:23)
W/System.err: at com.google.mlkit.vision.common.internal.zzb.call(com.google.mlkit:vision-common@@16.0.0)
at com.google.mlkit.common.sdkinternal.ModelResource.zza(com.google.mlkit:common@@16.0.0:26)
at com.google.mlkit.common.sdkinternal.zzn.call(com.google.mlkit:common@@16.0.0)
at com.google.mlkit.common.sdkinternal.zzm.run(com.google.mlkit:common@@16.0.0:5)
at com.google.mlkit.common.sdkinternal.zzq.run(com.google.mlkit:common@@16.0.0:3)
at android.os.Handler.handleCallback(Handler.java:751)
at android.os.Handler.dispatchMessage(Handler.java:95)
W/System.err: at com.google.android.gms.internal.mlkit_common.zzb.dispatchMessage(com.google.mlkit:common@@16.0.0:6)
at android.os.Looper.loop(Looper.java:154)
at android.os.HandlerThread.run(HandlerThread.java:61)

Step 6: Run on emulator doesn't work

When I try to run "FIND TEXT" on my 10.1 WXGA (Tablet) API 29 emulator, I get the following errors:
W/DynamiteModule: Local module descriptor class for com.google.android.gms.vision.ocr not found.
I/DynamiteModule: Considering local module com.google.android.gms.vision.ocr:0 and remote module com.google.android.gms.vision.ocr:0
E/Vision: Error loading module com.google.android.gms.vision.ocr optional module true: hg: No acceptable module found. Local version is 0 and remote version is 0.
I/Vision: Request download for engine ocr is a no-op because rate limiting
Fallback loading lorry ocr model while waiting for optional module to download.
I/Vision: Loading library libocr.so
I/Vision: libocr.so library load status: false
Request download for engine ocr is a no-op because rate limiting

Build fails

I get this error when I attempt to build:

    TextRecognizer recognizer = TextRecognition.getClient();
                                               ^

required: TextRecognizerOptionsInterface
found: no arguments
reason: actual and formal argument lists differ in length
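With recent versions of the text recognition dependency, getClient() no longer has a no-argument overload and expects an options object. A hedged sketch of the fix, assuming the default Latin-script recognizer (class names are from the releases that introduced TextRecognizerOptionsInterface, not from this repo):

    import com.google.mlkit.vision.text.TextRecognition;
    import com.google.mlkit.vision.text.TextRecognizer;
    import com.google.mlkit.vision.text.latin.TextRecognizerOptions;

    // Pass the default Latin-script options instead of calling getClient() with no arguments.
    TextRecognizer recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS);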

</androidx.constraintlayout.widget.ConstraintLayout> error

After I used this code I received an error. Can someone fix this? I don't know what to fix. Thanks...

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".pdfview">
    
    <com.github.barteksc.pdfviewer.PDFView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/pdfviewer"
        android:background="@color/white" />

</androidx.constraintlayout.widget.ConstraintLayout>

The Gradle dependency I used:

implementation 'androidx.constraintlayout:constraintlayout:2.0.4'

Text recognition doesn't work on Xiaomi

Device model: Xiaomi MI 8 SE
Android version: 10 (MIUI 11.0.3)
Play Store version: 20.4.18
Google Play Services 20.21.17

2020-06-23 18:01:02.165 14211-14553/com.google.codelab.mlkit E/Vision: Error loading module com.google.android.gms.vision.ocr optional module true: com.google.android.gms.dynamite.DynamiteModule$LoadingException: No acceptable module found. Local version is 0 and remote version is 0.
2020-06-23 18:01:02.166 14211-14553/com.google.codelab.mlkit W/e.codelab.mlki: Accessing hidden method Lsun/misc/Unsafe;->getObject(Ljava/lang/Object;J)Ljava/lang/Object; (greylist, linking, allowed)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err: com.google.mlkit.common.MlKitException: Waiting for the text recognition model to be downloaded. Please wait.
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.vision.text.internal.zzb.zza(com.google.android.gms:play-services-mlkit-text-recognition@@16.0.0:20)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.vision.text.internal.zzb.run(com.google.android.gms:play-services-mlkit-text-recognition@@16.0.0:51)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.vision.common.internal.MobileVisionBase.zza(com.google.mlkit:vision-common@@16.0.0:23)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.vision.common.internal.zzb.call(Unknown Source:4)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.common.sdkinternal.ModelResource.zza(com.google.mlkit:common@@16.0.0:26)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.common.sdkinternal.zzn.call(Unknown Source:6)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.common.sdkinternal.zzm.run(com.google.mlkit:common@@16.0.0:5)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.mlkit.common.sdkinternal.zzq.run(com.google.mlkit:common@@16.0.0:3)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at android.os.Handler.handleCallback(Handler.java:883)
2020-06-23 18:01:02.168 14211-14211/com.google.codelab.mlkit W/System.err:     at android.os.Handler.dispatchMessage(Handler.java:100)
2020-06-23 18:01:02.169 14211-14211/com.google.codelab.mlkit W/System.err:     at com.google.android.gms.internal.mlkit_common.zzb.dispatchMessage(com.google.mlkit:common@@16.0.0:6)
2020-06-23 18:01:02.169 14211-14211/com.google.codelab.mlkit W/System.err:     at android.os.Looper.loop(Looper.java:224)
2020-06-23 18:01:02.169 14211-14211/com.google.codelab.mlkit W/System.err:     at android.os.HandlerThread.run(HandlerThread.java:67)

Object detection and labeling avoid people classification?

My observation from testing ML Kit is that it does not seem to identify the category/object Person. Is this a bug, a model issue, or deliberate filtering? It seems the kit was not aimed at classifying objects of type Person, since there are (only :-() five object categories defined, at least from what I see in the class. By the way, why is the object ID integer mapping not documented in the tutorials? It took me a while to identify the category names, since the API currently returns only an integer.


Regardless of the fact that this is a rather limited capability for an object detection API to be useful, I am not sure why People are omitted.

See the details of the issue I have with ML Kit here:

https://stackoverflow.com/questions/57078901/ml-kit-for-firebase-object-recognition-categories

Migrating to Androidx causing the layout to crash

When I applied Migrate to AndroidX, it changed the layout to this:
<androidx.constraintlayout.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
Then compilation gave me these errors:
Caused by: android.view.InflateException: Binary XML file line #2: Binary XML file line #2: Error inflating class androidx.constraintlayout.ConstraintLayout
Caused by: android.view.InflateException: Binary XML file line #2: Error inflating class androidx.constraintlayout.ConstraintLayout
Caused by: java.lang.ClassNotFoundException: Didn't find class "androidx.constraintlayout.ConstraintLayout"

How to fix this?

Invalid app config

Hello,
Could you please guide me on why I am getting the below error when running the application on an emulator? I have tried the starter version as well as the final version of your code, and I get the same errors in both cases. I have attached some logs in this post.

2019-05-07 18:23:48.734 26113-2411/? E/aNative: Invalid app config
2019-05-07 18:23:48.747 26113-2411/? E/aNative: Invalid app config
2019-05-07 18:23:48.762 26113-2411/? E/aNative: Invalid app config
2019-05-07 18:23:54.325 14089-14156/? E/MAL-RDS: ( rds_ru_3gpp_status_ind, 1820) [RDS-E][RU][EVENT_RU_DM_3GPP_STATUS_IND] Invalid u43gpp_status:0x8 sim:1

When I click on any button I get the above-mentioned error in the logs, and the app is not able to recognize anything.

mobilenet_v1_1.0_224_quant.tflite vs mobilenet_v1_1.0_224.tflite for inference

In "Identify objects in images using custom machine learning models with ML Kit for Firebase" tutorial https://codelabs.developers.google.com/codelabs/mlkit-android-custom-model/index.html?index=..%2F..index#1

There is a step to unpack the downloaded zip file. This will unpack a root folder (mobilenet_v1_1.0_224_quant), inside which you will find the TensorFlow Lite custom model we will use in this codelab (mobilenet_v1_1.0_224_quant.tflite).

It looks like mobilenet_v1_1.0_224_quant.tflite runs inference with no problem. However, if I download mobilenet_v1_1.0_224.tflite from mobilenet_v1_1.0_224 https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md, inference gets stuck at
val inferenceOutput = it.result?.getOutput<Array>(0)!!

Is there a reason why mobilenet_v1_1.0_224.tflite does not work with the current code base for inference, and how can I update the code base to get mobilenet_v1_1.0_224.tflite working?
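For what it's worth, the quantized model's tensors are UINT8 while mobilenet_v1_1.0_224.tflite uses FLOAT32, so the interpreter's input/output declarations (and the buffers fed to it) have to change together. A sketch under that assumption, using the firebase-ml-model-interpreter API the codelab is based on; the 1001-label output size is the usual MobileNet/ImageNet label count and should be checked against the labels file:

    import com.google.firebase.ml.custom.FirebaseModelDataType;
    import com.google.firebase.ml.custom.FirebaseModelInputOutputOptions;

    // Declare FLOAT32 tensors for the non-quantized model; the quantized model
    // would instead use FirebaseModelDataType.BYTE with the same shapes.
    FirebaseModelInputOutputOptions ioOptions =
            new FirebaseModelInputOutputOptions.Builder()
                    .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
                    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1001})
                    .build();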

Gradle DSL method not found: 'implementation()'

Gradle DSL method not found: 'implementation()'
Possible causes:
The project 'ML Kit Translate Codelab' may be using a version of the Android Gradle plug-in that does not contain the method (e.g. 'testCompile' was added in 1.1.0).
Upgrade plugin to version 4.0.1 and sync project

The project 'ML Kit Translate Codelab' may be using a version of Gradle that does not contain the method.
Open Gradle wrapper file

The build file may be missing a Gradle plugin.
Apply Gradle plugin

Face Detection Y Euler face orientation is inverse

In most of the devices that I have used, when you turn your head left the Y Euler angle is positive, and when turning right the angle is negative.
But on a Samsung Galaxy A3 running Android 5.1.1 this is inverted: left is negative and right is positive.
Is there a way to know if the angles are inverted on a specific device?

Fatal Exception: java.lang.UnsatisfiedLinkError

Using MLKit's Android Barcode library (Bundled, v3)
com.google.mlkit:barcode-scanning:17.0.1
From the limited users that I currently have, I am seeing the following crash reported repeatedly in Firebase:

  1. Huawei 9A (MOA-LX9N)
  2. Huawei Y9A (FRL-L22)
Fatal Exception: java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/qrcodereader.barcodescanner.qrscanner.barcodereader.qrcode.barcode.qr.scanner.reader-28BTds1gUmGI_6g_4tY4uw==/base.apk"],nativeLibraryDirectories=[/data/app/qrcodereader.barcodescanner.qrscanner.barcodereader.qrcode.barcode.qr.scanner.reader-28BTds1gUmGI_6g_4tY4uw==/lib/arm64, /system/lib64, /hw_product/lib64, /system/product/lib64]]] couldn't find "libbarhopper_v3.so"
       at java.lang.Runtime.loadLibrary0(Runtime.java:1067)
       at java.lang.Runtime.loadLibrary0(Runtime.java:1007)
       at java.lang.System.loadLibrary(System.java:1668)
       at com.google.android.libraries.barhopper.BarhopperV3.<init>(BarhopperV3.java:5)
       at com.google.mlkit.vision.barcode.bundled.internal.zza.zzc(zza.java:36)
       at com.google.android.gms.internal.mlkit_vision_barcode_bundled.zzbk.zza(zzbk.java:36)
       at com.google.android.gms.internal.mlkit_vision_barcode_bundled.zzb.onTransact(zzb.java:20)
       at android.os.Binder.transact(Binder.java:921)
       at com.google.android.gms.internal.mlkit_vision_barcode.zza.zzc(zza.java:2)
       at com.google.android.gms.internal.measurement.zzbm.zzc$bridge(zzbm.java:2)
       at com.google.android.gms.internal.mlkit_vision_barcode.zznu.zze(zznu.java:3)
       at com.google.mlkit.vision.barcode.internal.zzl.zza(zzl.java:3)
       at com.google.mlkit.vision.barcode.internal.zzi.zzc(zzi.java:1)
       at com.google.mlkit.vision.common.internal.MobileVisionBase.zza(MobileVisionBase.java:18)
       at com.google.mlkit.vision.common.internal.zzd.call(zzd.java:18)
       at com.google.android.gms.measurement.internal.zzfh.call$bridge(zzfh.java:18)
       at com.google.mlkit.common.sdkinternal.ModelResource.zza(ModelResource.java:28)
       at com.google.mlkit.common.sdkinternal.zzl.run(zzl.java:28)
       at com.google.android.gms.measurement.internal.zzjf.run$bridge(zzjf.java:28)
       at com.google.mlkit.common.sdkinternal.zzp.run(zzp.java:137)
       at com.google.android.gms.common.api.internal.zacm.run$bridge(zacm.java:137)
       at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zze(MlKitThreadPool.java:2)
       at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zzc(MlKitThreadPool.java:2)
       at com.google.mlkit.common.sdkinternal.zzi.run(zzi.java:2)
       at com.google.mlkit.common.sdkinternal.zzi.run$bridge(zzi.java:2)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
       at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zzd(MlKitThreadPool.java:4)
       at com.google.mlkit.common.sdkinternal.zzj.run(zzj.java:4)
       at com.google.mlkit.common.sdkinternal.zzi.run$bridge(zzi.java:4)
       at java.lang.Thread.run(Thread.java:929)

I am not experienced enough with the NDK, but I found a Stack Overflow link that might be useful.

Can you please look into whether this is an OEM-device-specific error or a bug in ML Kit itself?
Thanks

Step 6: MainActivity is in Kotlin, not Java

When we imported the custom-model starter, for some reason MainActivity.java was actually MainActivity.kt, while the code we are supposed to add is in Java, not Kotlin.

The up-to-date dependencies do not work with the code

Updating these dependencies:

  • implementation 'com.google.firebase:firebase-ml-vision:19.0.3'
  • implementation 'com.google.firebase:firebase-ml-vision-image-label-model:17.0.2'
  • implementation 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'
  • implementation 'com.google.firebase:firebase-ml-model-interpreter:18.0.0'

To the up-to-date dependencies:

  • implementation 'com.google.firebase:firebase-ml-vision:24.0.2'
  • implementation 'com.google.firebase:firebase-ml-vision-image-label-model:20.0.0'
  • implementation 'com.google.firebase:firebase-ml-vision-face-model:20.0.0'
  • implementation 'com.google.firebase:firebase-ml-model-interpreter:22.0.2'

Stops the code from running

I get these error messages:

MainActivity.java:49: error: cannot find symbol
import com.google.firebase.ml.custom.FirebaseModelOptions;
symbol: class FirebaseModelOptions

MainActivity.java:286: error: cannot find symbol
FirebaseRemoteModel remoteModel = new FirebaseRemoteModel.Builder
symbol: class Builder

MainActivity.java:298: error: cannot find symbol
manager.registerRemoteModel(remoteModel);
symbol: method registerRemoteModel(FirebaseRemoteModel)

MainActivity.java:299: error: cannot find symbol
manager.registerLocalModel(localModel);
symbol: method registerLocalModel(FirebaseLocalModel)

MainActivity.java:300: error: cannot find symbol
FirebaseModelOptions modelOptions =
symbol: class FirebaseModelOptions

MainActivity.java:301: error: package FirebaseModelOptions does not exist
new FirebaseModelOptions.Builder()
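Those symbols were removed in later releases of firebase-ml-model-interpreter: the FirebaseLocalModel / FirebaseRemoteModel / FirebaseModelOptions / register* APIs the codelab uses were replaced by custom-model classes. This is a rough sketch of the equivalent setup under the newer (roughly 22.x) API, written from memory of the Firebase docs rather than from this repo, so treat the names as assumptions:

    import com.google.firebase.ml.common.FirebaseMLException;
    import com.google.firebase.ml.custom.FirebaseCustomLocalModel;
    import com.google.firebase.ml.custom.FirebaseModelInterpreter;
    import com.google.firebase.ml.custom.FirebaseModelInterpreterOptions;

    // Point the interpreter at the bundled .tflite asset; remote models use
    // FirebaseCustomRemoteModel plus FirebaseModelManager.download(...) instead of
    // the old registerRemoteModel/registerLocalModel calls.
    FirebaseCustomLocalModel localModel = new FirebaseCustomLocalModel.Builder()
            .setAssetFilePath("mobilenet_v1_1.0_224_quant.tflite")
            .build();
    try {
        FirebaseModelInterpreterOptions options =
                new FirebaseModelInterpreterOptions.Builder(localModel).build();
        FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(options);
    } catch (FirebaseMLException e) {
        // Handle interpreter creation failure.
    }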

Need to create RealTime support for Text being captured is good quality or not

I want to integrate the same functions into my college project, and I need to add functionality to the app so that it tells me whether I am capturing the image properly, i.e. the text appears clearly while capturing, or else tells me to move right, left, up, or down, rotate 5 degrees clockwise, etc.

You can always refer to the KNFB Reader functionality on YouTube; I am trying to create similar code.

Step 6. Error in code

                Graphic textGraphic = new TextGraphic(mGraphicOverlay, elements.get(k));
                mGraphicOverlay.add(textGraphic);

Should be...

                GraphicOverlay.Graphic textGraphic = new TextGraphic(mGraphicOverlay, elements.get(k));
                mGraphicOverlay.add(textGraphic);

Changes required for using non-quantized tflite files in MainActivity.java

The code was written for quantized models and I'm trying to use a non-quantized model. I made the following changes, as mentioned here, to MainActivity.java to overcome "Cannot convert an TensorFlowLite tensor with type FLOAT32 to a Java object of type [[B (which is compatible with the TensorFlowLite type UINT8)".

Added these two lines:

    private static final int IMAGE_MEAN = 128;
    private static final float IMAGE_STD = 128.0f;

Changes made:

Line 212, before:
    byte[][] labelProbArray = task.getResult().<byte[][]>getOutput(0);
after:
    float[][] labelProbArray = task.getResult().<float[][]>getOutput(0);

Line 235, before:
    new AbstractMap.SimpleEntry<>(mLabelList.get(i), (labelProbArray[0][i]&0xff) / 255.0f));
after:
    new AbstractMap.SimpleEntry<>(mLabelList.get(i), (labelProbArray[0][i])));

Line 288, before:
    imgData.put((byte) ((val >> 16) & 0xFF));
    imgData.put((byte) ((val >> 8) & 0xFF));
    imgData.put((byte) (val & 0xFF));
after:
    imgData.putFloat((((val >> 16) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
    imgData.putFloat((((val >> 8) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
    imgData.putFloat(((val & 0xFF)-IMAGE_MEAN)/IMAGE_STD);

But now I'm getting the following error: Note that MainActivity.java:292 is imgData.putFloat((((val >> 16) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);

    --------- beginning of crash
06-18 17:56:35.168 19375-19375/com.google.firebase.codelab.mlkit_custommodel E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.google.firebase.codelab.mlkit_custommodel, PID: 19375
    java.nio.BufferOverflowException
        at java.nio.Buffer.nextPutIndex(Buffer.java:514)
        at java.nio.DirectByteBuffer.putFloat(DirectByteBuffer.java:802)
        at com.google.firebase.codelab.mlkit_custommodel.MainActivity.convertBitmapToByteBuffer(MainActivity.java:292)
        at com.google.firebase.codelab.mlkit_custommodel.MainActivity.runModelInference(MainActivity.java:195)
        at com.google.firebase.codelab.mlkit_custommodel.MainActivity.access$000(MainActivity.java:63)
        at com.google.firebase.codelab.mlkit_custommodel.MainActivity$2.onClick(MainActivity.java:146)
        at android.view.View.performClick(View.java:6367)
        at android.view.View$PerformClick.run(View.java:25032)
        at android.os.Handler.handleCallback(Handler.java:790)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:164)
        at android.app.ActivityThread.main(ActivityThread.java:6753)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:482)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)
06-18 17:56:35.173 19375-19375/com.google.firebase.codelab.mlkit_custommodel W/OPDiagnose: getService:OPDiagnoseService NULL
06-18 17:56:35.177 19375-19454/com.google.firebase.codelab.mlkit_custommodel D/OSTracker: OS Event: crash

When I change DIM_BATCH_SIZE from 1 to 4 (reference), including the above changes, I get
Cannot convert an TensorFlowLite tensor with type FLOAT32 to a Java object of type [[B (which is compatible with the TensorFlowLite type UINT8)

When I multiply the imgData size by 4 (reference), I get
Input 0 should have 150528 bytes, but found 602112 bytes.
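A possible reading of these two failures (my interpretation, not from the codelab): the BufferOverflowException happens because the ByteBuffer is still sized for one byte per channel while putFloat() writes four, and the "should have 150528 bytes" message (150528 = 1 x 224 x 224 x 3 bytes) suggests the quantized model was still the one loaded when the buffer was enlarged. For the float model, keep the shape but allocate four bytes per value, roughly:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Sketch only, assuming the codelab's constants DIM_BATCH_SIZE = 1,
    // DIM_IMG_SIZE_X = DIM_IMG_SIZE_Y = 224 and DIM_PIXEL_SIZE = 3.
    // FLOAT32 needs 4 bytes per value, so the direct buffer grows 4x while the
    // declared input shape stays [1, 224, 224, 3].
    ByteBuffer imgData = ByteBuffer.allocateDirect(
            4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    imgData.order(ByteOrder.nativeOrder());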

lateinit property outputFileUri has not been initialized

Hi,
I am following the codelabs tutorial to work with ML Kit, and at step 5, when I run the application, I get the error below.
I tried running the final module as well and faced the same issue.

Caused by: java.lang.RuntimeException: Failure delivering result ResultInfo{who=null, request=1, result=-1, data=null} to activity {com.google.firebase.mlkit.codelab.objectdetection/com.google.firebase.mlkit.codelab.objectdetection.MainActivity}: kotlin.UninitializedPropertyAccessException: lateinit property outputFileUri has not been initialized
        at android.app.ActivityThread.deliverResults(ActivityThread.java:4179)
        at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3476)
        at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3542) 
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2790) 
        at android.app.ActivityThread.-wrap12(ActivityThread.java) 
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1523) 
        at android.os.Handler.dispatchMessage(Handler.java:102) 
        at android.os.Looper.loop(Looper.java:163) 
        at android.app.ActivityThread.main(ActivityThread.java:6238) 
        at java.lang.reflect.Method.invoke(Native Method) 
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:933) 
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:823) 
     Caused by: kotlin.UninitializedPropertyAccessException: lateinit property outputFileUri has not been initialized
        at com.google.firebase.mlkit.codelab.objectdetection.MainActivity.getCapturedImage(MainActivity.kt:136)
        at com.google.firebase.mlkit.codelab.objectdetection.MainActivity.onActivityResult(MainActivity.kt:84)

When I debug the app, I can see the variable is initialized and the value is stored, but at onActivityResult this exception is thrown.
Any quick solution for this would be great.
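One explanation that fits this symptom (an assumption, since I cannot reproduce it here): launching the camera intent can cause the activity to be destroyed and recreated, so the lateinit outputFileUri set before the capture is gone by the time onActivityResult runs. Persisting the URI across recreation is a possible fix; a sketch in Java for consistency with the other snippets here (the codelab activity itself is Kotlin, and the bundle key is made up):

    import android.net.Uri;
    import android.os.Bundle;

    // Save the capture target URI before the activity may be destroyed...
    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        if (outputFileUri != null) {
            outState.putParcelable("output_file_uri", outputFileUri);
        }
    }

    // ...and restore it in onCreate(Bundle savedInstanceState), so it is set
    // again before onActivityResult runs:
    if (savedInstanceState != null) {
        outputFileUri = savedInstanceState.getParcelable("output_file_uri");
    }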

TextBoxes(GraphicOverlay) are not Proper

The GraphicOverlay text boxes are not drawn correctly and overlap one another in text-recognition\Starter [On-device Recognition].
It also cannot detect most text when I try it on images captured on the device.
