natmlx / natml-unity

High performance, cross-platform machine learning for Unity Engine. Register at https://hub.natml.ai

Home Page: https://docs.natml.ai/unity

License: Apache License 2.0

C# 80.40% C 15.34% C++ 4.26%
machine-learning deep-learning unity computer-vision natural-language-processing augmented-reality virtual-reality tensorflow pytorch natml android ios macos mediapipe tensorflow-lite webgl windows webassembly coreml

natml-unity's Introduction

NatML

NatML allows developers to integrate machine learning into their Unity applications in under five lines of code, with zero infrastructure. NatML removes the need for any machine learning experience in order to take advantage of what it can provide. Features include:

  • Universal Machine Learning. With NatML, you can drop CoreML (.mlmodel), TensorFlow Lite (.tflite), and ONNX (.onnx) models directly into your Unity project and run them.

  • Bare Metal Performance. NatML takes advantage of hardware machine learning accelerators, like CoreML on iOS and macOS, NNAPI on Android, and DirectML on Windows. As a result, it is multiple times faster than Unity's own Barracuda engine.

  • Cross Platform. NatML supports Android, iOS, macOS, WebGL, and Windows alike. As a result, you can build your app once, test it in the Editor, and deploy it to various platforms and devices, all in one seamless workflow.

  • Extremely Easy to Use. NatML exposes machine learning models with simple classes that return familiar data types. These are called "Predictors", and they handle all of the heavy lifting for you. No need to write pre-processing scripts or shaders, wrangle tensors, or anything of that sort.

  • Growing Catalog. NatML is designed with a singular focus on applications. As such, we maintain a growing catalog of predictors that developers can quickly discover and deploy in their applications. Check out NatML Hub.

  • Lightweight Package. NatML is distributed in a self-contained package, with no external dependencies. As a result, you can simply import the package and get going--no setup necessary.

Installing NatML

Add the following items to your Unity project's Packages/manifest.json:

{
  "scopedRegistries": [
    {
      "name": "NatML",
      "url": "https://registry.npmjs.com",
      "scopes": ["ai.natml"]
    }
  ],
  "dependencies": {
    "ai.natml.natml": "1.1.16"
  }
}

Using ML Models


If you have a CoreML, ONNX, or TensorFlow Lite model, you can simply drag and drop it into your project. See the documentation for more details.

Note that specific model formats can only be used on specific platforms. CoreML models can only be used on iOS and macOS; ONNX can only be used on Windows; and TensorFlow Lite can only be used on Android. Use NatML Hub to convert your model to different ML formats.
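Because each format maps to a single platform, projects that ship the same model in several formats typically pick the file at build time. A minimal sketch using Unity's standard platform defines (the file names are placeholders for your own converted models):

```csharp
// Pick the model file that matches the build target.
// File names are illustrative; convert your model on NatML Hub first.
#if UNITY_IOS || UNITY_STANDALONE_OSX
const string modelFile = "model.mlmodel";   // CoreML: iOS and macOS
#elif UNITY_ANDROID
const string modelFile = "model.tflite";    // TensorFlow Lite: Android
#elif UNITY_STANDALONE_WIN
const string modelFile = "model.onnx";      // ONNX: Windows
#endif
```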

Discover ML Models on NatML Hub

Create an account on NatML Hub to find and download ML predictors to use in your project!

NatML Hub

You can also upload your models to Hub and make them private or public. Check out the documentation for information on writing predictors for your models.

Using ML Models in Two Simple Steps

You will always use NatML in two steps. First, create a predictor by fetching model data from NatML Hub or by loading a local ML model file in your project (.mlmodel, .tflite, or .onnx):

// Create the MobileNet v2 predictor
var predictor = await MobileNetv2Predictor.Create();

Then make predictions with the predictor:

// Make prediction on an image
Texture2D image = ...;
var (label, score) = predictor.Predict(image);

Different predictors accept and produce different data types, but the usage pattern will always be the same.
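Putting both steps together in a component, a minimal sketch (the `NatML.Vision` namespace for the predictor package is an assumption, and the image is assigned in the Inspector):

```csharp
using UnityEngine;
using NatML.Vision; // assumed namespace for the MobileNet v2 predictor package

public class ImageClassifier : MonoBehaviour {

    [SerializeField] Texture2D image; // assign in the Inspector

    async void Start () {
        // Step 1: create the predictor
        var predictor = await MobileNetv2Predictor.Create();
        // Step 2: make a prediction
        var (label, score) = predictor.Predict(image);
        Debug.Log($"Classified image as '{label}' with score {score:0.00}");
    }
}
```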


Requirements

  • Unity 2022.3+

Supported Platforms

  • Android API Level 24+
  • iOS 14+
  • macOS 10.15+ (Apple Silicon and Intel)
  • Windows 10+ (64-bit only)
  • WebGL:
    • Chrome 91+
    • Firefox 90+
    • Safari 16.4+

Resources

Thank you very much!

natml-unity's People

Contributors

olokobayusuf


natml-unity's Issues

Pixel buffers from `MLVideoFeature` have zero alpha

Reproducible on Windows Unity Editor

var tex = new Texture2D(pixelData.width, pixelData.height, TextureFormat.RGBA32, false);
var reader = new MLVideoFeature(pathWithProtocol);
foreach (var frame in reader)
{
    frame.feature.CopyTo(tex);    
    #if UNITY_EDITOR_WIN
    var ptr = (byte*)tex.GetRawTextureData<byte>().GetUnsafePtr();
    for (var iii = 3; iii < tex.width * tex.height * 4; iii += 4)
        ptr[iii] = 255;
    #endif
    tex.Apply(false);
}

If you log the pixel buffers out you will see that the alpha byte is zero when decoding mp4 h264 video generated by NatCorder.

Hello. It works, but it shows this message in an endless loop:

What can I do to fix it?

Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
...

Is Pytorch really included?

Hello! I am looking for a library that can run .pt models on my Node.js server. I found your project but cannot see any info about .pt.
Can you clarify?

Deserializing models can take a long time on some Android devices

These models are taking a long time to load on a Google Pixel 6 phone, even though they are embedded into the build with MLModelDataEmbed.

We have found by profiling that almost all of this time is spent in NatML.CreateModel.

On a Samsung Galaxy S21 Ultra the same models take only ~1.5 seconds to load in total.

Is there a format conversion or any expensive computation happening here? If so, is it possible to do this conversion ahead of time (such as when the app starts) without having to keep the model in memory for the entire lifetime of the app?

Random failure in Android Unity Cloud Builds

We are seeing a random failure in ~25% of Android builds created using Unity Cloud Build with the following stack trace:

[error] [warning] [2023-04-26T20:52:31Z - Unity] ExecutionEngineException: String conversion error: Illegal byte sequence encounted in the input.
[2023-04-26T20:52:31Z - Unity]   at (wrapper managed-to-native) System.Runtime.InteropServices.Marshal.PtrToStringAnsi(intptr)
[2023-04-26T20:52:31Z - Unity]   at System.Runtime.InteropServices.Marshal.PtrToStringUTF8 (System.IntPtr ptr) [0x00000] in <88e4733ac7bc4ae1b496735e6b83bbd3>:0 
[2023-04-26T20:52:31Z - Unity]   at NatML.MLEdgeModel.OnCreateSecret (System.IntPtr context, System.IntPtr secret) [0x00028] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Runtime\MLEdgeModel.cs:324 
[2023-04-26T20:52:31Z - Unity]   at (wrapper native-to-managed) NatML.MLEdgeModel.OnCreateSecret(intptr,intptr)
[2023-04-26T20:52:31Z - Unity]   at (wrapper managed-to-native) NatML.Internal.NatML.CreateSecret(NatML.Internal.NatML/SecretCreationHandler,intptr)
[2023-04-26T20:52:31Z - Unity]   at NatML.MLEdgeModel.CreateSecret () [0x00012] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Runtime\MLEdgeModel.cs:292 
[2023-04-26T20:52:31Z - Unity]   at NatML.Editor.NatMLSettingsEmbed.CreateEmbeds (UnityEditor.Build.Reporting.BuildReport report) [0x00079] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Editor\NatMLSettingsEmbed.cs:50 
[2023-04-26T20:52:31Z - Unity]   at NatML.Hub.Editor.BuildEmbedHelper`1[T].UnityEditor.Build.IPreprocessBuildWithReport.OnPreprocessBuild (UnityEditor.Build.Reporting.BuildReport report) [0x0002b] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Editor\BuildEmbedHelper.cs:89 
[2023-04-26T20:52:31Z - Unity]   at UnityEditor.Build.BuildPipelineInterfaces+<>c__DisplayClass16_0.<OnBuildPreProcess>b__1 (UnityEditor.Build.IPreprocessBuildWithReport bpp) [0x00000] in <6e08e61cfda04255b3972ba7f0515fae>:0 
[2023-04-26T20:52:31Z - Unity]   at UnityEditor.Build.BuildPipelineInterfaces.InvokeCallbackInterfacesPair[T1,T2] (System.Collections.Generic.List`1[T] oneInterfaces, System.Action`1[T] invocationOne, System.Collections.Generic.List`1[T] twoInterfaces, System.Action`1[T] invocationTwo, System.Boolean exitOnFailure) [0x000ff] in <6e08e61cfda04255b3972ba7f0515fae>:0 

This is using NatML version 1.1.4.

This identifies native code passing an invalid string pointer to C# in a pre-process build step.

This has been reproduced in both clean and incremental builds in UCB; presumably there is something specific to that environment, as we haven't seen this locally.

Looking for any ideas on what could be the cause of this issue.

Unity - Android make WebException : NameResolutionFailure


Hi.
I tried the movenet-3d sample, but it throws WebException: NameResolutionFailure when requesting MLModelData.
In the Unity Editor, this error isn't raised.

I think this exception means the URL is inaccurate.

My environment:
  • Unity 2019.4.28f1
  • Windows 10, 64-bit
  • Android 11 (API Level 30)

Mapping Hand Tracking anchors to input image

Hi, I'm experimenting with the HandPose model and wanted to ask if it is possible to overlay and map the predicted anchors onto the input image. I explored the code but couldn't find how it may be done. Would really appreciate it if you could help me with this.
Thanks

Unable to build for Android with version 1.1.4+

I have been trying to build a Unity project for Android, but whenever I use version 1.1.3, 1.1.4, or 1.1.5 the build fails:

DllNotFoundException: NatML assembly:<unknown assembly> type:<unknown type> member:(null)
NatML.MLEdgeModel.CreateSecret () (at ./Library/PackageCache/[email protected]/Runtime/MLEdgeModel.cs:292)
NatML.Editor.NatMLSettingsEmbed.CreateEmbeds (UnityEditor.Build.Reporting.BuildReport report) (at ./Library/PackageCache/[email protected]/Editor/NatMLSettingsEmbed.cs:50)
NatML.Hub.Editor.BuildEmbedHelper`1[T].UnityEditor.Build.IPreprocessBuildWithReport.OnPreprocessBuild (UnityEditor.Build.Reporting.BuildReport report) (at ./Library/PackageCache/[email protected]/Editor/BuildEmbedHelper.cs:89)
UnityEditor.Build.BuildPipelineInterfaces+<>c__DisplayClass16_0.<OnBuildPreProcess>b__1 (UnityEditor.Build.IPreprocessBuildWithReport bpp) (at /home/bokken/build/output/unity/unity/Editor/Mono/BuildPipeline/BuildPipelineInterfaces.cs:499)
UnityEditor.Build.BuildPipelineInterfaces.InvokeCallbackInterfacesPair[T1,T2] (System.Collections.Generic.List`1[T] oneInterfaces, System.Action`1[T] invocationOne, System.Collections.Generic.List`1[T] twoInterfaces, System.Action`1[T] invocationTwo, System.Boolean exitOnFailure) (at /home/bokken/build/output/unity/unity/Editor/Mono/BuildPipeline/BuildPipelineInterfaces.cs:465)
UnityEditor.BuildPlayerWindow:BuildPlayerAndRun() (at /home/bokken/build/output/unity/unity/Editor/Mono/BuildPlayerWindow.cs:188)

I have tried building with both Mono and IL2CPP, and with several editor versions in the range 2021.3-2023.2.

I am building on Linux.

I am new to Unity, so I might be doing something wrong. I am hoping you can give me some pointers and if you need me to dig out more information, I'll be happy to help.

Also, I noted there seem to be some issues with the inter-dependencies between packages. Some packages, e.g. yolox-unity, depend on earlier versions or have dependencies that do. In the unlikely case it's not me misunderstanding something fundamental, this will cause problems when using the latest versions.

Edge Model Embedding Issue on iOS

Issue Outline: Unable to embed MLEdgeModel on iOS. According to the build logs, the model does get downloaded and embedded into the app binary, but it seems NatML isn't picking up the embedded model at runtime.

Notes:

  1. Works on Windows build, not on iOS
  2. Using "@natsuite/movenet"
  3. NatML version: 1.1.3
    NatDevice version: 1.3.3
    Hub version: 1.0.20

iOS log
message.txt

Build log on Unity Editor
Editor.log

namespace does not exist

The type or namespace name 'Features' does not exist in the namespace 'NatSuite.ML' (are you missing an assembly reference?)
The type or namespace name 'Vision' does not exist in the namespace 'NatSuite.ML' (are you missing an assembly reference?)
The type or namespace name 'MLModel' could not be found (are you missing a using directive or an assembly reference?)
The type or namespace name 'MLClassificationPredictor' could not be found (are you missing a using directive or an assembly reference?)

Error Unity ArgumentException: Cannot create video type because file does not exist

I get that error message in an Android app built from the "MoveNet - Realtime Pose Tracking" asset in a clean project with NatML.

2022-07-25 16:02:31.976 3635 3841 Error Unity ArgumentException: Cannot create video type because file does not exist
2022-07-25 16:02:31.976 3635 3841 Error Unity Parameter name: path
2022-07-25 16:02:31.976 3635 3841 Error Unity at NatML.Types.MLVideoType.FromFile (System.String path) [0x00008] in <71dcb528a4714f368de49dbc80a1eac2>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at NatML.Features.MLVideoFeature..ctor (System.String path) [0x0001c] in <71dcb528a4714f368de49dbc80a1eac2>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at NatML.Examples.MoveNetSample.Start () [0x000bb] in <1fb9cbf6e39241a08e13d815d2d204fc>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at System.Runtime.CompilerServices.AsyncMethodBuilderCore+<>c.b__7_0 (System.Object state) [0x00000] in <0bfb382d99114c52bcae2561abca6423>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at UnityEngine.UnitySynchronizationContext+WorkRequest.Invoke () [0x00002] in :0
2022-07-25 16:02:31.976 3635 3841 Error Unity at UnityEngine.UnitySynchronizationContext.Exec () [0x00056] in :0
2022-07-25 16:02:31.976 3635 3841 Error Unity at UnityEngine.UnitySynchronizationContext.ExecuteTasks () [0x00014] in :0

Memory Leak on iOS

We have come across a memory leak when using movenet and movenet-3D on iOS. For some reason, the model isn't being disposed of properly (even though we have model?.Dispose() in the OnDisable method). To recreate: build the MoveNet-3D sample project for iOS from https://github.com/natml-hub/MoveNet-3D, then open the app and take note of the Documents and Data storage (Settings → General → Storage → App Name). Then hard-quit the app and reopen; you should see the Documents and Data storage jump up each time by 10-12 MB. We have tried both the embedded ML method and runtime fetching; it happens with both.

Poor fps on Android

Average fps is 14, and it falls lower over time.
This is also confirmed on the latest smartphones with the published sample.
According to the profiler, the script's CPU usage takes up almost the whole frame.
Is there any way to improve it?

Load model from local disk?

I have tried to load my model from a Unity asset on my local disk.
The code looks like this:

using var model = modelData.Deserialize();
Debug.Log(model);
// var predictor = ?

The debug log gives me:

MLEdgeModel
Input: input.1: (1, 448, 448, 4) System.Byte
Input: input.4: (1, 448, 448, 4) System.Byte
Input: input.7: (1, 448, 448, 4) System.Byte
Output: 504: (1, 48, 28, 28) System.Single
Output: 499: (1, 24, 28, 28) System.Single
Output: 530: (1, 2016, 28, 28) System.Single
Output: 516: (1, 672, 28, 28) System.Single
UnityEngine.Debug:Log (object)

But how do I write the predictor? Looking for some help, thanks!

`Predictor` could not be found

Hi!

I'm getting:

The type or namespace name 'Predictor' could not be found (are you missing a using directive or an assembly reference?)

using

"api.natsuite.natml": "1.0.10" with
"scopes": ["api.natsuite"]

or

"ai.natml.natml": "1.0.19" with
"scopes": ["ai.natml"]

with my model fetched privately (MLModelData.FromHub) and locally (.onnx model).

Any idea?

I think the real question is:

how do I properly use my own model?

Thanks!

YOLOXPredictor memory leak when using async functions.

When creating an application I am able to use the Predict method without problems; it predicts as expected. But predicting on the main thread, blocking execution, isn't ideal, so I looked into threading it. Wrapping this Predict call in a Task causes it to leak memory at a very fast rate. Remarkably, this is only present on iOS (I haven't tested Android) and not when used inside the simulated environment, where it runs like it's supposed to.


The following code shows my first way of doing things:

// Causes memory leak
private async void PredictAsync(){
    await Task.Run(() => {
        Predict(currentFeature, (d) => {
            // Do something with the detections
            this.detections = d;
            isDetecting = false;
        });
    });
}

private void Predict(MLImageFeature feature, Action<(Rect rect, string label, float score)[]> callback){

    (Rect rect, string label, float score)[] detections = predictor.Predict(feature);

    if(callback != null)
        callback(detections);
}

Calling await PredictAsync() causes problems.
Then I was told there is a stock implementation for this, so I decided to give it a try:

var predictor = new YOLOXPredictor(...).ToAsync();

...
...

// Triggered on event by XRCpuImageFetcher
private async void OnImageAvailable(XRCpuImage image){
    if(predictor != null && predictor.readyForPrediction){
        // Might be moved to the image fetcher
        if(currentFeature == null){
            Debug.Log("Creating new feature");
            var conversionParams = new XRCpuImage.ConversionParams(image, TextureFormat.RGBA32, XRCpuImage.Transformation.None);
            currentFeature = new MLImageFeature(conversionParams.outputDimensions.x, conversionParams.outputDimensions.y);
            Debug.Log("Feature created");
        }

        if(cameraTexture == null){
            XRCpuImageToTexture(image);
            visualizer.image = cameraTexture;
        }

        Debug.Log("Copying image to feature");
        MLXRExtensions.CopyFrom(currentFeature, image);

        this.detections = await predictor.Predict(currentFeature);
    }
}

This results in exactly the same behaviour.
I also verified it is probably not the XRCpuImage memory management, since it works with the synchronous Predict method and it gets disposed by the event that provides it.

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class XRCpuImageFetcher : MonoBehaviour
{
    public ARCameraManager cameraManager;

    void OnEnable()
    {
        cameraManager.frameReceived += FrameChanged;
    }

    void OnDisable()
    {
        cameraManager.frameReceived -= FrameChanged;
    }

    private void FrameChanged(ARCameraFrameEventArgs args){

        if (cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image)){
            ImageAvailable?.Invoke(image);
            image.Dispose();
        }
    }

    public delegate void ImageAvailableDelegate(XRCpuImage image);
    public event ImageAvailableDelegate ImageAvailable;
}

If you need any more information, let me know and I'll look into it!

Implement `computeTarget` fallback in case specified target cannot run model

We've been seeing a few developers having to apply this workaround to get an MLModel to load:

var modelData = ...
modelData.computeTarget = MLModelData.ComputeTarget.CPUOnly; // <-- add this
var model = new MLEdgeModel(modelData);

We should automatically perform this fallback in native code, and log a message informing the developer of said fallback. This prevents nasty try { ... } catch statements in code.
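For reference, the try/catch pattern that the automatic fallback would eliminate looks roughly like this (a sketch of the current workaround, not recommended usage):

```csharp
MLEdgeModel model;
try {
    // Try the default (accelerated) compute target first
    model = new MLEdgeModel(modelData);
} catch {
    // Fall back to CPU-only if the accelerated target cannot run the model
    modelData.computeTarget = MLModelData.ComputeTarget.CPUOnly;
    model = new MLEdgeModel(modelData);
}
```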

Failure to create secret when building Android on OSX in Unity Cloud Build

Thanks to the updates in 1.1.6 (#55), I now see an error message when the build fails, instead of it simply crashing.

[error] [2023-05-11T23:59:41Z - Unity] 2023-05-11 16:59:41.853 Unity[9988:76191] NatML Error: Failed to create secret with status: -25300
[2023-05-11T23:59:42Z - Unity] 2023-05-11 16:59:41.959 Unity[9988:76191] NatML Error: Failed to create secret with error: Error Domain=NSOSStatusErrorDomain Code=-25308 "failed to generate asymmetric keypair" (errKCInteractionNotAllowed / errSecInteractionNotAllowed:  / Interaction is not allowed with the Security Server.) UserInfo={numberOfErrorsDeep=0, NSDescription=failed to generate asymmetric keypair}
[2023-05-11T23:59:42Z - Unity] Obtained 256 stack frames.
[2023-05-11T23:59:42Z - Unity] #0  0x007fff227aa0bd in SecKeyCopyPublicKey
[2023-05-11T23:59:42Z - Unity] #1  0x000001609204a2 in NMLModelConfigurationCreateSecret
[2023-05-11T23:59:42Z - Unity] #2  0x00000197f38818 in  (wrapper managed-to-native) NatML.Internal.NatML:CreateSecret (NatML.Internal.NatML/SecretCreationHandler,intptr) [{0x7f8562e1c0b8} + 0xa8]  (0x197f38770 0x197f388bd) [0x118926a80 - Unity Child Domain]

So far I've only seen this with Android builds running on macOS in the Unity Cloud Build environment. I haven't run enough builds on Windows to rule out a possible failure there as well, but I'm sure the error will be different if it occurs.

Assuming this is a build environment issue: if you have any details on what could be the cause, I'll bounce the issue over to Unity.

Apple Silicon Support

As instructed, opening an issue to track support for Apple Silicon (arm64):

DllNotFoundException: NatML assembly:<unknown assembly> type:<unknown type> member:(null)

Async `MLEdgeModel.Create` is still blocking the main thread on Android


These are the models being loaded (both are embedded):

  • @natml/blazepose-detector
  • @natml/blazepose-landmark

This cannot be solved by calling MLEdgeModel.Create on a non-main thread, as it makes a call to PlayerPrefs.GetString, which is a main-thread-only API.

Happens on latest NatML (1.1.3)

Error tflite tensorflow/lite/kernels/gather_nd.cc:135 indices_has_only_positive_elements was not true.

I upgraded NatDevice, NatHub, and NatML as you advised.
I only changed the deprecated calls and didn't modify the rest, but what was working fine now fails with an error.
It works on Windows and gives the error below on Android; I haven't checked iOS yet.

My code

async void Start () {
    var query = new MediaDeviceQuery(MediaDeviceCriteria.FrontCamera);
    cameraDevice = query.current as CameraDevice;
    cameraDevice.StartRunning(previewTextureOutput);
    model = await MLEdgeModel.Create("@natsuite/movenet");
    predictor = new MoveNetPredictor(model, smoothing);
}

private void Update () {
    var previewTexture = previewTextureOutput.texture;
    var inputFeature = new MLImageFeature(previewTexture.GetRawTextureData<byte>(), previewTexture.width, previewTexture.height);
    (inputFeature.mean, inputFeature.std) = model.normalization;
    // Detect
    var pose = predictor.Predict(inputFeature);
    // Visualize
    visualizer.Render(previewTexture, pose);
}

logcat.txt

Hello. It works, but it shows this log endlessly.

2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image

GPU prediction does not work on Android

I get these errors in logcat when running a prediction with a model that uses the GPU

E/tflite: TfLiteGpuDelegate Invoke: GpuDelegate must run on the same thread where it was initialized.
E/tflite: Node number 374 (TfLiteGpuDelegateV2) failed to invoke.

This probably means that Predict must be called on the same thread that created the GpuDelegate, but with the async MLEdgeModel.Create it is not known on which thread it is created.

There should be a non-async variant of this method that can be run on a background thread controlled by the app so that the predictions can also be run on that thread.

Removal of deprecated interfaces for 1.0.9 may break some hub example code

I've been trying to use the Hand-Pose model (https://hub.natml.ai/@natsuite/hand-pose), and the code that I downloaded had some errors in HandPosePredictor.cs because it uses interfaces that, according to the changelog, have been removed.

"Removed IMLModel interface as it has long been deprecated.
Removed IMLFeature interface as it has long been deprecated."

I started replacing the uses by following the instructions in the changelogs, but I'm not particularly familiar with this codebase and I'm a little lost on how to fix the marshalling so I can use the example code.


MLARImageFeature not available

The documentation states that MLARImageFeature can be used to easily convert an XRCpuImage to an image feature, but it seems this class is not yet available for use.

Is there any ETA for this?

I'm currently on 1.0.19

NatML Unity installation bug

I`m trying to install NatML package and get such script error: "Library\PackageCache\[email protected]\Runtime\Features\MLImageFeature.cs(189,50): error CS0117: 'ImageConversion' does not contain a definition for 'EncodeNativeArrayToJPG' "
What had I done wrong?

Add a RegionOfInterest method to MLImageFeature that takes an existing native array

The RegionOfInterest method of MLImageFeature allocates a new array every time it is called, which increases GC pressure when called frequently. For better performance, there should be a variant of this method that takes an existing array (byte[] or NativeArray<byte>) and fills it with the extracted image.

We have been able to do this in version 1.0.13 by creating a custom subclass of MLImageFeature, but that no longer works in the latest version (1.0.19).
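A hypothetical shape for such overloads, mirroring Unity's other non-allocating APIs (these signatures are illustrative, not an existing NatML API):

```csharp
// Hypothetical non-allocating variants of `RegionOfInterest`.
// The caller owns `destination` and can reuse it across frames.
public void RegionOfInterest (RectInt rect, byte[] destination);
public void RegionOfInterest (RectInt rect, NativeArray<byte> destination);
```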

Adding support for pixelBuffer input for CoreML models

For a model accepting byte image pixelBuffer:

MLEdgeModel
Input: input: (1, 256, 256, 4) System.Byte
Output: activation_out: (1, 256, 256, 4) System.Byte

At the moment model.Predict() fails with:

Unity[2451:70453] NatML Error: Failed to make prediction with error: Error Domain=com.apple.CoreML Code=0 "The model expects input feature input to be an image, but the input (MultiArray : Float32 1 × 256 × 256 × 4 array) is of type 5." UserInfo={NSLocalizedDescription=The model expects input feature input to be an image, but the input (MultiArray : Float32 1 × 256 × 256 × 4 array) is of type 5.}

when submitting an MLArrayFeature containing raw pixels.

To test you can use, for example: https://github.com/john-rocky/CoreML-Models#photo2cartoon

MLXRCpuImageFeature causing memory leak on iOS

Hi, it's me again!

I got everything working alongside AR Foundation, and I have to say I'm really impressed; by far the best NN solution for Unity! But unfortunately I found a problem with the MLXRCpuImageFeature class. It causes a memory leak on iOS, and I suspect it will on Android too. This causes the application to crash after a minute or so (depending on the available RAM). I checked all other things that could be the cause of this memory leak, but I can reliably narrow it down to creating a new MLXRCpuImageFeature.


For context this is a simplified version of how I'm using it:

public ARCameraManager cameraManager;
private MLImageFeature currentFeature;

void Start(){
   cameraManager.frameReceived += FrameChanged;
}

void Update(){
    // Function that uses the current feature to do predictions
    // Does not cause the memory leak (verified)
    PredictAsync(currentFeature);
}

private void FrameChanged(ARCameraFrameEventArgs args){
    if (cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image) && !Application.isEditor){
        // Commenting out this line gets rid of the memory leak
        currentFeature = new MLXRCpuImageFeature(image);
        image.Dispose();
    }    
}

In the meanwhile I'll try to work around the memory leak, but this is probably something that should be known about and fixed.
