natmlx / natml-unity
High performance, cross-platform machine learning for Unity Engine. Register at https://hub.natml.ai
Home Page: https://docs.natml.ai/unity
License: Apache License 2.0
The type or namespace name 'Features' does not exist in the namespace 'NatSuite.ML' (are you missing an assembly reference?)
The type or namespace name 'Vision' does not exist in the namespace 'NatSuite.ML' (are you missing an assembly reference?)
The type or namespace name 'MLModel' could not be found (are you missing a using directive or an assembly reference?)
The type or namespace name 'MLClassificationPredictor' could not be found (are you missing a using directive or an assembly reference?)
Reproducible on Windows Unity Editor
// Note: requires an unsafe context for the pointer workaround below
var tex = new Texture2D(pixelData.width, pixelData.height, TextureFormat.RGBA32, false);
var reader = new MLVideoFeature(pathWithProtocol);
foreach (var frame in reader)
{
    frame.feature.CopyTo(tex);
#if UNITY_EDITOR_WIN
    // Workaround: force the alpha channel to 255 because decoded frames arrive with alpha = 0
    var ptr = (byte*)tex.GetRawTextureData<byte>().GetUnsafePtr();
    for (var i = 3; i < tex.width * tex.height * 4; i += 4)
        ptr[i] = 255;
#endif
    tex.Apply(false);
}
If you log the pixel buffers, you will see that the alpha byte is zero when decoding MP4 H.264 video generated by NatCorder.
When creating an application, I am able to use the Predict method without problems; it predicts as expected. But predicting on the main thread of execution, blocking it, isn't ideal, so I looked into threading it. Wrapping this Predict call in a Task causes it to leak memory at a very fast rate. Remarkably, this is only present on iOS (I haven't tested Android) and not in the Editor, where it runs as it's supposed to.
The following code shows my first way of doing things:
// Causes memory leak
private async void PredictAsync () {
    await Task.Run(() => {
        Predict(currentFeature, d => {
            // Do something with the detections
            this.detections = d;
            isDetecting = false;
        });
    });
}

private void Predict (MLImageFeature feature, Action<(Rect rect, string label, float score)[]> callback) {
    var detections = predictor.Predict(feature);
    callback?.Invoke(detections);
}
Calling await PredictAsync() causes problems.
Then I was told there was a stock implementation for this, so I decided to give it a try.
var predictor = new YOLOXPredictor(...).ToAsync();
...
// Triggered on event by XRCpuImageFetcher
private async void OnImageAvailable (XRCpuImage image) {
    // Check for null before accessing the predictor
    if (predictor != null && predictor.readyForPrediction) {
        // Might be moved to the image fetcher
        if (currentFeature == null) {
            Debug.Log("Creating new feature");
            var conversionParams = new XRCpuImage.ConversionParams(image, TextureFormat.RGBA32, XRCpuImage.Transformation.None);
            currentFeature = new MLImageFeature(conversionParams.outputDimensions.x, conversionParams.outputDimensions.y);
            Debug.Log("Feature created");
        }
        if (cameraTexture == null) {
            XRCpuImageToTexture(image);
            visualizer.image = cameraTexture;
        }
        Debug.Log("Copying image to feature");
        MLXRExtensions.CopyFrom(currentFeature, image);
        this.detections = await predictor.Predict(currentFeature);
    }
}
This results in the exact same behaviour.
I also verified that it is probably not the XRCpuImage memory management, since it works with the synchronous Predict method and the image gets disposed by the event handler that provides it.
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;
public class XRCpuImageFetcher : MonoBehaviour
{
    public ARCameraManager cameraManager;

    public delegate void ImageAvailableDelegate (XRCpuImage image);
    public event ImageAvailableDelegate ImageAvailable;

    void OnEnable ()
    {
        cameraManager.frameReceived += FrameChanged;
    }

    void OnDisable ()
    {
        cameraManager.frameReceived -= FrameChanged;
    }

    private void FrameChanged (ARCameraFrameEventArgs args)
    {
        if (cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
        {
            ImageAvailable?.Invoke(image);
            image.Dispose();
        }
    }
}
If you need any more information, let me know and I'll look into it!
I've been trying to implement usage of the Hand-Pose model (https://hub.natml.ai/@natsuite/hand-pose), and the code that I downloaded had some errors in HandPosePredictor.cs because it used some interfaces which, I saw in the changelog, were removed.
"Removed IMLModel interface as it has long been deprecated.
Removed IMLFeature interface as it has long been deprecated."
I started replacing the uses by following the instructions in the changelogs, but I'm not particularly familiar with this codebase and I'm a little lost on how to fix the marshalling so I can use the example code.
I accidentally downgraded my plan for natml.ai and needed to upgrade again on the expiration date, but I upgraded effective today and now I am being charged for the plan twice. Please refund the plan I paid for today.
Average fps is 14, and it drops lower over time.
This is also confirmed on the latest smartphones with the published sample version.
According to the profiler, the script's CPU usage takes up almost the entire frame.
Is there any way to improve it?
I have been trying to build a Unity project for Android, but whenever I use version 1.1.3, 1.1.4, or 1.1.5 the build fails with:
DllNotFoundException: NatML assembly:<unknown assembly> type:<unknown type> member:(null)
NatML.MLEdgeModel.CreateSecret () (at ./Library/PackageCache/[email protected]/Runtime/MLEdgeModel.cs:292)
NatML.Editor.NatMLSettingsEmbed.CreateEmbeds (UnityEditor.Build.Reporting.BuildReport report) (at ./Library/PackageCache/[email protected]/Editor/NatMLSettingsEmbed.cs:50)
NatML.Hub.Editor.BuildEmbedHelper`1[T].UnityEditor.Build.IPreprocessBuildWithReport.OnPreprocessBuild (UnityEditor.Build.Reporting.BuildReport report) (at ./Library/PackageCache/[email protected]/Editor/BuildEmbedHelper.cs:89)
UnityEditor.Build.BuildPipelineInterfaces+<>c__DisplayClass16_0.<OnBuildPreProcess>b__1 (UnityEditor.Build.IPreprocessBuildWithReport bpp) (at /home/bokken/build/output/unity/unity/Editor/Mono/BuildPipeline/BuildPipelineInterfaces.cs:499)
UnityEditor.Build.BuildPipelineInterfaces.InvokeCallbackInterfacesPair[T1,T2] (System.Collections.Generic.List`1[T] oneInterfaces, System.Action`1[T] invocationOne, System.Collections.Generic.List`1[T] twoInterfaces, System.Action`1[T] invocationTwo, System.Boolean exitOnFailure) (at /home/bokken/build/output/unity/unity/Editor/Mono/BuildPipeline/BuildPipelineInterfaces.cs:465)
UnityEditor.BuildPlayerWindow:BuildPlayerAndRun() (at /home/bokken/build/output/unity/unity/Editor/Mono/BuildPlayerWindow.cs:188)
I have tried building with both Mono and IL2CPP and with several editor versions in the range 2021.3-2023.2.
I am building on linux.
I am new to Unity, so I might be doing something wrong. I am hoping you can give me some pointers and if you need me to dig out more information, I'll be happy to help.
Also, I noticed there seem to be some nagging issues with the inter-dependencies between packages. Some packages, e.g. yolox-unity, depend on earlier versions or have dependencies that do. In the unlikely case it's not me misunderstanding something fundamental, this will cause problems using the latest versions.
The error content is the same as below.
I have upgraded to 1.1.2, but the error still occurs.
I upgraded NatDevice, NatHub, and NatML as you advised.
I only changed the deprecated calls and didn't modify the rest, but what was working fine now fails with an error.
It works on Windows and gives the error below on Android; I haven't checked iOS yet.
My code:
async void Start () {
    var query = new MediaDeviceQuery(MediaDeviceCriteria.FrontCamera);
    cameraDevice = query.current as CameraDevice;
    cameraDevice.StartRunning(previewTextureOutput);
    model = await MLEdgeModel.Create("@natsuite/movenet");
    predictor = new MoveNetPredictor(model, smoothing);
}

private void Update () {
    var previewTexture = previewTextureOutput.texture;
    var inputFeature = new MLImageFeature(previewTexture.GetRawTextureData<byte>(), previewTexture.width, previewTexture.height);
    (inputFeature.mean, inputFeature.std) = model.normalization;
    // Detect
    var pose = predictor.Predict(inputFeature);
    // Visualize
    visualizer.Render(previewTexture, pose);
}
The RegionOfInterest method of MLImageFeature allocates a new array every time it is called, which increases GC pressure when called frequently. For better performance, there should be a variant of this method that takes an existing array (byte[] or NativeArray<byte>) and fills it with the extracted image.
We were able to do this in version 1.0.13 by creating a custom subclass of MLImageFeature, but that no longer works in the latest version (1.0.19).
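A possible shape for such an overload, sketched below; these signatures are illustrative only and are not part of the current NatML API:

```csharp
// Hypothetical non-allocating variants of RegionOfInterest.
// These signatures are a sketch of the requested API, not the real one.
public void RegionOfInterest (Rect rect, byte[] destination);             // caller-owned managed buffer
public void RegionOfInterest (Rect rect, NativeArray<byte> destination);  // caller-owned native buffer

// Usage sketch: reuse one buffer across frames instead of allocating per call.
byte[] roiBuffer = new byte[roiWidth * roiHeight * 4]; // RGBA32
feature.RegionOfInterest(roiRect, roiBuffer);
```

The key point is that the caller owns the buffer's lifetime, so a single allocation can serve every frame.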
Hi,
The NatML package is no longer available in the Unity Asset Store. Please tell me how to get and install the basic package.
What can I do to fix it?
Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
...
Using the MLVideoFeature API.
These models are taking a long time to load on a Google Pixel 6 phone, even though they are embedded into the build with MLModelDataEmbed:
We have found by profiling that almost all of this time is spent in NatML.CreateModel.
On a Samsung Galaxy S21 Ultra the same models take only ~1.5 seconds to load in total.
Is there a format conversion or any other expensive computation happening here? If so, is it possible to do this conversion ahead of time (such as when the app starts) without having to keep the model in memory for the entire lifetime of the app?
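One pattern worth testing while waiting for an answer, under the unverified assumption that NatML caches the converted model on disk after the first load, is to warm the models once at app start and dispose them immediately:

```csharp
// Warm-up sketch. ASSUMPTION: the expensive work in NatML.CreateModel is
// cached after the first load; verify with profiling before relying on this.
async void Start () {
    foreach (var tag in new [] { "@natsuite/movenet", "@natsuite/hand-pose" }) {
        var model = await MLEdgeModel.Create(tag); // model tags here are examples
        model.Dispose(); // free memory; a later Create would hit the warm cache
    }
}
```

If CreateModel does no caching, this sketch buys nothing, which would itself be useful data for the issue.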
Reported by byfen on Unity Forums.
I get these errors in logcat when running a prediction with a model that uses the GPU
E/tflite: TfLiteGpuDelegate Invoke: GpuDelegate must run on the same thread where it was initialized.
E/tflite: Node number 374 (TfLiteGpuDelegateV2) failed to invoke.
This probably means that Predict must be called on the same thread that created the GpuDelegate, but with the async MLEdgeModel.Create it is not known on which thread the delegate is created.
There should be a non-async variant of this method that can be run on a background thread controlled by the app, so that predictions can also be run on that thread.
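With such a synchronous variant, the same-thread constraint could be satisfied by a single long-lived worker thread that both creates the model and serves predictions. A sketch follows; MLEdgeModel.CreateSync is hypothetical (it is the feature being requested), and the Pose result type is a placeholder:

```csharp
// Sketch: a dedicated worker thread owns the model, so creation and Predict
// always happen on the same thread, as the TFLite GPU delegate requires.
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class PredictionWorker {
    private readonly BlockingCollection<(MLImageFeature input, TaskCompletionSource<Pose> result)> queue = new ();

    public PredictionWorker () {
        new Thread(() => {
            var model = MLEdgeModel.CreateSync("@natsuite/movenet"); // hypothetical sync API
            var predictor = new MoveNetPredictor(model);
            foreach (var job in queue.GetConsumingEnumerable())
                job.result.SetResult(predictor.Predict(job.input));
        }) { IsBackground = true }.Start();
    }

    public Task<Pose> Predict (MLImageFeature feature) {
        var tcs = new TaskCompletionSource<Pose>();
        queue.Add((feature, tcs));
        return tcs.Task;
    }
}
```

Callers can then await PredictionWorker.Predict from the main thread while the GPU delegate never leaves its home thread.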
Android logcat shows an error like this.
2022-09-19 11:23:09.336 26251 26728 Error NatML CameraDevice 1 failed to read preview image with error: -30002
For a model accepting byte image pixelBuffer:
MLEdgeModel
Input: input: (1, 256, 256, 4) System.Byte
Output: activation_out: (1, 256, 256, 4) System.Byte
At the moment model.Predict() fails with:
Unity[2451:70453] NatML Error: Failed to make prediction with error: Error Domain=com.apple.CoreML Code=0 "The model expects input feature input to be an image, but the input (MultiArray : Float32 1 × 256 × 256 × 4 array) is of type 5." UserInfo={NSLocalizedDescription=The model expects input feature input to be an image, but the input (MultiArray : Float32 1 × 256 × 256 × 4 array) is of type 5.}
when submitting an MLArrayFeature containing raw pixels.
To test you can use, for example: https://github.com/john-rocky/CoreML-Models#photo2cartoon
I got that error message in an Android app built from the "MoveNet - Realtime Pose Tracking" asset in a clean project with NatML.
2022-07-25 16:02:31.976 3635 3841 Error Unity ArgumentException: Cannot create video type because file does not exist
2022-07-25 16:02:31.976 3635 3841 Error Unity Parameter name: path
2022-07-25 16:02:31.976 3635 3841 Error Unity at NatML.Types.MLVideoType.FromFile (System.String path) [0x00008] in <71dcb528a4714f368de49dbc80a1eac2>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at NatML.Features.MLVideoFeature..ctor (System.String path) [0x0001c] in <71dcb528a4714f368de49dbc80a1eac2>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at NatML.Examples.MoveNetSample.Start () [0x000bb] in <1fb9cbf6e39241a08e13d815d2d204fc>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at System.Runtime.CompilerServices.AsyncMethodBuilderCore+<>c.b__7_0 (System.Object state) [0x00000] in <0bfb382d99114c52bcae2561abca6423>:0
2022-07-25 16:02:31.976 3635 3841 Error Unity at UnityEngine.UnitySynchronizationContext+WorkRequest.Invoke () [0x00002] in :0
2022-07-25 16:02:31.976 3635 3841 Error Unity at UnityEngine.UnitySynchronizationContext.Exec () [0x00056] in :0
2022-07-25 16:02:31.976 3635 3841 Error Unity at UnityEngine.UnitySynchronizationContext.ExecuteTasks () [0x00014] in :0
Thanks to the updates in 1.1.6 here #55 I now see an error message when the build fails instead of simply crashing.
[error] [2023-05-11T23:59:41Z - Unity] 2023-05-11 16:59:41.853 Unity[9988:76191] NatML Error: Failed to create secret with status: -25300
[2023-05-11T23:59:42Z - Unity] 2023-05-11 16:59:41.959 Unity[9988:76191] NatML Error: Failed to create secret with error: Error Domain=NSOSStatusErrorDomain Code=-25308 "failed to generate asymmetric keypair" (errKCInteractionNotAllowed / errSecInteractionNotAllowed: / Interaction is not allowed with the Security Server.) UserInfo={numberOfErrorsDeep=0, NSDescription=failed to generate asymmetric keypair}
[2023-05-11T23:59:42Z - Unity] Obtained 256 stack frames.
[2023-05-11T23:59:42Z - Unity] #0 0x007fff227aa0bd in SecKeyCopyPublicKey
[2023-05-11T23:59:42Z - Unity] #1 0x000001609204a2 in NMLModelConfigurationCreateSecret
[2023-05-11T23:59:42Z - Unity] #2 0x00000197f38818 in (wrapper managed-to-native) NatML.Internal.NatML:CreateSecret (NatML.Internal.NatML/SecretCreationHandler,intptr) [{0x7f8562e1c0b8} + 0xa8] (0x197f38770 0x197f388bd) [0x118926a80 - Unity Child Domain]
So far I've only seen this with Android builds running on macOS in the Unity Cloud Build environment. I haven't run enough builds on Windows to rule out a possible failure there as well, but I'm sure the error will be different if it occurs.
Assuming this is a build environment issue, if you have any details on what the cause could be, I'll bounce the issue over to Unity.
Issue Outline: Unable to embed MLEdgeModel on iOS. According to the build logs, the model does get downloaded and embedded into the app binary. It seems that NatML isn't picking up the embedded model at runtime.
Notes:
iOS log
message.txt
Build log on Unity Editor
Editor.log
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
2022-08-17 16:47:47.774 31226 31374 Warn NdkImageReader Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
2022-08-17 16:47:47.774 31226 31374 Error NatML CameraDevice 1 failed to read preview image
Hi, it's me again!
I got everything working alongside AR Foundation and I have to say I'm really impressed; by far the best NN solution for Unity! But unfortunately I found a problem with the MLXRCpuImageFeature class. It causes a memory leak on iOS, and I suspect it will on Android too. This causes the application to crash after a minute or so (depending on the available RAM). I checked all other things that could be the cause of this memory leak, but I can reliably narrow it down to creating a new MLXRCpuImageFeature.
For context this is a simplified version of how I'm using it:
public ARCameraManager cameraManager;
private MLImageFeature currentFeature;

void Start () {
    cameraManager.frameReceived += FrameChanged;
}

void Update () {
    // Function that uses the current feature to do predictions
    // Does not cause the memory leak (verified)
    PredictAsync(currentFeature);
}

private void FrameChanged (ARCameraFrameEventArgs args) {
    if (cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image) && !Application.isEditor) {
        // Commenting out this line gets rid of the memory leak
        currentFeature = new MLXRCpuImageFeature(image);
        image.Dispose();
    }
}
In the meantime I'll try to work around the memory leak, but this is probably something that should be known about and fixed.
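One possible workaround, assuming a copy-based API like the MLXRExtensions.CopyFrom mentioned in other reports is available in your NatML version, is to reuse a single preallocated MLImageFeature instead of constructing a new MLXRCpuImageFeature per frame:

```csharp
// Workaround sketch: copy each frame into one reusable MLImageFeature,
// avoiding the per-frame MLXRCpuImageFeature allocation that leaks.
// ASSUMPTION: MLXRExtensions.CopyFrom exists in your version; verify first.
private MLImageFeature currentFeature;

private void FrameChanged (ARCameraFrameEventArgs args) {
    if (cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image)) {
        currentFeature ??= new MLImageFeature(image.width, image.height); // allocate once
        MLXRExtensions.CopyFrom(currentFeature, image);
        image.Dispose();
    }
}
```

This keeps the native allocation count constant regardless of how many frames arrive.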
As instructed opening an issue to track support for Apple Silicon (arm64)
DllNotFoundException: NatML assembly:<unknown assembly> type:<unknown type> member:(null)
The documentation states that MLARImageFeature can be used to easily convert an XRCpuImage to an image feature, but it seems this method is not yet available for use.
Is there any ETA for this?
I'm currently on 1.0.19
Hi, I'm experimenting with the HandPose model and wanted to ask if it is possible to overlay and map the predicted anchors onto the input image. I explored the code but couldn't find how this may be done. I would really appreciate it if you could help me with this.
Thanks
Hi,
When fetching data starts on Android, I get NullReferenceException: Object reference not set to an instance of an object
But in Editor it works ok.
Thanks in advance for help
I have tried to load my model from a Unity asset on my local disk.
The code looks like this:
using var model = modelData.Deserialize();
Debug.Log(model);
// var predictor = ?
The debug log gives me this:
MLEdgeModel
Input: input.1: (1, 448, 448, 4) System.Byte
Input: input.4: (1, 448, 448, 4) System.Byte
Input: input.7: (1, 448, 448, 4) System.Byte
Output: 504: (1, 48, 28, 28) System.Single
Output: 499: (1, 24, 28, 28) System.Single
Output: 530: (1, 2016, 28, 28) System.Single
Output: 516: (1, 672, 28, 28) System.Single
UnityEngine.Debug:Log (object)
But how do I write the predictor? Looking for some help, thanks ^_^
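Predictors in the NatML samples generally wrap the model, create edge features from the inputs, run the model, and parse the output features. A minimal skeleton for the model logged above might look like the following; the feature-creation and output-parsing calls are assumptions modeled on bundled predictors such as MoveNetPredictor and must be checked against the NatML docs, and only the first of the three inputs is wired up for brevity:

```csharp
// Skeleton custom predictor (sketch only; verify the API calls below).
public sealed class MyPredictor : System.IDisposable {
    private readonly MLEdgeModel model;

    public MyPredictor (MLEdgeModel model) => this.model = model;

    public float[] Predict (MLImageFeature image) {
        // Create an edge feature for the model's first input, (1, 448, 448, 4)
        using var input = (image as IMLEdgeFeature).Create(model.inputs[0]);
        using var outputs = model.Predict(input);
        // Wrap the first output, (1, 48, 28, 28) per the log, and copy it out
        return new MLArrayFeature<float>(outputs[0]).ToArray();
    }

    public void Dispose () => model.Dispose();
}
```

From there, the raw float array can be reshaped and decoded according to whatever your model's outputs represent.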
I have upgraded NatML, NatHub, and NatDevice as you advised.
I changed only the deprecated calls and didn't modify the rest, but what was working fine is now failing with an error.
It works on Windows, but on Android I get an error like the one below.
We've been seeing a few developers having to apply this workaround to get an MLModel to load:
var modelData = ...
modelData.computeTarget = MLModelData.ComputeTarget.CPUOnly; // <-- add this
var model = new MLEdgeModel(modelData);
We should automatically perform this fallback in native code and log a message informing the developer of said fallback. This prevents nasty try { ... } catch statements in user code.
Hi!
I'm getting:
The type or namespace name 'Predictor' could not be found (are you missing a using directive or an assembly reference?)
using "api.natsuite.natml": "1.0.10" with "scopes": ["api.natsuite"], or "ai.natml.natml": "1.0.19" with "scopes": ["ai.natml"], with my model updated both privately (MLModelData.FromHub) and locally (a .onnx model).
Any idea? I think the real question is: how do I properly use my own model?
Thanks!
I'm trying to install the NatML package and get this script error: "Library\PackageCache\[email protected]\Runtime\Features\MLImageFeature.cs(189,50): error CS0117: 'ImageConversion' does not contain a definition for 'EncodeNativeArrayToJPG'"
What have I done wrong?
We are seeing a random failure in ~25% of Android builds created using Unity Cloud Build with the following stack trace:
[error] [warning] [2023-04-26T20:52:31Z - Unity] ExecutionEngineException: String conversion error: Illegal byte sequence encounted in the input.
[2023-04-26T20:52:31Z - Unity] at (wrapper managed-to-native) System.Runtime.InteropServices.Marshal.PtrToStringAnsi(intptr)
[2023-04-26T20:52:31Z - Unity] at System.Runtime.InteropServices.Marshal.PtrToStringUTF8 (System.IntPtr ptr) [0x00000] in <88e4733ac7bc4ae1b496735e6b83bbd3>:0
[2023-04-26T20:52:31Z - Unity] at NatML.MLEdgeModel.OnCreateSecret (System.IntPtr context, System.IntPtr secret) [0x00028] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Runtime\MLEdgeModel.cs:324
[2023-04-26T20:52:31Z - Unity] at (wrapper native-to-managed) NatML.MLEdgeModel.OnCreateSecret(intptr,intptr)
[2023-04-26T20:52:31Z - Unity] at (wrapper managed-to-native) NatML.Internal.NatML.CreateSecret(NatML.Internal.NatML/SecretCreationHandler,intptr)
[2023-04-26T20:52:31Z - Unity] at NatML.MLEdgeModel.CreateSecret () [0x00012] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Runtime\MLEdgeModel.cs:292
[2023-04-26T20:52:31Z - Unity] at NatML.Editor.NatMLSettingsEmbed.CreateEmbeds (UnityEditor.Build.Reporting.BuildReport report) [0x00079] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Editor\NatMLSettingsEmbed.cs:50
[2023-04-26T20:52:31Z - Unity] at NatML.Hub.Editor.BuildEmbedHelper`1[T].UnityEditor.Build.IPreprocessBuildWithReport.OnPreprocessBuild (UnityEditor.Build.Reporting.BuildReport report) [0x0002b] in BUILD_PATH/p\o3h-client\Library\PackageCache\[email protected]\Editor\BuildEmbedHelper.cs:89
[2023-04-26T20:52:31Z - Unity] at UnityEditor.Build.BuildPipelineInterfaces+<>c__DisplayClass16_0.<OnBuildPreProcess>b__1 (UnityEditor.Build.IPreprocessBuildWithReport bpp) [0x00000] in <6e08e61cfda04255b3972ba7f0515fae>:0
[2023-04-26T20:52:31Z - Unity] at UnityEditor.Build.BuildPipelineInterfaces.InvokeCallbackInterfacesPair[T1,T2] (System.Collections.Generic.List`1[T] oneInterfaces, System.Action`1[T] invocationOne, System.Collections.Generic.List`1[T] twoInterfaces, System.Action`1[T] invocationTwo, System.Boolean exitOnFailure) [0x000ff] in <6e08e61cfda04255b3972ba7f0515fae>:0
This is with NatML version 1.1.4.
The trace identifies native code passing an invalid string pointer to C# in a preprocess build step.
This has been reproduced in both clean and incremental builds in UCB; presumably there's something about that environment, as we haven't seen it locally.
Looking for any ideas on what could be the cause of this issue.
Hello. I use the "VideoKit Core" plan on natml.ai.
I had been using it for the last 3 months with no issues, but a few days ago this started happening.
Hello! I am looking for a library that can run .pt models on my Node.js server. I found your project but cannot see any info about .pt support.
Can you clarify?
TinyYolov3 crashes Unity as soon as the scene starts. Looks like there's a problem with actually pulling the model from Hub.
We have come across a memory leak when using movenet and movenet-3D on iOS. For some reason the model isn't being disposed of properly, even though we have model?.Dispose() in the OnDisable method. To recreate: build the MoveNet-3D sample project for iOS from https://github.com/natml-hub/MoveNet-3D, then open the app and take note of the Documents and Data storage (Settings > General > Storage > App Name). Then hard quit the app and reopen it. You should see the Documents and Data storage jump up each time by 10-12 MB. We have tried both the embedded ML method and runtime loading; it happens with both.