Comments (31)
@fs-eire I can verify that using `1.19.0-dev.20240621-69d522f4e9`, loading a model with the `webgpu` execution provider in a service worker works, even in a web extension. The necessary code is:

```js
import * as ONNX_WEBGPU from "onnxruntime-web/webgpu";

// any Blob that contains a valid ORT model would work;
// I'm using Xenova/multilingual-e5-small/onnx/model_quantized.with_runtime_opt.ort
const buffer = await mlModel.blob.arrayBuffer();

const sessionwebGpu = await ONNX_WEBGPU.InferenceSession.create(buffer, {
  executionProviders: ["webgpu"],
});
console.log("Loading embedding model using sessionwebGpu", sessionwebGpu);
```

This results in a successful execution, yay! 💯 :)
![Bildschirmfoto 2024-07-06 um 19 41 18](https://private-user-images.githubusercontent.com/454817/346289490-935c943e-d68d-4684-8dae-15f11d472dc2.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjEzMTExNTMsIm5iZiI6MTcyMTMxMDg1MywicGF0aCI6Ii80NTQ4MTcvMzQ2Mjg5NDkwLTkzNWM5NDNlLWQ2OGQtNDY4NC04ZGFlLTE1ZjExZDQ3MmRjMi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjQwNzE4JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI0MDcxOFQxMzU0MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT04ODkxNWViYjc5YTA4Zjg3YjZiZGEzMjFjMGY5MjI0NDZiN2Q5M2U5MDYzMTg3ZGVhMTY1OGUxN2Q1ODdiNjc2JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZhY3Rvcl9pZD0wJmtleV9pZD0wJnJlcG9faWQ9MCJ9.TAOyalvYcHCu5A2LRCC3Y6UH-K72wH9TOKC-wUWL1c4)
I think we can ignore the warning (printed as an error), since the session loads.
WebAssembly would work in a service worker, too. Service workers are limited in their ability to load external resources such as WASM runtime files, but that doesn't mean you can't get such data transferred into the service worker context as a `Blob` or `ArrayBuffer`. In fact, you can transfer gigabytes instantly using a `MessageChannel` and the concept of Transferable objects. Passing a `Blob`/`ArrayBuffer` from a content script down to a background worker/service worker even works, standards-compliant, in web extensions, as I demonstrate here: w3c/webextensions#293 (comment). It's even much simpler for non-web-extension use cases, as you simply use the `self.onmessage` API in a service worker to receive a `MessageChannel` object and, via one of its ports, receive the `Blob` or `ArrayBuffer`.
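To make this concrete, a minimal sketch of such a transfer (the names and the fetch source are illustrative, not from the code above):

```js
// page or content-script side: move an ArrayBuffer into the service worker.
// Listing it in the transfer list transfers ownership instead of copying.
const modelBuffer = await (await fetch("/model.ort")).arrayBuffer();
const channel = new MessageChannel();
navigator.serviceWorker.controller.postMessage(
  { port: channel.port2, modelBuffer },
  [channel.port2, modelBuffer] // transferables: zero-copy handoff
);

// service-worker side: receive the buffer
self.onmessage = (event) => {
  const { port, modelBuffer } = event.data;
  // modelBuffer is now owned by this context and can back InferenceSession.create()
};
```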
I'm aware that the current implementation hard-codes a few things, like `importWasmModule()` trying to import the Emscripten runtime JS and, by default, Emscripten trying to import the WASM binary. But this isn't something that needs to be engraved in stone...

- The Emscripten runtime code can be imported in userland code like this:

```js
import ortWasmRuntime from "onnxruntime-web/dist/ort-wasm-simd-threaded";
```

The runtime's default export is the Emscripten module factory function.
- You can easily override the default Emscripten WASM binary module loader with a custom loader that allows an `ArrayBuffer` to be passed by reference:

```js
Module["instantiateWasm"] = async (imports, onSuccess) => {
  // Module["wasmModule"] is assumed to hold the WASM binary as an ArrayBuffer,
  // passed by reference from userland. WebAssembly.instantiateStreaming() expects
  // a Response, so for a raw ArrayBuffer we use WebAssembly.instantiate() directly.
  const result = await WebAssembly.instantiate(Module["wasmModule"], imports);
  onSuccess(result.instance, result.module);
  return {}; // async instantiation: return an empty exports object
};
```
Of course, we don't want it that way, but I mention it as this is the "documented way".
- But as you know, you can also set `Module["$option"]` by passing these options as an object to the runtime factory function, i.e. to the runtime function imported by userland code, exactly as you already do here:

```js
{
  numThreads,
  // just conditionally merge in:
  instantiateWasm: ONNX_WASM.env.wasm.instantiateWasm,
}
```
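For illustration, a minimal sketch of how such an options object would be handed to the imported factory (note that `env.wasm.instantiateWasm` is part of the proposal below, not an existing onnxruntime-web API):

```js
// pass the options object to the Emscripten module factory imported above
const Module = await ortWasmRuntime({
  numThreads: 1,
  instantiateWasm: ONNX_WASM.env.wasm.instantiateWasm, // proposed userland hook
});
```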
- A proposal for how WASM could work, as a PoC, in service workers (and web extensions):

```js
import * as ONNX_WASM from "onnxruntime-web/wasm";

// The difference is that this import will be bundled in by the userland bundler,
// while the conditional dynamic import that happens inside the ONNX runtime would not:
// the ternary operator here: https://github.com/microsoft/onnxruntime/blob/83e0c6b96e77634dd648e890cead598b6e065cde/js/web/lib/wasm/wasm-utils-import.ts#L157
// and all its following code cannot be statically analyzed by bundlers, so tree-shaking
// and inlining cannot happen and bundlers are forced to emit dynamic import() code.
// This could also lead to downstream issues with the transformers.js package and other
// package/bundler combinations, while this import is explicit and gets inlined:
import ortWasmRuntime from "onnxruntime-web/dist/ort-wasm-simd-threaded";

// could maybe be passed a Blob via https://emscripten.org/docs/api_reference/module.html#Module.mainScriptUrlOrBlob
ONNX_WASM.env.wasm.proxy = false;

// instead of always calling importWasmModule() in wasm-factory.ts, allow passing down the Emscripten JS runtime
ONNX_WASM.env.wasm.wasmRuntime = ortWasmRuntime;

// also allow setting a custom Emscripten loader
ONNX_WASM.env.wasm.instantiateWasm = async (imports, onSuccess) => {
  // wasmRuntimeBlob comes from userland code; it may have been passed in via a MessageChannel.
  // WebAssembly.instantiateStreaming() expects a Response, so wrap the Blob accordingly.
  let result;
  if (WebAssembly.instantiateStreaming) {
    const response = new Response(wasmRuntimeBlob, { headers: { "Content-Type": "application/wasm" } });
    result = await WebAssembly.instantiateStreaming(response, imports);
  } else {
    result = await WebAssembly.instantiate(await wasmRuntimeBlob.arrayBuffer(), imports);
  }
  onSuccess(result.instance, result.module);
  return {};
};

// then continuing as usual;
// mlModel comes from userland code and may have been passed in via a MessageChannel
const modelBuffer = await mlModel.blob.arrayBuffer();
const sessionWasm = await ONNX_WASM.InferenceSession.create(modelBuffer, {
  executionProviders: ["wasm"],
});
console.log("Loading embedding model using sessionWasm", sessionWasm);
```
So with a one-line change here (using the passed-down runtime callback) and a one-line change here (adding the `instantiateWasm` callback reference), the WebAssembly backend should work in service workers as well, if I'm not mistaken in this 4D-chess, pseudo-code, reverse-engineering game.
Currently, when I call the WASM implementation:

```js
import * as ONNX_WASM from "onnxruntime-web/wasm";

const sessionWasm = await ONNX_WASM.InferenceSession.create(buffer, {
  executionProviders: ["wasm"],
});
console.log("Loading embedding model using sessionWasm", sessionWasm);
```
Thank you for your help!
---
Hey, I also need this. I am struggling with importing this version. So far I have been importing ONNX using `import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/esm/ort.webgpu.min.js"`. However, when I change to `import * as ort from "https://cdn.jsdelivr.net/npm/[email protected]/dist/esm/ort.webgpu.min.js"`, it seems not to have an `.../esm/` folder. Do you know why that is, and how to import it then?
Just replace `.../esm/ort.webgpu.min.js` with `.../ort.webgpu.min.mjs` and it should work. If you are also using a service worker, use `ort.webgpu.bundle.min.mjs` instead of `ort.webgpu.min.mjs`.
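A hedged sketch of the adjusted imports (version tag omitted; pin whichever dev build you are testing):

```js
// page usage: note there is no /esm/ folder, and the extension is .mjs
import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgpu.min.mjs";

// service worker usage: prefer the bundled variant, which avoids a runtime dynamic import
// import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgpu.bundle.min.mjs";
```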
---
@ChTiSh You're welcome 🫶 Always happy to help :)
---
Thank you for reporting this issue. I will try to figure out how to fix this problem.
---
#20991 makes the default ESM import use a non-dynamic import; hopefully this change fixes the problem. The PR is still in progress.
---
I can confirm WebGPU is working for my little Chrome extension app as well, but I'm having a problem disabling the warning.
You can silence it using a brittle monkey patch...

```js
// store the original function reference
const originalConsoleError = self.console.error;

// override the function reference with a new arrow function that does nothing
self.console.error = () => {};

// code will internally call the function that does nothing...
const sessionwebGpu = await ONNX_WEBGPU.InferenceSession.create(buffer, {
  executionProviders: ["webgpu"],
});

// still works; we only replaced the reference for the .error() function
console.log("Loading embedding model using sessionwebGpu", sessionwebGpu);

// restore the original function reference, so that console.error() works just as before
self.console.error = originalConsoleError;
```
But I agree... it should probably be a `console.warn()` call if it is intended to be a warning.
---
So it turns out that dynamic import (i.e. `import()`) and top-level `await` are not supported in current service workers. I was not expecting that `import()` is banned in service workers.

Currently, the WebAssembly factory (wasm-factory.ts) uses dynamic import to load the JS glue. This does not work in a service worker, and a few potential solutions are also not available:

- Modifying it to a static import statement: won't work, because the JS glue includes top-level `await`.
- Using `importScripts`: won't work, because the JS glue is ESM.
- Using `eval`: won't work; same as `importScripts`.

I am now trying to make a JS bundle that does not use dynamic import, for usage in service workers specifically. Still working on it.
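For context, a minimal sketch of the constraint (file names are illustrative):

```js
// sw.js, registered as a module service worker
import * as ort from "./ort.webgpu.bundle.min.mjs"; // static ESM import: allowed
// const ort = await import("./ort.webgpu.min.mjs"); // dynamic import(): rejected in service workers

// registration, e.g. from a page:
// navigator.serviceWorker.register("./sw.js", { type: "module" });
```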
---
Thanks, I appreciate your efforts around this. It does seem like some special-case bundle will need to be built after all; you might need `iife` or `umd` for the bundler output format.
---
> Thanks, I appreciate your efforts around this. It does seem like some special-case bundle will need to be built after all; you might need `iife` or `umd` for the bundler output format.

I have considered this option. However, Emscripten does not offer an option to output both UMD (IIFE+CJS) and ESM for the JS glue (emscripten-core/emscripten#21899), so I have to choose one. I chose the ES6 format output for the JS glue because of a couple of problems with importing UMD from ESM, and because `import()` is the standard way to import ESM from both ESM and UMD (until I learned from this issue that it does not work in service workers).

I found a way to make ORT Web work; yes, this needs the build script to do some special handling. And it will only work for ESM, because the JS glue is ESM and there seems to be no way to import ESM from UMD in a service worker.
---
@ggaabe Could you please help try `import * as ort from "./ort.webgpu.bundle.min.js"` from version 1.19.0-dev.20240604-3dd6fcc089?
---
@fs-eire my project depends on transformers.js, which imports the onnxruntime webgpu backend like this: https://github.com/xenova/transformers.js/blob/v3/src/backends/onnx.js#L24

Is this the right usage? In my project I've added this to my package.json to resolve onnxruntime-web to this new version, though the issue is still occurring:

```json
"overrides": {
  "onnxruntime-web": "1.19.0-dev.20240604-3dd6fcc089"
}
```
---
Maybe also important: the same error is still occurring in the same spot in the inference session, in the onnx package and not in transformers.js. Do I need to add a resolver for onnxruntime-common as well?
---
Hi @fs-eire, is the newly-merged fix in a released build I can try?
---
Please try 1.19.0-dev.20240612-94aa21c3dd
---
@fs-eire EDIT: Never mind the comment I just deleted; that error was because I didn't set the webpack `target` to `webworker`.

However, I'm getting a new error now (progress!):

```
Error: no available backend found. ERR: [webgpu] RuntimeError: null function or function signature mismatch
```
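For reference, a minimal webpack sketch of that setting (the entry/output paths are illustrative):

```js
// webpack.config.js
module.exports = {
  target: "webworker", // emit code for a worker context, with no window/document assumptions
  entry: "./src/background.js",
  output: { filename: "background.js" },
};
```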
---
Update: I found that the error is happening here: onnxruntime/js/common/lib/backend-impl.ts, lines 83 to 86 at fff68c3.

For some reason the webgpu `backend.init` promise is rejecting with the `null function or function signature mismatch` error. This is much further along than we were before, though.
---
> Update: I found that the error is happening here: onnxruntime/js/common/lib/backend-impl.ts, lines 83 to 86 at fff68c3.
>
> For some reason the webgpu `backend.init` promise is rejecting with the `null function or function signature mismatch` error. This is much further along than we were before, though.

Could you share the steps to reproduce?
---
@fs-eire You'll need to run the WebGPU setup in a Chrome extension.

- You can use the code I just published here: https://github.com/ggaabe/extension
- Run `npm install`
- Run `npm run build`
- Open Chrome's extension manager:
![Screenshot 2024-06-14 at 9 37 14 AM](https://private-user-images.githubusercontent.com/5441185/339803607-e9ac0f03-1379-49b4-b5d4-a1bfe1842da8.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjEzMTExNTMsIm5iZiI6MTcyMTMxMDg1MywicGF0aCI6Ii81NDQxMTg1LzMzOTgwMzYwNy1lOWFjMGYwMy0xMzc5LTQ5YjQtYjVkNC1hMWJmZTE4NDJkYTgucG5nP1gtQW16LUFsZ29yaXRobT1BV1M0LUhNQUMtU0hBMjU2JlgtQW16LUNyZWRlbnRpYWw9QUtJQVZDT0RZTFNBNTNQUUs0WkElMkYyMDI0MDcxOCUyRnVzLWVhc3QtMSUyRnMzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyNDA3MThUMTM1NDEzWiZYLUFtei1FeHBpcmVzPTMwMCZYLUFtei1TaWduYXR1cmU9MzdiZWVlNTE3NGRlMGI1NzI4NGMxZWEyNjU3NjJkYjEwM2ZhYjM1NGQxMDZhYTNiMDYwMzM3NTljMjIwOWYzYSZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QmYWN0b3JfaWQ9MCZrZXlfaWQ9MCZyZXBvX2lkPTAifQ.967IJSXLvLP5ozrBJeWahlN_9tYb5YRjSBQcZaRqyhY)
- Load unpacked:
![Screenshot 2024-06-14 at 9 37 52 AM](https://private-user-images.githubusercontent.com/5441185/339803816-73da6d0a-de1c-4e4f-9eaa-24a2936b886b.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjEzMTExNTMsIm5iZiI6MTcyMTMxMDg1MywicGF0aCI6Ii81NDQxMTg1LzMzOTgwMzgxNi03M2RhNmQwYS1kZTFjLTRlNGYtOWVhYS0yNGEyOTM2Yjg4NmIucG5nP1gtQW16LUFsZ29yaXRobT1BV1M0LUhNQUMtU0hBMjU2JlgtQW16LUNyZWRlbnRpYWw9QUtJQVZDT0RZTFNBNTNQUUs0WkElMkYyMDI0MDcxOCUyRnVzLWVhc3QtMSUyRnMzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyNDA3MThUMTM1NDEzWiZYLUFtei1FeHBpcmVzPTMwMCZYLUFtei1TaWduYXR1cmU9NWIyYjVjMzdhOTM2ZjAxYjU3OTE5YWQxYmJlOWU4MTE0YjNiYTEyYjYyMDY0YzdmZTBiZmY0ZThmMmI3N2M1OSZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QmYWN0b3JfaWQ9MCZrZXlfaWQ9MCZyZXBvX2lkPTAifQ.WNM_XiyJoqKO2ZRJmlmCzRpFQjJbg1ZpH3tgipk8bwg)
- Select the `build` folder from the repo.
- Open the `AI WebGPU Extension` extension.
- Type some text into the text input. It will load Phi-3 mini, and after it finishes loading, this error will occur.
- If you view the extension in the extension manager and select the "Inspect views: service worker" link before opening the extension, it will bring up an inspection window where you can view the errors as they occur. A little "Errors" bubble link also shows up there after they occur:
![Screenshot 2024-06-14 at 9 40 48 AM](https://private-user-images.githubusercontent.com/5441185/339804747-8f6cd5a4-d9df-4c84-8436-51b289cfcc47.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjEzMTExNTMsIm5iZiI6MTcyMTMxMDg1MywicGF0aCI6Ii81NDQxMTg1LzMzOTgwNDc0Ny04ZjZjZDVhNC1kOWRmLTRjODQtODQzNi01MWIyODljZmNjNDcucG5nP1gtQW16LUFsZ29yaXRobT1BV1M0LUhNQUMtU0hBMjU2JlgtQW16LUNyZWRlbnRpYWw9QUtJQVZDT0RZTFNBNTNQUUs0WkElMkYyMDI0MDcxOCUyRnVzLWVhc3QtMSUyRnMzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyNDA3MThUMTM1NDEzWiZYLUFtei1FeHBpcmVzPTMwMCZYLUFtei1TaWduYXR1cmU9Zjc3MzIwMjA1MzgyZjY2OTJlYmVmNDNhZDFjMzhhMDQ0Yjc3MmVmMmIwOWE2YTM2OTYwYzI0NDIxMzM2ZjI4NiZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QmYWN0b3JfaWQ9MCZrZXlfaWQ9MCZyZXBvX2lkPTAifQ.MzWaM-0WGK6zERm2rDkuRcRYhFoxLQFk5cTszgjgHEw)
- You will need to click the "Refresh" button on the extension in the extension manager to rerun the error, because it does not attempt to reload the model after the first attempt until another refresh.
---
@ggaabe I did some debugging on my box and made some fixes.

- Changes to ONNX Runtime Web: #21073 was created to make sure the WebAssembly file can be loaded correctly when `env.wasm.wasmPaths` is not specified.

- Changes to https://github.com/ggaabe/extension: ggaabe/extension#1 needs to be applied to the extension example to make it load the model correctly. Please note:

  - The onnxruntime-web version needs to be updated to consume the changes from (1) (after it gets merged and published to the dev channel).
  - There are still errors in background.js, which look like incorrect params passed to `tokenizer.apply_chat_template()`. However, the WebAssembly is initialized and the model loads successfully.

- Other issues:

  - Transformers.js overrides `env.wasm.wasmPaths` to a CDN URL internally. At least for this example, we don't want this behavior, so we need to reset it to `undefined` to keep the default behavior (see the sketch after this list).
  - The multi-threaded CPU EP is not supported, because `Worker` is not accessible in a service worker. Issue tracking: whatwg/html#8362
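A hedged sketch of that `wasmPaths` reset, assuming the transformers.js v3 `env` API (adjust the import to your setup):

```js
import { env } from "@xenova/transformers";

// transformers.js points env.backends.onnx.wasm.wasmPaths at a CDN internally;
// resetting it lets onnxruntime-web fall back to its default resolution
env.backends.onnx.wasm.wasmPaths = undefined;
```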
---
Awesome, thank you for your thoroughness in explaining this and tackling this head on. Is there a dev channel version I can test out?
---
Not yet. Will update here once it is ready.
---
sorry to bug; is there any dev build number? wasn't sure how often a release runs
---
> sorry to bug; is there any dev build number? wasn't sure how often a release runs

Please try 1.19.0-dev.20240621-69d522f4e9
---
@fs-eire I'm getting one new error:

```
ort.webgpu.bundle.min.mjs:6 Uncaught (in promise) Error: The data is not on CPU. Use `getData()` to download GPU data to CPU, or use `texture` or `gpuBuffer` property to access the GPU data directly.
    at get data (ort.webgpu.bundle.min.mjs:6:13062)
    at get data (tensor.js:62:1)
```

I pushed the code changes to my repo and fixed the call to the tokenizer. To reproduce, just type one letter in the Chrome extension's text input and wait.
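For anyone hitting this error: a minimal sketch of reading a GPU-resident output with the onnxruntime-web `Tensor` API (the session and output name are illustrative):

```js
const results = await session.run(feeds);
const output = results.logits; // illustrative output name

if (output.location === "cpu") {
  console.log(output.data); // already a typed array on the CPU
} else {
  const data = await output.getData(); // downloads "gpu-buffer" / "texture" data to the CPU
  console.log(data);
}
```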
---
> @fs-eire I'm getting one new error:
>
> ort.webgpu.bundle.min.mjs:6 Uncaught (in promise) Error: The data is not on CPU. Use getData() to download GPU data to CPU, or use texture or gpuBuffer property to access the GPU data directly. at get data (ort.webgpu.bundle.min.mjs:6:13062) at get data (tensor.js:62:1)
>
> I pushed the code changes to my repo and fixed the call to the tokenizer. To reproduce, just type one letter in the Chrome extension's text input and wait.

This may be a problem in transformers.js. Could you try whether this problem happens in a normal page? If so, please report the issue to transformers.js. If it's only happening in a service worker, I can take a closer look.
---
> > @fs-eire I'm getting one new error:
> >
> > ort.webgpu.bundle.min.mjs:6 Uncaught (in promise) Error: The data is not on CPU. Use getData() to download GPU data to CPU, or use texture or gpuBuffer property to access the GPU data directly. at get data (ort.webgpu.bundle.min.mjs:6:13062) at get data (tensor.js:62:1)
>
> This may be a problem in transformers.js. Could you try whether this problem happens in a normal page? If so, please report the issue to transformers.js. If it's only happening in a service worker, I can take a closer look.

Did the data structures of the `Tensor` class change? Specifically, `dataLocation` vs. `location`? And if so, did it change consistently? I'm facing issues with `data` being `undefined` but `cpuData` being set (tokenizer result). But when I pass the data down to a BERT model, onnxruntime-web seems to expect a different data structure and checks `location` and `data`. Am I missing something, or has something changed? Could this lead to downstream issues where code checking for the `location` and `data` properties mistakenly believes the data isn't there or isn't in the right place? I linked a downstream issue.
---
@fs-eire Is there any way I could help push this forward? Thank you :)
---
@kyr0 Thank you a lot for your willingness to help. I am currently on vacation, but I will pick up this thread when I am back at the end of this month.