
Comments (31)

kyr0 commented on July 24, 2024

@fs-eire I can verify that with 1.19.0-dev.20240621-69d522f4e9, loading a model using WebGPU in a service worker works, even in a web extension. The necessary code is:

import * as ONNX_WEBGPU from "onnxruntime-web/webgpu";

// any Blob that contains a valid ORT model would work
// I'm using Xenova/multilingual-e5-small/onnx/model_quantized.with_runtime_opt.ort
const buffer = await mlModel.blob.arrayBuffer();

const sessionwebGpu = await ONNX_WEBGPU.InferenceSession.create(buffer, {
  executionProviders: ["webgpu"],
});
console.log("Loading embedding model using sessionwebGpu", sessionwebGpu);

Results in a successful execution, yay! 💯 :)

[screenshot: console output showing the successful execution]

I think we can ignore the warning, printed as an error, as the session loads.

WebAssembly would work in a Service Worker. Just because Service Workers are limited in their ability to load external resources such as WASM runtime files doesn't mean you can't get such data, as a Blob or ArrayBuffer, transferred into the Service Worker context. In fact, you can transfer gigabytes instantly using MessageChannel and Transferable objects.

Passing a Blob/ArrayBuffer down from a content script to a background worker/service worker even works, in a standards-compliant way, with Web Extensions, as I demonstrate here: w3c/webextensions#293 (comment)

It's even simpler for non-Web-Extension use cases: you simply use the self.onmessage API in a service worker to receive a MessageChannel object and, via one of its ports, receive the Blob or ArrayBuffer.
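
For illustration, a minimal sketch of such a zero-copy transfer, assuming a module service worker and hypothetical names like wasmBlob:

// page or content script: transfer the bytes instead of copying them
const buffer = await wasmBlob.arrayBuffer();
navigator.serviceWorker.controller.postMessage({ type: "wasm-binary", buffer }, [buffer]);

// service worker: receive the transferred buffer (same memory, no copy)
self.addEventListener("message", (event) => {
  if (event.data?.type === "wasm-binary") {
    const wasmBinary = event.data.buffer;
    // ...hand wasmBinary to the runtime from here
  }
});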

I'm aware that the current implementation hard-codes a few things: importWasmModule() tries to import the Emscripten runtime JS, and by default Emscripten tries to fetch the WASM binary. But this isn't something that needs to be set in stone...

  1. The Emscripten runtime code can be imported in userland code like this:
import ortWasmRuntime from "onnxruntime-web/dist/ort-wasm-simd-threaded"

As the node_modules show:
[screenshot: node_modules listing]

The runtime exports a default runtime function:
[screenshot: the default-exported runtime factory function]

  2. You can easily override the default Emscripten WASM binary loading with a custom loader that allows an ArrayBuffer to be passed by reference:
Module['instantiateWasm'] = (imports, onSuccess) => {
  // Module["wasmModule"] is assumed to hold the .wasm bytes as an ArrayBuffer;
  // instantiateStreaming expects a Response, so wrap the buffer accordingly
  const instantiated = WebAssembly.instantiateStreaming
    ? WebAssembly.instantiateStreaming(
        new Response(Module["wasmModule"], { headers: { "Content-Type": "application/wasm" } }),
        imports)
    : WebAssembly.instantiate(Module["wasmModule"], imports);
  instantiated.then((result) => onSuccess(result.instance, result.module));
  return {}; // an empty exports object tells Emscripten the instantiation is asynchronous
};

Of course, we don't want it that way, but I mention it as this is the "documented way".

  3. But as you know, you can also set Module["$option"] by passing these options as an object to the runtime factory function; in this case, that is the runtime function imported by userland code, exactly as you already do here (see the sketch after this list):
{
    numThreads,
    // just conditionally merge in:
    instantiateWasm: ONNX_WASM.env.wasm.instantiateWasm
}

  4. A proposal for how WASM could work as a PoC in service workers (and web extensions):

import * as ONNX_WASM from "onnxruntime-web/wasm";

// The difference is that this import will be bundled in by the user-land bundler,
// while the conditional dynamic import inside the ONNX runtime would not:
// the ternary operator here: https://github.com/microsoft/onnxruntime/blob/83e0c6b96e77634dd648e890cead598b6e065cde/js/web/lib/wasm/wasm-utils-import.ts#L157
// and the code following it cannot be statically analyzed by bundlers, so tree-shaking and inlining cannot happen
// and bundlers are forced to generate dynamic import() code.
// This could also lead to downstream issues with the transformers.js package and other package/bundler combinations,
// while this import is explicit and inlined:
import ortWasmRuntime from "onnxruntime-web/dist/ort-wasm-simd-threaded";

// could maybe be passed a Blob via https://emscripten.org/docs/api_reference/module.html#Module.mainScriptUrlOrBlob
ONNX_WASM.env.wasm.proxy = false;

// instead of always calling importWasmModule() in wasm-factory.ts, allow passing down the Emscripten JS runtime factory
ONNX_WASM.env.wasm.wasmRuntime = ortWasmRuntime;

// also allow setting a custom Emscripten loader
ONNX_WASM.env.wasm.instantiateWasm = async (imports, onSuccess) => {
  // please note that wasmRuntimeBlob comes from user-land code; it may be passed in via a MessageChannel
  let result;
  if (WebAssembly.instantiateStreaming) {
    // instantiateStreaming expects a Response, so wrap the Blob
    result = await WebAssembly.instantiateStreaming(
      new Response(wasmRuntimeBlob, { headers: { "Content-Type": "application/wasm" } }),
      imports);
  } else {
    result = await WebAssembly.instantiate(await wasmRuntimeBlob.arrayBuffer(), imports);
  }
  return onSuccess(result.instance, result.module);
};

// then continue as usual
// please note that mlModel comes from user-land code; it may have been passed in via a MessageChannel
const modelBuffer = await mlModel.blob.arrayBuffer();
const sessionWasm = await ONNX_WASM.InferenceSession.create(modelBuffer, {
  executionProviders: ["wasm"],
});
console.log("Loading embedding model using sessionWasm", sessionWasm);

So with a one-line change here (use the passed-down runtime callback) and a one-line change here (add the instantiateWasm callback reference), the WebAssembly backend should work in Service Workers as well, if I'm not mistaken in this 4D-chess pseudo-code reverse-engineering game.

Currently, when I call the WASM implementation:

import * as ONNX_WASM from "onnxruntime-web/wasm";

const sessionWasm = await ONNX_WASM.InferenceSession.create(buffer, {
  executionProviders: ["wasm"],
});
console.log("Loading embedding model using sessionWasm", sessionWasm);

Result:
[screenshot: error output]

Thank you for your help!

fs-eire commented on July 24, 2024

> Hey, I also need this. I am struggling with importing this version. So far I have been importing ONNX using import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/esm/ort.webgpu.min.js". However, when I change to import * as ort from "https://cdn.jsdelivr.net/npm/[email protected]/dist/esm/ort.webgpu.min.js" it seems not to have an .../esm/ folder. Do you know why that is and how to import it then?

Just replacing .../esm/ort.webgpu.min.js with .../ort.webgpu.min.mjs should work. If you are also using a service worker, use ort.webgpu.bundle.min.mjs instead of ort.webgpu.min.mjs.
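
For example (assuming the 1.19.0 dev build mentioned earlier in this thread):

// in a regular page or worker:
import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web@1.19.0-dev.20240621-69d522f4e9/dist/ort.webgpu.min.mjs";

// in a (module) service worker, use the bundle instead:
import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web@1.19.0-dev.20240621-69d522f4e9/dist/ort.webgpu.bundle.min.mjs";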

ChTiSh commented on July 24, 2024

kyr0 commented on July 24, 2024

@ChTiSh You're welcome 🫶 Always happy to help :)

fs-eire commented on July 24, 2024

Thank you for reporting this issue. I will try to figure out how to fix this problem.

fs-eire commented on July 24, 2024

#20991 makes the default ESM import use a non-dynamic import; hopefully this change fixes the problem. The PR is still in progress.

kyr0 commented on July 24, 2024

@ChTiSh

> I can confirm Web GPU is working for my little chrome extension app as well, but I'm having a problem disabling the warning.

You can silence it with a brittle monkey patch...

// store the original function reference
const originalConsoleError = self.console.error;

// override it with an arrow function that does nothing
self.console.error = () => {};

// this code will internally call the function that does nothing...
const sessionwebGpu = await ONNX_WEBGPU.InferenceSession.create(buffer, {
  executionProviders: ["webgpu"],
});

// still works; we only replaced the reference for the .error() function
console.log("Loading embedding model using sessionwebGpu", sessionwebGpu);

// restore the original function reference, so that console.error() works just as before
self.console.error = originalConsoleError;

But I agree: it should probably be a console.warn() call if it is intended to be a warning.

fs-eire commented on July 24, 2024

So it turns out that dynamic import (i.e. import()) and top-level await are not supported in current service workers. I was not expecting import() to be banned in service workers.

Currently, the WebAssembly factory (wasm-factory.ts) uses a dynamic import to load the JS glue. This does not work in a service worker, and a few potential alternatives are also unavailable:

  • Switching to a static import statement: won't work, because the JS glue includes top-level await.
  • Using importScripts: won't work, because the JS glue is ESM.
  • Using eval: won't work; same reason as importScripts.

I am now trying to make a JS bundle that does not use dynamic import, specifically for service worker usage. Still working on it.
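
For illustration, a minimal sketch of the constraint (file path hypothetical; error wording varies by browser):

// sw.js, registered with { type: "module" }
self.addEventListener("message", async () => {
  try {
    // dynamic import is rejected at runtime inside service workers,
    // which is what breaks the JS-glue loading in wasm-factory.ts:
    await import("./ort-wasm-simd-threaded.mjs");
  } catch (e) {
    console.log("import() is not available in a service worker:", e);
  }
});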

ggaabe commented on July 24, 2024

Thanks, I appreciate your efforts around this. It does seem like some special-case bundle will need to be built after all; you might need iife or umd for the bundler output format

fs-eire commented on July 24, 2024

> Thanks, I appreciate your efforts around this. It does seem like some special-case bundle will need to be built after all; you might need iife or umd for the bundler output format

I have considered this option. However, Emscripten does not offer an option to output both UMD (IIFE+CJS) and ESM for the JS glue (emscripten-core/emscripten#21899); I have to choose one. I chose the ES6 output format for the JS glue because of a couple of problems when importing UMD from ESM, and because import() is a standard way to import ESM from both ESM and UMD (until I learned from this issue that it does not work in service workers).

I found a way to make ORT Web work; yes, this needs some special handling in the build script. And it will only work for ESM, because the JS glue is ESM, and there seems to be no way to import ESM from UMD in a service worker.

fs-eire commented on July 24, 2024

@ggaabe Could you please try import * as ort from "./ort.webgpu.bundle.min.js" with version 1.19.0-dev.20240604-3dd6fcc089?

ggaabe commented on July 24, 2024

@fs-eire my project depends on transformers.js, which imports the onnxruntime WebGPU backend like this:

https://github.com/xenova/transformers.js/blob/v3/src/backends/onnx.js#L24

Is this the right usage? In my project I've added this to my package.json to resolve onnxruntime-web to this new version, though the issue is still occurring:

  "overrides": {
    "onnxruntime-web": "1.19.0-dev.20240604-3dd6fcc089"
  }

ggaabe commented on July 24, 2024

Maybe also important: the same error is still occurring at the same spot in InferenceSession, in the onnx package and not in transformers.js. Do I need to add a resolver for onnxruntime-common as well?

ggaabe commented on July 24, 2024

Hi @fs-eire, is the newly-merged fix in a released build I can try?

fs-eire commented on July 24, 2024

Please try 1.19.0-dev.20240612-94aa21c3dd

ggaabe commented on July 24, 2024

@fs-eire EDIT: Nvm the comment I just deleted, that error was because I didn't set the webpack target to webworker.

However, I'm getting a new error now (progress!):

Error: no available backend found. ERR: [webgpu] RuntimeError: null function or function signature mismatch

ggaabe commented on July 24, 2024

Update: I found the error is happening here:

if (!isInitializing) {
  backendInfo.initPromise = backendInfo.backend.init(backendName);
}
await backendInfo.initPromise;

For some reason the webgpu backend.init promise is rejecting due to the null function or function signature mismatch error. This is much further along than we were before though.

fs-eire commented on July 24, 2024

> Update: I found the error is happening here:
>
> if (!isInitializing) {
>   backendInfo.initPromise = backendInfo.backend.init(backendName);
> }
> await backendInfo.initPromise;
>
> For some reason the webgpu backend.init promise is rejecting due to the null function or function signature mismatch error. This is much further along than we were before though.

Could you share the steps to reproduce?

ggaabe commented on July 24, 2024

@fs-eire You'll need to run the WebGPU setup in a Chrome extension.

  1. Use my code, just published here: https://github.com/ggaabe/extension
  2. Run npm install.
  3. Run npm run build.
  4. Open Chrome's "Manage Extensions" page.
  5. Click "Load unpacked".
  6. Select the build folder from the repo.
  7. Open the "AI WebGPU Extension" extension.
  8. Type some text into the text input; it will load Phi-3 mini, and after it finishes loading, this error will occur.
  9. If you select the "Inspect views: service worker" link on the extension in the extension manager before opening the extension, an inspection window comes up to view the errors as they occur. A little "Errors" bubble link also shows up there after they occur.
  10. You will need to click the "Refresh" button on the extension in the extension manager to rerun the error, because it does not attempt to reload the model after the first attempt until another refresh.

fs-eire commented on July 24, 2024

@ggaabe I did some debugging on my box and made some fixes:

  1. Changes to ONNX Runtime Web:

    #21073 was created to make sure the WebAssembly file can be loaded correctly when env.wasm.wasmPaths is not specified.

  2. Changes to https://github.com/ggaabe/extension:

    ggaabe/extension#1 needs to be applied to the extension example to make it load the model correctly. Please note:

    • The onnxruntime-web version needs to be updated to consume the changes from (1) (after it gets merged and published to the dev channel).
    • There are still errors in background.js, which look like incorrect params being passed to tokenizer.apply_chat_template(). However, the WebAssembly is initialized and the model loads successfully.

  3. Other issues:

    • Transformers.js overrides env.wasm.wasmPaths to a CDN URL internally. At least for this example we don't want this behavior, so we need to reset it to undefined to keep the default behavior (see the sketch after this list).
    • Multi-threaded CPU EP is not supported because Worker is not accessible in service workers. Tracking issue: whatwg/html#8362
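
For reference, a minimal sketch of that reset, assuming the transformers.js env API (the exact property path may differ between versions):

import { env } from "@xenova/transformers";

// transformers.js points env.wasm.wasmPaths at a CDN URL internally;
// resetting it restores onnxruntime-web's default resolution:
env.backends.onnx.wasm.wasmPaths = undefined;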

ggaabe commented on July 24, 2024

Awesome, thank you for your thoroughness in explaining this and tackling this head on. Is there a dev channel version I can test out?

fs-eire commented on July 24, 2024

Not yet. Will update here once it is ready.

ggaabe commented on July 24, 2024

sorry to bug; is there any dev build number? wasn't sure how often a release runs

fs-eire commented on July 24, 2024

> sorry to bug; is there any dev build number? wasn't sure how often a release runs

Please try 1.19.0-dev.20240621-69d522f4e9

ggaabe commented on July 24, 2024

@fs-eire I'm getting one new error:

ort.webgpu.bundle.min.mjs:6 Uncaught (in promise) Error: The data is not on CPU. Use `getData()` to download GPU data to CPU, or use `texture` or `gpuBuffer` property to access the GPU data directly.
    at get data (ort.webgpu.bundle.min.mjs:6:13062)
    at get data (tensor.js:62:1)

I pushed the code changes to my repo and fixed the call to the tokenizer. To reproduce, just type 1 letter in the chrome extension’s text input and wait
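
For reference, that message points at the onnxruntime-web Tensor API; a minimal sketch of reading a GPU-resident output (outputTensor is a hypothetical name):

// tensor.data throws while the tensor lives on the GPU;
// getData() downloads it to the CPU first
if (outputTensor.location === "gpu-buffer") {
  const cpuData = await outputTensor.getData();
  console.log(cpuData);
} else {
  console.log(outputTensor.data);
}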

nickl1234567 commented on July 24, 2024

Hey, I also need this. I am struggling with importing this version. So far I have been importing ONNX using
import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/esm/ort.webgpu.min.js".
However, when I change to import * as ort from "https://cdn.jsdelivr.net/npm/[email protected]/dist/esm/ort.webgpu.min.js" it seems not to have an .../esm/ folder. Do you know why that is and how to import it then?

fs-eire commented on July 24, 2024

> @fs-eire I'm getting one new error:
>
> ort.webgpu.bundle.min.mjs:6 Uncaught (in promise) Error: The data is not on CPU. Use `getData()` to download GPU data to CPU, or use `texture` or `gpuBuffer` property to access the GPU data directly.
>     at get data (ort.webgpu.bundle.min.mjs:6:13062)
>     at get data (tensor.js:62:1)
>
> I pushed the code changes to my repo and fixed the call to the tokenizer. To reproduce, just type 1 letter in the chrome extension’s text input and wait

This may be a problem in transformers.js. Could you check whether this problem happens in a normal page? If so, please report the issue to transformers.js. If it's only happening in a service worker, I can take a closer look.

ChTiSh commented on July 24, 2024

I can confirm Web GPU is working for my little chrome extension app as well, but I'm having a problem disabling the warning.

kyr0 commented on July 24, 2024

> @fs-eire I'm getting one new error:
>
> ort.webgpu.bundle.min.mjs:6 Uncaught (in promise) Error: The data is not on CPU. Use `getData()` to download GPU data to CPU, or use `texture` or `gpuBuffer` property to access the GPU data directly.
>     at get data (ort.webgpu.bundle.min.mjs:6:13062)
>     at get data (tensor.js:62:1)
>
> I pushed the code changes to my repo and fixed the call to the tokenizer. To reproduce, just type 1 letter in the chrome extension’s text input and wait

> This may be a problem in transformers.js. Could you check whether this problem happens in a normal page? If so, please report the issue to transformers.js. If it's only happening in a service worker, I can take a closer look.

Did the data structures of the Tensor class change? Specifically dataLocation vs location? And if so, did they change consistently? I'm facing issues with data being undefined but cpuData being set (tokenizer result). But when I pass the data down to a BERT model, onnxruntime-web seems to expect a different data structure and checks location and data. Am I missing something, or has something changed? Could this lead to downstream issues where code checking for the location and data properties mistakenly believes the data isn't there or isn't in the right place? I linked a downstream issue.

kyr0 commented on July 24, 2024

@fs-eire Is there any way I could help push this forward? Thank you :)

fs-eire commented on July 24, 2024

@kyr0 Thank you for your willingness to help. I am currently on vacation, but I will pick up this thread when I am back at the end of this month.
