
Comments (7)

Manishearth commented on July 28, 2024

cc @blairmacintyre


blairmacintyre commented on July 28, 2024

Hi Alex, thanks for sharing this.

One immediate comment is that you should consider including blendshapes with the mesh data. Most face detection platforms either supply them directly (e.g., ARKit), or, if they need to be calculated, that computation is probably best done at the native level. Since a common use is to animate a 3D model, not just draw the mesh, they will be essential.
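For illustration only, the kind of data being described here (a mesh plus blendshape coefficients) might look something like the sketch below; the type and field names are assumptions modeled loosely on what ARKit supplies, not part of any proposal.

```typescript
// Hypothetical sketch only: what face data including blendshapes might look like.
// Names are illustrative assumptions, not part of any proposed spec.
interface FaceFrameData {
  // Triangle mesh of the detected face.
  vertices: Float32Array;           // x, y, z triples
  indices: Uint16Array;             // triangle indices into `vertices`
  textureCoordinates: Float32Array; // u, v pairs

  // Blendshape coefficients in [0, 1], keyed by a standardized name
  // (e.g. "jawOpen", "eyeBlinkLeft"), similar to what ARKit supplies.
  blendshapes: Map<string, number>;
}
```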

I think a big question with this is a meta question: how (and if) to integrate WebXR and WebRTC features.

To be useful to WebXR, the information coming out of the camera API needs to include

  • a time stamp that associates the time the video frame was captured with the timestamps used on WebXR frames
  • the intrinsics of the camera (so that we know field of view, etc., and can relate things seen in the camera view, like faces, with the same coordinates/measures as used in WebXR)
  • the extrinsics of the camera (so that we know where the camera was in 3D space relative to the WebXR spaces or frame coordinates).

A "bonus" would be

  • control over the format of the video frames, so that as little extra work as possible is done converting formats. (e.g., if a camera is going to be used for computer vision, luminance might be the best format, and if the camera can produce that directly, doing extra work to convert to RGBA and back, and passing 2x or 4x the data, would be massively wasteful, especially on mobile devices).

Getting access to cameras is an oft-repeated request from folks who use WebXR, so they can do computer vision, use video in reflection maps, and so on. There are really two ways to do it: either by exposing the frames directly via WebXR, or by somehow associating the cameras on the device with WebRTC/gUM cameras. Some of the info (e.g., camera device intrinsics and extrinsics, which probably don't change over a session) could be exposed through WebXR APIs. Timestamps and video format are big questions.
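As a rough sketch of the per-frame information listed above (every name here is an illustrative assumption, not an existing or proposed interface):

```typescript
// Hypothetical sketch of per-frame camera metadata; all names are assumptions.
interface CameraFrameInfo {
  // Capture time on the same monotonic clock used for WebXR frame timestamps.
  captureTime: DOMHighResTimeStamp;

  // Intrinsics: focal lengths and principal point in pixels, enough to recover
  // field of view and relate image features to WebXR coordinates.
  intrinsics: {
    focalLength: { x: number; y: number };
    principalPoint: { x: number; y: number };
    imageSize: { width: number; height: number };
  };

  // Extrinsics: rigid transform giving the camera's pose relative to the
  // XR reference space in use at capture time.
  cameraPose: XRRigidTransform;

  // Pixel format, so e.g. a luminance-only buffer can be delivered directly
  // instead of forcing an RGBA conversion.
  format: "luminance" | "rgba" | "nv12";
}
```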


alcooper91 commented on July 28, 2024

Thanks for the feedback Blair,

As far as blendshapes go, from what I can tell ARCore doesn't really support them in the same way ARKit does. ARCore exposes the ability to request three specific region poses, but we don't really have any data beyond that (my quick read is that blendshapes include not just the region pose of facial landmarks but also some notion of data describing an associated "gesture"; please correct me if I'm wrong). Given that it's not something natively supported on ARCore, I haven't looked into it much.

I did, however, decide not to expose those region poses. Since the face mesh will need to be standardized for textures (and indeed, while the textureCoordinates and indices are part of the API, they'll likely also need to be specified somewhere; this is really just so developers don't need to hard-code a very large and error-prone blob of data), a given set of vertices will always represent the same region, so those poses can also be statically defined. However, given that different pages may be interested in different regions, it didn't feel right for the spec to more-or-less say "here are poses for the regions we thought were important" when others could still be calculated by the page, and indeed there is some disparity in the regions that the underlying runtimes natively expose.
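To make the "regions can be calculated by the page" point concrete, here is a minimal sketch of how a page might derive a rough region position from a statically defined set of vertex indices; the index values and the packed-vertex layout are assumptions for illustration only.

```typescript
// Minimal sketch, assuming a face mesh whose vertices are packed as x,y,z triples.
// The index list for a region (e.g. "nose tip") would come from the statically
// defined, standardized mesh topology; the values here are made up.
const NOSE_REGION_INDICES: number[] = [1, 2, 3, 4]; // illustrative only

function regionCentroid(vertices: Float32Array, regionIndices: number[]) {
  let x = 0, y = 0, z = 0;
  for (const i of regionIndices) {
    x += vertices[3 * i];
    y += vertices[3 * i + 1];
    z += vertices[3 * i + 2];
  }
  const n = regionIndices.length;
  return { x: x / n, y: y / n, z: z / n };
}
```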

I don't really expand on this in the explainer, since it's out of scope for the initial implementation that I'm targeting, but the way I would imagine integrating FaceMesh with WebXR is that it would re-use the FaceMesh type and expose it as extra data on the corresponding WebXR frame, which I think should ensure that most of the data you want (e.g. the timestamp, FOV, coordinate system, etc.) is available. There are some restrictions on the current runtimes (which I address a little bit) that may require further tweaks (e.g. an ARCore-backed implementation can only support the Viewer reference space).
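Purely as a speculative sketch of that shape (the attribute name and the idea of hanging the data off the frame are assumptions, not anything specced):

```typescript
// Speculative sketch only: reusing a FaceMesh-like type as extra data on a
// WebXR frame. `detectedFaces` is an illustrative name, not a real attribute.
interface FaceMesh {
  vertices: Float32Array;           // positions, x,y,z triples
  indices: Uint16Array;             // standardized triangle topology
  textureCoordinates: Float32Array; // standardized u,v pairs
}

interface XRFrameWithFaces extends XRFrame {
  // Hypothetically populated when face tracking is enabled; poses would be
  // viewer-relative given the ARCore restriction mentioned above.
  detectedFaces?: ReadonlyArray<FaceMesh>;
}
```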

More general integration with WebRTC sounds like you're asking about Raw Camera Access, which is a separate feature entirely from what I'm discussing here. @bialpio has started to do some work on that here: https://github.com/bialpio/webxr-raw-camera-access/blob/master/explainer.md


blairmacintyre commented on July 28, 2024

A few comments.

First, I'm not advocating for exposing exactly what ARKit or ARCore do. Both have different features, and both will change over time. I was merely suggesting that blendshapes are much more useful than meshes for some uses (e.g., putting an animated head model where the face is, with its rig animated directly by the blendshapes), which is why ARKit supports them. Forcing individual apps to compute these seems to raise the bar for use quite a bit. I completely agree this would need a standardized mesh spec; otherwise, it's "just a mesh". I assume that Snapchat-like filter apps would want to attach things "to the face", and the easy way to do that would be via specific places on the mesh.
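To illustrate why supplied blendshapes lower the bar here, a minimal sketch of driving a morph-target head model directly from a coefficient map, assuming a three.js-style mesh and assuming the model's morph target names happen to match the blendshape names:

```typescript
import * as THREE from "three";

// Minimal sketch: drive a head model's morph targets directly from supplied
// blendshape coefficients. Everything here is illustrative; in particular the
// assumption that morph target names (e.g. "jawOpen") match blendshape names.
function applyBlendshapes(head: THREE.Mesh, blendshapes: Map<string, number>) {
  const dict = head.morphTargetDictionary;
  const influences = head.morphTargetInfluences;
  if (!dict || !influences) return;

  for (const [name, weight] of blendshapes) {
    const index = dict[name];
    if (index !== undefined) {
      influences[index] = weight; // coefficients are already in [0, 1]
    }
  }
}
```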

Second, I'm not talking about Raw Camera Access specifically, although that is the use case most people want. I discussed these different options about a year and a half ago in https://github.com/immersive-web/computer-vision. If you are going to return anything relative to the camera, it needs to be related to the coordinate system in WebXR. For just "trackables" (like the face), if they are integrated with WebXR and return something like a WebXR anchor, then none of the details of the camera are required. But if the API just returns a mesh in camera coordinates, some additional information (extrinsics, timestamps, etc.) will absolutely be needed to relate the pose of the camera when the face was detected to the timestamps in WebXR.
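As a sketch of the transform chain that makes the extrinsics necessary when only a camera-space mesh is returned (the function and parameter names are assumptions; the math is just rigid-transform composition):

```typescript
import { mat4, vec3 } from "gl-matrix";

// Minimal sketch: relate a point reported in camera space to the WebXR
// reference space, given the camera extrinsics and the device pose at the
// camera frame's capture time. All names here are illustrative.
function cameraPointToReferenceSpace(
  pointInCamera: vec3,
  deviceFromCamera: mat4,   // extrinsics: camera pose relative to the device
  referenceFromDevice: mat4 // device pose in the reference space at capture time
): vec3 {
  const referenceFromCamera = mat4.create();
  mat4.multiply(referenceFromCamera, referenceFromDevice, deviceFromCamera);

  const out = vec3.create();
  vec3.transformMat4(out, pointInCamera, referenceFromCamera);
  return out;
}
```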

It will be VERY important in any API like this NOT to assume that there is a 1:1 relationship between camera frames and WebXR frames. While phone-based AR has this property, other AR devices (i.e., head-worn displays) do not. Cameras and head-pose trackers will run at different rates, and the camera frames will not match a WebXR frame. It is absolutely essential that we assume there is a common, monotonically increasing timestamp used by both WebXR and camera capture, so that the data returned by both can be related. This requires that the exact timestamp be provided for each camera frame (i.e., when it was captured, not some arbitrary time at which it was accessed by the native library or JavaScript).

Any API that assumes a 1:1 mapping between camera and WebXR frames isn't going to work on HoloLens 2 or ML1, for example, and is thus DOA.
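A minimal sketch of what the shared-clock requirement enables: matching a camera frame's capture time against a small history of device poses. All names here are assumptions; no such API exists today, and interpolating between neighboring samples would be the natural refinement.

```typescript
// Minimal sketch: given a history of (timestamp, pose) samples on the same
// monotonic clock used for camera capture, find the device pose closest to a
// camera frame's capture time. Purely illustrative.
interface PoseSample {
  timestamp: DOMHighResTimeStamp; // shared monotonic clock
  transform: XRRigidTransform;
}

function poseAtCaptureTime(
  history: PoseSample[],          // sorted by timestamp, ascending
  captureTime: DOMHighResTimeStamp
): PoseSample | undefined {
  let best: PoseSample | undefined;
  let bestDelta = Infinity;
  for (const sample of history) {
    const delta = Math.abs(sample.timestamp - captureTime);
    if (delta < bestDelta) {
      best = sample;
      bestDelta = delta;
    }
  }
  return best;
}
```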


alcooper91 commented on July 28, 2024

Thanks Blair,
I think I may not be understanding your suggestion as far as blendshapes go. I'll try to do some research, but if you have a good pointer for me to read up on, that would help ensure we're on the same page about what you're suggesting.

As for your other points, I think those are outside the scope of the work that I'm currently doing, which is integrating FaceMesh extraction into something that can be consumed from getUserMedia (and not yet integrated with WebXR). However, there's obviously significant overlap and interest from the group here, which is why I wanted to share my thinking on why the initial integration was not with WebXR. If/when we do begin working on WebXR integration, your points are definitely something we should keep in mind. However, I suspect they will likely (hopefully) be raised/tackled as part of the Raw Camera Access API (or others) before I begin working on a WebXR integration.


alcooper91 commented on July 28, 2024

I've filed alcooper91/face-mesh#2 to track the proposal to add blendshapes. I believe that most of the other points made are out of scope for my proposal (or at the very least this issue), so let's continue the discussion in that issue (or new issues).


TrevorFSmith commented on July 28, 2024

I'd like to +1 the recommendation to provide blendshapes. For most of the applications I have for facial tracking, blendshapes are by far the better solution compared to the mesh alone. Platforms that don't currently provide them can, to some extent, calculate them at native speed, and then authors will be able to rely on them in addition to the mesh.
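As a hand-wavy illustration of calculating a coefficient from the mesh: something like a jaw-open value could be approximated from landmark distances, though the vertex indices and normalization constant below are invented purely for the example.

```typescript
// Hedged illustration only: approximate a "jawOpen"-style coefficient from two
// mesh landmarks (upper and lower lip). The vertex indices and the
// normalization constant are invented for the example, not from any spec.
const UPPER_LIP = 13;        // illustrative index
const LOWER_LIP = 14;        // illustrative index
const MAX_MOUTH_OPEN = 0.05; // metres, rough guess for normalization

function estimateJawOpen(vertices: Float32Array): number {
  const dy = vertices[3 * LOWER_LIP + 1] - vertices[3 * UPPER_LIP + 1];
  return Math.min(Math.max(Math.abs(dy) / MAX_MOUTH_OPEN, 0), 1);
}
```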

