immersive-web / webxr
Repository for the WebXR Device API Specification.
Home Page: https://immersive-web.github.io/webxr/
License: Other
Issue by toji
Monday Mar 07, 2016 at 22:51 GMT
Originally opened as MozillaReality/webvr-spec#19
toji included the following code: https://github.com/MozVR/webvr-spec/pull/19/commits
Issue by brianchirls
Thursday Mar 17, 2016 at 15:09 GMT
Originally opened as MozillaReality/webvr-spec#22
I seem to recall some discussion of enabling linking directly from one VR web site to another without having to stop presenting. The spec as it is doesn't seem to handle that.
This is a potentially tricky case, so I understand wanting to hold off on implementation until more progress is made in the field. But has there been any discussion about what this might look like in an API? Are we confident that the API as it currently stands leaves room for that in the future?
Right now the spec does not state how to handle a VRLayer passed into requestPresent without a source property, or what to do if that source is not valid. According to @toji, Chrome resolves the promise and does nothing.
I think it makes sense to reject the promise, which would be more helpful for debugging than failing silently. Either way, the behavior should be documented.
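A minimal sketch of the reject-the-promise behavior proposed above. Both validateLayer and requestPresentChecked are hypothetical helpers invented here for illustration; the spec defines neither.

```javascript
// Hypothetical helper: returns a reason string when requestPresent should
// reject, or null when the layer looks presentable.
function validateLayer(layer) {
  if (!layer || !layer.source) {
    return 'VRLayer has no valid source';
  }
  return null;
}

// Conceptually, a user agent following this proposal would do:
function requestPresentChecked(display, layers) {
  const problem = layers.map(validateLayer).find(reason => reason !== null);
  return problem
    ? Promise.reject(new Error(problem)) // fail loudly for easier debugging
    : display.requestPresent(layers);
}
```

The payoff is that a page passing a bad layer gets an actionable rejection in its catch handler instead of a silently no-op promise.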
Issue by brianchirls
Thursday May 19, 2016 at 02:35 GMT
Originally opened as MozillaReality/webvr-spec#34
Over at webvr-polyfill, there's been a request for events that would fire when the Cardboard interstitial appears and disappears. I think I read @toji say somewhere that the interstitial would be built into Chrome on Android. It occurs to me that this is analogous to the Oculus safety screen and that it would be useful to know when these screens are blocking the display so the user doesn't miss any content.
Does anybody know if the Oculus SDK allows detection of this sort of thing?
It seems that there should be a boolean property on the VRDisplay that tells whether the screen is blocked by a warning, and an event that fires when this property changes, maybe something analogous to document.hidden. Alternatively, onvrdisplaypresentchange could wait to fire until the interstitial is cleared, but that might not make sense if different platforms vary enough in their behavior. For example, if Chrome/Android/Cardboard (or the polyfill) shows an interstitial when you tilt the device too close to vertical, then the latter approach wouldn't work.
I could imagine platforms in the future popping up warnings for various safety and comfort reasons, like: an epilepsy blocker if it detects bright flashing light, or an interruption if the frame rate drops too low.
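A sketch of the first option above. The blocked property and the vrdisplayblockedchange event name are invented here for illustration; nothing like them exists in the spec.

```javascript
// Hypothetical API sketch: a boolean `blocked` flag on VRDisplay, analogous
// to document.hidden, flipped while an interstitial or safety screen is up.
function shouldDriveAnimation(display) {
  // Pause content while a warning screen covers the HMD,
  // so the user doesn't miss anything.
  return display.isPresenting && !display.blocked;
}

// A page might wire it up like this (event name is an assumption):
// display.addEventListener('vrdisplayblockedchange', () => {
//   if (shouldDriveAnimation(display)) resumeScene(); else pauseScene();
// });
```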
First, take a look at https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx and the section on processing camera updates with respect to the back buffer: "Back buffers can change from frame to frame. Your app needs to validate the back buffer for each camera, and release and recreate resource views and depth buffers as needed."
I think the current specification allows for an indirect ability to optimize rendering through to the device, but the intricacies of various devices mean that each will have to jump through a different set of hoops to make it happen in a compatible and interoperable way.
We should discuss mechanisms for allowing devices to create optimized surfaces that don't require intermediate copies and perhaps further optimizations such as disabling or denying any sort of texture read-back. For this I think we want to continue using the "canvas" as the currency and then allowing a developer to get a rendering context back from said canvas.
enum CanvasThreading {
  "default",
  "threaded" // "offscreen"?
};

partial interface VRDevice {
  // Option #1 - device creation
  // More flexible, since it allows binding to go through device-specific paths.
  // Enables creation of devices and surfaces optimized for cross-process rendering, etc.
  VRSource? createDeviceLayer(optional CanvasThreading canvasType = "default");
};

// Option #2 - device replacement of back-end resources
// Challenging depending on the current state of the VRSource, which may already
// be part of a normal rendering pipeline.
partial dictionary VRLayer {
  boolean allowDeviceOptimizations = false;
};
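Hypothetical usage of Option #1 above. createDeviceLayer is a proposal, not a shipped API, and getRenderSource is a helper invented here: ask the device for an optimized surface and fall back to an ordinary canvas when the device declines.

```javascript
// Prefer a device-created, potentially copy-free surface; otherwise keep
// using the regular canvas as the rendering currency, as today.
function getRenderSource(device, fallbackCanvas) {
  if (typeof device.createDeviceLayer === 'function') {
    const optimized = device.createDeviceLayer('threaded');
    if (optimized) return optimized; // device-specific optimized path
  }
  return fallbackCanvas; // ordinary canvas path
}
```

Feature-detecting the method keeps content working on implementations that never adopt the proposal.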
In Primrose, it worked really well for my purposes to polyfill in the 2D monitor as a "VR Display" that had only one eye parameter. I returned those parameters when calling getEyeParameters with "left", and I passed null when calling with "right" (though see the note on Magic Window configurations). I collect all non-null eye parameters into an array and then loop over that array as I render. The requestPresent method was polyfilled to call the standard Fullscreen workflow.
Because of this, I didn't have to create special code paths to handle the technically-only-marginally-different types of displays. There's only one code path for displays. And to support spectator mode--showing a mono rendering on the monitor--I would theoretically only need to create an array of displays to loop over. Such a setup should also be more future-proof against CAVE setups.
As opposed to how webvr-polyfill works, I also didn't implement touch-panning or mouse/keyboard movement in that display for the same reason: it would create two code paths for handling input. I already support mouse/keyboard for desktop users on tethered HMDs, and touch-panning is also useful for non-stereo use-cases with tablets in Magic Window configurations.
On Magic Window configurations: technically, a smartphone is capable of both Magic Window and Google Cardboard, but it's not multiple displays, it's just multiple rendering configurations for one, common display. So instead of returning a mono-display configuration through one-of "left" or "right" from getEyeParameters, it probably makes sense to have a completely separate "eye parameter" for "center" or "mono".
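The single-code-path pattern described above can be sketched like this: gather the non-null eye parameters once, then drive the render loop off that array so mono and stereo displays look the same to the renderer.

```javascript
// Collect whichever eyes the display actually reports; a polyfilled 2D
// monitor returns one entry, a stereo HMD returns two.
function collectEyes(display) {
  return ['left', 'right']
    .map(eye => display.getEyeParameters(eye))
    .filter(params => params !== null);
}

// Render loop then does: collectEyes(display).forEach(eye => renderEye(eye));
```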
I'm using webvr-polyfill now because I got tired of trying to keep device fusion working in my configuration, so I don't have that feature in my current DEV branch anymore, but you can see it in the master branch (and thus, on primrosevr.com) right now as I don't have the new deployment ready yet. I will probably end up polyfilling the concept back in, either by trying to get webvr-polyfill to adopt the idea, forking webvr-polyfill, or pre-wrapping navigator.getVRDisplays before webvr-polyfill can get to it.
Issue by cvan
Tuesday Apr 12, 2016 at 00:46 GMT
Originally opened as MozillaReality/webvr-spec#31
In @toji's latest Chromium builds (see patch diff), there's a pose property now exposed on Gamepad objects. (See example of usage.)
We should document the WebIDL changes in the WebVR spec here.
We ought to also file an issue against the Gamepad API spec.
Issue by Codes4Fun
Saturday Sep 26, 2015 at 06:49 GMT
Originally opened as MozillaReality/webvr-spec#8
I've noticed this issue with current implementations of WebVR: they have eyeTranslation.w set to zero. There are two problems with this:
w=1 is meant for positional vectors, while w=0 is meant for direction vectors that you don't want to translate like surface normals, velocities, etc.
I would think this means that either eyeTranslation should not have a w component, or that w should be required to always be 1.
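The concern above can be demonstrated numerically: a standard column-major 4x4 matrix-times-vector routine only applies the matrix's translation when the vector's w is 1, so with w = 0 an eye offset would be silently dropped.

```javascript
// Generic column-major 4x4 matrix * vec4 multiply.
function transformVec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Identity plus a 3 cm translation along x (stored in the last column).
const translate = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0.03, 0, 0, 1];
```

Multiplying a point with w = 1 picks up the 0.03 translation; the same point with w = 0 stays at the origin, which is exactly the bug described.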
We have a better description of the hand position API. PR is #5.
Issue by toji
Wednesday Mar 02, 2016 at 21:45 GMT
Originally opened as MozillaReality/webvr-spec#18
A bit of feedback I've received is that the concepts of "sitting space" and "standing space" that we've borrowed from OpenVR may not be accurate representations of the values we're providing. For example: It's definitely possible to stand in place while using an Oculus Rift, and you probably want to when using controllers like Touch. Since there's no sense of where you are in relation to the room, this would still be reported in "sitting space", though. Similarly, you may want to sit down with a Vive but still have the scene oriented to your room using "standing space." The actual values reported right now in either case should be fine, but the verbiage is weird.
I propose that we run with the verbiage of "stage" that we've already defined, so sittingToStandingTransform becomes stageTransform. We'd also have to come up with a term for the default space ("relative" comes to mind, but may be too vague) and change some verbiage in the spec, but otherwise everything continues to work the way it does now.
Or we could just decide that this is a silly thing to worry about and move on. :)
Please refer to https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx and specifically the section on "Render to each camera".
The current VRLayer setup is optimized for rendering to a combined left/right eye surface. This means that a developer can too easily take a dependency on being able to render to both eyes at the same time simply by crossing over the texture center line. While this may not produce amazing results, it could be a dependency.
I'd like to extend VRLayer to take 2 layers, with the restriction that the VRSource objects for those layers are not the same and perhaps even that the bounds are the entire contents of the layer.
In combination with #51, this might mean adding a createDeviceLayers equivalent.
Note: these aren't final proposals, but they are meant to spur conversation.
Issue by andreasplesch
Saturday Mar 19, 2016 at 05:25 GMT
Originally opened as MozillaReality/webvr-spec#23
While the plural in getLayers, as well as the word "layer" itself, indicates that more than one layer can be presented, requestPresent and the wording of the explanation of VRLayer indicate that only a single layer can be presented.
This ambiguity can lead to confusion and should be removed.
Although implementations currently only allow for a single layer, the spec may go further and allow multiple layers.
Issue by toji
Monday Mar 21, 2016 at 19:39 GMT
Originally opened as MozillaReality/webvr-spec#25
There's a rumor that the Oculus 1.0 SDK will include the ability to get coordinates relative to the floor. That's great news if true, but I don't think it will include chaperone-style room bounds. Even if not true this sort of capability is something we should plan for.
It's not clear how this sort of capability should map to our StageParameters. I suppose the easy way out is to simply set sizeX and sizeZ to 0, but there's also a difference in how it would interact with resetPose(). With a Vive the transform will always update the pose to be oriented to the room, but with a Rift resetPose() will still re-orient you.
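The "easy way out" described above implies a simple content-side check. hasRoomBounds is a helper invented here, not spec text: treat sizeX/sizeZ of 0 as floor-relative tracking without known play bounds.

```javascript
// Only draw a chaperone-style boundary when real dimensions are reported;
// sizeX/sizeZ of 0 means "floor-relative tracking, bounds unknown".
function hasRoomBounds(stageParameters) {
  return !!stageParameters &&
         stageParameters.sizeX > 0 &&
         stageParameters.sizeZ > 0;
}
```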
I'd appreciate any thoughts on the matter!
Issue by cvan
Tuesday Mar 01, 2016 at 21:38 GMT
Originally opened as MozillaReality/webvr-spec#17
We want to add a new attribute allowvr that can be set on <iframe> (à la allowfullscreen).
This attribute can be set to true if the frame is allowed to access VRDisplay objects. When false, navigator.getVRDisplays() and navigator.activeVRDisplays will resolve and return empty sequences.
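One consequence of the empty-sequence behavior sketched above: because the promise still resolves rather than rejecting, embedded pages should branch on the array length. pickDisplay is a trivial helper invented here for illustration.

```javascript
// In a frame without allowvr, getVRDisplays() resolves with an empty list,
// so availability checks must look at length, not at rejection.
function pickDisplay(displays) {
  return displays.length > 0 ? displays[0] : null;
}

// navigator.getVRDisplays().then(displays => {
//   const display = pickDisplay(displays);
//   if (!display) { /* show a non-VR fallback */ }
// });
```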
The last remaining todo in the Charter draft is the Dependencies or Liaisons section. Do folks foresee any significant dependencies on other groups (inside or outside W3C) we should mention in this section of the Charter? For examples from other Community Groups, see e.g. the Web NFC Community Group Charter and Web Bluetooth Community Group Charter.
I suggest we add at least the W3C Device and Sensors Working Group that is defining the Generic Sensor API, a framework for exposing sensor data to the Web in a consistent way, to be used by concrete sensors (e.g. ambient light, proximity, accelerometer, gyroscope, magnetometer). This could be used to improve the webvr-polyfill, and/or enable better code reuse in browsers (Chromium implementation is in progress, starting with ALS, but moving to other concrete sensors soon). There are also reuse opportunities on the spec-level.
I can submit a PR with proposed changes after getting everyone's feedback.
Issue by cvan
Monday Feb 29, 2016 at 00:31 GMT
Originally opened as MozillaReality/webvr-spec#13
cvan included the following code: https://github.com/MozVR/webvr-spec/pull/13/commits
Issue by cvan
Thursday May 14, 2015 at 07:19 GMT
Originally opened as MozillaReality/webvr-spec#4
So calling make will always run without needing to first call make clean.
cvan included the following code: https://github.com/MozVR/webvr-spec/pull/4/commits
The spec uses the [Constant], [Cached], and [Throws] extended attributes, but these aren't defined in the WebIDL spec.
Issue by mkeblx
Wednesday Apr 06, 2016 at 21:39 GMT
Originally opened as MozillaReality/webvr-spec#30
What is the use case of these VRDisplayCapabilities flags on VRDisplay? Especially hasOrientation, as it's hard to imagine an HMD without orientation tracking.
Also note that the interface was specifically renamed from VRDevice to VRDisplay to focus the spec on covering the use case of HMDs (mainly) and not generic tracked VR devices.
Additionally, this can be figured out from the null-or-not state of the orientation and position data.
Issue by cvan
Monday Feb 29, 2016 at 00:32 GMT
Originally opened as MozillaReality/webvr-spec#15
vrdisplayconnected
vrdisplaydisconnected
vrdisplaypresentchange
Issue by mkeblx
Friday Apr 15, 2016 at 18:06 GMT
Originally opened as MozillaReality/webvr-spec#32
Minor inconsistency.
mkeblx included the following code: https://github.com/MozVR/webvr-spec/pull/32/commits
I'm not an admin of the repo, so I cannot. Please link to https://w3c.github.io/webvr/. Thanks!
To avoid confusion, we should document the ordering of the matrix components returned by sittingToStandingTransform. It should be described as a 4x4 affine transformation matrix in column-major order.
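Documenting the column-major convention matters because it fixes where each component lives in the Float32Array: element i of column c of a 4x4 matrix sits at index c * 4 + i, so the translation column occupies indices 12-14. A small illustration (the 1.6 m value is an arbitrary example, not from the spec):

```javascript
// Extract the translation from a column-major 4x4 affine matrix.
function translationOf(m) {
  return [m[12], m[13], m[14]];
}

// A sitting-to-standing transform that only raises the origin by 1.6 m:
const example = new Float32Array([
  1, 0,   0, 0, // column 0 (x basis)
  0, 1,   0, 0, // column 1 (y basis)
  0, 0,   1, 0, // column 2 (z basis)
  0, 1.6, 0, 1, // column 3 (translation, w)
]);
```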
Issue by toji
Tuesday Dec 15, 2015 at 21:09 GMT
Originally opened as MozillaReality/webvr-spec#9
Implementing the spec change discussed here: https://mail.mozilla.org/pipermail/web-vr-discuss/2015-December/000936.html
I'll leave it to someone on the Mozilla side of things to merge so that we have a chance to address any concerns first.
toji included the following code: https://github.com/MozVR/webvr-spec/pull/9/commits
Issue by dmarcos
Friday Jul 10, 2015 at 20:10 GMT
Originally opened as MozillaReality/webvr-spec#7
dmarcos included the following code: https://github.com/MozVR/webvr-spec/pull/7/commits
Issue by brianchirls
Thursday Mar 17, 2016 at 15:05 GMT
Originally opened as MozillaReality/webvr-spec#20
What should happen when you switch to another tab or application while presenting in VR to an external display?
The spec says this:
In the event that the content process terminates unexpectedly, the browser will not exit VR mode. The VR compositor will destroy the content layer while continuing to present the trusted UI elements of the browser.
...which seems to indicate a desire not to abuse the user with a jarring experience. I think it's the right call.
It also says this:
The HMD pose and other VR inputs are only updated for the focused WebVR page. This can be implemented in the same manner as keyboard and mouse input.
The former seems to imply that there will be some pose-tracked 3D browser UI elements, so maybe the solution is to display that in place of the formerly active tab. So would you just resume presenting the web page as soon as it regains focus? What if you're switching between two tabs that are both VR enabled?
Any ideas?
Currently, we have the following TBD text in the Test Suites and Other Software section of the charter draft inherited from the CG charter template:
Test Suites and Other Software
{TBD: State whether test suites or any other software will be created for any Specifications and list the relevant licenses. For information about contributions to W3C test suites, please see Test the Web Forward, and take note of the W3C's test suite contribution policy and licenses for W3C test suites. If there are no plans to create a test suite or other software, please state that.}
I'm aware of people who indicated interest in help with test suite creation, so unless I hear concerns, I'd suggest we update this section to say:
Test Suites and Other Software
The group MAY produce test suites to support the Specifications. The W3C's test suite contribution policy and licenses for W3C test suites apply.
Issue by cslroot
Wednesday May 25, 2016 at 12:22 GMT
Originally opened as MozillaReality/webvr-spec#35
cslroot included the following code: https://github.com/MozVR/webvr-spec/pull/35/commits
Issue by Sneagan
Wednesday Jul 08, 2015 at 18:17 GMT
Originally opened as MozillaReality/webvr-spec#6
Brief mention of Non-Dedicated HMDs so as not to lose visibility of that area of WebVR.
Sneagan included the following code: https://github.com/MozVR/webvr-spec/pull/6/commits
Suggestion: replace "offset" with "translation" (here), or change the type of the "offset" attribute (Float32Array => float).
http://heycam.github.io/webidl/#idl-attributes
" The type of the attribute, after resolving typedefs, MUST NOT be a nullable or non-nullable version of any of the following types:
a sequence type
a dictionary
a union type that has a nullable or non-nullable sequence type or dictionary as one of its flattened member types
"
Issue by cvan
Thursday May 14, 2015 at 07:22 GMT
Originally opened as MozillaReality/webvr-spec#5
Fixed some typos and inconsistencies I found. Let me know if there's anything you'd like me to change/avoid changing.
cvan included the following code: https://github.com/MozVR/webvr-spec/pull/5/commits
I went through all the archives, plus my inbox, and here are the discussions I thought were relevant to API topics:
[gamepad] Missing VRPose for tracked controllers
https://lists.w3.org/Archives/Public/public-webapps/2016AprJun/0078.html
[webvr] [gamepad] Missing VRPose for tracked controllers
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-May/001108.html
[webvr] Adding a VRPose to the Gamepad API
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-May/001109.html
[gamepad] New feature proposals: pose, touchpad, vibration
https://lists.w3.org/Archives/Public/public-webapps/2016AprJun/0052.html
[web-vr] Render Targets and WebVR
https://mail.mozilla.org/pipermail/web-vr-discuss/2015-March/000608.html
[webvr] Send multiple canvas to HMD
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-April/001080.html
[webvr] Event to indicate hardware has requested VR presentation?
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-April/001096.html
[web-vr] Proposing WebStereo
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-February/000990.html
[web-vr] Proposed Verbiage change: Position => Pose
https://mail.mozilla.org/pipermail/web-vr-discuss/2015-December/000936.html
[web-vr] Spec questions
https://mail.mozilla.org/pipermail/web-vr-discuss/2015-March/000610.html
@anssiko recommends the following:
I would probably just do that manually with credits to the original author. A new issue per mail, with a link to the mozvr mailing list archive. If there's a mail thread with relevant content, I might manually cherry-pick the substantial content from the thread to the issue. Anything goes, as long as we're clearly crediting the original author.
The WebIDL spec forbids the usage of sequences as attributes, but activeVRDisplays is declared as a sequence. It should probably use FrozenArray instead.
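The script-observable difference can be sketched as follows (a simplified stand-in, not spec text): a FrozenArray attribute hands back a frozen platform Array, so repeated reads are cheap and mutation attempts fail, which suits a snapshot-style attribute like activeVRDisplays.

```javascript
// Stand-in for what a FrozenArray-typed attribute exposes to script.
const activeVRDisplays = Object.freeze([]);

// What consuming code can rely on:
function isFrozenArrayLike(value) {
  return Array.isArray(value) && Object.isFrozen(value);
}
```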
Issue by toji
Wednesday Mar 30, 2016 at 04:00 GMT
Originally opened as MozillaReality/webvr-spec#26
I brought up some questions about VRPose.frameID
earlier in the spec development, but I don't feel they were addressed due to time constraints. Today I received an email from developers at Samsung that effectively brought up the same points. Specifically:
It's unclear how frameID should behave for poses returned by getImmediatePose(), and when calling submitFrame I don't know what developers would be expected to put in the frameID. The timestamp field is more valuable for determining things like latency.
If this field has proven useful for the MozVR team I'd love to hear more about it, but lacking further information it feels unnecessary and I think it should be dropped.
Issue by cvan
Monday May 02, 2016 at 21:28 GMT
Originally opened as MozillaReality/webvr-spec#33
in preparation for moving mozvr/webvr-spec to w3c/webvr
cvan included the following code: https://github.com/MozVR/webvr-spec/pull/33/commits
#1 added text from Mozilla WebVR draft and some other text. Proposed starting point to edit.
Issue by cvan
Tuesday Feb 23, 2016 at 08:27 GMT
Originally opened as MozillaReality/webvr-spec#11
cvan included the following code: https://github.com/MozVR/webvr-spec/pull/11/commits
Issue by toji
Thursday Mar 31, 2016 at 17:29 GMT
Originally opened as MozillaReality/webvr-spec#28
Also added a maxLayers capability, as per conversations with @vvuk. This still restricts the v1 spec to only accepting a single layer, but makes future changes that allow for more complex compositing more sensible.
toji included the following code: https://github.com/MozVR/webvr-spec/pull/28/commits
Issue by andreasplesch
Saturday Mar 19, 2016 at 05:32 GMT
Originally opened as MozillaReality/webvr-spec#24
Layering requires multiple layers. However, currently only a single VRLayer can be presented by the display.
As a suggestion, VRSource would therefore be a better fit. There are probably better names to be discovered as well.
Issue by borismus
Thursday Mar 26, 2015 at 21:57 GMT
Originally opened as MozillaReality/webvr-spec#3
The current approach of going into VR mode via requestFullScreen({vrDisplay: hmd}) seems a little bit limiting. Why should full-screen be associated with VR? Does it still make sense for direct-to-rift mode, where only the rift display should be affected? How about for Cardboard-style uses?
What if we had something like hmd.startVR() instead?
/cc: @dmarcos
Issue by cvan
Monday Feb 29, 2016 at 13:25 GMT
Originally opened as MozillaReality/webvr-spec#16
Issue by cvan
Tuesday Feb 23, 2016 at 08:32 GMT
Originally opened as MozillaReality/webvr-spec#12
Here's the initial draft. There are omissions and issues. I'll continue work on this, but don't hesitate to say something when you see something.
cvan included the following code: https://github.com/MozVR/webvr-spec/pull/12/commits
Chrome WebVR will be made available only on secure origins, so we should consider making this normative in the WebVR spec, unless someone has concerns.
The Secure Contexts spec gives practical advice on how to guard sensitive APIs with checks against secure contexts.
Issue by cvan
Monday Feb 29, 2016 at 00:32 GMT
Originally opened as MozillaReality/webvr-spec#14
Issue by toji
Thursday Feb 12, 2015 at 22:44 GMT
Originally opened as MozillaReality/webvr-spec#1
Also added a couple of example code snippets for how to create a
projection matrix from an FOV and how to calculate the optimal canvas
resolution from the eye renderRects.
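A sketch similar in spirit to the projection-from-FOV snippet mentioned above (this is a common construction, close to gl-matrix's perspectiveFromFieldOfView, not necessarily the exact code in PR #1): build an off-axis projection matrix from per-eye half-angles in degrees, column-major as WebGL expects.

```javascript
// fov: { upDegrees, downDegrees, leftDegrees, rightDegrees }
function projectionFromFieldOfView(fov, near, far) {
  const upTan = Math.tan(fov.upDegrees * Math.PI / 180);
  const downTan = Math.tan(fov.downDegrees * Math.PI / 180);
  const leftTan = Math.tan(fov.leftDegrees * Math.PI / 180);
  const rightTan = Math.tan(fov.rightDegrees * Math.PI / 180);
  const xScale = 2 / (leftTan + rightTan);
  const yScale = 2 / (upTan + downTan);

  const out = new Float32Array(16); // column-major
  out[0] = xScale;
  out[5] = yScale;
  out[8] = -(leftTan - rightTan) * xScale * 0.5; // horizontal off-axis shift
  out[9] = (upTan - downTan) * yScale * 0.5;     // vertical off-axis shift
  out[10] = far / (near - far);
  out[11] = -1;
  out[14] = (far * near) / (near - far);
  return out;
}
```

For a symmetric FOV the off-axis terms vanish and this reduces to an ordinary perspective matrix.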
toji included the following code: https://github.com/MozVR/webvr-spec/pull/1/commits
Issue by brianchirls
Thursday Mar 17, 2016 at 15:07 GMT
Originally opened as MozillaReality/webvr-spec#21
For example, if it's a 3D TV? Have any of the browser vendors discussed implementing something like that?
Issue by cvan
Saturday Feb 20, 2016 at 00:41 GMT
Originally opened as MozillaReality/webvr-spec#10
Can borrow this script from https://github.com/whatwg/loader.
Issue by toji
Thursday Mar 31, 2016 at 17:12 GMT
Originally opened as MozillaReality/webvr-spec#27
As discussed in Issue #26
toji included the following code: https://github.com/MozVR/webvr-spec/pull/27/commits
Issue by borismus
Monday Mar 23, 2015 at 18:53 GMT
Originally opened as MozillaReality/webvr-spec#2
Multiple position sensors controlling the observer seems to be a common situation.
Most commonly, you may be using the Rift's gyroscope for rotation, but also want to support mouse look and maybe arrow keys to look around.
Also, I'd like to build a 6DOF system for desktop where we use headtrackr for the positional DOFs, but also be able to mouse look. This case is easier since one of the devices only provides rotation, and the other provides only position.
Looks like the spec intends VRPositionState to report in absolute coordinates, which makes it hard to combine multiple states (e.g. the Rift case). Maybe switching to relative coordinates would help with this? Otherwise, how do we handle the multiple-position-sensor case?
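The relative approach asked about above can be sketched with plain quaternion composition: express the mouse-look input as a delta rotation and compose it with the sensor orientation, instead of letting either absolute pose win. These helpers are generic math, not part of any WebVR API.

```javascript
// Hamilton product a * b for quaternions stored as [x, y, z, w].
function multiplyQuat(a, b) {
  return [
    a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
    a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
    a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
    a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2],
  ];
}

// Rotation about the y axis (yaw), e.g. accumulated from mouse movement.
function yawQuat(radians) {
  return [0, Math.sin(radians / 2), 0, Math.cos(radians / 2)];
}

// finalOrientation = multiplyQuat(mouseLookDelta, hmdOrientation);
```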