Comments (3)
Some questions
- How are applications supposed to derive these values if the renderer cannot? Or is this meant as a manual override for when internal heuristics produce terrible results, i.e., in the end the user needs to tweak them? And what about animations (camera or objects)? A user cannot update these values in real time.
- Does it need to be a "plane"? This is particularly odd with the omnidirectional camera. We could leave it undefined which distance measure is used (Euclidean or perpendicular).
- Are those parameters understood as a hint? I.e., is it an error if objects are visible that are closer than `near` or farther than `far`, or is this undefined?
from anari-docs.
> How are applications supposed to derive those values if the renderer cannot? Or is this meant as a manual override when internal heuristics produce terrible results, i.e., in the end the user needs to tweak them?
Some applications straight up want to be able to control the near/far region to be rendered, which is a bespoke feature in practically any rendering engine. I think that's enough to justify having them as standard camera parameters independent of the issue of rasterization-based devices.
> What about animations (camera or objects)? A user cannot update those values in real time.
I'm not sure animations pose a unique problem beyond the application needing to recalculate this. It's entirely common for real-time applications to set a generic camera near/far that, while not tight for any given frame, still closely approximates the entire world's rendering region.
> Are those parameters understood as a hint? I.e., is it an error if objects are visible that are closer than `near` or farther than `far`, or is this undefined?
I think they should actually clip, because clipping is done in camera space, which gives a straightforward, unambiguous interpretation for primary visibility no matter the rendering technique used. How lighting is affected, though, would be undefined and isn't important to define.
> How are applications supposed to derive those values if the renderer cannot? Or is this meant as a manual override when internal heuristics produce terrible results, i.e., in the end the user needs to tweak them?
The way I did this in the devices was to use the scene bounds to derive them. You can always derive a far plane from the finite bounding box of the scene. That doesn't work for the near plane when the camera is inside the bounding box, though, since zero can't be used. So I have to guess an acceptable near plane, which I usually do with a heuristic like `near = far/1000`. In that case an optional override makes sense.
Also, if someone has a scene with a massive ground plane that negatively affects depth precision, it might be easier to just set the far plane on the camera instead of changing the scene content.
> Are those parameters understood as a hint? I.e., is it an error if objects are visible that are closer than `near` or farther than `far`, or is this undefined?
>
> I think they should actually clip, because clipping is done in camera space, which gives a straightforward, unambiguous interpretation for primary visibility no matter the rendering technique used. How lighting is affected, though, would be undefined and isn't important to define.
At the very least they should be understood as a maximum `near` and a minimum `far` value. I'm not sure they need to clip, since then they would have a required effect, and we would also need to be much more prescriptive about default behavior.
> Does it need to be a "plane"? This is particularly odd with the omnidirectional camera. We could leave it undefined which distance measure is used (Euclidean or perpendicular).
I think there should be language that says which cameras are affected by this, specifically ortho and perspective. Other camera types can still opt in and document how they interact with it. If we try to generalize to a "near value" or so, we have to define the boundary shape for every case. For some omni cameras this would be a sphere, but for a cubemap projection, for example, it would be a "near cube".