Comments (3)

johguenther commented on May 25, 2024

Some questions

  • How are applications supposed to derive those values if the renderer cannot? Or is this meant as a manual override when internal heuristics produce terrible results, i.e., in the end the user needs to tweak them? What about animations (of the camera or objects)? A user cannot update those values in real time.
  • Does it need to be a "plane"? This is particularly odd with the omnidirectional camera. We could leave it undefined which distance measure is used (Euclidean or perpendicular).
  • Are those parameters understood as hints? I.e., is it an error if objects are visible that are closer than near or farther than far, or is this undefined?

jeffamstutz commented on May 25, 2024

How are applications supposed to derive those values if the renderer cannot? Or is this meant as a manual override when internal heuristics produce terrible results, i.e., in the end the user needs to tweak them?

Some applications simply want to control the near/far region that gets rendered, which is a dedicated feature in practically any rendering engine. I think that's enough to justify having them as standard camera parameters, independent of the issue of rasterization-based devices.
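
As a hypothetical sketch of what that control could look like through the ANARI C API: anariSetParameter and anariCommitParameters are the real entry points, while the "near"/"far" parameter names are the proposal under discussion here, not settled spec text.

```cpp
#include <anari/anari.h>

// Hypothetical: set the proposed near/far clip parameters on a camera.
void setClipRegion(ANARIDevice device, ANARICamera camera,
                   float nearPlane, float farPlane)
{
  anariSetParameter(device, camera, "near", ANARI_FLOAT32, &nearPlane);
  anariSetParameter(device, camera, "far", ANARI_FLOAT32, &farPlane);
  anariCommitParameters(device, camera); // changes take effect on commit
}
```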

What about animations (of the camera or objects)? A user cannot update those values in real time.

I'm not sure animations pose a unique problem that requires recalculating this on the application side. It's common for real-time applications to set generic camera near/far values that, while not tight for any given frame, still closely approximate the entire world's rendering region.

Are those parameters understood as hints? I.e., is it an error if objects are visible that are closer than near or farther than far, or is this undefined?

I think they should actually clip: because the clipping is done in camera space, it has a straightforward, unambiguous interpretation for primary visibility no matter the rendering technique used. How lighting is affected, though, would be undefined and isn't important to define.
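
A minimal sketch of that reading (my construction, not spec text): for a ray tracer, near/far simply bound the valid t interval of primary rays, mirroring what the depth range does for a rasterizer. Distance here is measured along the ray; a perpendicular "plane" measure would instead scale these bounds by the ray direction's camera-space z component.

```cpp
// Sketch: near/far interpreted as a clip interval on primary rays.
struct Ray {
  float org[3]; // ray origin (the camera position for primary rays)
  float dir[3]; // normalized ray direction
  float tmin;   // hits closer than this are clipped
  float tmax;   // hits beyond this are clipped
};

Ray makePrimaryRay(const float org[3], const float dir[3],
                   float nearPlane, float farPlane)
{
  Ray ray{};
  for (int i = 0; i < 3; ++i) {
    ray.org[i] = org[i];
    ray.dir[i] = dir[i];
  }
  ray.tmin = nearPlane; // Euclidean distance along the ray; a "plane"
  ray.tmax = farPlane;  // measure would divide by |dir.z| in camera space
  return ray;
}
```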

progschj commented on May 25, 2024

How are applications supposed to derive those values if the renderer cannot? Or is this meant as a manual override when internal heuristics produce terrible results, i.e., in the end the user needs to tweak them?

The way I did this in the devices was to derive them from the scene bounds. You can always derive a far plane from the finite bounding box of the scene. That doesn't work for the near plane when the camera is inside the bounding box, though, since zero can't be used. So I have to guess an acceptable near plane, which I usually do with a heuristic like near = far/1000. In that case an optional override makes sense.
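
A minimal sketch of that derivation, assuming a finite world-space AABB for the scene (the names are mine, not from any particular device):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Box3 { Vec3 lo, hi; }; // finite scene bounding box

static float length3(const Vec3 &v) {
  return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Conservative far plane: distance to the farthest corner of the box.
static float farthestCorner(const Box3 &b, const Vec3 &p) {
  float d = 0.f;
  for (int i = 0; i < 8; ++i) {
    Vec3 c{(i & 1) ? b.hi.x : b.lo.x,
           (i & 2) ? b.hi.y : b.lo.y,
           (i & 4) ? b.hi.z : b.lo.z};
    d = std::max(d, length3({c.x - p.x, c.y - p.y, c.z - p.z}));
  }
  return d;
}

// Distance from p to the box; zero when the camera is inside it.
static float boxDistance(const Box3 &b, const Vec3 &p) {
  float dx = std::max({b.lo.x - p.x, 0.f, p.x - b.hi.x});
  float dy = std::max({b.lo.y - p.y, 0.f, p.y - b.hi.y});
  float dz = std::max({b.lo.z - p.z, 0.f, p.z - b.hi.z});
  return length3({dx, dy, dz});
}

void deriveNearFar(const Box3 &sceneBounds, const Vec3 &camPos,
                   float &nearPlane, float &farPlane) {
  farPlane = farthestCorner(sceneBounds, camPos);
  nearPlane = boxDistance(sceneBounds, camPos);
  if (nearPlane <= 0.f)             // camera inside the bounds: zero is
    nearPlane = farPlane / 1000.f;  // unusable, fall back to the heuristic
}
```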

Also, if someone has a scene with a massive ground plane that negatively affects depth precision, it might be easier to just set the far plane on the camera instead of changing the scene content.

Are those parameters understood as hints? I.e., is it an error if objects are visible that are closer than near or farther than far, or is this undefined?

I think they should actually clip: because the clipping is done in camera space, it has a straightforward, unambiguous interpretation for primary visibility no matter the rendering technique used. How lighting is affected, though, would be undefined and isn't important to define.

At the very least, they should be understood as a maximum near and a minimum far value. I'm not sure they need to clip, since then they would have a required effect, and we would also need to be much more prescriptive about default behavior.

Does it need to be a "plane"? This is particularly odd with the omnidirectional camera. We could leave it undefined which distance measure is used (Euclidean or perpendicular).

I think there should be language that says which cameras are affected by this, specifically ortho and perspective. Other camera types can still opt in and document how they interact with it. If we try to generalize to a "near value" or similar, we have to define the boundary shape for every case: for some omni cameras this would be a sphere, but for a cubemap projection, for example, it would be a "near cube".
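
To make the distinction concrete, a sketch (my construction) of the two measures for a camera-space hit point, with the camera at the origin looking down -z: the perpendicular test that fits ortho and perspective cameras, and the Euclidean test that yields the clipping sphere mentioned for omni cameras.

```cpp
#include <cmath>

// Perpendicular ("plane") measure: natural for ortho/perspective cameras.
bool clippedByPlanes(const float p[3], float nearPlane, float farPlane)
{
  float z = -p[2]; // distance perpendicular to the image plane
  return z < nearPlane || z > farPlane;
}

// Euclidean measure: the clip boundary becomes a sphere around the camera,
// as would suit an omnidirectional camera.
bool clippedBySpheres(const float p[3], float nearDist, float farDist)
{
  float d = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
  return d < nearDist || d > farDist;
}
```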
