
openpbr's Introduction

OpenPBR Surface

Shader Playground, rendered in Arnold for Maya, using OpenPBR Surface. Artwork by Nikie Monteleone.


OpenPBR Surface is a specification of a surface shading model intended as a standard for computer graphics. It aims to provide a material representation capable of accurately modeling the vast majority of CG materials used in practical visual effects and feature animation productions.

OpenPBR Surface is an open standard hosted by the Academy Software Foundation (ASWF), and is organized as a subproject of MaterialX.

Specification


License: CC BY-SA 4.0

openpbr's People

Contributors

adrienherubel, antonpalmqvist, brechtvl, fpliu, jstone-lucasfilm, msuzuki-nvidia, peterkutz, portsmouth, virtualzavie


openpbr's Issues

Suggest "0.18" as the default base_color

Purely a VFX bias here, but we generally initialise colours to the value of "middle grey", which, as a result of our logarithmic perception of brightness, ends up being 0.18.
Was 0.8 chosen because if someone sets metallic to 1.0 with default values elsewhere, then this would result in a plausible metal?

specular_color should not generate a complementary color

Currently it does, because we say:

The specular_weight and specular_color parameters modulate the Fresnel factor. ... The light transmitted through the dielectric will be compensated accordingly to preserve the energy balance (thus generating a complementary color if specular_color is not white).

We had a long discussion on Slack, and agreed that this is not the behavior that is wanted (most likely). Really one wants only the specular reflection to be tinted, not the base. This is also something that we wanted to fix up in the Standard Surface model. So the question for OpenPBR is how to describe that physically and unambiguously.

For a dielectric interface, it is physically unambiguous to stipulate that only the Fresnel reflection (from the top side) is tinted. This tinting is obviously ad-hoc/unphysical, but harmless as it merely multiplies one scattering mode (marked in red below) by a factor, effectively deleting some energy by an unspecified mechanism. Also it makes it totally clear what the effect on the light transport is, for an implementation.

(diagram omitted)

Furthermore for the case of a layer of dielectric on top of a base, it is natural to interpret the specular weight as controlling the presence/coverage of the layer.

With weight w and color tint C, the resulting lobe combination would be approximated as:

w*C*fspec + (1 - w*E[fspec])*fbase

(where E is the reflectance). This is also how MaterialX does its layering throughput calculation.
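As a concrete illustration, the combination above can be sketched numerically (a minimal sketch with scalar stand-ins for the BSDF lobes; the names are illustrative, not spec symbols):

```python
# Minimal sketch of the albedo-scaling combination w*C*fspec + (1 - w*E[fspec])*fbase.
# fspec and fbase stand in for BSDF lobe evaluations, and E_fspec for the
# (precomputed) directional albedo of the specular lobe.
def layer_throughput(w, C, fspec, fbase, E_fspec):
    return [w * c * fspec + (1.0 - w * E_fspec) * fbase for c in C]

# With weight w = 0 the dielectric layer vanishes and the base is untouched:
assert layer_throughput(0.0, [1.0, 0.2, 0.2], 0.5, 0.8, 0.04) == [0.8, 0.8, 0.8]
```

Note that the base term is scaled only by w*E, not by w*C*E, so a non-white tint C does not generate a complementary color in the base.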

For the base dielectric (e.g. a solid piece of glass) there is only the dielectric interface (no base), so only the Fresnel tinting applies (and we can interpret the specular weight in this case just as a multiplier of the tint), i.e. the lobe combination looks like:

w*C*fbrdf + (1 - E[fbrdf])*fbtdf

(so physically, all the regular transmission effects, e.g. TIR and Snell's window seen from below, and the refraction of the interior seen from above, are totally unaffected by the specular color or weight).

We anyway need to clarify this in the spec, to make it explicit how the specular color and weight parameters actually modify the physical configuration to achieve the desired effect.

Suggest separate parameters for diffuse and metal colours instead of "base_color"

In the OpenPBR specification, the diffuse albedo and the metallic F0 both share the base_color parameter, with base_metalness providing the control to mix between the dielectric-base and metal substrates.

We have found it to be useful to have these parameters specified independently as we typically paint these maps separately. Having them shared would make it difficult for us to use OpenPBR.

When Brent Burley chose this parameterisation, his aims were to make a simpler interface for BSDFs, primarily targeted at feature animation pipelines. We have found that for photo-realistic look development, having control of the diffuse separately from the metallic specular gives more artistic control.

Having these colours linked was one of the main barriers that prevented us from adopting Autodesk Standard Surface internally, as well.

Would it be possible to break base_color into metal_color and diffuse_color?

Subsurface defaults

I would not have the subsurface_color default value as 1, since that's not very physical; I would set it to the same as base_color.

For the subsurface_radius default value I would recommend setting something other than (1, 1, 1). We have (1, 0.2, 0.1) in Blender; the exact choice is somewhat arbitrary, but practically any real material has these decreasing from R to G to B, and it's nice to see this effect immediately when dragging the subsurface_weight slider.

Allow IORs below 1?

Do we need to allow IORs to drop below 1? Physically this is implausible; though I know it can be useful for certain hacks to fake the IOR ratio being < 1, it should not really be done that way.

Handling of raytraced subsurface entry/exit bounces

When implementing raytraced subsurface scattering, the look of the material can vary significantly based on how the entry/exit bounce at the layer interface is handled.

The draft spec currently doesn't explicitly mention how this is to be handled, but there are some references, e.g. mentioning that the albedo mapping may depend on the interface IOR.

Is this something that should be specified, or is it to be left as an implementation detail?

The approaches that I am aware of are:

  • Perform a diffuse lambertian bounce at entry and exit: Easy to implement, but not particularly realistic and prone to overly white edges (since a significant fraction of rays will immediately hit the adjacent face and exit without ever scattering, regardless of incoming angle).
  • Refract through the interface according to the microfacet distribution at entry and exit: Arguably the most correct approach, but has practical downsides - additional storage for the microfacet parameters is needed, multiple bounces might be needed due to TIR, the BSSRDF no longer converges to a lambertian BSDF for radius->0.
  • Refract through the interface at the entry bounce, perform lambertian exit bounce: Provides most of the visual benefits of the full refraction approach while avoiding the downsides listed above.

Additionally, there is the question of whether IOR and roughness values should be reused from the specular reflection lobe or specified separately.

Personally, if this is to be specified, I'd argue in favor of the "refractive entry, lambertian exit, reuse specular interface parameters" approach.
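For reference, the "refract through the interface" entry bounce relies only on Snell's law. A minimal sketch (assumed convention: eta is the ratio of incident to transmitted IOR):

```python
import math

def refract(cos_i, eta):
    # Returns the cosine of the refracted angle, or None on total internal
    # reflection (which is where the "multiple bounces due to TIR" downside
    # of the full-refraction approach comes from).
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t >= 1.0:
        return None  # TIR
    return math.sqrt(1.0 - sin2_t)

print(refract(1.0, 1.0 / 1.5))  # 1.0 (normal incidence passes straight through)
print(refract(0.17, 1.5))       # None (grazing exit from inside: TIR)
```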

Emission units

In the emissive section it says "emissive properties are specified in photometric units", but then doesn't say what units they're actually specified in. Assuming this means nits, this should be specified.

Define unspecified BSDFs

Currently the diffuse and fuzz BSDFs are unspecified. From our point of view, these would be much better explicitly specified, as not doing so will necessarily lead to look differences between implementations.

Definition of angle for F82

The specification says (in the comments on equation (43)) that the angle for F82 is defined as 82 degrees. This is not accurate - the actual angle should be defined as arccos(1/7), which is close to but not exactly 82 degrees. In other words, mu_bar is defined as exactly 1/7. This is needed for the math to work out with integer powers, etc.
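This is easy to verify numerically:

```python
import math

# mu_bar is defined as exactly 1/7; the corresponding angle is close to,
# but not exactly, 82 degrees:
mu_bar = 1.0 / 7.0
angle = math.degrees(math.acos(mu_bar))
print(round(angle, 2))  # 81.79
```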

subsurface_color = base_color?

@brechtvl wrote:

We had planned to get rid of the distinct subsurface_color and use just base_color, partially because of an implementation quirk, and partially because nearly always the same texture map was connected to both. But maybe it is common to have separate texture maps for these for some users?

specular_tint?

@brechtvl wrote:

For non-physical color multipliers like specular_color and transmission_color (with depth 0), we called those "tint" instead of "color" to make a clearer distinction from physical ones like base_color.

Anisotropy direction parametrization

The current OpenPBR specification proposes to control the anisotropy with a specular_anisotropy (respectively coat_anisotropy) and the direction of the specular highlight elongation with a specular_rotation (respectively coat_rotation).

In the case where those inputs are given as a texture, or more generally if some filtering is involved, the specular_rotation will require special care to avoid artifacts due to the discontinuity at the 360° / 0° angle. At a minimum, this requires the ability to specify a nearest-neighbor filter, but obtaining a level of quality consistent with bilinear, trilinear or anisotropic filtering involves implementing a custom filtering scheme to handle the discontinuity. Such custom filtering has to be implemented deep in the shader, requires more texture fetches, and represents a non-trivial amount of work for the developer of the renderer.

An alternative approach is to specify the direction with a flow map of the anisotropy direction 2D vectors. This parametrization is more suitable to texture filtering implemented in most renderers and 3D hardware, does not require a special case, and is very similar to normal maps which are widely supported.

Although it may seem that expressing an angle (1 parameter) as a vector (2 parameters) might incur an additional bandwidth cost, factoring specular_anisotropy into the vector norm should lead to an equivalent cost (or less, since no custom filtering with additional fetches is involved).

glTF expresses the anisotropy direction and strength as a 3-component texture though, so the pros and cons of the two solutions would have to be weighed.

It may also seem that authoring a flow map directly is more difficult, but authoring a rotation map directly is in fact difficult as well and better performed with a dedicated tool anyway.

Normal maps

@brechtvl wrote:

It's not clear to me if these parameters expect tangent space or world/rendering space normals. I would guess it's world/rendering space if bump maps are to be supported as well, and because also for normal maps there might be various additional controls that feel out of scope for OpenPBR.

Oren-Nayar roughness parametrization is not sufficiently clear

We say:
(spec excerpt omitted)

But the Oren-Nayar model actually involves a parameter $\sigma$ which parametrizes the roughness, and we don't say exactly how this is related to the roughness parameter in our model.

I suggest we follow the approach from Mitsuba here:

        /* Conversion from Beckmann-style RMS roughness to
           Oren-Nayar-style slope-area variance. The factor
           of 1/sqrt(2) was found to be a perfect fit up
           to extreme roughness values (>.5), after which
           the match is not as good anymore */

        const Float conversionFactor = 1 / std::sqrt((Float) 2);

        Float sigma = m_alpha->eval(bRec.its).average()
            * conversionFactor;

So to be precise, we take

$$ \sigma = r/\sqrt{2} $$

where $r$ is the base_roughness in $[0,1]$. This is the scheme used in Arnold for example. If there are no objections, we can just specify the same relationship in OpenPBR.
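In code, the proposed relationship is a one-line conversion (a sketch of the suggested convention, not spec text):

```python
import math

# sigma = r / sqrt(2): Beckmann-style RMS roughness to Oren-Nayar
# slope-area variance, following the Mitsuba convention quoted above.
def oren_nayar_sigma(base_roughness):
    return base_roughness / math.sqrt(2.0)

print(round(oren_nayar_sigma(1.0), 4))  # 0.7071
```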

Spec/coat roughness ranges should be [0,1]

Since we explicitly say not to use values $\ge 1$:

In practice we restrict to the range $\alpha \in [0,1]$, as $\alpha > 1$ does not produce a plausible rough appearance.

But right now these ranges are $[0, \infty]$:

(spec excerpt omitted)

Anisotropy parametrisation

In the current draft of the spec, a few parametrisation references are mentioned, and we propose a new one.
We have yet to confirm which parametrisation we want to use.

To help evaluate the models, here are some renders.
The roughness goes from 0 on the left to 1 on the right, and the anisotropy goes from 0 at the top, to 1 at the bottom.

Burley 2012 (Disney model)

$$ \alpha_t = \frac{r^2}{\sqrt{1 - 0.9 a}}, \quad\alpha_b = r^2 \sqrt{1 - 0.9 a} $$

where $0.9$ is a value chosen to limit the aspect ratio of the specular highlight to $10:1$.

(render omitted: anisotropy-mapping-Burley2012, 2023-08-16)

Georgiev2019 (Standard Surface)

$$ \alpha_t = \mathrm{min}\bigl(\frac{r^2}{\sqrt{1-a}}, 1.0\bigr), \quad\alpha_b = r^2 \sqrt{1-a} $$

(render omitted: anisotropy-mapping-Georgiev2019, 2023-08-16)

Kulla 2017 (Sony Pictures Imageworks)

$$ \alpha_t = r^2 (1 + a), \quad\alpha_b = r^2 (1 - a) $$

(render omitted: anisotropy-mapping-Kulla2017, 2023-08-16)

Neubelt 2013 (The Order: 1886)

$$ \alpha_t = r^2, \quad\alpha_b = \mathrm{lerp}(0, r^2, 1 - a) $$

(render omitted: anisotropy-mapping-Neubelt2013, 2023-08-16)

Kutz2021 (Adobe Standard Material)

$$ \alpha_t = r^2 + a^4, \quad\alpha_b = r^2 $$

(render omitted: anisotropy-mapping-Kutz2021, 2023-08-16)

Current OpenPBR proposed mapping

$$ \alpha_t = r^2 \sqrt{\frac{2}{1 + (1 - a)^2}}, \quad\alpha_b = (1 - a) \, \alpha_t $$

(render omitted: anisotropy-mapping-OpenPBR2023, 2023-08-16)
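For comparison, a few of the mappings above can be written out directly (a sketch; r and a are the roughness and anisotropy parameters):

```python
import math

# Three of the candidate (alpha_t, alpha_b) mappings above.
def burley2012(r, a):
    s = math.sqrt(1.0 - 0.9 * a)
    return r * r / s, r * r * s

def kulla2017(r, a):
    return r * r * (1.0 + a), r * r * (1.0 - a)

def openpbr2023(r, a):
    at = r * r * math.sqrt(2.0 / (1.0 + (1.0 - a) ** 2))
    return at, (1.0 - a) * at

# At a = 0 every mapping degenerates to the isotropic alpha = r^2:
for f in (burley2012, kulla2017, openpbr2023):
    at, ab = f(0.5, 0.0)
    assert abs(at - 0.25) < 1e-12 and abs(ab - 0.25) < 1e-12
```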

Ordering of the parameters

The parameters should be sorted logically, and be ordered in a manner that is consistent with the workflow of the artist.

In a few places, the currently proposed ordering of the parameters seems to go against the grain.

Anisotropy

The currently proposed list includes:

Identifier           Label
specular_roughness   Roughness
specular_ior         IOR
specular_ior_level   IOR level
specular_anisotropy  Anisotropy
specular_rotation    Rotation

and:

Identifier       Label
coat_roughness   Roughness
coat_ior         IOR
coat_ior_level   IOR level
coat_anisotropy  Anisotropy
coat_rotation    Rotation

However, the artist will typically work with roughness and anisotropy at the same time. Having IOR in between makes it less practical.

I suggest changing the order to:

Identifier           Label
specular_roughness   Roughness
specular_anisotropy  Anisotropy
specular_rotation    Rotation
specular_ior         IOR
specular_ior_level   IOR level

And likewise for coat.

State white-furnace test behavior

The model as described should pass a "white-furnace" test, in various configurations. This is because the model itself describes a physical structure (not a particular approximation) where the ground truth appearance is supposed to be the correct physical light transport through the structure thus defined.

So the ground truth appearance should perfectly preserve energy, assuming the parameters are configured so there is no physical energy dissipation/absorption. Configurations in which a white-furnace test (i.e. a test that the object disappears when illuminated by a uniform background light) should pass include:

  • metal base with white base and edge color.
  • dielectric base (with no specular tint, and no volume, or volume with a white albedo)
  • subsurface with a white albedo
  • diffuse with a white albedo
  • all of the above, plus an uncolored coat
  • all of the above, plus white fuzz

It would be good to point this out in the spec, as these are obviously important unit test cases to verify for an implementation.
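As an illustration of what such a unit test verifies, here is a minimal Monte Carlo furnace check for a single white Lambertian lobe (an assumed stand-in for the full model; a real implementation test would run the equivalent for each configuration listed):

```python
import math, random

# White-furnace check: under uniform unit illumination, the directional
# albedo of a white Lambertian BRDF should integrate to 1.
random.seed(1)
n, total = 100_000, 0.0
for _ in range(n):
    # Uniformly sample the hemisphere (cos_theta is uniform in [0,1]) and
    # weight by f * cos(theta) / pdf, with f = 1/pi and pdf = 1/(2*pi).
    cos_t = random.random()
    total += (1.0 / math.pi) * cos_t * (2.0 * math.pi)
print(round(total / n, 2))  # ≈ 1.0
```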

Implementing thin-film interference in combination with F82-tint metallic Fresnel

The current spec draft points out the method used for implementing thin-film interference effects. However, this paper and its reference implementation assume that the physically correct dielectric and conductive Fresnel terms are used so that the optical phase shift can be computed.

However, OpenPBR uses the F82-tint model for metals, which directly computes the reflectivity without going through computing a complex IOR and then using it in the conductive Fresnel term.

Therefore, it might be helpful to include a note in the spec on how to reconcile this and how to compute/estimate the phase shift when implementing the thin-film component.

Purpose of base_weight and specular_weight?

From what I can see, both of these inputs are only used as multipliers for base_color and specular_color respectively.

If that is the case, they appear to be redundant, since the same effect could be achieved by just adjusting the color inputs instead.

transmission_depth versus transmission_density

@portsmouth wrote:

I suggest to use float transmission_density (i.e. extinction, the inverse MFP) to control the volume extinction scale (rather than its reciprocal, transmission_depth). This can default to zero meaning no volume. (This has units of inverse length, but users can just think of it as a density slider). This density control seems more intuitive than "depth" (it is also standard in heterogeneous volume rendering).

This would change the parametrization from II to III here:

(diagram omitted)

subsurface_radius_scale should maybe be a color (not a vector3)?

Currently we say that subsurface_radius_scale has type vector3. But what does this mean operationally? It's confusing for people doing UI, as to whether this parameter should have a color picker, but we certainly want it to. The components are in [0,1] range so there's no issue with the values.

Perhaps it just means that the parameter should not be color managed. But don't we want it to be, given that the meaning of the RGB channels (i.e. which "curve" of wavelength bands a given radius should apply to) should also change as the color space changes? So I'm inclined to think it makes more sense to just leave it as a color.


transmission_tint?

@brechtvl wrote:

I would suggest to have a distinct non-physical transmission_tint parameter, that works in addition to the colors to control the volume scattering and absorption.

Glossy-diffuse slab description as a gloss layer is possibly confusing

We currently write in the spec:

The glossy-diffuse slab represents a dielectric with rough GGX microfacet surface BSDF, embedding a semi-infinite bulk of extremely dense scattering material.

This is the model we want to use physically, i.e. the slab is like the infinitely dense limit of the subsurface.

But then we say:

We choose to model this concretely as a layer of dielectric “gloss” on top of an opaque slab with a diffuse BRDF:

The reason we opted to describe glossy-diffuse as layer(diffuse, gloss) is that then at least it's clear what the base color/roughness mean (i.e. the color and roughness of the Oren-Nayar base).

If you want to think of it instead as a dense subsurface inside dielectric, it's really not clear what base color/roughness should mean for the properties of the subsurface (e.g. there is no more Oren-Nayar model going on, so what is the roughness doing -- something like "rough volume", but that's not standard). Also if we did that with base color being the volume albedo say, technically the scattering would cause a color shift, requiring remapping (like the actual subsurface, but in some infinite density limit).

With the existing description saturation should occur for this layer too, due to the bounces within it. (And the coat would just add further saturation). That's not completely unrealistic, since even in the volumetric model some saturation will happen due to TIR and multiple scattering.

So I actually don't know what the best way to describe it is. Possibly the existing description is the best option given the currently available models. We might want to say, though, that we assume some remapping is done so that the supplied base color appearance is produced in some sense. I'm not sure how to define that coherently, though.

Proposal: Fuzz/sheen can darken as well as lighten

I’d like to propose that it would be nice if the fuzz/sheen parameter was able to darken as well as lighten. Many surface appearances such as nylon stockings, denim jeans, and silk all appear darker on the glancing angle rather than lighter.

Offering this as a topic for discussion.

Splitting specular_color between dielectric tint and metallic edge color

Currently, the specular_color parameter serves two purposes: It acts as a multiplier on top of the Fresnel term for the dielectric reflection, and it acts as the F82 parameter for the metallic component.

I'm not sure if this is a good idea, since these two purposes seem quite different to me:

  • The Fresnel multiplier is a non-physical tweak, while the metallic parameter models a physical effect
  • The Fresnel multiplier affects all angles, while the metallic parameter only affects near-grazing angles
  • Sharing the parameter makes it impossible to create a half-metallic material which matches a real metal, but doesn't tint the dielectric reflection (of course, this sort of half-metallic half-dielectric material is not really physical, but users will do it anyways)

Merge specular_weight and specular_ior_level

We had much discussion on Slack about how the layering (e.g. of the specular layer on the diffuse base) should be implemented in code, and described physically in the spec. We eventually came up with a proposal for a modification that clarifies and simplifies the model: to reinterpret specular_weight as the existing specular_ior_level, thus merging the two parameters and removing the latter from the model. (Also, removing coat_ior_level).

We originally settled on retaining both specular_weight and specular_ior_level from the ADSK and Adobe models as a sort of compromise solution. It seems though that it would be an improvement to merge them, both from the point of view of simplifying the user experience, and clarifying what the parameters are supposed to correspond to physically for implementers.

I try to summarize the discussion below (the background from Slack, then the proposal).


Background

Currently in the spec, we say about specular_weight that it functions as follows:

The specular_weight and specular_color parameters modulate the Fresnel factor of fdielectric. ... The light transmitted through the dielectric will be compensated accordingly to preserve the energy balance (thus generating a complementary color if specular_color is not white).

As discussed in #145, this interpretation of specular_color is problematic since it implies there will be a complementary color. So we will need to modify this to say that the specular_color only affects the reflection Fresnel factor, not the transmission.

But we also need to clarify what specular_weight does (in code, and physically). It actually doesn't make sense for specular_weight to only multiply the reflection Fresnel factor, since this would mean there would be darkening of the base even when specular_weight goes to zero. For example, in the case of the glossy-diffuse layer, this is supposed to be a layer of dielectric on top of a diffuse base. If specular_weight only modulates the reflection Fresnel factor, then dialing it to zero still generates darkening of the base due to the internal reflections in the layer, which are explicitly unaffected. In terms of the albedo-scaling approximation, specifying that the weight only multiplies the reflection Fresnel factor implies:

$$f_{\mathrm{glossy-diffuse}} \approx W \mathbf{C} f_{\mathrm{dielectric}} + \left(1 - E_{\mathrm{dielectric}}\right) \, f_{\mathrm{diffuse}}$$

where specular_weight $W$ and specular_color $\mathbf{C}$ multiply the first term. Then the $\left(1 - E_{\mathrm{dielectric}}\right)$ factor remains in the second term as specular_weight $W \rightarrow 0$, which produces darkening.

In standard surface, instead what we had was a formula like:

$$f_{\mathrm{glossy-diffuse}} \approx W \mathbf{C} f_{\mathrm{dielectric}} + \left(1 - W \mathbf{C} E_{\mathrm{dielectric}}\right) \, f_{\mathrm{diffuse}}$$

which ensures that as $W \rightarrow 0$, only the undarkened $f_{\mathrm{diffuse}}$ remains. Except, this generates the complementary color tint of the diffuse lobe.

It was proposed to alter this to (e.g. this is how MaterialX implements their dielectric layering):

$$f_{\mathrm{glossy-diffuse}} \approx W \mathbf{C} f_{\mathrm{dielectric}} + \left(1 - W E_{\mathrm{dielectric}}\right) \, f_{\mathrm{diffuse}}$$

Physically, this can be interpreted as meaning that the specular_weight is functioning as the presence weight of the dielectric layer (this interpretation leads to the formula above, in albedo-scaling approximation, as is easy to prove).

This makes some sense for the glossy-diffuse part of the specular lobe, but not really for the subsurface and transparent base, where the dielectric base is supposed to be the semi-infinite bulk. Putting this in a statistical superposition of present and absent is physically dubious (it also was for the glossy-diffuse layer really, as the dielectric was supposed to embed the diffuse medium, not just sit on top of it). So to use this presence interpretation of specular_weight we would have to specialize that to the glossy-diffuse case only, and say something different for the subsurface/transmission (e.g. that specular_weight reverts to being a non-physical multiplier of the Fresnel factor).


Simplification proposal

Rethinking the issue, Peter and I propose that we can achieve the desired behaviour more simply, just by having specular_weight function exactly as specular_ior_level does now, and omitting the latter parameter. That is, specular_weight specifies a multiplier of the reflection Fresnel factor, achieved by modulating the IOR of the entire dielectric base.
The corresponding albedo scaling formula will be:

$$f_{\mathrm{glossy-diffuse}} \approx \mathbf{C} f_{\mathrm{dielectric}} + \left(1 - E_{\mathrm{dielectric}}\right) \, f_{\mathrm{diffuse}}$$

as now the specular_weight $W$ functions by altering the IOR of $f_{\mathrm{dielectric}}$ (thus the Fresnel factor). As $W \rightarrow 0$, the specular BRDF $f_{\mathrm{dielectric}} \rightarrow 0$ automatically, and thus its reflectance $E_{\mathrm{dielectric}} \rightarrow 0$.
We would retain specular_color $C$ as a non-physical tint of the reflection Fresnel factor.

This approach:

  • makes the reflection Fresnel factor go to zero as specular_weight $\rightarrow 0$, but does this in a more physically correct way, keeping the grazing angle highlight but having it narrow to zero width, rather than just killing the highlight.

  • makes the physical description clearer in the spec, and obvious how to implement. The dielectric base is now always present (embedding the media described by the transmission, subsurface, and diffuse slabs). The specular_weight is just modulating the IOR of this dielectric base.

  • removes a parameter which was practically redundant (as specular_weight and specular_ior_level had a very similar effect, apart from the former being less physically plausible with a damped highlight). The more obscure-sounding specular_ior_level was likely to be a source of confusion to artists, especially given its extremely similar behaviour to specular_weight.

  • retains the ability to non-physically modulate the reflection lobe color/intensity, independently of the transmission, via specular_color. (So no functionality is actually lost).

It is optional whether we want to have the specular_weight increase the Fresnel factor at the top end of the range (e.g. default at 0.5, and max out at 1, where the Fresnel is doubled, as the current specular_ior_level works) or not. It would be reasonable to just omit this, and have the weight default to 1, and only decrease the Fresnel.
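One plausible realization of "modulating the IOR" (an assumption for illustration, not the agreed spec behaviour) is to scale the normal-incidence reflectance F0 by the weight and solve back for the IOR that produces it:

```python
import math

# Schlick F0 for a dielectric interface against vacuum.
def f0_from_ior(ior):
    return ((ior - 1.0) / (ior + 1.0)) ** 2

# Inverse mapping: the IOR producing a given F0.
def ior_from_f0(f0):
    s = math.sqrt(f0)
    return (1.0 + s) / (1.0 - s)

# Hypothetical weight-to-IOR mapping: scale F0 by the weight W.
def effective_ior(ior, weight):
    return ior_from_f0(weight * f0_from_ior(ior))

print(round(effective_ior(1.5, 1.0), 3))  # 1.5 (weight 1: IOR unchanged)
print(round(effective_ior(1.5, 0.0), 3))  # 1.0 (weight 0: interface vanishes)
```

As the weight goes to zero the effective IOR goes to 1, so the grazing highlight narrows to zero width rather than being uniformly dimmed, matching the behaviour described above.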

Additionally, we propose to remove coat_ior_level, as it is functionally equivalent to coat_weight (the presence weight of the coat) for the purposes of modulating the coat reflection strength. Removing this also simplifies implementation of the coat lobe.

specular_ior_level parameter discussion

@brechtvl wrote:

How to both have a specular IOR that takes values outside the 0..1 range and still allow texturing is something we struggled with as well. specular_ior_level is an interesting solution. I find the term "level" a bit unclear, I'd maybe call it specular_ior_texture to clarify its purpose, though that could be confusing in other ways.

Minor errors in Glossy-Diffuse section

(spec excerpt omitted)

NB, as in subsurface section:

Here $E_{\mathrm{spec}}$ is the normal-direction reflectance of all energy reflected from the dielectric interface without macroscopic transmission.

Dielectric priority?

Dielectric priority for nested dielectrics, should we have it? This seems to be the standard approach now, so potentially we could attempt to incorporate it, either now or as a future extension.

It does require working out the policy for how the priority is applied. I would suggest that it works by assuming that the highest priority overlapping surface defines the entire dielectric base. Thus a glass of juice modelled as lower priority juice overlapping higher priority glass, will work and define the juice interior to the glass, whether the juice is modelled as dielectric volume or subsurface.

(Equal priority can mean the dielectric bases are effectively mixed).
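The suggested policy can be sketched as follows (hypothetical helper, assuming higher numbers mean higher priority):

```python
# The medium a ray travels through is defined entirely by the
# highest-priority dielectric it is currently inside.
def current_medium(stack):
    """stack: list of (priority, medium) for all enclosing dielectrics."""
    return max(stack, key=lambda e: e[0])[1] if stack else "ambient"

# Juice (priority 1) overlapping glass (priority 2): inside the overlap,
# the glass wins and defines the interior of the glass wall.
assert current_medium([(1, "juice"), (2, "glass")]) == "glass"
assert current_medium([(1, "juice")]) == "juice"
assert current_medium([]) == "ambient"
```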

Clarify handling of rays incident to the surface from the interior

Currently we don't say much about how the model deals with the difference between rays exiting and entering the surface. This has to be handled in a renderer (at least one which tries to correctly render glass objects with a coat/fuzz, for example) so we should clarify this.

For example consider the case of a glass object, with a coat and fuzz. Rays entering from the exterior (i.e. the ambient dielectric medium) will enter through the fuzz, then the coat, then transmit into the base glass.

Rays which hit the surface from the interior of the glass (having refracted into the glass at some earlier point in the path) instead hit the bottom side of the coat, then the fuzz, then transmit into the ambient medium:

(diagram omitted)

The physical effect of the layers differs in these two cases. For entering rays:

  • the fuzz reflection is un-tinted by the coat absorption
  • the coat reflection is dimmed and roughened by the fuzz
  • the dielectric reflection and transmission are both dimmed and roughened by both the coat and the fuzz

While for rays which are exiting:

  • the fuzz reflection (viewed from inside) now will be tinted by the coat absorption
  • the reflection from the coat "top" interface (with the fuzz) is not dimmed or roughened by the fuzz. This reflection is also an internal one, so it has a different Fresnel factor.
  • only dielectric transmission is dimmed/roughened by coat and fuzz, while the reflection is unaffected (except for being an internal reflection as well)

However in Standard Surface (and MaterialX), we make no attempt to account for this in our albedo-scaling approximation of the layering. The Arnold implementation currently uses a rather crude approach where the normal is flipped so the surface always thinks the ray is entering 🤦‍♂️ (Then we have to use some messy logic to make nested dielectrics work despite this).

In reality there is kind of a symmetry between entering and exiting. In both cases the ray transmits from one external dielectric medium to another, via some intervening interfaces and layers of media, which we know. A sufficiently powerful layering formalism/system should be able to compute (within some reasonable approximation) the BSDF accounting correctly for this. It is possible to write down an albedo scaling approximation of this which looks symmetric, for example (which I didn't attempt in the spec).

It is probably too much detail for us to elaborate on this in the spec (at least at this stage), but I think we should at least discuss what would need to be done to correctly model the physics.

(Note, @iliyang and I discussed this issue previously in relation to Standard Surface).

Fuzz normal when layered on top of coat

Coat uses geometry_coat_normal, while other layers including fuzz use geometry_normal. Now that fuzz is on top of coat, this may no longer be correct.

Consider a material with a bumpy base layer, and a smooth coat layer on top that fills in the bumps. The fuzz should then have a smooth normal as well?

A solution could be to blend geometry_normal and geometry_coat_normal with coat_weight, and use that as the fuzz normal?
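A minimal sketch of that proposal (the helper below is hypothetical, not part of the spec):

```python
import math

def fuzz_normal(geometry_normal, geometry_coat_normal, coat_weight):
    """Sketch of the blend proposed above: as coat_weight -> 1,
    the fuzz adopts the (presumably smoother) coat normal."""
    blended = tuple(a + (b - a) * coat_weight
                    for a, b in zip(geometry_normal, geometry_coat_normal))
    length = math.sqrt(sum(x * x for x in blended))
    return tuple(x / length for x in blended)
```

With coat_weight = 0 this reduces to the regular geometry_normal, so the behavior of uncoated materials would be unchanged.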

Should non-physical tinting affect lower layers?

The spec currently mentions that if specular_color is used to affect the color of the specular reflection, the underlying layers (diffuse/SSS/transmission) should be tinted by the complementary color to preserve energy.

This makes sense from a physical perspective. However, from what I can tell, these parameters are intended for artistic control rather than being physically motivated. Therefore, I'd argue that affecting the lower layers is unintuitive and unexpected: for example, if a user wants to achieve a white object with a green specular highlight, they'd need to set the base_color to something like (0.96, 1.0, 0.96). Instead, I'd suggest scaling the lower layers' intensity by the maximum value across components, in order to preserve energy without tinting them.
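To make the difference concrete, here is a per-channel sketch (the names and the scalar directional albedo `spec_reflectance` are illustrative assumptions, not spec terms):

```python
def tint_lower_layers_physical(base, spec_reflectance, specular_color):
    """Spec-style behavior: tint the base by the complementary color,
    preserving energy per channel."""
    return [b * (1.0 - spec_reflectance * c)
            for b, c in zip(base, specular_color)]

def tint_lower_layers_suggested(base, spec_reflectance, specular_color):
    """Suggested behavior: dim the base uniformly by the maximum
    component, preserving energy without tinting it."""
    scale = 1.0 - spec_reflectance * max(specular_color)
    return [b * scale for b in base]

# White base with a green specular highlight (spec_reflectance = 0.1):
# physical  -> [1.0, 0.9, 1.0]  (the base picks up a magenta tint)
# suggested -> [0.9, 0.9, 0.9]  (the base stays neutral, just darker)
```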

Spec version metadata

In section 2.5, the spec lists a series of metadata to help compatibility. Ought the spec version be part of this? Or is there some other way to inspect which version of the spec a particular instance of an OpenPBR shader conforms to?

In addition, from reading the spec it's not clear what the different metadata items are, or where they are applied. It would be great to specify all this explicitly.

For consistency, every section should have a "weight" parameter.

While working on the UI for 3ds Max, I realized how inconsistent it feels that everything has a weight except emission and thin film.

I think everything should have a weight. Even though this is just a multiplier in the case of emission and thin film, the UI consistency and the ease of explaining the functionality win out, IMHO.

Also, since both thin-film thickness and emission luminance quite often have values outside the easily texturable 0-1 range, it is extremely convenient to be able to set e.g. a high luminance value, but then modulate the intensity of the light by a 0-1 weight map, or to set a given thin-film thickness, and modulate the coverage of the thin film with a 0-1 weight map.

The only reason not to have them is parameter-count frugality, but if most real-world use cases have users plugging multiplier nodes in front of these all the time, I feel that feature should be built in.

Better approximation for darkening layers under coat

Due to the possibility of total internal reflection at the coat/external medium boundary, layers under the coat should be darker. See https://graphics.cs.yale.edu/sites/default/files/wet.pdf for a detailed model describing the various effects of having a water layer on top of materials. I think that interpenetration of layers is out of scope for OpenPBR Surface, but the total internal reflection should be modelled.

In the OpenPBR specification we describe the phenomenon without providing implementation details:
In the full light transport this observed color is further darkened and saturated due to multiple internal reflections from the inside of the coat, including a considerable amount of total internal reflection, which causes light to strike the underlying material multiple times and undergo more absorption. Also the observed tint color should vary away from coat_color as the incidence angle changes, due to the change in path length in the medium. The presence of a rough coat will increase the apparent roughness of the BSDF lobes of the underlying base. We generally assume that in the ground truth appearance, all these effects are accounted for.

In Standard Surface there is a dedicated parameter to simulate the darkening of layers under the coat, using the following formula:

   base_color = pow(base_color, 1.0 + (coat * coat_affect_color))
   subsurface_color = pow(subsurface_color, 1.0 + (coat * coat_affect_color))
This approximation is simple and disabled by default in Standard Surface.
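As a quick numeric illustration of that approximation (plain Python, applied per channel):

```python
def coat_darkening(color, coat, coat_affect_color):
    """The Standard Surface approximation quoted above: an exponent > 1
    both darkens and saturates the color as coat coverage increases."""
    exponent = 1.0 + coat * coat_affect_color
    return [c ** exponent for c in color]

# coat_darkening([0.5, 0.8, 0.2], coat=1.0, coat_affect_color=1.0)
# -> approximately [0.25, 0.64, 0.04]: darker overall, and more saturated
# (the ratio between bright and dark channels increases).
```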

In OpenPBR we should evaluate it against ground truth, possibly find a better approximation, and decide whether we want to make the effect user controllable.

Ability to tint specular highlights without affecting grazing angles

Currently, the only way to affect the color of the dielectric reflection lobe is specular_color, which affects all angles equally.

However, for compatibility with other material models (e.g. the classic Disney BSDF, or glTF's KHR_materials_specular), it would be useful to have a parameter for only affecting normal-incidence angles while leaving the grazing angles as they are.

More specifically, in a Schlick-style Fresnel term, this parameter would only affect F0, not F90. This can be implemented in combination with the proper dielectric Fresnel term by computing real_F0 from the IOR, then computing the Fresnel value using the dielectric term, and then remapping from the real_F0 .. 1.0 range to F0 .. F90.
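A sketch of that remapping (assuming the exact unpolarized dielectric Fresnel; the function names are illustrative, not from any spec):

```python
import math

def fresnel_dielectric(cos_i, eta):
    """Exact unpolarized Fresnel reflectance for a dielectric,
    with eta = n_transmitted / n_incident."""
    sin2_t = (1.0 - cos_i * cos_i) / (eta * eta)
    if sin2_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    r_s = (cos_i - eta * cos_t) / (cos_i + eta * cos_t)
    r_p = (eta * cos_i - cos_t) / (eta * cos_i + cos_t)
    return 0.5 * (r_s * r_s + r_p * r_p)

def tinted_fresnel(cos_i, ior, f0_tint, f90=1.0):
    """Remap the exact dielectric Fresnel so normal incidence is scaled
    by f0_tint while grazing angles stay at f90, as described above."""
    real_f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    f = fresnel_dielectric(cos_i, ior)
    # Remap f from the [real_f0, 1] range to [f0, f90]
    t = (f - real_f0) / (1.0 - real_f0)
    f0 = real_f0 * f0_tint
    return f0 + t * (f90 - f0)
```

With f0_tint = 1 this reduces to the plain dielectric Fresnel, so the parameter would be backwards-compatible by default.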

transmission/translucent volume parametrization

@brechtvl wrote:

The way transmission_color and transmission_scatter work is, I believe, not easy to control. I did some tests regarding this in the past; the main issue was that density and color are not controlled independently this way, which makes them harder to tweak and texture. Some images of that.

The parameters I used to try to solve that work as follows:

   scattering_coefficient = density * transmission_scatter_color
   absorption_coefficient = density * max(1 - sqrt(max(transmission_absorption_color, 0)), 0) * max(1 - transmission_scatter_color, 0)
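Transcribed into a runnable per-channel sketch (my transcription of the formulas above; not reference code):

```python
import math

def volume_coefficients(density, transmission_scatter_color,
                        transmission_absorption_color):
    """Per-channel scattering/absorption coefficients from the proposed
    parametrization: density controls the overall strength independently
    of the two colors."""
    scattering = [density * s for s in transmission_scatter_color]
    absorption = [density
                  * max(1.0 - math.sqrt(max(a, 0.0)), 0.0)
                  * max(1.0 - s, 0.0)
                  for a, s in zip(transmission_absorption_color,
                                  transmission_scatter_color)]
    return scattering, absorption
```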

question regarding color value range enforcement

Hi. Quite excited about the work you're doing.

Reading over the white paper, I noticed that the acceptable range for color values is set to be [0,1]. Does this imply that nonconforming color spaces should be excluded?

For example, ACES 2065-1, CIE LAB/LUV and OKLAB can have negative color values. Now, perceptual color spaces might be a questionable choice here, but it seems conceivable that someone would want to define their materials with AP0 primaries.

Visual discontinuity at `transmission_depth` of zero

I'm not yet sure of the cause of this visual discontinuity, whether it's in the MaterialX graph for OpenPBR or perhaps in the real-time approximation of the MaterialX Physically Based Shading nodes, so I wanted to report it here for discussion.

When transmission_depth is exactly zero, the transmission_color input has an intuitive visual effect in real-time OpenPBR renders, but this effect disappears entirely when transmission_depth is set to any non-zero value, creating a visual discontinuity as the artist moves the slider.

This issue can be seen most easily in the open_pbr_honey.mtlx example material, linked here:
https://academysoftwarefoundation.github.io/MaterialX/?file=Materials/Examples/OpenPbr/open_pbr_honey.mtlx

If you open the Property Editor and set transmission_depth to exactly zero, then the honey color of the transmission component becomes visible, but it disappears again when transmission_depth is set to a value slightly above zero.

OpenPBR Honey (zero transmission depth):
image

OpenPBR Honey (epsilon transmission depth)
image
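For what it's worth, one plausible source of the discontinuity (a guess on my part based on the common convention for these parameters, not a statement of what the implementations actually do): with transmission_depth exactly zero the color is applied as a direct interface tint, while any non-zero depth switches to a Beer-Lambert interior medium whose tint depends on path length, so the two branches need not meet continuously:

```python
import math

def transmission_response(transmission_color, transmission_depth, path_length):
    """Hedged sketch: with depth == 0 the color tints the interface
    directly; with depth > 0 it sets a Beer-Lambert extinction such that
    a ray traveling exactly transmission_depth is tinted by
    transmission_color. A ray of a different path_length gets a different
    tint, and as depth -> 0 the extinction diverges, so the medium goes
    black rather than converging to the depth == 0 branch."""
    if transmission_depth == 0.0:
        return list(transmission_color)  # pure interface tint
    return [math.exp(math.log(max(c, 1e-6)) * path_length / transmission_depth)
            for c in transmission_color]
```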

MaterialX implementation

[Transcribing here a long previous thread of discussion, for reference].

I wrote some initial notes about the current form of the MaterialX reference implementation:

materialx_openpbr_commentary.pdf

To summarize, I was concerned that the way the physical layering is expressed in MaterialX doesn't quite capture the intent of the OpenPBR model. The main issue is that the layer operations don't explicitly account for the presence of the volumetric medium inside the layer.

If we look in detail at the sheen, coat, and specular layers, there's some problems representing each with the current MaterialX layer node:

Coat
In standard surface, this is represented as (changing names slightly for clarity):

coat_lobe = coat * coat_brdf(...) + lerp(white, coat_color * (1 - reflectance(coat_brdf)), coat) * base

This is actually supposed to represent the coat as an "intermittent" layer on top of the base, where the coat weight is just the presence/coverage weight of the coat, since the above unpacks as:

coat_lobe = (1-coat)*base + coat*coated_base

where

coated_base = coat_brdf + coat_color*(1 - reflectance(coat_brdf))*base

which is like the albedo-scaling top + base*(1-reflectance(top)) form for the coat+base layer. In other words coat_lobe is a statistical mix between uncoated base, and coated base.
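The unpacking above can be checked numerically with scalar stand-ins for the lobes (a sketch, with R = reflectance(coat_brdf)):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def coat_lobe_standard_surface(coat, coat_brdf, coat_color, R, base):
    # The Standard Surface form quoted above
    return coat * coat_brdf + lerp(1.0, coat_color * (1.0 - R), coat) * base

def coat_lobe_mix(coat, coat_brdf, coat_color, R, base):
    # The statistical mix of uncoated base and coated base
    coated_base = coat_brdf + coat_color * (1.0 - R) * base
    return (1.0 - coat) * base + coat * coated_base
```

The two forms agree (up to floating-point rounding) for any choice of the inputs, confirming that the coat weight acts purely as a presence/coverage weight.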

In OpenPBR we just write that as

layer(base-substrate, coat, coat_weight)

where base-substrate and coat are slabs of material, and coat_weight is the coat presence weight (i.e. the fraction of surface which is coated).

The coat_color here is approximating the effect of the volumetric absorption in the coat layer. In OpenPBR we say that the color should be the "observed tint color of the underlying base at normal incidence", after accounting for all the light transport effects. In standard surface, the approximation of this is just this multiplication of the base by the coat_color tint.

In MaterialX, the coat is represented as:

coat_layer = layer(top = coat_bsdf,
                   base = thin_film_layer * (coat_color*coat + (1-coat)))

where coat_bsdf has a "weight" parameter equal to coat, which presumably is just multiplied into the BSDF.

First, the operation "layer BSDF A on top of BSDF B" doesn't strictly make sense to me as a physical operation, as BSDFs are not physical things you can layer. In OpenPBR we are careful to define the layering as placing a slab of material, which is the combination of (interface, medium), on top of another such slab. It doesn't make as much sense physically to talk about layering one BSDF on top of another, unless that is just a shorthand for the approximate albedo scaling combination of the BSDFs.

Then the way the volumetric absorption of the coat is accounted for in this formula, i.e. the (coat_color*coat + (1-coat)) factor, is rather artificial. This basically assumes the albedo scaling approximation is being used. Also the presence weight of the coat layer is rolled into the coat BSDF as a multiplicative weight, which also seems artificial as BSDFs don't generically have a multiplicative "weight" factor.

Sheen

In standard surface, this is written down as the combination:

sheen_layer = sheen * sheen_color * sheen_brdf(...) + (1 - sheen * reflectance(sheen_brdf)) * base_mix

This looks similar to the usual base*(1-reflectance(top)) + top albedo scaling approximation, except it isn't quite of that form: the top lobe is sheen * sheen_color * sheen_brdf, while the reflectance term doesn't include sheen_color. In fact this specific combination is supposed to represent reflecting fibres/flakes which produce a colored reflection but do not tint the base. Regular albedo scaling can't do that: if the top is colored, it will produce a complementary color tint of the base lobe. Really this combination is supposed to be a loose approximation of a microflake volume with colored flakes but gray transmittance.

In OpenPBR we said that:

any light not reflected after multiple scattering is assumed to transmit to the lower layers (because the microflake volume has gray extinction, the transmitted light will not be tinted by the fuzz)

And suggest the explicit layer combination (matching standard surface):
image

To represent this as a MaterialX layer operator would require some generalization then, e.g. perhaps a Boolean to specify "whether the base layer is tinted by the complementary color of the coat layer BRDF".

Specular

The specular lobe in MaterialX is represented as:

specular_layer  = layer(top =  specular_bsdf,
                        base = transmission_mix)

transmission_mix  =  mix(fg = transmission_bsdf,
                         bg = opaque-base,
                         mix = transmission)

but (in OpenPBR anyway) this is supposed to be representing a dielectric interface (where specular_bsdf is the BRDF, and transmission_bsdf is the BTDF). It's not physical in general to think of this as an actual layer, the form above only really makes sense in the albedo scaling approximation, where it is then just roughly approximating the balance of energy between the dielectric lobes. If people took this layer seriously as a physical description, it would be unclear what it means except as a shorthand for albedo scaling, which defeats the purpose of trying to define a layering operation as something more general than albedo scaling. I think this specular reflection lobe should actually be written explicitly as the sum of BRDF and BTDF lobes, not artificially as a layer operation.


Overall I think it's quite difficult to map the formal layer/mix structure in OpenPBR into corresponding abstract layer operations that produce a well-defined implementation. In my view it's much easier to work with something like the standard surface form (or the analogous form of OpenPBR) where one is just evaluating a mixture of closures/lobes, with well-defined mixture weights, which is explicitly implementing a certain (actually perfectly acceptable, for VFX) approximation. Then there is no need to figure out how to carefully craft the layer API so that implementers will be able to reproduce the mixture model you want them to, you just give them the mixture model.

Alternatively, if we must go the route of using layer operations, this needs to be generalized appropriately, though as described it would probably have to be designed quite carefully in order for implementers to be able to make sense of it. (Or you tell implementers how to do this in a reference implementation, which then probably just has the form of the mixture model, which makes the intermediate layer description a bit redundant).

My thought about how we could possibly generalize the layer operation to do what is needed is allow that:

  • the layer can have a coverage/presence weight in [0,1] defaulting to 1.
  • the layer transmittance (or optical depth) can be specified, thus tinting the base. I think using a depth value would be quite artificial as these layers are conceptually infinitesimally thin (or at least, we don't care about the actual depth, we care only about the optical depth).
  • we specify whether the base layer is tinted by the complementary color of the coat layer BRDF (e.g. true for specular layer, not true for fuzz layer). This is a proxy for the details of the scattering/absorption processes inside the layer.
    Though you can see this is not fully general, and basically just designed to recover the mixture model we are looking for in the end.

MaterialX reference implementation errors

While trying out OpenPBR I noticed that there's some errors in the MaterialX reference implementation.

  • transmission_dispersion should be removed.
  • transmission_dispersion_abbe_number is missing.
  • transmission_dispersion_scale is missing.
  • transmission_scatter_anisotropy has uimin set to 0, this should be -1 so that it can allow for back scattering.

Coat/specular IOR defaults, and TIR issue if coat IOR > spec IOR

I think coat and specular IOR should be more different now, as the coat masks the specular if the IORs are the same. Current defaults are coat=1.6, spec=1.5.

The coat IOR should probably be lower than specular by default, to further boost the spec and prevent the TIR issue (e.g. coat=1.3, spec=1.6).

Also note that if the coat IOR is higher than the spec IOR, then TIR will occur in the specular reflection lobe. This looks weird if you don't account for the fact that the coat sitting on top bends the rays, thus preventing the TIR; e.g. see the result boxed in red below. (NB, the current Arnold and MaterialX implementations do this):

image

Adobe proposes to avoid this by inverting the IOR of the base surface, though that seems to mess with the Fresnel physics, which it would be best to avoid (though as a workaround in an implementation, maybe it's reasonable). We may want to note this problem and suggest some approaches, as otherwise a naive implementation could produce what looks like an artifact.
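To make the TIR condition concrete, a small sketch (assuming smooth interfaces and the IOR pairs discussed above):

```python
import math

def critical_angle_deg(n_incident, n_transmitted):
    """Critical angle for total internal reflection at a dielectric
    interface, or None if TIR cannot occur."""
    if n_incident <= n_transmitted:
        return None
    return math.degrees(math.asin(n_transmitted / n_incident))

# Current defaults (coat 1.6 over spec 1.5): rays inside the coat hitting
# the base undergo TIR beyond roughly 70 degrees.
# Suggested defaults (coat 1.3, spec 1.6): no TIR at the base interface.
```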

Suggest "presence" or "coverage" instead of "geometric_opacity"

I was wondering if anyone would be open to using an alternative name for geometric_opacity.

In OpenPBR the intention of this signal is to create cutouts of the geometry, rather than to change how much light is transmitted through the surface. In my experience opacity is usually used to describe the degree of transmittance.

Interestingly, the definition of the parameter mentions presence:

where α = geometry_opacity is the presence weight of the entire surface

In my opinion choosing presence would improve the readability of the parameter. coverage is another alternative if presence isn't appropriate for some reason.
