academysoftwarefoundation / openpbr
Specification and reference implementation for the OpenPBR Surface shading model.
License: Apache License 2.0
`subsurface_color`: I would not have the default value be 1, since that's not very physical; I would set it to the same as `base_color`.
`subsurface_radius`: I would recommend a default other than (1, 1, 1). We have (1, 0.2, 0.1) in Blender; the exact choice is somewhat arbitrary, but practically any real material has these values decreasing from R to G to B, and it's nice to see this effect immediately when dragging the `subsurface_weight` slider.
The specification says (in the comments on equation (43)) that the angle for F82 is defined as 82 degrees. This is not accurate: the actual angle should be defined as arccos(1/7), which is close to, but not exactly, 82 degrees. In other words, mu_bar is defined as exactly 1/7. This is needed for the math to work out with integer powers, etc.
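For illustration, here is a sketch of an F82-style Fresnel showing where mu_bar = 1/7 comes from (a sketch only; the parameter semantics are an approximation of the spec's F82-tint model, not a verbatim transcription):

```python
MU_BAR = 1.0 / 7.0  # arccos(1/7) ~= 81.79 degrees: close to, but not exactly, 82

def fresnel_f82_tint(mu, F0, F82_tint):
    # Schlick term plus a correction proportional to mu*(1-mu)^6; that
    # polynomial peaks exactly at mu = 1/7 (its derivative is
    # (1-mu)^5 * (1 - 7*mu)), which is why mu_bar must be exactly 1/7
    # for the closed-form solve below to work with integer powers.
    schlick = F0 + (1.0 - F0) * (1.0 - mu) ** 5
    schlick_bar = F0 + (1.0 - F0) * (1.0 - MU_BAR) ** 5
    b = (1.0 - F82_tint) * schlick_bar / (MU_BAR * (1.0 - MU_BAR) ** 6)
    return schlick - b * mu * (1.0 - mu) ** 6
```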
Following up on the recent fixes in #151 by @portsmouth, we need to update our default example material in a corresponding way, so that it renders correctly in applications:
These parameters related to volume shading have been exposed but are not used for the VDF in the MaterialX implementation:

- `transmission_depth`
- `transmission_scatter`
- `transmission_scatter_anisotropy`

Therefore code generation can't produce a shader with a VDF even if the backend supports volume shading.
While working on the UI for 3ds Max, I realized how inconsistent it feels that everything has a weight except emission and thin film.
I think everything should have a weight; even though this is just a multiplier in the case of emission and thin film, the UI consistency and the ease of explaining the functionality win out, IMHO.
Also, since both thin-film thickness and emission luminance quite often have values outside the easily texturable 0-1 range, it is extremely convenient to be able to set e.g. a high luminance value and then modulate the intensity of the light with a 0-1 weight map, or to set a given thin-film thickness and modulate the coverage of the thin film with a 0-1 weight map.
The only reason not to have them is parameter-count frugality, but if most real-world use cases have users plugging multiplier nodes in front of these all the time, I feel the feature should be built in.
In the OpenPBR specification, the diffuse albedo and the metallic F0 both share the `base_color` parameter, with `base_metalness` providing the control to mix between the dielectric-base and metal substrates.
We have found it useful to have these parameters specified independently, as we typically paint these maps separately. Having them shared would make it difficult for us to use OpenPBR.
When Brent Burley chose this parameterisation, his aims were to make a simpler interface for BSDFs, primarily targeted at feature animation pipelines. We have found that for photo-realistic look development, having control of the diffuse separately from the metallic specular gives more artistic control.
Having these colours linked was one of the main barriers that prevented us from adopting Autodesk Standard Surface internally, as well.
Would it be possible to break `base_color` into `metal_color` and `diffuse_color`?
The model as described should pass a "white-furnace" test in various configurations. This is because the model itself describes a physical structure (not a particular approximation), where the ground-truth appearance is supposed to be the correct physical light transport through the structure thus defined.
So the ground-truth appearance should perfectly preserve energy, assuming the parameters are configured so that there is no physical energy dissipation/absorption. Configurations where this should happen, so that a white-furnace test (i.e. a test that the object should disappear when illuminated by uniform background light) would pass, include:
It would be good to point this out in the spec, as these are obviously important unit-test cases to verify in an implementation.
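As a sketch of what such a unit test could look like (a toy Monte Carlo check; the `sample_bsdf` callback and its throughput convention are assumptions for illustration):

```python
import numpy as np

def furnace_test(sample_bsdf, n_samples=100_000, tol=1e-2):
    # Under uniform unit illumination, an energy-preserving surface should
    # "disappear": the Monte Carlo estimate of its directional albedo,
    # E[f * cos / pdf], should be 1 for any view direction.
    rng = np.random.default_rng(0)
    estimate = sum(sample_bsdf(rng) for _ in range(n_samples)) / n_samples
    return abs(estimate - 1.0) < tol

# A perfectly white, cosine-sampled Lambertian BRDF has throughput
# f * cos / pdf = 1 for every sample, so it passes trivially:
assert furnace_test(lambda rng: 1.0)
```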
@brechtvl wrote:
For non-physical color multipliers like `specular_color` and `transmission_color` (with depth 0), we called those "tint" instead of "color", to make a clearer distinction from the physical ones like `base_color`.
Currently we say that `subsurface_radius_scale` has type `vector3`. But what does this mean operationally? It's confusing for people doing UI work as to whether this parameter should have a color picker, but we certainly want it to. The components are in the [0,1] range, so there's no issue with the values.
Perhaps it just means that the parameter should not be color managed. But do we not want that? The meaning of the RGB channels (i.e. which "curve" of wavelength bands a given radius should apply to) should also change as the color space changes. So I'm not sure it doesn't make more sense to just leave it as a color.
I was wondering if anyone would be open to using an alternative name for `geometry_opacity`.
In OpenPBR the intention of this signal is to create cutouts of the geometry, rather than to change how much light is transmitted through the surface. In my experience, opacity is usually used to describe the degree of transmittance.
Interestingly, the definition of the parameter mentions presence:
where α = geometry_opacity is the presence weight of the entire surface
In my opinion, choosing `presence` would improve the readability of the parameter. `coverage` is another alternative if `presence` isn't appropriate for some reason.
@brechtvl wrote:
It's not clear to me whether these parameters expect tangent-space or world/rendering-space normals. I would guess world/rendering space, if bump maps are to be supported as well, and because for normal maps there might be various additional controls that feel out of scope for OpenPBR.
@brechtvl wrote:
Personally I would be happy to keep `transmission_extra_roughness` out, since it's a pain to implement.
Dielectric priority for nested dielectrics: should we have it? This seems to be the standard approach now, so potentially we could attempt to incorporate it, either now or as a future extension.
It does require working out the policy for how the priority is applied. I would suggest that it work by assuming that the highest-priority overlapping surface defines the entire dielectric base. Thus a glass of juice, modelled as lower-priority juice overlapping higher-priority glass, will work and define the juice interior to the glass, whether the juice is modelled as a dielectric volume or as subsurface.
(Equal priority can mean the dielectric bases are effectively mixed.)
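A sketch of the suggested policy, in the spirit of nested-dielectric priority schemes (the names and the stack bookkeeping here are assumptions for illustration):

```python
def active_medium(overlapping_media):
    # Among all media the ray is currently inside, the highest-priority
    # one defines the entire dielectric base; `overlapping_media` is a
    # list of (priority, medium) pairs maintained as a stack during
    # traversal (push on entry, pop on exit).
    if not overlapping_media:
        return None  # the ambient exterior medium
    return max(overlapping_media, key=lambda pm: pm[0])[1]

# Glass of juice: juice (priority 1) overlapping glass (priority 2).
# In the overlap region the glass wins, so the juice medium takes over
# only interior to the glass walls, as intended.
```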
The parameters should be sorted logically, and ordered in a manner consistent with the artist's workflow.
In a few places, the currently proposed ordering of the parameters seems to go against the grain.
The currently proposed list includes:
| Identifier | Label |
|---|---|
| `specular_roughness` | Roughness |
| `specular_ior` | IOR |
| `specular_ior_level` | IOR level |
| `specular_anisotropy` | Anisotropy |
| `specular_rotation` | Rotation |
and:
| Identifier | Label |
|---|---|
| `coat_roughness` | Roughness |
| `coat_ior` | IOR |
| `coat_ior_level` | IOR level |
| `coat_anisotropy` | Anisotropy |
| `coat_rotation` | Rotation |
However, the artist will typically work with roughness and anisotropy at the same time; having IOR in between makes this less practical.
I suggest changing the order to:
| Identifier | Label |
|---|---|
| `specular_roughness` | Roughness |
| `specular_anisotropy` | Anisotropy |
| `specular_rotation` | Rotation |
| `specular_ior` | IOR |
| `specular_ior_level` | IOR level |
And likewise for coat.
Coat uses `geometry_coat_normal`, while other layers, including fuzz, use `geometry_normal`. Now that fuzz is on top of coat, this may no longer be correct.
Consider a material with a bumpy base layer, and a smooth coat layer on top that fills in the bumps. The fuzz should then have a smooth normal as well?
A solution could be to blend `geometry_normal` and `geometry_coat_normal` with `coat_weight`, and use that as the fuzz normal?
From what I can see, both of these inputs are only used as multipliers for `base_color` and `specular_color` respectively.
If that is the case, they appear to be redundant, since the same effect could be achieved by just adjusting the color inputs instead.
Currently it does, because we say:
The `specular_weight` and `specular_color` parameters modulate the Fresnel factor. ... The light transmitted through the dielectric will be compensated accordingly to preserve the energy balance (thus generating a complementary color if `specular_color` is not white).
We had a long discussion on Slack, and agreed that this is (most likely) not the behavior that is wanted. Really one wants only the specular reflection to be tinted, not the base. This is also something that we wanted to fix in the Standard Surface model. So the question for OpenPBR is how to describe that physically and unambiguously.
For a dielectric interface, it is physically unambiguous to stipulate that only the Fresnel reflection (from the top side) is tinted. This tinting is obviously ad hoc/unphysical, but harmless, as it merely multiplies one scattering mode (marked in red below) by a factor, effectively deleting some energy by an unspecified mechanism. It also makes totally clear what the effect on the light transport is, for an implementation.
Furthermore for the case of a layer of dielectric on top of a base, it is natural to interpret the specular weight as controlling the presence/coverage of the layer.
With weight `w` and color tint `C`, the resulting lobe combination would be approximated as:
w*C*fspec + (1 - w*E[fspec])*fbase
(where `E` is the reflectance). This is also how MaterialX does its layering throughput calculation.
For the base dielectric (e.g. a solid piece of glass) there is only the dielectric interface (no base), so only the Fresnel tinting applies (and we can interpret the specular weight in this case just as a multiplier of the tint), i.e. the lobe combination looks like:
w*C*fbrdf + (1 - E[fbrdf])*fbtdf
(so physically, all the regular transmission effects, e.g. TIR and Snell's window seen from below, and the refraction of the interior seen from above, are totally unaffected by the specular color or weight).
We anyway need to clarify this in the spec, to make it explicit how the specular color and weight parameters actually modify the physical configuration to achieve the desired effect.
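To make the distinction concrete, here is a scalar sketch of the two lobe combinations above (a sketch only; per-channel math and the actual BSDF evaluations are elided, and `E_spec`/`E_brdf` stand for the corresponding directional albedos):

```python
def layered_dielectric_lobe(w, C, f_spec, E_spec, f_base):
    # dielectric layer on a base: w acts as the layer presence weight,
    # C tints only the reflection; the base is dimmed by w*E_spec
    return w * C * f_spec + (1.0 - w * E_spec) * f_base

def solid_dielectric_lobe(w, C, f_brdf, E_brdf, f_btdf):
    # solid glass: only the top-side Fresnel reflection is tinted and
    # weighted; transmission (TIR, Snell's window, refraction) is untouched
    return w * C * f_brdf + (1.0 - E_brdf) * f_btdf
```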
Due to the possibility of total internal reflection at the coat/external-medium boundary, layers under the coat should be darker. See https://graphics.cs.yale.edu/sites/default/files/wet.pdf for a detailed model describing the various effects of having a water layer on top of materials. I think that interpenetration of layers is out of scope for OpenPBR Surface, but the total internal reflection should be modelled.
In the OpenPBR specification we describe the phenomenon without providing implementation details:
In the full light transport this observed color is further darkened and saturated due to multiple internal reflections from the inside of the coat, including a considerable amount of total internal reflection, which causes light to strike the underlying material multiple times and undergo more absorption. Also the observed tint color should vary away from coat_color as the incidence angle changes, due to the change in path length in the medium. The presence of a rough coat will increase the apparent roughness of the BSDF lobes of the underlying base. We generally assume that in the ground truth appearance, all these effects are accounted for.
In Standard Surface there is a dedicated parameter to simulate the darkening of layers under coat, using the following formula:
base_color = pow(base_color, 1.0 + (coat * coat_affect_color))
subsurface_color = pow(subsurface_color, 1.0 + (coat * coat_affect_color))
This approximation is simple and disabled by default in Standard Surface.
In OpenPBR we should evaluate it against ground truth, possibly find a better approximation, and decide whether we want to make the effect user controllable.
Currently thin film has a range of 0-2000 nanometers. I'd like to propose, for discussion, measuring in micrometers instead, which would give a 0-2 range, and then dividing by 2 to obtain a normalized 0-1 parameter range. This makes a more artist-friendly slider.
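As a trivial illustration of the proposed remapping (the constant and parameter name here are assumptions for this sketch, not spec values):

```python
THIN_FILM_MAX_THICKNESS_NM = 2000.0  # current upper end of the range

def thin_film_thickness_nm(normalized_thickness):
    # 0-1 slider -> 0-2 micrometers -> 0-2000 nanometers
    return normalized_thickness * THIN_FILM_MAX_THICKNESS_NM
```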
The current OpenPBR specification proposes to control the anisotropy with a `specular_anisotropy` (respectively `coat_anisotropy`) parameter, and the direction of the specular highlight elongation with a `specular_rotation` (respectively `coat_rotation`) parameter.
When those inputs are given as a texture, or more generally if some filtering is involved, the `specular_rotation` will require special care to avoid artifacts due to the discontinuity at the 360° / 0° angle. At a minimum, this requires the ability to specify a nearest-neighbor filter, but obtaining a level of quality consistent with bilinear, trilinear or anisotropic filtering involves implementing a custom filtering scheme to handle the discontinuity. Such custom filtering has to be implemented deep in the shader, requires more texture fetches, and represents a non-trivial amount of work for the developer of the renderer.
An alternative approach is to specify the direction with a flow map of 2D anisotropy-direction vectors. This parametrization is more amenable to the texture filtering implemented in most renderers and 3D hardware, does not require a special case, and is very similar to normal maps, which are widely supported.
Although it may seem that expressing an angle (1 parameter) as a vector (2 parameters) incurs an additional bandwidth cost, factoring `specular_anisotropy` into the vector norm should lead to an equivalent cost (or less, since no custom filtering with additional fetches is involved).
glTF expresses the anisotropy direction and strength as a 3-component texture, though, so the pros and cons of the two solutions would have to be weighed.
It may also seem that authoring a flow map directly is more difficult, but authoring a rotation map directly is in fact difficult as well, and better performed with a dedicated tool anyway.
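To illustrate the alternative, here is a sketch of a flow-map encoding with the anisotropy strength folded into the vector norm (the encoding details are assumptions for this sketch, not a proposal from the spec):

```python
import numpy as np

def encode_flow(rotation, anisotropy):
    # rotation in [0,1) turns; fold the anisotropy strength into the
    # vector norm, then remap [-1,1] -> [0,1] for storage in a texture
    theta = 2.0 * np.pi * rotation
    v = anisotropy * np.array([np.cos(theta), np.sin(theta)])
    return 0.5 * (v + 1.0)

def decode_flow(texel):
    v = 2.0 * np.asarray(texel) - 1.0
    anisotropy = float(np.linalg.norm(v))
    rotation = float(np.arctan2(v[1], v[0]) / (2.0 * np.pi)) % 1.0
    return rotation, anisotropy
```

Unlike a raw angle, these vectors interpolate without a wrap-around discontinuity under standard bilinear filtering.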
@brechtvl wrote:
I would suggest having a distinct non-physical `transmission_tint` parameter that works in addition to the colors to control the volume scattering and absorption.
@brechtvl wrote:
We had planned to get rid of the distinct `subsurface_color` and use just `base_color`, partially because of an implementation quirk, and partially because nearly always the same texture map was connected to both. But maybe it is common for some users to have separate texture maps for these?
@portsmouth wrote:
I suggest using a float `transmission_density` (i.e. the extinction, the inverse MFP) to control the volume extinction scale, rather than its reciprocal, `transmission_depth`. This can default to zero, meaning no volume. (It has units of inverse length, but users can just think of it as a density slider.) This density control seems more intuitive than "depth", and is also standard in heterogeneous volume rendering.
This would change the parametrization from II to III here:
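For illustration, a sketch contrasting the two parametrizations (reading `transmission_density` as replacing the reciprocal of `transmission_depth`; the exact per-channel shaping is my assumption):

```python
import numpy as np

def sigma_t_from_depth(transmission_color, transmission_depth):
    # current form: transmission_color is reached after transmission_depth
    # units of travel; requires a special case at depth == 0
    return -np.log(np.clip(transmission_color, 1e-6, 1.0)) / transmission_depth

def sigma_t_from_density(transmission_color, transmission_density):
    # proposed form: density (inverse MFP) scales the extinction directly,
    # and density == 0 naturally means "no volume"
    return transmission_density * -np.log(np.clip(transmission_color, 1e-6, 1.0))
```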
We had much discussion on Slack about how the layering (e.g. of the specular layer on the diffuse base) should be implemented in code, and described physically in the spec. We eventually came up with a proposal for a modification that clarifies and simplifies the model: to reinterpret `specular_weight` as the existing `specular_ior_level`, thus merging the two parameters and removing the latter from the model (also removing `coat_ior_level`).
We originally settled on retaining both `specular_weight` and `specular_ior_level` from the ADSK and Adobe models as a sort of compromise solution. It seems though that it would be an improvement to merge them, both from the point of view of simplifying the user experience, and of clarifying what the parameters are supposed to correspond to physically for implementers.
I try to summarize the discussion below (the background from Slack, then the proposal).
Currently in the spec, we say about `specular_weight` that it functions as follows:
The `specular_weight` and `specular_color` parameters modulate the Fresnel factor of f_dielectric. ... The light transmitted through the dielectric will be compensated accordingly to preserve the energy balance (thus generating a complementary color if `specular_color` is not white).
As discussed in #145, this interpretation of `specular_color` is problematic, since it implies there will be a complementary color. So we will need to modify this to say that `specular_color` only affects the reflection Fresnel factor, not the transmission.
But we also need to clarify what `specular_weight` does (in code, and physically). It actually doesn't make sense for `specular_weight` to only multiply the reflection Fresnel factor, since this would mean there would be darkening of the base even when `specular_weight` goes to zero. For example, in the case of the glossy-diffuse layer, this is supposed to be a layer of dielectric on top of a diffuse base. If `specular_weight` only modulates the reflection Fresnel factor, then dialing it to zero still generates darkening of the base due to the internal reflections in the layer, which are explicitly unaffected. In terms of the albedo-scaling approximation, specifying that the weight only multiplies the reflection Fresnel factor implies:
w*C*fspec + (1 - E[fspec])*fbase
where w is `specular_weight` and C is `specular_color`; note that the base-dimming factor does not involve `specular_weight` at all.
In Standard Surface, what we had instead was a formula like:
w*C*fspec + (1 - w*E[C*fspec])*fbase
which ensures that as w goes to zero, the base is not darkened.
It was proposed to alter this to (e.g. this is how MaterialX implements their dielectric layering):
w*C*fspec + (1 - w*E[fspec])*fbase
Physically, this can be interpreted as meaning that `specular_weight` is functioning as the presence weight of the dielectric layer (this interpretation leads to the formula above, in the albedo-scaling approximation, as is easy to prove).
This makes some sense for the glossy-diffuse part of the specular lobe, but not really for the subsurface and transparent base, where the dielectric base is supposed to be the semi-infinite bulk. Putting this in a statistical superposition of present and absent is physically dubious (it also was for the glossy-diffuse layer, really, as the dielectric was supposed to embed the diffuse medium, not just sit on top of it). So to use this presence interpretation of `specular_weight`, we would have to specialize it to the glossy-diffuse case only, and say something different for the subsurface/transmission (e.g. that `specular_weight` reverts to being a non-physical multiplier of the Fresnel factor).
Rethinking the issue, Peter and I propose that we can achieve the desired behaviour more simply just by having `specular_weight` function exactly as `specular_ior_level` does now, and omitting the latter parameter. That is, `specular_weight` specifies a multiplier of the reflection Fresnel factor, achieved by modulating the IOR of the entire dielectric base.
The corresponding albedo-scaling formula will be:
C*fspec' + (1 - E[fspec'])*fbase
as now the `specular_weight` dependence is folded into the Fresnel factor itself, via the modulated IOR of fspec'.
We would retain `specular_color` as a non-physical multiplier of the reflection lobe only.
This approach:

- makes the reflection Fresnel factor go to zero as `specular_weight` goes to zero;
- makes the physical description clearer in the spec, and obvious how to implement: the dielectric base is now always present (embedding the media described by the transmission, subsurface, and diffuse slabs), and `specular_weight` is just modulating the IOR of this dielectric base;
- removes a parameter which was practically redundant (as `specular_weight` and `specular_ior_level` had a very similar effect, apart from the former being less physically plausible, with a damped highlight); the more obscure-sounding `specular_ior_level` was likely to be a source of confusion to artists, especially given its extremely similar behaviour to `specular_weight`;
- retains the ability to non-physically modulate the reflection lobe color/intensity, independently of the transmission, via `specular_color` (so no functionality is actually lost).
It is optional whether we want to have `specular_weight` increase the Fresnel factor at the top end of the range (e.g. default at 0.5, and max out at 1, where the Fresnel is doubled, as the current `specular_ior_level` works) or not. It would be reasonable to just omit this, have the weight default to 1, and only decrease the Fresnel.
Additionally, we propose to remove `coat_ior_level`, as it is functionally equivalent to `coat_weight` (the presence weight of the coat) for the purposes of modulating the coat reflection strength. Removing it also simplifies implementation of the coat lobe.
@brechtvl wrote:
The way `transmission_color` and `transmission_scatter` work is, as far as I remember, not easy to control. I did some tests regarding this in the past; the main issue was that density and color are not controlled independently this way, which makes the volume harder to tweak and texture.
The parameters I used to try to solve that work as follows:
scattering_coefficient = density * transmission_scatter_color
absorption_coefficient = density * max(1 - sqrt(max(transmission_absorption_color, 0)), 0) * max(1 - transmission_scatter_color, 0)
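For reference, a runnable transcription of the quoted parametrization (a sketch; the names follow the quote above, not the spec, and the color inputs are assumed to be numpy arrays):

```python
import numpy as np

def volume_coefficients(density, transmission_scatter_color, transmission_absorption_color):
    # direct transcription of the two formulas quoted above
    scattering = density * transmission_scatter_color
    absorption = (density
                  * np.maximum(1.0 - np.sqrt(np.maximum(transmission_absorption_color, 0.0)), 0.0)
                  * np.maximum(1.0 - transmission_scatter_color, 0.0))
    return scattering, absorption
```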
In the current draft of the spec, a few parametrization references are mentioned, and we propose a new one.
We have yet to confirm which parametrization we want to use.
To help evaluate the models, here are some renders.
The roughness goes from 0 on the left to 1 on the right, and the anisotropy goes from 0 at the top to 1 at the bottom.
Currently, the `specular_color` parameter serves two purposes: it acts as a multiplier on top of the Fresnel term for the dielectric reflection, and it acts as the F82 parameter for the metallic component.
I'm not sure this is a good idea, since these two purposes seem quite different to me:
We currently write in the spec:
The glossy-diffuse slab represents a dielectric with rough GGX microfacet surface BSDF, embedding a semi-infinite bulk of extremely dense scattering material.
This is the model we want to use physically, i.e. the slab is like the infinitely dense limit of the subsurface.
But then we say:
We choose to model this concretely as a layer of dielectric "gloss" on top of an opaque slab with a diffuse BRDF:
The reason we opted to describe glossy-diffuse as layer(diffuse, gloss) is that at least it's then clear what the base color/roughness mean (i.e. the color and roughness of the Oren-Nayar base).
If you instead want to think of it as a dense subsurface inside a dielectric, it's really not clear what base color/roughness should mean as properties of the subsurface (e.g. there is no Oren-Nayar model going on any more, so what is the roughness doing? Something like a "rough volume", but that's not standard). Also, if we did that with base color being the volume albedo, say, technically the scattering would cause a color shift, requiring a remapping (like the actual subsurface, but in some infinite-density limit).
With the existing description saturation should occur for this layer too, due to the bounces within it. (And the coat would just add further saturation). That's not completely unrealistic, since even in the volumetric model some saturation will happen due to TIR and multiple scattering.
So I actually don't know what the best way to describe it is. Possibly the existing description is the best option given the currently available models. We might want to say though that we assume some remapping is done so that the supplied base color appearance is produced in some sense. How to define that so it is not incoherent I'm not sure though.
Currently we don't say much about how the model deals with the difference between rays exiting and entering the surface. This has to be handled in a renderer (at least one which tries to correctly render glass objects with a coat/fuzz, for example), so we should clarify it.
For example consider the case of a glass object, with a coat and fuzz. Rays entering from the exterior (i.e. the ambient dielectric medium) will enter through the fuzz, then the coat, then transmit into the base glass.
Rays which hit the surface from the interior of the glass (having refracted into the glass at some earlier point in the path) instead hit the bottom side of the coat, then the fuzz, then transmit into the ambient medium:
The physical effect of the layers differs in these cases. For entering rays:
While for rays which are exiting:
However, in Standard Surface (and MaterialX), we make no attempt to account for this in our albedo-scaling approximation of the layering. The Arnold implementation currently uses a rather crude approach where the normal is flipped, so the surface always thinks the ray is entering. (Then we have to use some messy logic to make nested dielectrics work despite this.)
In reality there is kind of a symmetry between entering and exiting. In both cases the ray transmits from one external dielectric medium to another, via some intervening interfaces and layers of media, which we know. A sufficiently powerful layering formalism/system should be able to compute (within some reasonable approximation) the BSDF accounting correctly for this. It is possible to write down an albedo scaling approximation of this which looks symmetric, for example (which I didn't attempt in the spec).
It is probably too much detail for us to elaborate on this in the spec (at least at this stage), but I think we should at least discuss what would need to be done to correctly model the physics.
(Note, @iliyang and I discussed this issue previously in relation to Standard Surface).
Hi. Quite excited about the work you're doing.
Reading over the white paper, I noticed that the acceptable range for color values is set to be [0,1]. Does this imply that nonconforming color spaces should be excluded?
For example, ACES 2065-1, CIE LAB/LUV and OKLAB can have negative color values. Now, perceptual color spaces might be a questionable choice here, but it seems conceivable that someone would want to define their materials with AP0 primaries.
Currently the diffuse and fuzz BSDFs are unspecified. From our point of view, these would be much better explicitly specified, as not doing so will necessarily lead to look differences between implementations.
I'd like to propose that it would be nice if the fuzz/sheen parameter were able to darken as well as lighten. Many surface appearances, such as nylon stockings, denim jeans, and silk, appear darker at glancing angles rather than lighter.
Offering this as a topic for discussion.
In section 2.5, the spec lists a series of metadata to help compatibility. Ought the spec version to be part of this? Or is there some other way to inspect which version of the spec a particular instance of an OpenPBR shader conforms to?
In addition, from reading the spec it's not clear what the different metadata mentioned are, or where they are applied. It would be great to specify all this explicitly.
I think the coat and specular IORs should be more different than they are now, as the coat masks the specular if the IORs are the same. Current defaults are coat = 1.6, spec = 1.5.
The coat IOR should probably be lower than the specular IOR by default, to further boost the spec and prevent the TIR issue (e.g. coat = 1.3, spec = 1.6).
Also note that if the coat IOR is higher than the spec IOR, then TIR will occur in the specular reflection lobe. This looks weird if you don't account for the fact that the coat sitting on top bends the rays, thus preventing the TIR; e.g. see the result boxed in red below. (NB: the current Arnold and MaterialX implementations do this.)
Adobe proposes to avoid this by inverting the IOR of the base surface, though that seems to mess with the Fresnel physics, which it would be best to avoid (though as a workaround in an implementation, maybe it's reasonable). We may want to note this problem and suggest some approaches, as otherwise a naive implementation could produce what looks like an artifact.
[Transcribing here a long previous thread of discussion, for reference].
I wrote some initial notes about the current form of the MaterialX reference implementation:
materialx_openpbr_commentary.pdf
To summarize, I was concerned that the way the physical layering is expressed in MaterialX doesn't quite capture the intent of the OpenPBR model. The main issue is that the layer operations don't explicitly account for the presence of the volumetric medium inside the layer.
If we look in detail at the sheen, coat, and specular layers, there are problems representing each with the current MaterialX layer node:
Coat
In standard surface, this is represented as (changing names slightly for clarity):
coat_lobe = coat * coat_brdf(...) + lerp(white, coat_color * (1 - reflectance(coat_brdf)), coat) * base
This is actually supposed to represent the coat as an "intermittent" layer on top of the base, where the coat weight is just the presence/coverage weight of the coat, since the above unpacks as:
coat_lobe = (1-coat)*base + coat*coated_base
where
coated_base = coat_brdf + coat_color*(1 - reflectance(coat_brdf))*base
which is like the albedo-scaling top + base*(1-reflectance(top)) form for the coat+base layer. In other words, coat_lobe is a statistical mix between the uncoated base and the coated base.
In OpenPBR we just write that as
layer(base-substrate, coat, coat_weight)
where base-substrate and coat are slabs of material, and `coat_weight` is the coat presence weight (i.e. the fraction of the surface which is coated).
The `coat_color` here approximates the effect of the volumetric absorption in the coat layer. In OpenPBR we say that the color should be the "observed tint color of the underlying base at normal incidence", after accounting for all the light transport effects. In Standard Surface, the approximation of this is just the multiplication of the base by the `coat_color` tint.
In MaterialX, the coat is represented as:
coat_layer = layer(top = coat_bsdf,
base = thin_film_layer * (coat_color*coat + (1-coat)))
where coat_bsdf has a "weight" parameter equal to coat, which presumably is just multiplied into the BSDF.
First, the operation "layer BSDF A on top of BSDF B" doesn't strictly make sense to me as a physical operation, as BSDFs are not physical things you can layer. In OpenPBR we are careful to define the layering as placing a slab of material, which is the combination of (interface, medium), on top of another such slab. It doesn't make as much sense physically to talk about layering one BSDF on top of another, unless that is just a shorthand for the approximate albedo scaling combination of the BSDFs.
Then the way the volumetric absorption of the coat is accounted for in this formula, i.e. the (coat_color*coat + (1-coat)) factor, is rather artificial; it basically assumes the albedo-scaling approximation is being used. Also, the presence weight of the coat layer is rolled into the coat BSDF as a multiplicative weight, which seems artificial as well, since BSDFs don't generically have a multiplicative "weight" factor.
Sheen
In standard surface, this is written down as the combination:
sheen_layer = sheen * sheen_color * sheen_brdf(...) + (1 - sheen * reflectance(sheen_brdf)) * base_mix
This looks similar to the usual base*(1-reflectance(top)) + top albedo-scaling approximation, except it isn't quite of that form, since top = sheen * sheen_color but the reflectance term doesn't include sheen_color. In fact, this specific combination is supposed to represent reflecting fibres/flakes which produce a colored reflection but do not tint the base. Regular albedo scaling can't do that: if the top is colored, it will produce a complementary color tint in the base lobe. Really this combination is supposed to be some loose approximation of a microflake volume with colored flakes but gray transmittance.
In OpenPBR we said that:
any light not reflected after multiple scattering is assumed to transmit to the lower layers (because the microflake volume has gray extinction, the transmitted light will not be tinted by the fuzz)
And suggest the explicit layer combination (matching standard surface):
To represent this as a MaterialX layer operator would then require some generalization, e.g. perhaps a Boolean to specify "whether the base layer is tinted by the complementary color of the coat layer BRDF".
Specular
The specular lobe in MaterialX is represented as:
specular_layer = layer(top = specular_bsdf,
base = transmission_mix)
transmission_mix = mix(fg = transmission_bsdf,
bg = opaque-base,
mix = transmission)
but (in OpenPBR anyway) this is supposed to represent a dielectric interface (where `specular_bsdf` is the BRDF, and `transmission_bsdf` is the BTDF). It's not physical in general to think of this as an actual layer; the form above only really makes sense in the albedo-scaling approximation, where it is just roughly approximating the balance of energy between the dielectric lobes. If people took this layer seriously as a physical description, it would be unclear what it means except as a shorthand for albedo scaling, which defeats the purpose of trying to define a layering operation as something more general than albedo scaling. I think this specular reflection lobe should actually be written explicitly as the sum of BRDF and BTDF lobes, not artificially as a layer operation.
Overall, I think it's quite difficult to map the formal layer/mix structure in OpenPBR into corresponding abstract layer operations that produce a well-defined implementation. In my view it's much easier to work with something like the Standard Surface form (or the analogous form of OpenPBR), where one is just evaluating a mixture of closures/lobes with well-defined mixture weights, explicitly implementing a certain (actually perfectly acceptable, for VFX) approximation. Then there is no need to figure out how to carefully craft the layer API so that implementers can reproduce the mixture model you intend: you just give them the mixture model.
Alternatively, if we must go the route of using layer operations, this needs to be generalized appropriately, though as described it would probably have to be designed quite carefully in order for implementers to be able to make sense of it. (Or you tell implementers how to do this in a reference implementation, which then probably just has the form of the mixture model, which makes the intermediate layer description a bit redundant).
My thought about how we could possibly generalize the layer operation to do what is needed is to allow that:
The current spec draft points to the method used for implementing thin-film interference effects. However, that paper and its reference implementation assume that the physically correct dielectric and conductive Fresnel terms are used, so that the optical phase shift can be computed.
However, OpenPBR uses the F82-tint model for metals, which directly computes the reflectivity without going through computing a complex IOR and then using it in the conductive Fresnel term.
Therefore, it might be helpful to include a note in the spec on how to reconcile this and how to compute/estimate the phase shift when implementing the thin-film component.
Do we need to allow IORs to drop below 1? Physically this is implausible; though I know it can be useful for certain hacks to fake the IOR ratio being < 1, it should not really be done that way.
When implementing raytraced subsurface scattering, the look of the material can vary significantly based on how the entry/exit bounce at the layer interface is handled.
The draft spec currently doesn't explicitly mention how this is to be handled, but there are some references, e.g. mentioning that the albedo mapping may depend on the interface IOR.
Is this something that should be specified, or is it to be left as an implementation detail?
The approaches that I am aware of are:
Additionally, there is the question whether IOR and roughness values should be reused from the specular reflection lobe or specified separately.
Personally, if this is to be specified, I'd argue in favor of the "refractive entry, lambertian exit, reuse specular interface parameters" approach.
In the emissive section it says "emissive properties are specified in photometric units", but then doesn't say which units they're actually specified in. Assuming this means nits, that should be stated explicitly.
The spec currently mentions that if `specular_color` is used to affect the color of the specular reflection, the underlying layers (diffuse/SSS/transmission) should be tinted with the complementary color to preserve energy.
This makes sense from a physical perspective. However, from what I can tell, these parameters are intended for artistic control rather than being physically motivated. Therefore, I'd argue that affecting the lower layers is unintuitive and unexpected: for example, if a user wants to achieve a white object with a green specular highlight, they'd need to set the base_color to something like (0.96, 1.0, 0.96). Instead, I'd suggest scaling the lower layers' intensity according to the maximum value across components, in order to preserve energy without tinting them.
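A sketch of the suggested energy compensation (the helper is hypothetical; `E_spec` stands for the specular reflectance/directional albedo):

```python
def lower_layer_scale(specular_weight, specular_color, E_spec):
    # Dim the lower layers by a single scalar derived from the maximum
    # tinted reflectance, preserving energy without a complementary tint.
    return 1.0 - specular_weight * E_spec * max(specular_color)
```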
I'm not yet sure of the cause of this visual discontinuity, whether it's in the MaterialX graph for OpenPBR or perhaps in the real-time approximation of the MaterialX Physically Based Shading nodes, so I wanted to report it here for discussion.
When `transmission_depth` is exactly zero, the `transmission_color` input has an intuitive visual effect in real-time OpenPBR renders, but this effect disappears entirely when `transmission_depth` is set to any non-zero value, creating a visual discontinuity as the artist moves the slider.
This issue can be seen most easily in the open_pbr_honey.mtlx example material, linked here:
https://academysoftwarefoundation.github.io/MaterialX/?file=Materials/Examples/OpenPbr/open_pbr_honey.mtlx
If you open the Property Editor and set `transmission_depth` to exactly zero, the honey color of the transmission component becomes visible, but it disappears again when `transmission_depth` is set to a value slightly above zero.
But the Oren-Nayar model actually involves a parameter σ, a slope-area variance, which is not the same as our [0,1] roughness parameter.
I suggest we follow the approach from Mitsuba here:
```cpp
/* Conversion from Beckmann-style RMS roughness to
   Oren-Nayar-style slope-area variance. The factor
   of 1/sqrt(2) was found to be a perfect fit up
   to extreme roughness values (>.5), after which
   the match is not as good anymore */
const Float conversionFactor = 1 / std::sqrt((Float) 2);
Float sigma = m_alpha->eval(bRec.its).average() * conversionFactor;
```
So, to be precise, we take σ = base_roughness / √2, where `base_roughness` is in [0,1].
Purely a VFX bias here, but we generally initialise colours to the value of "middle grey", which, as a result of our logarithmic perception, ends up being 0.18.
Was 0.8 chosen because, if someone sets metallic to 1.0 with default values elsewhere, this results in a plausible metal?
Currently, the only way to affect the color of the dielectric reflection lobe is `specular_color`, which affects all angles equally.
However, for compatibility with other material models (e.g. the classic Disney BSDF, or glTF's KHR_materials_specular), it would be useful to have a parameter that affects only the normal-incidence reflectivity while leaving the grazing angles as they are.
More specifically, in a Schlick-style Fresnel term, this parameter would only affect F0, not F90. This can be implemented in combination with the proper dielectric Fresnel term by computing real_F0 from the IOR, then computing the Fresnel value using the dielectric term, and then remapping from the real_F0 .. 1.0 range to F0 .. F90.
@brechtvl wrote:
How to have a specular IOR that both takes values outside the 0..1 range and still allows texturing is something we struggled with as well. specular_ior_level is an interesting solution. I find the term "level" a bit unclear; I'd maybe call it specular_ior_texture to clarify its purpose, though that could be confusing in other ways.
While trying out OpenPBR, I noticed that there are some errors in the MaterialX reference implementation:

- `transmission_dispersion` should be removed.
- `transmission_dispersion_abbe_number` is missing.
- `transmission_dispersion_scale` is missing.
- `transmission_scatter_anisotropy` has `uimin` set to 0; this should be -1 so that it can allow for back scattering.