
25 June, 2014

Oh envmap lighting, how do we get you wrong? Let me count the ways...

Environment map lighting via prefiltered cubemaps is very popular in realtime CG.

The basics are well known:
  1. Generate a cubemap of your environment radiance (a probe), either offline or in realtime.
  2. Blur it with a cosine hemisphere kernel for diffuse lighting (irradiance) and with a number of Phong lobes of varying exponent for specular. The various Phong convolutions are stored in the mip chain of the cubemap, with the rougher (lower) exponents placed in the coarser mips.
  3. At runtime we fetch the diffuse cube using the surface normal and the specular cube using the reflection vector, forcing the latter fetch to happen at a mip corresponding to the material roughness.
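In code, step 3 boils down to two fetches per pixel. A minimal sketch, in Python used as pseudocode: the two probes are passed in as callables standing in for the texCUBE / texCUBElod fetches a shader would do, and the linear roughness-to-mip mapping is just one possible choice, not a recommendation.

```python
import numpy as np

def reflect(v, n):
    """Mirror the (unit) view vector V around the (unit) surface normal N."""
    return 2.0 * np.dot(n, v) * n - v

def env_lighting(n, v, albedo, spec_color, roughness,
                 diffuse_probe, specular_probe, num_mips):
    """diffuse_probe(direction) and specular_probe(direction, mip) are placeholders
    for the cubemap fetches a shader would do with texCUBE / texCUBElod."""
    # Diffuse: cosine-prefiltered irradiance cube, fetched along the normal.
    diffuse = albedo * diffuse_probe(n)
    # Specular: prefiltered mip chain, fetched along the reflection vector
    # at a mip chosen from the material roughness.
    r = reflect(v, n)
    mip = roughness * (num_mips - 1)
    specular = spec_color * specular_probe(r, mip)
    return diffuse + specular
```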
Many engines stop at that, but a few extensions emerged (somewhat) recently: "localizing" probes by warping the cubemap onto a proxy volume (e.g. a box) placed in the scene, renormalizing them against local lighting, and correcting the prefiltered specular so it accounts for the full Cook-Torrance BRDF rather than just its distribution term.
Especially the last extension allowed a huge leap in quality and applicability; it's so nifty it's worth explaining for a second.

The problem with Cook-Torrance BRDFs is that they are the product of three functions: a distribution function that depends on N.H, a shadowing function that depends on N.H, N.L and N.V, and a Fresnel function that depends on N.V.

While we know we can somehow handle functions of N.H by fetching a prefiltered cube in the reflection direction (not really the same thing, but the same kind of difference there is between the Phong and Blinn specular models), anything that depends on N.V would add another dimension to the preintegrated solution (requiring an array of cubemaps), and we wouldn't know what to do with N.L at all, as we don't have a single light vector in environment lighting.

The cleverness of the solution that was found can be explained by observing the BRDF and how its shape changes when manipulating the Fresnel and shadowing components.
You should notice that the BRDF shape, and thus the filtering kernel on the environment map, is mostly determined by the distribution function, which we know how to tackle. The other two components don't change the shape much, but they scale it and "shift" it away from the H vector.

So we can imagine an approximation that integrates the distribution function with a preconvolved cubemap mip pyramid, while the other components are relegated to a scaling factor by preintegrating them against an all-white cubemap, ignoring how the lighting is actually distributed.
And this is the main extension we employ today: we correct the cubemap that has been preintegrated only with the distribution lobe with a (very clever) biasing factor.
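To make the idea concrete, here is a minimal NumPy sketch of that preintegration against an all-white environment and of the runtime combine. It assumes a GGX distribution, a Schlick-GGX shadowing term and Schlick's Fresnel; the function names, the roughness-to-k remapping and the table layout are my choices for illustration, not a quote of any particular engine's code.

```python
import numpy as np

def radical_inverse(i):
    """Van der Corput sequence, base 2 (to build a Hammersley sample set)."""
    result, f = 0.0, 0.5
    while i:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def importance_sample_ggx(xi1, xi2, roughness):
    """Map two uniform numbers to a GGX-distributed half-vector around N = +Z."""
    a = roughness * roughness
    phi = 2.0 * np.pi * xi1
    cos_theta = np.sqrt((1.0 - xi2) / (1.0 + (a * a - 1.0) * xi2))
    sin_theta = np.sqrt(1.0 - cos_theta * cos_theta)
    return np.array([sin_theta * np.cos(phi), sin_theta * np.sin(phi), cos_theta])

def g_smith(n_dot_v, n_dot_l, roughness):
    """Schlick-GGX Smith shadowing; the exact 'k' remapping varies between engines."""
    k = roughness * roughness / 2.0
    g_v = n_dot_v / (n_dot_v * (1.0 - k) + k)
    g_l = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return g_v * g_l

def integrate_env_brdf(n_dot_v, roughness, num_samples=1024):
    """Integrate Fresnel and shadowing against an all-white environment.
    Returns (scale, bias) so that: specular ~= prefiltered_fetch * (F0 * scale + bias).
    n_dot_v must be in (0, 1]."""
    v = np.array([np.sqrt(1.0 - n_dot_v * n_dot_v), 0.0, n_dot_v])  # N is +Z
    scale = bias = 0.0
    for i in range(num_samples):
        h = importance_sample_ggx((i + 0.5) / num_samples, radical_inverse(i), roughness)
        l = 2.0 * np.dot(v, h) * h - v               # light direction: V mirrored around H
        n_dot_l, n_dot_h, v_dot_h = l[2], h[2], np.dot(v, h)
        if n_dot_l > 0.0:
            g_vis = g_smith(n_dot_v, n_dot_l, roughness) * v_dot_h / (n_dot_h * n_dot_v)
            fc = (1.0 - v_dot_h) ** 5                # Schlick Fresnel weight
            scale += (1.0 - fc) * g_vis
            bias += fc * g_vis
    return scale / num_samples, bias / num_samples

def env_specular(prefiltered_fetch, f0, n_dot_v, roughness):
    """Runtime combine: the distribution term lives in the prefiltered mip chain,
    Fresnel and shadowing live in a precomputed 2D (n_dot_v, roughness) table."""
    scale, bias = integrate_env_brdf(n_dot_v, roughness)  # in practice: a 2D LUT fetch
    return prefiltered_fetch * (f0 * scale + bias)
```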

All good, and it works, but now: is all this -right-? Obviously not! I won't offer solutions here (just yet), but can you count the ways we're wrong?
  1. First and foremost the reflection vector is not the half-vector, obviously.
    • The preconvolved BRDF expresses a radially symmetric lobe around the reflection vector, but a half-vector BRDF is not radially symmetric at grazing angles (when H!=N): it becomes stretched.
    • It also differs from its reflection-vector based counterpart when R=H=N, but there it can be adjusted with a simple constant roughness modification, e.g. the classic rule of thumb that a Blinn-Phong exponent is roughly four times the equivalent Phong exponent (just remember to do it!).
  2. As we said, Cook-Torrance is not based only on a half-vector lobe.
    • We have a solution that works well, but it's based only on a bias; while that accounts for the biggest difference between using only the distribution and using the full CT formulation, it's not the only difference.
    • Fresnel and shadowing also "push" the BRDF lobe so it doesn't reach its peak value in the reflection direction.
  3. If we bake lighting from points close enough that perspective matters, then discarding position dependence is wrong. 
    • It's true that it's perceptually hard for us to judge where lighting comes from when we see a specular highlight (good!), but for reflections of nearby objects the error can be easy to spot.
    • We can employ warping as we mentioned, but then the preconvolution is warped as well.
    • If for example we warp the cubemap by considering it as representing light coming from a box placed in the scene, what we should do is trace the BRDF against the box and see how it projects onto it. That projection won't be a radially symmetric filtering kernel in most cases (a sketch of such a box warp is shown after this list).
    • In the "box" localized environment map scenario the problem is closely related to texture card area lights.
  4. We disregard occlusions.
    • Any form of shadowing of the preconvolved environment lighting that just scales it down is wrong, as occlusion should happen before prefiltering.
    • Still, -DO- shadow environment map lighting somehow. A good way is to use screen-space (or voxel-traced) occlusion, casting a cone around the reflection vector (even if the cone size doesn't account for roughness), or to precompute and bake some form of directional occlusion information.
    • Really this is still due to the fact that we use the envmap information at a point that is not the one from which it was baked.
    • Another good alternative to try to fix this issue is renormalization as shown by Call of Duty.
  5. We don't clip the specular lobe to the normal-oriented hemisphere.
    • So, even for purely radially symmetric BRDFs around the reflection vector (Phong), in an environment without occlusion, the approximations are not correct.
    • Not clipping is similar to the issue we have integrating area lights (where we should clip the area light when it dips below the surface horizon, but for the most part we don't).
    • This is expected to have a Fresnel-like effect: we are messing with the grazing angles.
    • A possible correction would be to skew the reflection vector away from the edges of the hemisphere, and shrink it (fit it to the clipped lobe); a sketch of this kind of skew is shown after this list.
  6. We disregard surface normal variance.
    • Forcing a given miplevel (texCubeLod) is needed as mips in our case represent different lobes at different roughnesses, but that means we don't antialias that texture considering how normals change inside the footprint of a pixel (note: some HW gets that wrong even with regular texCube fetches)
    • The solution here is "simple" as it's related to the specular antialiasing we do by pushing normal variance into specular roughness (a sketch of that adjustment is shown after this list).
    • But that line of thought, no matter the details, is also provably wrong (still -do- it). The problem is closely related to the "roughness modification" solution for spherical area lights and it suffers from the same issue: the proper integral of the BRDF with a cone of normals is flatter than what we get at any roughness of the original BRDF.
    • Also, the footprint of the normals won't be a cone with a circular base, and even what we get with the finite difference ddx/ddy approximation would be elliptical.
  7. Bonus: compression issues for cubemaps and DX9 hardware.
    • Older hardware couldn't properly do bilinear filtering across cubemap edges, leading to visible artifacts that some corrected by making sure the edge texels were the same across faces.
    • What most don't consider, though, is that if we use a block-compression format on the cubemap (DXT, BCn and so on) there will be discontinuities between blocks, which will make the edge texels different again. Compressors in these cases should be modified so the edge blocks share the same reference colors.
    • Adding borders is better.
    • These techniques are relevant also for hardware that does bilinear filtering across cubemap edges, as that might be slower... Also, avoid using the very bottom mips...
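For point 3, this is what a "box" localization (box projection, a.k.a. parallax-corrected cubemaps) looks like in practice: a minimal NumPy sketch that only warps the fetch direction, which is exactly why the preconvolution ends up warped too. Names are mine; it assumes the shaded point lies inside an axis-aligned proxy box and that the reflection direction is normalized.

```python
import numpy as np

def box_warped_direction(reflection_dir, surface_pos, box_min, box_max, probe_pos):
    """Pretend the probe's lighting comes from the surface of an axis-aligned box:
    intersect the reflection ray with the box and refetch the cube from the probe
    center toward the hit point. Assumes surface_pos is inside [box_min, box_max]."""
    d = np.where(np.abs(reflection_dir) < 1e-6, 1e-6, reflection_dir)  # avoid /0
    # Per-axis distance to the exit planes of the box, then the first exit point.
    t_exit = np.maximum((box_max - surface_pos) / d, (box_min - surface_pos) / d)
    t = np.min(t_exit)
    hit = surface_pos + reflection_dir * t
    warped = hit - probe_pos          # new fetch direction, from the probe's bake point
    return warped / np.linalg.norm(warped)
```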
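For point 5, one cheap correction of the sort mentioned in its last bullet is to bend the fetch direction from R toward N as roughness grows, so a rough fetch stops pointing at (or past) the horizon. The blend factor below is a made-up placeholder purely to illustrate the shape of the fix; engines that do this use their own fitted curves.

```python
import numpy as np

def bend_reflection_toward_normal(n, r, roughness, strength=0.8):
    """Illustrative only: lerp the (unit) reflection vector toward the (unit) normal
    as roughness grows, then renormalize. 'strength' is an arbitrary placeholder."""
    t = strength * roughness
    bent = (1.0 - t) * r + t * n
    return bent / np.linalg.norm(bent)
```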
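And for point 6, a sketch of the "push normal variance into roughness" idea, Toksvig-style: the length of the averaged (mip-filtered) normal measures how much the normals vary inside the footprint, and a shorter average widens the effective lobe. The exponent-to-roughness mapping used here is one common convention, not the only one, and as said above the result is only an approximation of the true footprint integral.

```python
import math

def toksvig_adjusted_alpha(avg_normal_len, alpha):
    """avg_normal_len: length of the averaged normal in the pixel footprint, in (0, 1].
    alpha: the NDF roughness. Returns a widened alpha accounting for normal variance."""
    # NDF roughness -> Blinn-Phong-like exponent (one common mapping: s = 2/a^2 - 2).
    s = 2.0 / max(alpha * alpha, 1e-4) - 2.0
    # Toksvig factor: shorter averaged normals shrink the effective exponent.
    ft = avg_normal_len / (avg_normal_len + s * (1.0 - avg_normal_len))
    s_adjusted = max(ft * s, 1e-4)
    # Back to NDF roughness.
    return math.sqrt(2.0 / (s_adjusted + 2.0))
```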
I'll close with some links that might inspire further thinking:
#physicallybasedrenderingproblems #naturesucks

6 comments:

Anonymous said...

We precompute a 3D texture parameterized by V.N, roughness, and "F0" (spec intensity at normal incidence) which encodes the direction, spread, and intensity of the incidence lobe for our BRDF, and use that to sample the envmap. The spread is stored as an oblong gaussian with major axis assumed to be in the plane of N and V. Works pretty well.

PS said...

You sample a 3D texture to avoid a dot product? Are they really *that* expensive? Or am I being too thick or noobish here?

DEADC0DE said...

I think he was saying that he samples a 3d texture to know how to sample the prefiltered envmap.

The 3d texture stores some parameters that tell what the BRDF looks like, or in other words which prefiltered mip level of the envmap represents the BRDF best.

That is because, as I wrote in the article, even if you do the state-of-the-art Cook-Torrance D/FG split (see Brian Karis' Unreal Engine 4 presentation; all the links are in the article) you still commit certain errors.

Anonymous said...

Right, the 3D texture encodes the best-fit specular lobe to integrate the envmap with.

Anonymous said...

English is my second language so please feel free to correct me.

When we convolve a cubemap with a diffuse or specular convolution, shouldn't we convolve each texel differently depending on the material of the surface that texel is coming from? Say one cubemap texel is metal and another is dielectric?

Something tells me we can ignore the difference in materials if we create our cubemap from a point that's far enough from surfaces and we use correct parameters when rendering objects into the source cubemap, but I'm still not sure how correct diffuse and specular convolutions of an arbitrary cubemap are.

Am I missing something?

DEADC0DE said...

Anonym: sorry for the late reply!

You are right. The standard solution for that problem is to use mipmaps for specular (and maybe a second cubemap or SH or something else for diffuse).

In the specular cubemap, we convolve each mip with a different specular lobe, observing that as the lobes get wider, we need less resolution.

see https://github.com/dariomanesku/cmftStudio