Afterlights
Sunday, September 6, 2009 at 6:25PM
n00body in Deferred Shading

UPDATE:
For a good example of this type of renderer in motion, check out my Cubism Demo video and/or download the demo source+executable from the "Projects" section of the website. ;)

Original:
In my project, I have been grappling with all the headaches that accompany trying to develop a lighting pipeline. Which is more important: a wide variety of materials, or the ability to have any number and combination of lights? Is a more general lighting system worth the complexity of managing shader variations?

Forward rendering was a scary prospect, due to the risk of combinatorial shader explosions and the limited number of lights on screen. Deferred rendering, meanwhile, was unappealing due to its material limitations and its tendency to eat VRAM. Then there were crazier systems, like projecting lights onto an SH basis per object to get multiple lights in one pass, but those offer limited support for point lights and basically no support for spot lights.

In the end, I found myself leaning towards a hybrid of forward and deferred shading, to avoid dealing with the problems of either in isolation. The question then became: what kind of deferred shading should I choose?

Recently, I came across the concept of Afterlights, a form of deferred shading developed by Realtime Worlds and first introduced in the game Crackdown (ShaderX7, Section 2.6). Essentially, they tackle the issue of the fat G-Buffer by looking at which buffers are already needed by multiple systems, and adding only the smallest amount of extra data necessary to perform deferred shading.

First, they assume that most modern graphics pipelines will already have per-pixel depth/normals available for various deferred/post-processing effects (fog, soft particles, SSAO, outlines, etc.). So why not share that data between the different stages of the pipeline? This way, the cost is already justified, and we have most of what is needed to perform the actual lighting.

The problem then becomes how to handle material data. This is the harder part, since this data is only really used by the deferred shading system. To make matters worse, it tends to take up the most memory in the G-Buffer. So how much material data is really necessary? What is the smallest amount we need to perform deferred shading? Where will it be stored? How will it be accessed?

To answer these questions, we need to look at the other buffers that are available to be repurposed. Often, the RTs in a G-Buffer aren't fully utilized, with more than a few channels going to waste. The catch for the light buffer is that its channels can't be read in a shader while the application is rendering to it. In that situation, the only way to use them is via hardware blending. This is where Afterlights come into the picture.

 

Approach:

The way Afterlights work is that during the G-Buffer phase, we obtain the luminance of each surface's albedo texture and store it in the free alpha channel of the light buffer. Then, during the light accumulation phase, we set the blend function to (DST_ALPHA, ONE) for color and (ZERO, ONE) for alpha. This configuration effectively modulates each light's contribution by the per-pixel albedo luminance before adding it to the light buffer. Thus, lights are darkened in a way that roughly corresponds to the actual albedo.
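To make the blend setup concrete, here is a minimal sketch of how the light accumulation state might be configured. The article isn't tied to a particular API; OpenGL is my assumption here, and the wrapper function name is mine.

```cpp
// Minimal sketch of the Afterlights accumulation blend state (assumption: an
// OpenGL renderer; the technique itself is API-agnostic). The G-Buffer pass has
// already written the albedo luminance into the light buffer's alpha channel.
#include <GL/gl.h>   // in practice a loader such as GLEW/GLAD supplies these entry points

void SetAfterlightBlendState()
{
    glEnable(GL_BLEND);

    // Color:  dst.rgb = src.rgb * dst.a  +  dst.rgb * 1
    //   Each light's contribution is scaled by the stored luminance (DST_ALPHA)
    //   before being added (ONE) to the accumulated lighting.
    // Alpha:  dst.a   = src.a * 0  +  dst.a * 1
    //   The stored luminance is left untouched for subsequent lights.
    glBlendFuncSeparate(GL_DST_ALPHA, GL_ONE, GL_ZERO, GL_ONE);
}
```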

Conversion:

Figure 1. 1, Albedo; 2, Luminance
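For reference, the albedo-to-luminance conversion is just a weighted sum of the RGB channels. The article doesn't state which weights were used; the Rec. 709 luma weights below are my assumption.

```cpp
// Possible luminance conversion for the G-Buffer pass (assumption: Rec. 709
// luma weights; any reasonable grayscale weighting would work here).
float AlbedoLuminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}
```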

 

G-Buffer:

Figure 2. 1, View-space normals; 2, Linear view-space depth [enhanced]; 3, Light buffer [Forward + Emissive]; 4, Luminance

 

Lighting:

Figure 3. 1, Light buffer [Hemisphere light]; 2, [Afterlights]; 3, [Hemisphere light + Afterlights]; 4, [Standard Deferred]; 5, [Hemisphere light + Standard Deferred]; 6, Difference

 

Properties:

The biggest advantage of this technique is how it utilizes the depth/normal buffers that are already shared between multiple systems. On top of that, the only data specific to this system lives in a shared RT channel that would normally go to waste. Finally, it integrates nicely with hardware blending, not only avoiding extra complexity in the lighting shaders, but even simplifying them in the process.

Sadly, it has many disadvantages as well. Since it only stores the luminance of the albedo, it can only darken each light. This has the consequence of making it look like you are rendering with black-and-white textures. Then there is the issue that lights which would normally be absorbed by the surface still show up, and end up looking a bit out of place. Finally, it can't provide specular lighting, since it only encodes information about each surface's albedo.

Probably the biggest issue is one of implementation. Specifically, the light buffer has to have an alpha channel for this approach to work. That can be a problem if you want HDR, as it makes an fp16 RT the only real option.

 

Solutions:

To deal with the desaturated look, you need some kind of pre-existing scheme for providing global illumination during the G-Buffer filling phase. For my examples, I used a hemisphere light, but you could use more sophisticated approaches (SH, ambient cube, rim lighting, etc.). Even then, if several lights pile up on a surface, the material will start to desaturate. So it is best to spread them out and avoid having too many lights influencing one area.
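For illustration, here is a rough sketch of the kind of hemisphere term I mean. This is the textbook blend-by-normal form; the colors, space, and exact parameters used in my demo are not specified here.

```cpp
// Minimal hemisphere-light sketch (assumptions: Y-up world space, normalized
// world-space normal, simple sky/ground blend; demo parameters may differ).
struct Vec3 { float x, y, z; };

Vec3 HemisphereAmbient(const Vec3& normal,       // normalized world-space normal
                       const Vec3& skyColor,     // light arriving from above
                       const Vec3& groundColor)  // light arriving from below
{
    // Map normal.y from [-1, 1] to [0, 1]: up-facing surfaces receive the sky
    // color, down-facing surfaces receive the ground color.
    float t = normal.y * 0.5f + 0.5f;
    return Vec3{ groundColor.x + (skyColor.x - groundColor.x) * t,
                 groundColor.y + (skyColor.y - groundColor.y) * t,
                 groundColor.z + (skyColor.z - groundColor.z) * t };
}
```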

For material variety, you could employ directional lights which are forward-rendered during the G-Buffer filling phase. These would allow specular effects and illumination modulated by the actual albedo. Generally speaking, it is better to render them this way, since they affect all objects and would be expensive to render in the deferred phase (texture samples, fill rate). Besides, they wouldn't introduce too many shader combinations, since they would all be the same light type and would only need the most basic parameters (direction, color, intensity).
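As a rough sketch of what such a forward directional-light term could look like when evaluated during the G-Buffer fill (Blinn-Phong specular is my assumption; the technique doesn't prescribe a particular lighting model):

```cpp
// Forward directional light evaluated during the G-Buffer fill (sketch only;
// assumption: Blinn-Phong specular, all vectors normalized and in one space).
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 DirectionalLight(const Vec3& albedo, const Vec3& normal, const Vec3& toLight,
                      const Vec3& toEye, const Vec3& lightColor, float shininess)
{
    float nDotL = std::max(Dot(normal, toLight), 0.0f);

    // Half vector for the Blinn-Phong specular term.
    Vec3 h{ toLight.x + toEye.x, toLight.y + toEye.y, toLight.z + toEye.z };
    float len = std::sqrt(Dot(h, h));
    if (len > 0.0f) { h.x /= len; h.y /= len; h.z /= len; }
    float spec = std::pow(std::max(Dot(normal, h), 0.0f), shininess);

    // Diffuse is modulated by the full-color albedo, not just its luminance.
    return Vec3{ (albedo.x * nDotL + spec) * lightColor.x,
                 (albedo.y * nDotL + spec) * lightColor.y,
                 (albedo.z * nDotL + spec) * lightColor.z };
}
```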

As for the alpha-channel requirement, you might be able to get around it if you use an RGBA8 target for your normal buffer and have an unused channel. Then you could store the luminance in that free channel, sample it in the light shader, and perform the multiplication yourself.
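A minimal sketch of that manual fallback, assuming the luminance ended up in the normal RT's spare channel (the names below are illustrative, not from the original technique):

```cpp
// Do in the light shader what (DST_ALPHA, ONE) blending would have done for us,
// since an fp16 light buffer without alpha can't supply the luminance itself.
struct Color3 { float r, g, b; };

Color3 ModulateLightByLuminance(const Color3& lightContribution, // this light's diffuse term
                                float packedLuminance)           // sampled from the normal RT's free channel
{
    // The result is then accumulated into the light buffer with plain additive
    // (ONE, ONE) blending.
    return Color3{ lightContribution.r * packedLuminance,
                   lightContribution.g * packedLuminance,
                   lightContribution.b * packedLuminance };
}
```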

 

Final Thoughts:

Bottom line, Afterlights work best when used for decorations and effects. If you want something that will illuminate scenes and provide material variety, then you will be sorely disappointed.

After having used them in my own projects, I can safely say they are a viable option for including deferred shading in any renderer. Depending on what you plan to do, they may be a valid option for your projects as well.

On a final note, the demo project I mentioned in the PDN article will make use of this approach for lighting the different normal map variations. I promise to release said project with source code in the not too distant future. Here's hoping you will find it useful.
