Entries in Deferred Shading (10)

Sunday
Oct 10, 2010

Deferred Physical Shading

This year's SIGGRAPH had an interesting course covering physically-based BRDFs, which offered some sobering insights into the problems of traditional realtime lighting models: in particular, how current models not only produce less believable results, but force artists to waste time and effort compensating for their deficiencies. Naturally, the presenters explained how to derive lighting models that behave more realistically, and offered some insights into the behavior of real-world materials.

 

Links:

[1] SIGGRAPH '10: Physically-Based Shading Models in Films and Games

[2] Practical Implementation of Physically-Based Shading Models

[3] Crafting Physically Motivated Shading Models for Games

 

Implications for Deferred Shading:

The difference that concerns me most is that physically-based BRDFs rely on Fresnel terms for both their diffuse and specular calculations, and those Fresnel terms require the per-light light direction and half-angle vectors. So in a deferred renderer, each material's Fresnel coefficients must be stored in the G-Buffer so that they are available during the lighting phase. Also, for materials that use per-RGB Fresnel coefficients, I would need to be able to accumulate colored specular lighting.
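
To make the data dependency concrete, here is a minimal C++ sketch of Schlick's Fresnel approximation (my own illustration, not code from the course notes). The reflectance at normal incidence, F0, is a material constant that can live in the G-Buffer, while the half-angle vector H depends on each light's direction and can only be computed during the lighting phase:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // F0 is a per-material constant (one per channel for per-RGB Fresnel),
    // so it belongs in the G-Buffer.
    float fresnelSchlick(float f0, float cosTheta) {
        return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
    }

    // H = normalize(L + V) changes with every light, so this term can only
    // be evaluated during the lighting phase, never precomputed per-pixel.
    float specularFresnel(float f0, const Vec3& lightDir, const Vec3& viewDir) {
        Vec3 h = normalize({ lightDir.x + viewDir.x,
                             lightDir.y + viewDir.y,
                             lightDir.z + viewDir.z });
        return fresnelSchlick(f0, dot(h, viewDir));
    }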

 

These conditions present problems for integrating such a system into a Light Pre-Pass (LPP) renderer. At a minimum, I would need to add single or per-RGB Fresnel coefficients to the already overcrowded G-Buffer, which leaves me no choice but to add another render target. Per-RGB Fresnel would also force me to use a dedicated specular light accumulation buffer, rather than relying on the chromaticity approximation trick.

 

Conclusions:

Ultimately, these changes would make the LPP renderer more bloated than a standard deferred renderer, so if I wanted to start using a physically-based BRDF, I would have to make the switch. Now I'm at an impasse, because my initial tests have demonstrated enough of a visual improvement to pique my interest. This has left me with much to consider before I can proceed.

Thursday
Sep 30, 2010

Cubism Demo & Source

UPDATE:
Had to briefly take down the code for some last-minute modifications.

UPDATE 2:
Demo is back up again. Sorry about the inconvenience.

Original:
Well people, it took longer than I'd have liked, but I'm ready to post an actual demo with source code. Following up on my Cubism prototype post, I have now released the completed demo on my website so that any interested parties may have a look. Just visit the new "Projects" link on the navigation bar at the top of the page.

Anyone who has questions/suggestions/etc, feel free to post in the comments section. ;)

Monday
Sep 6, 2010

Cubism & Framework

Up to this point, I have been taking an ad-hoc approach to my demo framework: whenever I needed a feature, I just added it to the existing codebase. However, my code had reached a point where it was unpleasant to use and difficult to upgrade. Finally, I hit my breaking point and decided the codebase needed a massive overhaul before I could continue with anything else. So refactoring everything has been occupying my time, in between trying to find a job.

To test out my new framework, I decided to toss out all my old demo code and try something new. I wanted to go in a different direction with this one, opting for a test scene with lots of small lights, and a lot of moving elements. To that end, "Cubism" was born!

Reasoning:
"Cubism" was loosely inspired by the look of 3D Dot Game Heroes, and its impressive demonstration of pixelated voxel art. This approach seemed like a good way for me to introduce complexity, without having to commission an artist for a detailed animating object. However, this raised the problem that I am not myself an artist, and could only hope to do some kind of procedural programmer art. So it had to be visually interesting!

Content:
Deferred Rendering:
I decided to use Afterlights as my Deferred Rendering approach for this demo, because it is the simplest approach available. Also, I could get away with the lighting inaccuracies, since this demo isn't shooting for photorealistic graphics.

Scan:
This was meant to display a line of peaks going across a row, with each peak unique to its column. The line would scan to one end of the grid, then reverse direction and scan back to the other. Since this effect is currently broken, I am going to try to implement a ripple effect instead.

Voxel Sprite:
This was made from an .xpm file compiled into the executable. Every pixel is represented by a colored cube, with transparent pixels producing no cube.
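
As a rough C++ sketch of the idea (not the demo's actual loader), assuming a tiny pixel grid where '.' marks a transparent pixel:

    #include <string>
    #include <vector>

    struct Cube { int x, y; char colorKey; };

    // Every opaque pixel becomes a unit cube at its grid position;
    // transparent ('.') pixels produce nothing.
    std::vector<Cube> spriteToCubes(const std::vector<std::string>& pixels) {
        std::vector<Cube> cubes;
        for (int y = 0; y < (int)pixels.size(); ++y)
            for (int x = 0; x < (int)pixels[y].size(); ++x)
                if (pixels[y][x] != '.')
                    cubes.push_back({ x, y, pixels[y][x] });
        return cubes;
    }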

Ocean Waves:
This grid uses sin() functions to generate heights based on the cube's position in the grid.

Tidal Wave:
Similar to the Ocean Waves grid, but with continuous waves that leave no gaps.
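
As a rough illustration of the sin()-based heights driving both wave grids (the amplitude and frequency constants are placeholders of mine, not the demo's values):

    #include <cmath>

    // Height of the cube at (gridX, gridZ), animated over time by
    // phase-shifted sine waves along each grid axis.
    float waveHeight(int gridX, int gridZ, float time) {
        const float amplitude = 1.5f;
        const float frequency = 0.35f;
        return amplitude * (std::sin(frequency * gridX + time) +
                            std::sin(frequency * gridZ + time * 0.7f));
    }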

Conclusion:

So that's it for now. This little project is far from finished, as it still needs a lot of work to clean up various hacks I used to get it running. Also, during the process of coding this demo I encountered problems with my new framework design that have forced me to rethink some of my decisions. Once I have finished revising both the demo and the framework, I hope to release their source code so that any interested parties may have a look. ;)

Anyone who has questions, feel free to comment or contact me via e-mail.

Tuesday
May 25, 2010

Linear sRGB Blending & Deferred Lighting (ver. 2)

Since my new computer is equipped with an SM 4.0 graphics card, I decided to play around with some of the features standardized for this class of GPU. In particular, I was interested in how such cards now correctly perform linear blending when using sRGB render targets. This means I get the correct color results of linear blending, but with the concentrated precision of automatic sRGB conversion.
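
For reference, here is a minimal OpenGL sketch of the setup I mean (a GL 3.x context and function loader are assumed, as are width and height):

    // Create an sRGB8 render target and enable linear blending into it.
    GLuint tex = 0, fbo = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // The key part: destination values are linearized before blending,
    // then re-encoded to gamma on store.
    glEnable(GL_FRAMEBUFFER_SRGB);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  // additive light accumulation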

 

This is relevant to my project because I have read about other deferred renderers using this feature on consoles to avoid the need for high-precision lighting buffers. So it seemed like a good opportunity to dust off my old Deferred Lighting code and see how it could benefit from this approach.

 

If you are unfamiliar with sRGB and Gamma-space, the following links will provide a far better explanation than I could.

 

Links:

[1] RenderWonk: Adventures with Gamma-Correct Rendering

[2] blog.illuminate.labs: Are you Gamma Correct?

[3] Gamefest Unplugged (Europe) 2007: HDR The Bungie Way

 

Tests:

For my experiment, I used a scene with six point lights of varying intensities. Light values are stored in the range [0,2] in RGBA8 buffers so I can get medium-dynamic-range lighting. I'm using standard Lambertian diffuse lighting and normalized Phong for specular. The defining difference between test cases is whether the light accumulation uses a standard RGBA8 buffer storing linear values, or an RGBA8_sRGB buffer storing gamma-space values.
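
The [0,2] range trick is just a scale: halve on write so the value fits an RGBA8 channel's [0,1] range, double on read; anything brighter than 2.0 clips. A trivial sketch:

    // Sketch of the [0,2] "MDR" encoding for 8-bit channels.
    float encodeMDR(float light)  { return light * 0.5f; }  // at shader output
    float decodeMDR(float stored) { return stored * 2.0f; } // after sampling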

 

For the sake of completeness, I have decided to test both of the common Deferred Lighting light accumulation variations. The first uses two render targets to accumulate diffuse and specular lighting independently. The second uses one render target that stores diffuse lighting and specular intensity, approximating colored specular lighting via the diffuse lighting chromaticity.
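
A C++ sketch of that approximation as I understand it, reconstructing colored specular from the diffuse chromaticity and the scalar specular intensity stored in alpha (Rec. 709 luminance weights assumed):

    #include <algorithm>

    struct Color { float r, g, b; };

    Color approxSpecular(const Color& diffuse, float specIntensity) {
        float lum = 0.2126f * diffuse.r + 0.7152f * diffuse.g
                  + 0.0722f * diffuse.b;
        lum = std::max(lum, 1e-4f);  // avoid divide-by-zero in dark regions
        return { diffuse.r / lum * specIntensity,
                 diffuse.g / lum * specIntensity,
                 diffuse.b / lum * specIntensity };
    }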

 

Results:

For the two-RT approach, the difference between the raw linear RGB and linear sRGB blending versions is pretty dramatic. The diffuse lighting shows significantly less banding in the centers of dim light contributions, with a smooth falloff out to their edges. The specular light contributions only see improvements around their edges, yielding a subtle but noticeable difference. In both cases, the overall lighting quality is significantly better.

 

The one-RT approach had the same diffuse lighting quality as the two-RT approach, but only marginally improved specular lighting. I had concerns about using the sRGB trick here because automatic sRGB conversion doesn't affect the alpha channel. Not so surprisingly, the stored specular intensity produces nasty banding artifacts in the colored specular approximation, regardless of whether linear sRGB blending is used.

 

2RT, sRGB:

[Figure 1: Diffuse; Gloss 1; Gloss 47; Gloss 256]

 

2RT, RGB:

[Figure 2: Diffuse; Gloss 1; Gloss 47; Gloss 256]

 

 

1RT, sRGB:

[Figure 3: Gloss 1; Gloss 47; Gloss 256]

 

1RT, RGB:

[Figure 4: Gloss 1; Gloss 47; Gloss 256]

 

Conclusion:

All in all, I'd say this approach produces some really nice results compared with using high-precision buffers. This way, I get the storage cost and read/write bandwidth of an RGBA8 buffer, but with the blending and precision benefits that would normally necessitate an RGBA16F buffer. For this reason, I am strongly considering this approach over floating-point buffers on SM 4.0 hardware.
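
For a sense of scale: at 1280×720, a single RGBA8 target is 1280 × 720 × 4 bytes ≈ 3.7 MB, while an RGBA16F target doubles that to roughly 7.4 MB, with read/write bandwidth scaling accordingly.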

 

Just have to wait and see I guess.

Friday
Mar 12, 2010

Neo Renderer

So I made the big announcement of going deferred again. The question now is: what implementation should I use to meet the needs of my project? Having mulled it over for about a week, I have decided that deferred lighting will once again be my solution.

 

Pipeline:

In my prior work, I favored a minimalist G-Buffer in order to cut storage as much as possible and make it feasible to add hardware MSAA on SM 4.0 hardware. Now I am much less concerned with those goals, and would prefer to favor higher precision for better visuals and more flexibility.

Buffers [SM 3.0]:

  • G-Buffer

    • 1x fp16

    • [Normals (packed), Specular power, Linear eye depth]

  • Light-Buffer

    • 1x fp16

    • [Diffuse lighting (RGB), Specular intensity]

  • Post-Buffer

    • 1x fp16

    • [Lit scene (RGB), 0.0]

At a minimum, I need to accommodate normals, specular power, and depth to get decent Phong lighting. If I went with MRT, I would have to use a one- or two-channel float buffer for the depth, which would break compatibility with OpenGL. So I decided it would be better to pack my normals and shove all the data into a single RGBA16F buffer. This arrangement ensures that all the data can easily be accessed for both lighting and post-processing.
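
For illustration, one common two-channel packing is spherical coordinates; a quick C++ sketch (this is just one of several schemes that would work here):

    #include <algorithm>
    #include <cmath>

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    // Pack a unit normal into two channels: azimuth angle plus z.
    Vec2 packNormal(const Vec3& n) {
        return { std::atan2(n.y, n.x), n.z };
    }

    // Reconstruct the normal; x and y are recovered from the azimuth and
    // the radius implied by z (since the normal is unit-length).
    Vec3 unpackNormal(const Vec2& p) {
        float r = std::sqrt(std::max(0.0f, 1.0f - p.y * p.y));
        return { r * std::cos(p.x), r * std::sin(p.x), p.y };
    }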

 

Buffers [SM 4.0]:

  • G-Buffer

    • 1x fp10

    • [Normals (packed), Specular power]

  • Light-Buffer

    • 2x fp10

    • [Diffuse lighting (RGB)], [Specular lighting (RGB)]

  • Post-Buffer

    • 1x fp10

    • [Lit scene (RGB)]

Here you see a more reduced version of my buffer setup, which takes advantage of SM 4.0 capabilities (depth buffer sampling, fp10 textures). This setup would allow me to justify the cost of storing separate colored diffuse and specular lighting. Granted, I will be forced to use MRT, since no hardware supports a six-channel texture format. However, hardware that supports this combination of features makes that overhead a moot point.
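
A minimal OpenGL sketch of the MRT binding, assuming lightFbo and two textures in the R11F_G11F_B10F format (presumably the "fp10" above) already exist:

    // Attach the two light buffers and route fragment shader outputs to
    // both, so diffuse and specular lighting accumulate separately.
    glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, diffuseTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, specularTex, 0);

    const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBuffers);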

 

Final Thoughts:

If my assumptions are correct, this setup will produce much better visuals while being more efficient than my prior deferred lighting renderer. It will also be a good excuse to explore the possibilities of tinkering with the contents of the light buffer(s) for different effects. Though I think I will leave that for another day. ;)