Saturday, December 19, 2009
A new direction?

Almost a month and a half without a post... I thought I should do something to remedy that. Sadly, I've been hard-pressed to think of what to write about next. I've been in a bit of a creative rut, since I'm not quite sure how I want to advance my project. So here's where I am, and where I plan to go...

 

Current Results:

My deferred lighting renderer has demonstrated the merit of the approach, and could probably benefit from further optimization. Even with the restriction of RGBA8 targets and no MRT, it produces acceptable lighting and allows more material variety than a standard deferred renderer.

Sadly, I have found that lots of small lights with Blinn-Phong shading is about the best it can do. The lighting quality is also far from the best, due to the limited dynamic range and the banding artifacts that come with low-precision storage. On top of that, having a few large lights really killed my performance, which is a pain for effect and fill lights. The problem was most pronounced when the camera neared a screen-filling object covered by several large lights.

As the quality issues and limitations began to bug me, I felt it prudent to step back and seriously evaluate what I want this renderer to do.

 

New Goals:

Game:

In an attempt to have the game design inform the engine technology, I stopped to evaluate what I planned to do with my game. Since I plan to do something loosely resembling a traditional RPG, I will have mostly static worlds. That implies mostly self-contained set pieces connected by choke points, which let the rest of the level stream in as you approach them. It also calls for good-quality lighting at close views, particularly for scenes with characters.

Originally, I wanted to have a completely dynamic world with destructible objects, and long draw distances where you could see everything (à la Crysis, Bad Company, etc.). Eventually, I realized that these require fancier tech and higher specs, both of which are prices I'd prefer not to pay, especially for features I don't strictly need. Really, I only wanted them in the first place because I thought they were the future of rendering tech for games, and that I just had to have them.

 

Technical:

Material variety is something I have decided I need. Blinn-Phong with rim lights and clever texture usage can cover an impressive range of materials, but it is not one-size-fits-all. I would like to support custom BRDFs and diffuse lighting functions for the common materials that this combination can't cover (hair, skin, brushed metal, extremely rough surfaces, etc.). There needn't be a lot of dynamic lights, so long as the ones that are present emphasize the materials' characteristics.

Lighting quality was also an issue I kept bumping into with my deferred renderers. Banding and saturating at 1.0 due to limited dynamic range and low-precision storage are both unacceptable. So, I want to have full float precision for my light accumulation in order to support linear HDR calculations.

Pre-computed lighting is something I have eschewed up until this point more for political reasons than on actual merit. I liked the sound of the "completely dynamic" lighting that deferred shading could provide. However, precomputation can provide higher-quality lighting effects at much lower runtime cost. Sure, the world becomes static and the storage requirements go up, but I can live with both for better visual quality.

Low-precision storage and compression will be especially important as a result of the previous requirement. So I will need to squeeze better quality out of encoding schemes (RGBM, etc.) layered on top of standard texture compression (S3TC). At runtime, I will have to avoid using anything larger than RGBA8 for my render targets.
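For reference, here's roughly what that RGBM encode/decode pair looks like. This is a minimal CPU-side sketch in C++ rather than actual shader code, and the range constant (kRgbmRange) and the ceil-based quantization of the multiplier are my assumptions; both would need tuning against real content:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float r, g, b; };
struct Rgbm { float r, g, b, m; };  // all components in [0,1], ready for an RGBA8 texture

// Hypothetical range constant; values around 6.0 are common, but tune for your content.
static const float kRgbmRange = 6.0f;

Rgbm EncodeRGBM(const Vec3& hdr)
{
    float maxComp = std::max(hdr.r, std::max(hdr.g, hdr.b));
    float m = std::min(maxComp / kRgbmRange, 1.0f);
    // Round the multiplier up to the next 8-bit step so the scaled rgb stays <= 1.
    m = std::ceil(m * 255.0f) / 255.0f;
    float scale = (m > 0.0f) ? 1.0f / (m * kRgbmRange) : 0.0f;
    return { hdr.r * scale, hdr.g * scale, hdr.b * scale, m };
}

Vec3 DecodeRGBM(const Rgbm& enc)
{
    float scale = enc.m * kRgbmRange;
    return { enc.r * scale, enc.g * scale, enc.b * scale };
}
```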

Antialiasing is something I'd like to support, since aliasing is quite noticeable in modern games. Granted, I have often observed that HD resolution with post-processing effects like depth of field and motion blur can make aliasing far less noticeable. However, rendering at sub-HD resolutions brings the artifacts out in full force, and they become distracting. To this end, I plan to include support for standard MSAA and Nvidia's CSAA.

 

New Restrictions:

My desire to support these new features, particularly AA on SM 3.0 hardware, has imposed more than a few limitations on the options I have for my new renderer.

The lighting approach is the first compromise, because MSAA will limit me to RGBA8 targets. Since HDR lighting is a must, I will have to use an MSAA-compatible HDR encoding scheme. To my knowledge, this limits me to RGBM and NAO32, with the latter being more appealing for having quality on par with RGBA16F. However, single-pass lighting will be my only option, since encoding my output means hardware blending will not be possible.
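To make the NAO32 idea more concrete, here is a simplified LogLuv-style packing sketched in C++. The transform and bias used by the actual NAO32 implementation are different (it works in CIE Luv space); this sketch just illustrates the core trick of keeping chromaticity in two channels and splitting log2 luminance across the other two so an RGBA8 target can hold HDR values:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// Simplified LogLuv-style packing: two channels of chromaticity, and log2(luminance)
// split across the remaining two channels (~16 bits of log-luminance in an RGBA8 target).
Vec4 EncodeLogLuv(const Vec3& rgb)
{
    const float eps = 1e-6f;
    float sum = std::max(rgb.x + rgb.y + rgb.z, eps);
    float u = rgb.x / sum;  // simple rgb-based chromaticity, not true CIE u'v'
    float v = rgb.y / sum;
    float lum = std::max(0.2126f * rgb.x + 0.7152f * rgb.y + 0.0722f * rgb.z, eps);

    // Map log2(lum) from roughly [-64, 64) into [0, 1), then split into two 8-bit parts.
    float le = std::log2(lum) / 128.0f + 0.5f;
    le = std::min(std::max(le, 0.0f), 1.0f - 1.0f / 65536.0f);
    float hi = std::floor(le * 255.0f) / 255.0f;
    float lo = (le - hi) * 255.0f;  // fractional remainder, rescaled into [0, 1)
    return { u, v, hi, lo };
}

Vec3 DecodeLogLuv(const Vec4& enc)
{
    float le  = enc.z + enc.w / 255.0f;
    float lum = std::exp2((le - 0.5f) * 128.0f);

    // Rebuild RGB from chromaticity and luminance.
    float u = enc.x, v = enc.y, w = 1.0f - u - v;
    float denom = std::max(0.2126f * u + 0.7152f * v + 0.0722f * w, 1e-6f);
    float sum = lum / denom;
    return { u * sum, v * sum, w * sum };
}
```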

Translucency will be difficult as a result of being unable to blend into my encoded framebuffer. So I will need a workaround, like rendering all my translucent geometry into a separate non-encoded buffer and combining it with the encoded buffer in a shader. Linear alpha blending will be difficult as well, due to my restriction to RGBA8 render targets. I can only hope that the translucent nature of the objects will help cover up any banding artifacts introduced by the low-precision buffer.
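Continuing the LogLuv sketch above (and reusing its Vec3/Vec4 types and DecodeLogLuv), the composite step might look roughly like this, assuming purely for illustration that the translucent pass wrote premultiplied-alpha color into a plain, non-encoded RGBA8 buffer:

```cpp
// translucent.xyz = color * alpha, translucent.w = alpha (premultiplied-alpha buffer).
// Returns linear HDR color, ready for tonemapping or re-encoding.
Vec3 CompositeTranslucency(const Vec4& encodedOpaque, const Vec4& translucent)
{
    Vec3 opaque = DecodeLogLuv(encodedOpaque);
    float oneMinusA = 1.0f - translucent.w;
    return { translucent.x + opaque.x * oneMinusA,
             translucent.y + opaque.y * oneMinusA,
             translucent.z + opaque.z * oneMinusA };
}
```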

Now for the hard part: how do I get single-pass lighting that can handle a fairly arbitrary number of lights without producing an absurd number of shader combinations?

 

New Approach:

I'm looking at the dynamic Spherical Harmonic lighting approach that has been employed by a variety of recent AAA games (Halo 3, Gears of War 2, God of War 3). In this system, every moving object has its own SH coefficients, which the CPU uses to composite all the lights that affect the object. The coefficients are then uploaded to the object's shader, which doesn't have to care about the exact number of lights. This drastically cuts down on shader permutations, while allowing very predictable shading performance for an arbitrary number of lights. Plus, I can extract the dominant light direction and color from the basis for use in custom lighting equations (BRDFs, SSS approximations, etc.), allowing for material variety.
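As a rough illustration of the CPU side, here is a minimal C++ sketch that accumulates lights into band-0/1 SH coefficients for one object and then pulls out a luminance-weighted dominant direction. The games mentioned above use more bands and fancier tricks; the constants, the point-light-as-directional approximation, and the luminance weights here are my assumptions:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 operator*(const Vec3& a, float s)       { return { a.x * s, a.y * s, a.z * s }; }

// Four coefficients (SH bands 0 and 1) per color channel, enough for the kind of
// low-frequency fill lighting described above. Layout: [Y00, Y1-1, Y10, Y11].
struct ShRgb { Vec3 c[4] = {}; };

// Project a directional light (unit vector pointing toward the light) into the SH set.
// A point light can be approximated by treating it as directional from the object's
// center, with its color pre-scaled by distance attenuation before calling this.
void AddDirectionalLight(ShRgb& sh, const Vec3& dir, const Vec3& color)
{
    const float y00 = 0.282095f;  // band-0 basis constant
    const float y1  = 0.488603f;  // band-1 basis constant
    sh.c[0] = sh.c[0] + color * y00;
    sh.c[1] = sh.c[1] + color * (y1 * dir.y);
    sh.c[2] = sh.c[2] + color * (y1 * dir.z);
    sh.c[3] = sh.c[3] + color * (y1 * dir.x);
}

// Extract a single "dominant" direction from the linear band, luminance-weighted,
// so one direction can drive a custom BRDF while the remainder stays ambient.
Vec3 ExtractDominantDirection(const ShRgb& sh)
{
    auto luma = [](const Vec3& c) { return 0.2126f * c.x + 0.7152f * c.y + 0.0722f * c.z; };
    Vec3 d = { luma(sh.c[3]), luma(sh.c[1]), luma(sh.c[2]) };  // back to (x, y, z) order
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return (len > 1e-6f) ? d * (1.0f / len) : Vec3{ 0.0f, 0.0f, 1.0f };
}
```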

Like any other approach, this one comes with more than a few downsides. Due to the low-frequency nature of SH, lighting will not be pixel-perfect. Directional lights will work well enough, but point lights will lose definition, and spot lights will probably look completely incorrect. Also, the trick of extracting a directional light limits custom lighting functions to just that light, with the rest being "ambient" lighting. That light is also the only one that can be shadowed, so material definition will be lost in shadowed areas. Finally, this approach may need a beefier CPU, in order to composite all the lights that affect an object each frame.

These limitations are annoying, but survivable. If I can get better lighting quality and more material variety, that will solve most of my problems.

 

Wrapping up:

So I have decided to segue into this different approach, to maybe get my creative juices flowing again. This will also provide a good excuse for me to rewrite my now-cluttered code base using what I have learned from my hacky hodge-podge of abstractions. Hopefully the net result will be code that is easier to maintain, and a renderer that does a better job of meeting my needs.

Reader Comments (2)

Did you check out Screen Space Spherical Harmonics (http://solid-angle.blogspot.com/2009/12/screen-space-spherical-harmonic.html)? Maybe that method will suit your needs better.
December 20, 2009 | Unregistered Commenter Jens Fursund
I hadn't seen that before, and appreciate you pointing it out to me. Though it doesn't look like it would meet my specific needs & restrictions, it was an interesting read nonetheless.
December 22, 2009 | Registered Commenter n00body
