About Me

Hi. I'm Josh Ols. Lead Graphics Developer for RUST LTD.


Sunday, Feb 26, 2012

Physical Glass/Gem Prototype

UPDATE:

I figured I should be nice and at least give you guys some details. This technique is SM3.0 compatible, and doesn't use alpha blending. :)

Original:

I just wanted to show off a test shot from my new physically-based material system that takes full advantage of Unity 3.5's features (i.e., the linear HDR pipeline, directional lightmaps, and light probes).

The particular focus of this shot is my experimental physically-based glass/gem shader. In this scene, you can see two dynamic lights on all the objects, plus baked lighting to fill out the rest of the scene. Please forgive the programmer art, as this is still early and experimental. 

Hope you enjoy it! ;)


Prototype Physically-based Glass Shader

PS. I will give a proper breakdown of the technique in the future, after my team's game project has released.

Monday, Sep 14, 2009

Stippled Alpha

UPDATE1:
There is something I didn't consider until Digital Foundry pointed it out for their FF13 PS3 vs 360 article. Basically, if you use this effect at a lower resolution, and then upscale the image for final display, the effect can look pretty bad. Be forewarned!

UPDATE2:
I later found out that FF13, Resonance of Fate, & GoW3 were actually using Alpha-To-Coverage antialiasing. I wanted to clear that up to avoid confusion.

Original:
Stippled alpha has actually been around for a while, but has only recently resurfaced as a viable technique for adding translucency. It takes the idea that a densely perforated surface when viewed at a distance will appear to be translucent. This technique has already been used to great effect in Final Fantasy 13 & Resonance of Fate (hair, beards, eyelashes), the Nvidia Luna demo (veil), Zack and Wiki & Dead Space Extraction (particle effects), God of War 3 (fur).

Translucency is a must in almost all modern games. However, the typical implementation using alpha blending is difficult to integrate into the rendering pipeline because its results are depth order-dependent: translucent geometry has to be drawn back-to-front, so you can't perform the usual optimizations of ordering by shader, material, texture, etc. The problem is further exacerbated by the fact that this requirement restricts translucency to single-pass forward lighting.

So multipass and deferred shading necessitate exploring other options. Most often, translucent objects are made statically lit, or lit only by global light sources. Beyond that, restrictions are imposed on their usage, usually limiting them to particles, windows, and some foliage. Everywhere else, alpha-tested cutouts are made to pick up the slack. Sadly, while cutouts may work for a variety of things, they are not a true replacement for translucency.

So let's see how stippled alpha tries to solve this.

 

Implementation:

For starters, the effect needs to be screen-aligned. If we applied the effect to each surface like a texture, we would get shimmering as the camera moves around the object. Also, to get the best quality, we need to make sure that the discarded components are only one pixel in size and padded by one pixel on all sides. So the stipple pattern must be 2x2 pixels, with three opaque and one transparent.

Thus far, I have found two approaches to accomplishing this. The first uses a 2x2 texture with three opaque values and one transparent value. Then it is just a matter of mapping every 2x2 square of screen pixels to this texture, and calling discard on the transparent pixel of each. The second uses no texture at all, deciding which pixel to eliminate purely from where it lies within its 2x2 square.
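To make the first approach concrete, here is a minimal sketch in Python (not shader code; the names `STIPPLE_MASK` and `keep_fragment` are my own for illustration). The 2x2 mask plays the role of the tiny texture, tiled across the screen by taking pixel coordinates modulo 2:

```python
# Sketch of the 2x2 mask-texture approach: three opaque texels
# and one transparent, tiled across the whole screen.
STIPPLE_MASK = [
    [True, True],   # True  = keep the fragment
    [True, False],  # False = discard the fragment
]

def keep_fragment(x: int, y: int) -> bool:
    """Map a screen pixel to the tiled 2x2 mask."""
    return STIPPLE_MASK[y % 2][x % 2]

# Every 2x2 block of pixels keeps exactly three fragments.
kept = sum(keep_fragment(x, y) for y in range(2) for x in range(2))
print(kept)  # 3
```

In a real shader this would be a point-sampled 2x2 texture fetch with wrap addressing, followed by a discard/clip on the transparent texel.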

For my example, I will use the technique that doesn't require a texture and relies only on information available inside the shader. Basically, pixelPosition is the position of each pixel, in the range [0, screenWidth] and [0, screenHeight]. We halve this position and look at the fractional components of the result. Wherever the fractional components of both x and y reach 0.5, we are on the pixel that needs to be discarded. It's also trivial to do a variation that discards the other three pixels instead, preserving only the one that reaches 0.5 in both dimensions.

Pseudo-code:

halfPosition = fract(pixelPosition * 0.5);

if (halfPosition.x >= 0.5 && halfPosition.y >= 0.5)
    discard;
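The same selection rule can be emulated on the CPU to check the pattern it produces. A small Python sketch (function names are mine), including the inverted variation mentioned above:

```python
def fract(v: float) -> float:
    """Fractional part, for non-negative values."""
    return v - int(v)

def discard_pixel(x: int, y: int) -> bool:
    """Discard where both halved coordinates land past 0.5: one pixel per 2x2."""
    return fract(x * 0.5) >= 0.5 and fract(y * 0.5) >= 0.5

def discard_pixel_inverted(x: int, y: int) -> bool:
    """Variation: keep only that pixel, discard the other three."""
    return not discard_pixel(x, y)

# Visualize an 8x8 region: '.' = drawn, 'X' = discarded.
for y in range(8):
    print("".join("X" if discard_pixel(x, y) else "." for x in range(8)))
```

For integer pixel positions, fract(x * 0.5) is 0.0 for even coordinates and 0.5 for odd ones, so exactly the pixels with both coordinates odd are discarded: one per 2x2 block.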

 

Variations:


Figure 1. 1, High-density; 2, Low-density


G-buffer:


Figure 2. 1, Light-buffer; 2, Normal-buffer; 3, Depth-buffer

 

Properties:

Probably the biggest advantage of this technique is that it works well with multipass and deferred shading. However, it is also extremely nice from a performance standpoint, since it is basically just a modification of standard alpha testing. Front-to-back sorting, state sorting, and depth-buffer writes all work with this approach. As a result, it also works with post-processing effects that take advantage of the depth buffer. It has the further advantage that translucent objects that overlap themselves won't have popping issues (see the Luna demo). Finally, as screen resolutions go up, so will the quality of this technique, since the perforations are the size of a single pixel.

Of course, many of the things that make this approach great also make it not so great. For starters, it only allows one layer of translucency for everything using the same technique, since the perforations are based on absolute pixel position. So the object nearest to the camera will obscure those farther away. This also makes it hard to do things like drawing a box where you can see the inside wall behind the translucent front wall. There is also the issue of the constant size of the perforations: as objects get farther from the camera, their translucency quality gradually gets worse. Ultimately, this technique will not look so great on lower-resolution monitors, since there will be fewer pixels to utilize.

There are some specific cases where it will conflict with other tech that is seeing more use in games. Screen-space ambient occlusion is problematic, since the perforations create dramatic per-pixel differences in normal and depth across an object's surface. This manifests as noisy dark and bright spots all over said surface.

Then there is the problem with downsampled buffers. Because they reduce the normal and depth buffers by averaging several pixels, the wildly different values can produce unexpected results. Sometimes the object will disappear, and sometimes it will become completely opaque. The worst case is when a weird grid forms, producing really nasty visual artifacts. Probably the most annoying aspect is that the specific behavior may differ between the depth and normal buffers. This is affected by screen resolution, and will be better or worse depending on the size you pick. Sadly, the example images were obtained at the "standard" 1280x720. :(
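To see why averaging misbehaves here, consider a tiny depth buffer where the stippled object writes depth 0.2 and the background sits at depth 1.0 (values and the `downsample_2x2_avg` helper are illustrative, not from any real pipeline):

```python
# Illustrative 4x4 depth buffer: a stippled object (depth 0.2) over a
# far background (depth 1.0); every fourth pixel was discarded.
FG, BG = 0.2, 1.0
depth = [
    [FG, FG, FG, FG],
    [FG, BG, FG, BG],
    [FG, FG, FG, FG],
    [FG, BG, FG, BG],
]

def downsample_2x2_avg(buf):
    """Halve resolution by averaging each 2x2 block (a typical cheap downsample)."""
    h, w = len(buf), len(buf[0])
    return [[(buf[y][x] + buf[y][x + 1] + buf[y + 1][x] + buf[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

small = downsample_2x2_avg(depth)
print(small)  # every entry is 0.4: neither the object's depth nor the background's
```

Every downsampled depth lands at 0.4, a value that belongs to no surface in the scene, which is exactly the kind of bogus input that makes SSAO and other depth-dependent passes misbehave.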

Artifacts:


Figure 3. 1, SSAO; 2, Normal-buffer [downsampled]; 3, Depth-buffer [downsampled]

 

Final Thoughts:

This is not a drop-in replacement for alpha blending. However, it is another approach that can be useful in places where neither alpha blending nor alpha testing is really appropriate. It will look best for gossamer materials that overlap themselves (veils), or cutouts that need to gradually lose opacity (hair). More than anything, if you have a multipass/deferred rendering pipeline that needs translucent objects to be affected by dynamic lighting, this seems like the only really viable option available.

So that is why I am considering this tech for my project. Here's hoping you find the information useful for your own. ;)