Entries in Graphics (5)

Sunday, May 3, 2015

TeamColor refinements 

Background

Not long ago, I realized that Alloy's Team Color formulation was grossly inflexible and not particularly intuitive. The primary reason was that if you had more than one color mask overlapping a given pixel, the top one would dominate. This was a problem if you wanted to mix multiple colors and precisely control their contributions to the final combined color.

It looked a little something like this:

// Old approach: a chain of lerps, so each successive mask
// overrides the ones layered beneath it.
half3 tint = lerp(half3(1,1,1), aTint, masks.a);
tint = lerp(tint, rTint, masks.r);
tint = lerp(tint, gTint, masks.g);
baseColor *= lerp(tint, bTint, masks.b);

 

As you can see, it was layer-order dependent, which made it basically impossible to have all the colors influence one fragment in a clean way. The only problem it really solved was avoiding black zones between masked colors, by ensuring that the starting color was white.

 

New Approach

I needed something that would still prevent the black zone problem, but also allow easy blending between all the masked colors covering a given pixel. In addition, when the total weight of the masks is above 1, it needs to renormalize the mask weights so that they sum to 1. Finally, when the total weight is below 1, it needs to fill the remaining weight with white.

So I switched to something more like this:

// Renormalize the masks if their total weight exceeds 1.
half weight = dot(masks, half4(1,1,1,1));
masks /= max(1, weight);

// Blend the tints by their mask weights, filling any
// remaining weight with white to avoid black zones.
baseColor *= rTint * masks.r
            + gTint * masks.g
            + bTint * masks.b
            + aTint * masks.a
            + (1 - min(1, weight)).rrr;

 

This new approach does everything I need without being overly complex or costly.
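
To make the behavior concrete, here is a quick worked example (the numbers are mine):

// masks = (1, 1, 0, 0) -> weight = 2 -> masks renormalize to (0.5, 0.5, 0, 0),
// giving an even 50/50 mix of rTint and gTint with no white contribution.
// masks = (0.25, 0.25, 0, 0) -> weight = 0.5 -> masks are left unchanged,
// and the remaining 0.5 weight is filled with white.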

Sunday, January 11, 2015

Parallax/POM + Detail Mapping Gotchas

So I'm going to make a bold assertion. We as an industry are doing texture transforms wrong! >:(

...I suppose you want context for that statement. Well then...

 

Background

Let's start from the most common way to transform textures:

Texcoord = UV * Tiling + Offset;

This approach has a very serious problem: Offset and Tiling are implicitly coupled. When the Tiling rate increases or decreases, the Offset's translation effect is correspondingly decreased or increased. To compensate for this limitation, artists typically have to manually adjust the Offset value by dividing by the previous Tiling amount and then multiplying by the new amount.

It would be far preferable to have the two properties decoupled, so that Offset always has a consistent translation effect. Thankfully, it is pitifully easy to do this in code via:

Texcoord = (UV * Tiling) + (Offset * Tiling);

or just:

Texcoord = (UV + Offset) * Tiling;

Sadly, we can't do much with this, as the content creation tools do it the wrong way, so the engines also have to do it that way to be consistent. So what does knowing this really gain us?

I'll tell you...

 

Parallax Mapping

For those who don't know, most varieties of Parallax Mapping work by using a height map to generate a per-pixel texture coordinate offset, creating the illusion of depth on flat surfaces. Now what you may not know is that this offset is implicitly multiplied by the Tiling amount of the texture coordinates used to sample said height map. So, assuming your base texture uses the same texture coordinates, you can apply the parallax offset by doing the following:

Texcoord = UV * Tiling + ParallaxOffset;
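
For reference, here is a minimal sketch of how such an offset is commonly computed. The names (viewTS for the tangent-space view direction, HeightMap, Scale) are mine, not from any particular engine:

// Classic parallax mapping: sample the height map at the tiled coordinates,
// then nudge the texcoord along the tangent-space view direction.
half height = tex2D(HeightMap, UV * Tiling).r;
half2 ParallaxOffset = (viewTS.xy / viewTS.z) * height * Scale;

Since the offset is added directly to UV * Tiling, it is expressed in the tiled texcoord space, which is exactly the implicit coupling mentioned above.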

Simple! Convenient! But...

 

Parallax + Detail Mapping

Detail mapping is a very common way to increase surface detail by combining a second set of textures into the base textures, usually at a much higher Tiling frequency. As you might imagine, combining parallax + detail mapping is a great way to make visually interesting surfaces. However, it comes with a nasty gotcha as the equation from above doesn't work for detail mapping!

The detail textures' Tiling rate will almost always be different from the base maps', meaning that the Parallax Offset will almost always translate the texture coordinates either too much or too little. This results in the rather unsightly effect of detail maps swimming across the surface as the camera changes position relative to it. So how do we fix this?

It turns out that the same fix applies as with any other offset. You just need to remove the base Tiling by dividing it out of the Parallax Offset. The resulting normalized offset can then be applied the correct way, scaled by the detail Tiling amount, without any texture swimming. That looks something like this:

DetailTexcoord = (UV + (ParallaxOffset / BaseTiling)) * DetailTiling;
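
Putting both transforms together, the full flow looks roughly like this (a sketch; BaseTexcoord is my name for it):

// Base textures: the offset is already in the base tiled space.
BaseTexcoord = UV * BaseTiling + ParallaxOffset;

// Detail textures: divide the base Tiling out of the offset first,
// then apply the detail transform the decoupled way.
DetailTexcoord = (UV + (ParallaxOffset / BaseTiling)) * DetailTiling;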

Any questions?

Monday, January 18, 2010

Cheap specular lighting

Just the other day I read an interesting presentation from Midway documenting their work on "Mortal Kombat vs. DC Universe". In it, they describe the various rendering tricks and engine changes they used to get Unreal Engine 3 running at 60Hz. One such trick that piqued my interest was how they approximate specular lighting.

Basically, they just treat the eye vector like a directional light, and use it to calculate lighting like normal. Then, they multiply the result by the current ambient color/intensity. This approach gives them specular highlights that are cheap, always visible, and look like they are influenced by the environment.

This is especially convenient because the equation reuses calculations from other parts of the shader: the NdotE that was calculated for rim-lighting, and the ambient lighting calculated from SH. So it produces a passable approximation of specular lighting that fits in well with the environment lighting, and doesn't require having actual lights present.
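
Here is a minimal sketch of the idea as I understand it (variable names are mine, assuming the normal, eye vector, and SH ambient term already exist in the shader):

// With the eye vector standing in for the light, the Blinn half vector
// collapses to the eye vector itself, so the specular term is just
// pow(NdotE, power), tinted by the ambient color/intensity.
half NdotE = saturate(dot(normal, eyeDir)); // already computed for rim-lighting
half3 cheapSpec = pow(NdotE, SpecPower) * ambientSH;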

Sunday, August 16, 2009

Materials

Picking up where the last entry left off, I'll start with materials and meshes.

The trend these days seems to be keeping vertex data to the absolute minimum, with all the material surface details supplied by multiple high-resolution textures. Of course, these high-res textures won't fit in VRAM at the same time, so they need texture streaming. Of course, they take up a lot of space on disk, so they need custom compression schemes. Of course, at runtime you need to transcode them to a GPU-friendly format. Then you need servers to manage the really huge textures across multiple workstations. Version control, custom tools...

I say screw all that, and keep it simple. So for my materials, it will be a combination of texture and vertex data. The goal here is to get sufficient pixel density for my materials to look good, but not have several giant textures per object, and all the hell that approach would bring.

Materials:

For the sake of efficiency, the engine will support RGBA8, DXT1, and DXT5 textures. The expected texture resolutions for most objects will be 1024 x 1024 or 512 x 512. Most textures will default to trilinear mipmap filtering, without giving artists control over this option.

Most objects/characters will be limited to an albedo map and a normal map, in order to cut down on texture memory. Shared detail maps will be used to provide greater pixel density, as well as specular variance across the surface.

Emissive and specular color/intensity will be provided per-mesh, per-texture layer. Emissive might benefit from a unique texture, but only when there are a lot of differently colored sources all over the object; otherwise it would end up as a high-res texture that is mostly black, with only small spots of emissive. Specular color/intensity would be better with a unique texture, but can be approximated well enough with per-mesh specular and detail maps.

Geometry:

  • float3, Position
  • float3, Normal
  • short4, Tangent
  • short2, Texcoord
  • uchar4, Detail texture masks
  • uchar4, Directional Occlusion
  • uchar4, Albedo tint color


I plan to split the data into two interleaved arrays, one with dynamic data and the other with static data. I'm trying to figure out a scheme that would allow each to be 16-byte aligned; however, I think I will be fine with 32 and 16.
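
As a sketch, that split might look something like the following, using the types from the list above (grouping the animated attributes into the dynamic stream is my reading of the layout; it also happens to land exactly on 32 and 16 bytes):

// Dynamic stream: animated on the CPU each frame (32 bytes).
struct VertexDynamic
{
    float3 position; // 12 bytes
    float3 normal;   // 12 bytes
    short4 tangent;  //  8 bytes, normalized
};

// Static stream: per-vertex material data, uploaded once (16 bytes).
struct VertexStatic
{
    short2 texcoord;    // 4 bytes, normalized
    uchar4 detailMasks; // 4 bytes
    uchar4 occlusion;   // 4 bytes, directional occlusion
    uchar4 albedoTint;  // 4 bytes
};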

I believe I will be able to get away with using a normalized short type for my texcoords, but I am not sure about my tangents. I have considered using the half type for these two attributes. However, my plan is to perform animation on the CPU, so I am not sure it would really be worth the extra calculations.

The second array is all about material data, allowing an artist to control the distribution and color of detail maps per-mesh, per-vertex. Doing things like masking and occlusion per-vertex may seem old-fashioned, but it will drastically reduce VRAM usage. It also allows the values to be smoothly interpolated, even at higher screen resolutions.
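
To illustrate how the per-vertex masks might drive the shared detail maps, here is a hypothetical sketch (the texture names, tiling values, and overlay-style combine are my own choices, not a spec):

// Blend shared detail maps using the interpolated per-vertex masks.
half3 detail = half3(0.5, 0.5, 0.5); // neutral gray = no detail
detail = lerp(detail, tex2D(DetailMap0, uv * DetailTiling0).rgb, masks.x);
detail = lerp(detail, tex2D(DetailMap1, uv * DetailTiling1).rgb, masks.y);
albedo *= detail * 2.0; // overlay-style modulation centered on 1.0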

 

That's all for now. Starting to sound interesting yet? ;)

Sunday, August 16, 2009

Rendering Approach

Hey there adoring public! I've got an update! :p

Having had a chance to refine my plan, I now have a fairly good idea of how I want my game's renderer to behave. So without further ado, here you go!

Renderer:

It will utilize a hybrid of forward and deferred shading to allow some material variety while avoiding a shader combinatorial explosion. Each stage of the pipeline will be hard-coded, so it won't really be possible to arbitrarily add or remove stages at runtime.

I'm not sure yet whether I will be using ubershaders with dynamic branching, or preprocessor definitions, to handle shader complexity. I plan to avoid writing a complex, generalized shader management system.

As it stands, I will likely have to use an RGBA16F target for rendering and accumulating lights. On SM 3.0 PC hardware, that is about the only way I know of to get multipass HDR lighting with linear blending. This point is most important for alpha-blending, since I can't rely on sRGB blending for consistent behavior between DX9 and DX10 hardware.

Lighting:

A skylight system will handle ambient lighting and reflections, covering the vast majority of materials. This will consist of a directional light (the sun and/or moon) and an ambient light using a gradient texture. These lights are responsible for filling out the scene so that it isn't too dark to see. This approach has the advantages of being simple and dynamic, and of making all the objects feel integrated into the same environment.

Local lights will be handled exclusively by the deferred shading system. These will be Lambertian only, having no influence on specular reflection. For the most part, these lights are just for decoration or effects. This pass will be limited to lights that have a definite volume and extent (i.e. no directional lights).
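
A rough sketch of what one of those Lambert-only deferred lights might evaluate per pixel (the falloff and names are my own choices):

// Lambert-only local light, evaluated from G-buffer position and normal.
half3 L = LightPos - worldPos;
half dist = length(L);
L /= dist;
half atten = saturate(1.0 - dist / LightRadius); // simple linear falloff
half3 diffuse = saturate(dot(normal, L)) * atten * LightColor;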

Shadowing will only be provided for the sun directional light. Rather than darkening or going black, shadowed areas will simply have the sun's contribution removed from the skylight, leaving the ambient contribution. This way, detail will not be lost in the shadowed areas.
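
In shader terms, the skylight plus this shadowing scheme might look roughly like this (a sketch; the gradient lookup and the names are assumptions on my part):

// Ambient from a gradient texture, indexed by the normal's up component.
half3 ambient = tex2D(AmbientGradient, float2(0.5, normal.y * 0.5 + 0.5)).rgb;
half3 sun = saturate(dot(normal, SunDir)) * SunColor;
// shadow is in [0,1]; it removes only the sun term, never the ambient.
half3 lighting = ambient + sun * shadow;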

In the case of materials that need a custom BRDF (e.g. hair, car paint), the sun directional light will be used. The result will then augment or replace the reflections provided by the skylight.

 

More to follow in the next journal entry. ;)