Thursday, May 14, 2009


The past few months I've felt a new resolve to get Star Trader back on track, and the main hurdles have been two technological uncertainties: the streaming system and the material system. The streaming system is not as big a priority since the game doesn't NEED it to proceed, but my material system is another matter altogether.

My original intention was to build an advanced prototype of the new material system in place alongside the old one so I could compare and contrast which was better and whether the old system had a place as a low-level fallback. If the new one worked better, use it; if not, scrap it and fall back to the old one with minimal effort. Theoretically this is a sound principle, since it can be risky to fully commit to a system that has not yet been completed. In practice, though, things did not go as planned, and in the future I will probably just create a separate branch which I can then integrate/merge or throw away depending on the outcome.

In the end all my predictions held up, and having planned the system so thoroughly (over 10 pages of design notes, with many more pages of format specs), it was unlikely I had missed something. The new system is better, plain and simple, and while I'm still really proud of the old system (it's comparably easier to use and more powerful than the material systems in some games I've worked on), gutting the old system's code is going to be a necessary hassle. It's a shame, because it was especially difficult implementing the new system around the old one. The best comparison I can muster is navigating a maze overrun with sharp, pointy vines.

The new material system is essentially a node graph (DAG) that exists at three levels: Template, Definition, and Shader Tree. The Template represents the base logic from which shaders are generated, as well as base states and parameter definitions. It's here that the user specifies node statements by utilizing a library of pre-existing "operators". The material Definition just overrides the properties of the existing parameters, allowing the same template to be reused with different arguments/states. The Shader Tree is where the real magic happens.
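To make the Template/Definition split concrete, here's a minimal sketch of the override idea. All names here (`MaterialTemplate`, `MaterialDefinition`, the parameter names) are hypothetical illustrations, not the actual engine classes: the Template owns the default parameter values, and a Definition sparsely overrides just the ones that differ.

```python
# Hypothetical sketch: the Template holds default parameters; a Definition
# overrides a subset, so one Template can serve many materials.
class MaterialTemplate:
    def __init__(self, defaults):
        self.defaults = defaults  # parameter name -> default value

class MaterialDefinition:
    def __init__(self, template, overrides):
        self.template = template
        self.overrides = overrides  # sparse: only the parameters that change

    def resolve(self, name):
        # Use the override if present, otherwise fall through to the template.
        return self.overrides.get(name, self.template.defaults[name])

# Example: one template, two materials differing only in one parameter.
metal = MaterialTemplate({"specular": 0.8, "glossiness": 16.0})
rusty = MaterialDefinition(metal, {"specular": 0.2})

print(rusty.resolve("specular"))    # overridden -> 0.2
print(rusty.resolve("glossiness"))  # inherited  -> 16.0
```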

The tree is pretty straightforward except for the way in which it handles permutations. Essentially each level of the tree references a state, and each sibling node a state condition. A state might be 'Fog', and its conditions 'On' or 'Off'. The tree for this would be relatively simple. At run-time the tree is traversed using the current state of 'Fog' to determine whether to use the 'On' or 'Off' shader. Things get interesting when we introduce more than one permutation. For instance, let's add a 'Point Light' state with 3 conditions: '0 lights', '1 light', '2 lights'. For each state and all possible conditions, the tree is generated with a leaf node containing the actual procedurally generated shader (with all the accumulated state attributes/code). In this example we'd end up with a total of six shaders for all possible combinations of those state conditions.
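The enumeration above can be sketched in a few lines. This is not the engine's actual tree code, just an illustration of the combinatorics: every combination of state conditions gets one leaf, and the current render state selects exactly one of them.

```python
from itertools import product

# Hypothetical sketch: each state has a set of conditions; one shader leaf
# is generated per combination of conditions.
states = {
    "Fog": ["Off", "On"],
    "PointLight": ["0 lights", "1 light", "2 lights"],
}

# Enumerate every leaf the tree would generate.
leaves = [dict(zip(states, combo)) for combo in product(*states.values())]
print(len(leaves))  # 2 fog conditions x 3 light conditions = 6 shaders

# At run-time, the current state walks the tree to exactly one leaf.
def select_shader(current_state, leaves):
    for leaf in leaves:
        if leaf == current_state:
            return leaf
    raise KeyError("no shader generated for this state combination")

shader = select_shader({"Fog": "On", "PointLight": "2 lights"}, leaves)
```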

Now sure, with current shader models it's possible to build all that logic into an 'uber-shader' of sorts, but in my experience this is incredibly slow and inefficient. While a Set..Shader() call is not cheap (maybe half as bad as SetTexture()), conditional branching on a per-pixel level is, well, SLOW! A 720p screen resolution could potentially require 921,600 per-pixel conditional checks per render frame! By reducing the workload to the bare essentials via condition-specific permutations, it's possible to save a tremendous amount of fill-rate, leaving us free to do other interesting things.
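The back-of-the-envelope numbers behind that fill-rate argument (assuming a 1280x720 framebuffer, and a hypothetical draw-call count just for scale): an uber-shader pays for the branch once per pixel per frame, while a permutation system resolves the same decision once per draw call on the CPU.

```python
# Assumed 720p framebuffer: one conditional per pixel per frame.
width, height = 1280, 720
per_pixel_checks = width * height
print(per_pixel_checks)  # 921600

# With permutations, the branch is resolved per draw call instead
# (draw-call count here is a made-up example scene, not a measurement).
draw_calls = 500
per_draw_lookups = draw_calls  # one permutation-tree traversal each
print(per_pixel_checks // per_draw_lookups)  # ~1843x fewer decisions
```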

Personally, I think graph based shader editors get a little too much attention. The big push for them seems to be from people who believe that exposing shader code visually will allow technical artists to create more interesting materials than the little ol' graphics programmer could. However, in my humble opinion (and taking into account some experience I've had with the subject), giving a technical artist that level of control usually leads to unexpected performance issues down the line and really doesn't alleviate the complexity required to make a graphically stunning effect in a short time (that's what a good artist frontend like the "Definition" layer is for). The biggest reason I went in this architectural direction was for the ability to generate permutations, which is much easier when you can break up your code into logical chunks, as graph based shader systems do. Having said all that, I don't mean to downplay the importance of a node editor, since I really would prefer laying down nodes in a visual graph editor as opposed to adding them manually in a text file. That's actually my first priority after the GUI system is done, hehe.

Why are permutations so important? Anyone who's worked on a large enough project has experienced shader bloat to one degree or another. As an example, think of the shaders that light objects in a scene. Let's say you can light an object with up to 4 lights at once (optimally, with a 1-1 mapping). Well, you're probably going to need at least the basic light types: point, spot, and directional/planar. That already puts you at 12 shaders (+1 for non-lit). What if you wanted to add fog? Environment mapping? An ambient occlusion term? Parallax bumpmapping? It can get pretty wacky to account for all of these things! For some people it's not a big deal, but in my case, where I like custom-tailored shaders for unique effects, it can get pretty out of hand.
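The arithmetic of that bloat is worth spelling out. Under the assumptions in the example (3 light types, up to 4 lights, plus a non-lit case), each additional independent on/off feature doubles the shader count:

```python
# Shader-count arithmetic for the example above.
light_types = 3          # point, spot, directional/planar
max_lights = 4           # 1-1 mapping, up to 4 lights
base = light_types * max_lights + 1  # 12 lit variants + 1 non-lit = 13

# Each independent binary feature (fog, env mapping, AO, parallax)
# multiplies the total by 2.
binary_features = 4
total = base * 2 ** binary_features
print(total)  # 13 * 16 = 208 hand-written shaders without permutations
```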

So now, with a good solution nearly complete, I need to rebuild my shader library. The plan so far is to concentrate on a simple and elegant lighting solution that matches the stylized look of the game (remember those Invader Zim images?). Right now this means going back to Deferred Rendering. It's a fantastic algorithm despite being a little overhyped. It's much more efficient than your standard forward renderer, although saying that all your lighting is free (as I've heard from so many people) is ridiculous - you definitely pay the price for your light volumes and the per-pixel cost of evaluating all those lights (and if you want shadows, you'd better be willing to pay for those too). Despite this it really does allow you to display a very high level of quality at a reasonable price.

For my purposes I intend to use it to lay down a base layer as part of a two-tiered rendering pipeline. If an object doesn't meet the criteria for being lit by the Deferred Renderer, it continues down the pipe to the Forward Renderer. The main benefit of using the Deferred Renderer has to do with the way I do my non-photorealistic lighting. Specifically, I use something very similar to the G-Buffer in order to generate some nice outlines, so it makes sense to leverage this into a full lighting solution. I'll also be able to more easily implement things like Depth-of-Field and Glare/HDR, although MSAA becomes a challenge. The Forward Renderer is required so I can draw objects that don't fall into the additively lit category (which actually covers quite a few things). I won't get the free inking/shading with it as I would with the Deferred Renderer, but I'm hoping to find a solution that will work well for things like translucent lit objects.
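The routing logic of that two-tiered pipeline can be sketched as a simple partition. The object fields and criteria here are assumptions for illustration (the post doesn't spell out the exact tests), but the shape is the same: anything that qualifies for deferred lighting goes to the base layer, and the rest falls through to the forward pass.

```python
# Hypothetical sketch of the two-tiered pipeline: opaque, additively lit
# objects go to the deferred base layer; everything else (e.g. translucent
# lit objects) falls through to the forward renderer.
def partition_scene(objects):
    deferred, forward = [], []
    for obj in objects:
        if obj["opaque"] and obj["additive_lit"]:
            deferred.append(obj)   # gets G-Buffer lighting + free inking
        else:
            forward.append(obj)    # lit per-object down the pipe
    return deferred, forward

scene = [
    {"name": "hull",    "opaque": True,  "additive_lit": True},
    {"name": "canopy",  "opaque": False, "additive_lit": True},   # translucent
    {"name": "exhaust", "opaque": False, "additive_lit": False},  # glow fx
]
deferred_pass, forward_pass = partition_scene(scene)
```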

I wish I had some interesting shots to show all this off, but really it's just been a lot of engine work. The idea is to streamline the development process, so unfortunately this kind of feature doesn't lend itself well to screenshots. :-)

On the upside I do have some interesting things to show next time related to the terrain tools I've been working on. More to follow in the coming weeks, but until then, here's a teaser:
