Saturday, January 27, 2007

Terrain Prototype

I recently read an article on a study which found that the more educated you are, the longer it takes to answer simple questions. Forgive me for not having the link handy, but I'll summarize: the research concludes that more educated people have more points of reference to base their answers on; in other words, more possibilities.

As an example, a (relatively) simple question to ask an American is "What battle represented the last major offensive by the Confederacy in the Civil War?" The average person probably doesn't know this, but may have at least heard of Gettysburg, probably the most famous Civil War battle. A college graduate with perhaps a Master's in History will have a lot more perspective on the matter. In mere fractions of a second his mind will go through Antietam, Fort Sumter, Bull Run, etc. A simple computer-related analogy immediately comes to mind: a fragmented hard drive.

Unfortunately there's no Microsoft tool for defragmenting or re-indexing your brain (...yet), but it can definitely help to just step back, take a break, and come back with a fresh perspective. I personally use a fresh perspective to battle false assumptions and an over-abundance of information on a subject that clouds my ability to make a clear decision. Right now I'm taking a break from my Zone system, for instance, so I can come back when I'm ready, with a clear head, to tackle the problem again. Unfortunately it's been taking a while for this to happen...

So last weekend I did a hardcore coding binge and fleshed out a terrain prototype for Star Trader. I hadn't made a terrain engine in a long time, so I got caught up in a lot of the simple details like grid creation, normal/tangent generation, index ordering (for triangle strips), and grid resampling (I'm currently using bicubic filtering to overcome 8-bit heightmap artifacts). Texturing terrain was also something I didn't have much experience in, specifically procedural texturing based on terrain properties. What I ended up doing was implementing a very simple real-time per-pixel "terrain splatting" algorithm that takes a number of texture layers (like grass, rock, snow) and blends between them based upon a pixel's altitude and slope. It looks pretty good, but the algorithm I'm using right now is pretty rigid and doesn't allow for much alteration, so I have plans to improve it. I also added detail normal mapping, which looks great when you get real close to the terrain. Here's a teaser to show what I've accomplished thus far:
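In rough C++ terms, that kind of altitude/slope-driven blend looks something like this. The smoothstep thresholds are illustrative placeholders, not the values the prototype actually uses:

```cpp
#include <algorithm>
#include <cmath>

// Blend weights for three texture layers (grass, rock, snow), driven by a
// pixel's altitude (0 = sea level, 1 = max height) and slope (0 = flat,
// 1 = vertical). Thresholds below are made up for illustration.
struct SplatWeights { float grass, rock, snow; };

static float smoothstep01(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

SplatWeights computeSplatWeights(float altitude, float slope) {
    // Snow fades in above a height threshold; rock wherever the slope is steep.
    float snow = smoothstep01(0.7f, 0.85f, altitude);
    float rock = smoothstep01(0.4f, 0.6f, slope);
    // Steep faces stay rocky even at altitude; grass takes whatever remains,
    // so the three weights always sum to one.
    snow *= 1.0f - rock;
    float grass = 1.0f - snow - rock;
    return { grass, rock, snow };
}
```

In the shader itself the three weights would simply scale the corresponding texture samples before summing them.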

Like I said, it needs a lot of improvement, but I think it was a worthy first attempt. For comparison, here are some offline renders I made a number of years back for the Star Trader planet surfaces, before I decided to convert to realtime terrain rendering.

The terrestrial/lush planet surface:

And the desert planet surface:

Comparatively, the results aren't that different, and I'm pretty sure I'll be able to achieve similar results once I get a sky and some aerial perspective in there.

One of my main goals for the terrain renderer was to allow for very high polygon throughput. That shot, for instance, is pumping out over 1 million triangles brute force at about 60 frames per second on a Radeon X800 running at 1280x1024 with 6x MSAA. To achieve this kind of performance it really helped that I used triangle strips to keep the number of vertex indices down to a minimum. Another thing I made sure to do was keep the vertex size as small as possible. Right now it's at a minuscule 24 bytes, used just for the vertex position and normal. Everything else, like the tangent vectors and texture coordinates, is synthesized computationally on the GPU.

I know this may sound counter to traditional thought, but the main reasoning behind this is that while graphics cards have greatly improved their ability to compute complex data, bus speeds for transferring data to the GPU haven't really improved at the same rate. By maintaining a "skinny" vertex size, I'm able to improve cache coherency as well as maintain instruction parallelism (since fewer vertex fetches are required). This is a much bigger deal on the next-gen consoles, but is still very relevant in the PC world as well.

Now that I'm relatively happy with the performance I plan on moving on to sky rendering and aerial perspective, probably using the technique described by Preetham in "A Practical Analytic Model for Daylight", which is pretty easy to implement with incredible results. After this I want to finish up the per-pixel splatting, then I'll start integrating the prototype code into the main Star Trader codebase.
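For reference, the heart of Preetham's model is the five-parameter Perez luminance distribution; a minimal sketch, assuming the A through E coefficients are derived from turbidity as the paper describes (the sample values below are placeholders):

```cpp
#include <cmath>

// Perez sky luminance distribution used by Preetham's analytic daylight
// model:
//   F(theta, gamma) = (1 + A*exp(B / cos(theta)))
//                   * (1 + C*exp(D * gamma) + E*cos^2(gamma))
// where theta is the angle between the zenith and the viewed sky point,
// and gamma is the angle between that point and the sun. The A..E
// coefficients are functions of atmospheric turbidity (see the paper).
float perez(float theta, float gamma,
            float A, float B, float C, float D, float E) {
    float cg = std::cos(gamma);
    return (1.0f + A * std::exp(B / std::cos(theta))) *
           (1.0f + C * std::exp(D * gamma) + E * cg * cg);
}
```

The luminance toward a sky direction is then the zenith luminance scaled by F(theta, gamma) / F(0, theta_s), where theta_s is the sun's zenith angle.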

BTW, if you have any experience with Terragen or World Machine and would like to help with Star Trader, drop me a line. I'll be needing terrain for the variety of planet types which include Terrestrial (earth like), Ice, Water (underwater), Desolate/Barren, Desert, Mountainous, Volcanic, and Forest/Swamp.



Patrick said...

I'm very impressed by your command of procedural content techniques. If I were producing a game with a lot of spatial content requirements, I'd hire you in a second. I'm curious what you think of procedural content coming from the other direction, where the content is social instead of spatial. I'm referring to drama engines like Storytron or Facade. I spoke on the subject at GDC recently, you can get the slides here:

Patrick said...

Sorry, botched that, forgot to nest the link.

Aurelio said...

Although I'm not familiar with Storytron, I have played around with Facade quite a bit and while it represents a very interesting research project, I think "drama engines" still have a long way to go.

In the end the problem is that we're still trying to use traditional a.i. programming techniques and scripting to replicate very complex human behaviors. Even the most interesting chat bots can't pass the Turing test (yet), and we expect synthetic actors to replicate the full gamut of emotions for a grown adult? I think at present that's unrealistic, and while it's possible to stage scenarios with situations the "director" can anticipate (like in Facade), creating fully dynamic scenes with virtual actors that "feel" is a very difficult task to accomplish.

The one thing we're getting better at is having the computer interpret what the human character is trying to convey. It's just the response that needs some work right now. Nintendogs has done this pretty well; maybe the next step is a simulation with a child (whose behavior is much more predictable than an adult's).

What I would love to see some day is a reproduction of the old west holodeck program from Star Trek: TNG. When we can get to a state where that's possible, I think we'll finally be able to offer players an interesting interactive experience.