
05-08-08 - 6

(I think this is all stuff that's common knowledge and shouldn't be any secrets, but if you feel like I'm spilling some kind of beans let me know and I'll stop)

Talking to people in Seattle got me excited about games again. It's a pretty interesting time. The big negative at the moment is how huge and complex games are, and how huge the teams have to be, which kind of sucks. On the other hand, games are still advancing very rapidly, and some big advances are going to be made in the next 10 years. We're finally getting to the point where we can realistically talk about making true dynamic simulations of complex worlds.

In contrast, at the moment there are basically two types of games :

1. "Deep, narrow" games that are simulations, but in very limited ways - the user can only do a few things and the environment is strictly limited in how it responds; this includes things like Halo as well as lots of old games like Mario etc.

2. "Broad, shallow" games that have lots of canned hacky interactions and try to create the illusion of a huge world you can interact with in lots of ways, but aren't true simulations and you can't do anything the designers didn't specifically code in; this includes things like GTA and the Sims as well as most of the classic j-RPG's and stuff like that.

Everyone is very excited about the continuing growth of CPU power as we move into the multi-core era. The problem is that as machines get more powerful we can do a lot more and run a lot more content, but if we keep making games the way we do now, that would require 10-100X as much art & design work, which means prohibitive schedules and team sizes. So we need a way to get tons more interaction without a lot more content creation, which of course means code.

The big exciting things coming up IMO :

Procedural interaction ; the N*M problem. We want to have N types of things in the world and M things the user can do, but we don't want to have to manually set up N*M special interactions - and of course the user can put objects together and they should respond to each other as well, which is N*N or even N*N*M, etc. This means that the way things work together needs to be more procedural. Currently in games, for me to be able to do a "pick up" I need to code or animate a different "pick up weapon", "pick up box", "pick up humanoid", etc. In the real world if I know how to do a "pick up" I can pick up anything.
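
To make that concrete, here's a toy sketch of the difference (made-up names, not any real engine's API) : each object carries a generic physical description, and one generic "pick up" works against that description instead of having a handler coded per object type.

#include <cstdio>
#include <string>

// Hypothetical sketch : one generic "pick up" verb driven by per-object
// properties, instead of hand-coded "pick up weapon" / "pick up box" /
// "pick up humanoid" handlers.

struct GrabInfo
{
    bool  canBeGrabbed;  // does the object expose a grip at all?
    float mass;          // used to choose a one-hand vs two-hand grab
    float gripPoint[3];  // where the hand attaches, in object space
};

struct Object
{
    std::string name;
    GrabInfo    grab;    // generic physical description, authored or derived
};

// One procedural interaction that works on anything which describes itself :
bool PickUp(const Object & obj)
{
    if ( ! obj.grab.canBeGrabbed )
        return false;

    const char * anim = ( obj.grab.mass > 20.f ) ? "grab_two_hand" : "grab_one_hand";

    printf("pick up %s : play %s, IK hand to (%.2f,%.2f,%.2f)\n",
        obj.name.c_str(), anim,
        obj.grab.gripPoint[0], obj.grab.gripPoint[1], obj.grab.gripPoint[2]);

    // ... attach to hand bone, transfer physics ownership, etc. ...
    return true;
}

int main()
{
    Object crate  = { "crate",  { true, 35.f, { 0.f, 0.5f, 0.f } } };
    Object pistol = { "pistol", { true,  1.f, { 0.1f, 0.f, 0.f } } };

    PickUp(crate);   // two-hand grab, with no crate-specific code
    PickUp(pistol);  // one-hand grab, with no pistol-specific code
    return 0;
}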

"sparse virtual geometry" ; importance-based object detail & existance. This is very vague and nobody knows really how to do it, but we know it's what we want to do. In fact, I and others have been talking about this since 1998 or so. I believe Carmack has been talking about it a lot recently. I should emphasize first off this is not about rendering performance, though rendering is part of it. The idea is that you want to have a world which has "virtual" geometry and objects of massive detail and uniqueness. We want to get away from instancing and just decorators on simplistic backgrounds, and actually have something like a city that's full of millions of unique objects all interactable. Of course you can't just have all those objects in the simulation all the time, so you want to page out stuff that's not important and down-res the distant stuff. Down-res means lower LOD, not just for rendering, but also for collision, AI, pathfinding, etc. etc. The overall goal is that you wind up with a constant performance system similar to the sparse virtual textures, with more detail where you need it.

Dynamic everything. Currently all complex games rely very heavily on having the basic structure of the world static or semi-static, which allows you to precompute lots of helpful things like AI paths, radiosity, collision acceleration structures, etc. People are doing semi-dynamic worlds at the moment mainly by allowing only small or specific canned changes to the static world and precomputing the effect of those changes. The goal in the future is of course to support dynamic everything, which means fully realtime lighting, PVS, etc. I actually don't think the rendering is the hardest part of this at all; things like packaging for paging, and AI are much bigger problems, which leads us to -
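
Just to illustrate the semi-dynamic approach people use today (a made-up sketch, not any particular engine) : chunk the world into tiles, keep the derived data per tile, and when geometry changes mark only the touched tiles dirty and rebuild them under a per-frame budget.

#include <vector>

// Hypothetical sketch : per-tile derived data that gets rebuilt incrementally
// when geometry changes, instead of a monolithic offline bake.

struct Tile
{
    bool dirty;   // geometry in this tile changed since the last rebuild
    // ... derived data lives here : nav polys, light probes, collision tree ...
};

struct World
{
    std::vector<Tile> tiles;

    // Called by the physics/gameplay code whenever something moves.
    void OnGeometryChanged(int tileIndex)
    {
        tiles[tileIndex].dirty = true;   // just mark it; don't rebuild inline
    }

    // Called once per frame with a budget so the rebuild cost is amortized.
    void RebuildDirtyTiles(int maxTilesPerFrame)
    {
        int rebuilt = 0;
        for ( size_t i = 0; i < tiles.size() && rebuilt < maxTilesPerFrame; i++ )
        {
            if ( ! tiles[i].dirty )
                continue;

            // ... re-rasterize the nav mesh, refit the collision tree,
            //     relight the probes for this tile only ...

            tiles[i].dirty = false;
            rebuilt++;
        }
    }
};

That gets you semi-dynamic worlds; "dynamic everything" means going further, so that nothing relies on a bake at all.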

AI that can understand its environment. Currently nobody has AI that can actually understand geometry. The designers do lots of tagging, or some complex tool runs a preprocess to analyze the world and figure things out about it, and the result is some simple tags like "you can walk here" or "this is a good place to shoot from" etc. There are two reasons this is bad. One, of course if you have totally dynamic geometry you can't do this. Two, the amount of work it takes for designers to tag all this stuff is pretty massive and we'd like to eliminate it. This is a huge area of work that we'll probably only begin and that could continue for a hundred years; in the extreme case it includes things like strategic reasoning about environments. In the short term we'll probably have to continue to rely on markup for the macroscopic issues (like "this is a flanking area", "this is a good first retreat area") but we'd like to advance to dynamic smart AI for microscopic issues (like "I can walk here" or "I can take cover here").
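
As a toy example of the microscopic end (totally made up, with spheres standing in for real collision geometry) : a "can I take cover here?" query can be answered from the live world by checking that a crouched sight line to the threat is blocked while a standing one is clear, instead of reading a designer-placed cover marker.

#include <vector>
#include <algorithm>

// Hypothetical sketch : answer "is this a cover spot?" from live geometry
// instead of designer markup. Spheres stand in for real collision geometry.

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 c; float r; };

// Does the segment a-b pass through the sphere?
static bool SegmentHitsSphere(const Vec3 & a, const Vec3 & b, const Sphere & s)
{
    Vec3  d  = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3  m  = { a.x - s.c.x, a.y - s.c.y, a.z - s.c.z };
    float dd = d.x*d.x + d.y*d.y + d.z*d.z;
    float t  = - ( m.x*d.x + m.y*d.y + m.z*d.z ) / std::max( dd, 1e-6f );
    t = std::min( std::max( t, 0.f ), 1.f );                // clamp to the segment
    Vec3  p  = { a.x + t*d.x, a.y + t*d.y, a.z + t*d.z };   // closest point to the center
    float dx = p.x - s.c.x, dy = p.y - s.c.y, dz = p.z - s.c.z;
    return dx*dx + dy*dy + dz*dz <= s.r*s.r;
}

static bool LineBlocked(const Vec3 & from, const Vec3 & to, const std::vector<Sphere> & world)
{
    for ( size_t i = 0; i < world.size(); i++ )
        if ( SegmentHitsSphere( from, to, world[i] ) )
            return true;
    return false;
}

// Cover = the crouched sight line to the threat is blocked, but the standing
// sight line is clear, so the AI can hide and then pop up to shoot.
bool IsCoverSpot(const Vec3 & spot, const Vec3 & threat, const std::vector<Sphere> & world)
{
    Vec3 crouchEye = { spot.x,   spot.y + 1.0f, spot.z   };
    Vec3 standEye  = { spot.x,   spot.y + 1.7f, spot.z   };
    Vec3 threatEye = { threat.x, threat.y + 1.7f, threat.z };

    return LineBlocked( crouchEye, threatEye, world ) &&
         ! LineBlocked( standEye,  threatEye, world );
}

The point is just that the answer comes from whatever the geometry is right now, so it stays correct when a wall gets knocked down.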

This is all pretty pie in the sky; there are a lot of less insane things that are also going to be happening, or maybe we'll just get some baby steps in these directions.

Also, this is all very exciting tech, but none of it has much to do with what actually makes games good or bad in the short term. The problems with games in terms of the user experience are the same as ever :

1. Not enough time spent on all the little details that annoy the user; stuff like UI usability, controls, load time & paging, good tutorial integration, etc.

2. Shitty stories, boring characters, uninspired worlds, gameplay that doesn't integrate the play, the user, and the world/backstory, generic uncreative art designs, schedule cuts that wind up wrecking the world/story/art.

3. Generic repetitive gameplay due to lack of game design vision and lack of risks taken in prototyping/prepro.

To some extent these things can be improved by technology, largely by making content easier to create, more foolproof, and faster to iterate. The goal of game content creation should be that the artists/designers can just try anything they want, instantly see it in the game, and have it all "just work", so they don't have to know a bunch of weird rules about how to make good content and don't have to worry about manually tagging up a bunch of junk.
