12-06-04 - 1

A couple of follow-ups on my talk at game-tech:

First and mainly, I think I made a mistake in presenting the reason for robustness and decoupling. Robustness is the idea that the game never goes down, even under horribly broken use, and decoupling is the idea that when someone horribly breaks one system, the other systems keep running so people can still work. These aren't just important for small teams or with junior people; they're always important. We achieve these things primarily by making the engine and code resilient to even fatal errors. There are other ways to meet these goals, namely Branches (source-control branches, so people work in isolation until their work is stable, then merge to the main tree) and Build Escrow (in which the coder's build first goes through test approval before going to the content team). Branches and Build Escrow are both good, but they only work once the game is reasonably well established. We were building the code base from scratch, and rapidly iterating on design trying to hit milestones and demos, so it was frequently important to be able to code up a feature in the morning, deliver it to design, and have it done by evening.

For larger teams, the ideas of robustness and decoupling are even more important. This is part of the idea of overall team productivity. If a programmer takes the time to make his code extra safe, maybe it costs him 5% or 10% more time. If he does not, and he checks in code that breaks the build for the entire company, he's costing 100% of the work of maybe 50 other people. That's a catastrophic loss of productivity; even if it only happens once every hundred days, it's a disaster. Most games these days are "design limited" - that is, the things that really prevent you from shipping a great game on time are in design - so it's silly to save coder time at the potential cost of a lot of designer time.
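To make the "resilient to even fatal errors" idea concrete, something like this, hypothetically (Texture, GetTexture, and the placeholder are made-up names for illustration, not our actual API) - a broken content reference logs loudly and falls back to a placeholder instead of taking the game down, so one person's broken asset doesn't stop everyone else's work:

    // Hypothetical sketch - Texture, GetTexture, etc. are made-up names.
    // A missing texture is an error, but never a fatal one: log it and
    // return a placeholder so the game keeps running.

    #include <cstdio>
    #include <map>
    #include <string>

    struct Texture { /* pixel data, etc. */ };

    static Texture g_placeholderTexture;               // e.g. a garish checkerboard
    static std::map<std::string, Texture*> g_textures; // filled in by the loader

    // Never returns NULL, so callers can't crash on a bad content reference.
    Texture * GetTexture(const std::string & name)
    {
        std::map<std::string, Texture*>::const_iterator it = g_textures.find(name);
        if ( it == g_textures.end() || it->second == NULL )
        {
            std::fprintf(stderr, "ERROR: missing texture '%s', using placeholder\n", name.c_str());
            return &g_placeholderTexture;
        }
        return it->second;
    }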

What are the disadvantages of a clean, self-protecting C++ coding style? Well, the compile times are slightly longer, but our build is around 10 minutes and Halo 2's is around 7 minutes, so the difference is not very big. Also, that has more to do with arranging your headers well; we're currently sloppy about that and could do much better at hiding implementations. Cleaner separation of modules and opaque interfaces around whole systems would help that immensely. A related problem is that things like smart pointers and classes used in scopes force you to put some things in the headers; you can't have a pure C-style hidden interface without doing a lot of work with pimpls and such. A lot of people think it takes too much time to code this way; that's just not the case. Once you get used to it, the additional time needed is totally negligible, and is more than made up for by the time savings of having nice templates and classes that automate so many operations for you. The only two real disadvantages I know of are: 1) when you hire new people, you have to teach them your base classes and your system - ours forms a sort of meta-language on top of C++, mostly enforced through compiler errors, but there are some things you just have to know to do; 2) the protections seem to make people lazy, both in code and content - the protections are supposed to be in addition to proper testing and good algorithms, not instead of them.
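For reference, the pimpl thing looks like this - a minimal sketch with made-up names (Mixer is not a real class of ours). The implementation is completely hidden from the header, but you pay for it in boilerplate, which is exactly the work the smart-pointer-in-the-header style saves you:

    // Mixer.h - minimal pimpl sketch; "Mixer" is a made-up example class.
    // Clients see only this; the implementation is completely hidden.
    class Mixer
    {
    public:
        Mixer();
        ~Mixer();
        void Play(int soundId);
    private:
        struct Impl;   // defined only in Mixer.cpp
        Impl * m_impl; // a raw pointer; a smart pointer here would pull
                       // its template definition into this header
    };

    // Mixer.cpp :
    #include <vector>

    struct Mixer::Impl
    {
        std::vector<int> playing; // private details, invisible to clients
    };

    Mixer::Mixer() : m_impl(new Impl) { }
    Mixer::~Mixer() { delete m_impl; }
    void Mixer::Play(int soundId) { m_impl->playing.push_back(soundId); }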

A little thing - the hierarchical allocation parser can obviously be used on things other than memory size; you can run it on allocation count, CPU usage, vert count, etc. It's a nice way to track anything and figure out where it's coming from. This is especially nice with a "long frame tracker". To do the long frame tracking, you run the tracking stats every frame and reset them to zero between frames. Then, as soon as you see a frame you consider long, e.g. longer than 1/20th of a second, you log the stats for that frame.
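In sketch form (all names here are illustrative, not our actual code), the long frame tracker is just this - per-frame counters that get reset every frame and dumped whenever a frame blows the threshold:

    // Sketch of a long frame tracker; all names are made up.
    #include <cstdio>

    struct FrameStats
    {
        int      allocCount;
        unsigned allocBytes;
        int      vertCount;

        void Reset() { allocCount = 0; allocBytes = 0; vertCount = 0; }

        void Log(double seconds) const
        {
            std::printf("LONG FRAME: %.1f ms, %d allocs (%u bytes), %d verts\n",
                        seconds * 1000.0, allocCount, allocBytes, vertCount);
        }
    };

    static FrameStats g_frameStats;

    const double c_longFrameSeconds = 1.0 / 20.0; // anything slower is "long"

    // Call once per frame from the main loop; the timer itself is assumed.
    void EndFrame(double frameStartSeconds, double frameEndSeconds)
    {
        double elapsed = frameEndSeconds - frameStartSeconds;
        if ( elapsed > c_longFrameSeconds )
            g_frameStats.Log(elapsed);  // dump what this frame spent its time on
        g_frameStats.Reset();           // zero the counters for the next frame
    }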
