02-28-11 - Game Branching and Internal Publishing

Darrin West has a nice post on Running branches for continuous publishing. Read the post, but basically the idea is you have a mainline for devs and a branch for releases.

At OW we didn't do this. Whenever we had to push out a milestone or a demo or whatever, we would go into "code lockdown" mode. Only approved bugfixes could get checked in - if you allow dev work to keep getting checked in, you risk destabilizing the build and introducing new bugs.

This was all fine; the problem is that you can lose dev productivity during this time. Part of the team is working on bug fixes, but the rest aren't, and they should be proceeding with the features you want after the release. Sure, you can have them keep files checked out on their local machines and do work there, and that works to some extent, but if the release lockdown stretches out for days or weeks that's not viable, and it doesn't work if people need to share code with each other, etc.

If I had it to do over again I would use the dev_branch/release_branch method.

To be clear: generally coders work on dev_branch; when you get close to a release, you integ from dev_branch to release_branch. Now the artists & testers get builds from release_branch, and you do all bug fixes on release_branch. The lead coder and the people who are focused on the release get on that branch, while other devs who are doing future work can stay on dev_branch, unaffected by the lockdown.
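The flow can be sketched with git (purely as an illustration of the model - the branch names are the post's, but teams of that era, including the commenters below, used Perforce):

```shell
#!/bin/sh
# Sketch of the dev_branch / release_branch model, using git for illustration.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git checkout -q -b dev_branch
git config user.email demo@example.com; git config user.name demo

# Normal development: everyone commits to dev_branch.
echo v1 > game.c; git add game.c; git commit -qm "feature work on dev"

# Near a release, "integ" from dev_branch to a release branch:
git branch release_branch dev_branch

# The release team does all bug fixes on release_branch only:
git checkout -q release_branch
echo fix >> game.c; git commit -qam "bugfix for milestone"

# Meanwhile dev_branch keeps taking future work, unaffected by lockdown:
git checkout -q dev_branch
echo v2 > future.c; git add future.c; git commit -qm "future feature"

# After shipping, merge the release bugfixes back into dev:
git merge -q --no-edit release_branch
```

The key property is the last step: bugfixes made under lockdown flow back into the mainline, so nothing is lost when the release branch is retired.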

The other question is what build is given to artists & designers all the time during normal development. I call this "internal publication" ; normal publication is when the whole team gives the game to an external client (the publisher, demo at a show, whatever), internal publication is when the code team gives a build to the content team. It's very rare to see a game company actually think carefully about internal publication.

I have always believed that giving artists & designers "hot" builds (the latest build from whatever the coders have checked in) is a mistake - it leads to way too much art team down time as they deal with bugs in the hot code. I worked on way too many teams that were dominated by programmer arrogance that "we don't write bugs" or "the hot build is fine" or "we'll fix it quickly" ; basically the belief that artists' time is not as important as coders' time, so it's no big deal if the artists lose hours out of their day waiting for the build to be fixed.

It's much better to have at least a known semi-stable build before publishing it internally. This might be once a day or once a week or so. I believe it's wise to have one on-staff full time tester whose sole job is to test the build before it goes from code to internal publication. You also need a very simple automatic rollback process if you do accidentally get a bad build out, so that artists lose 15 minutes waiting for the rollback, not hours waiting for a bug fix. (Part of being able to roll back means never writing out non-recreatable files that old versions can't load.)
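One cheap way to get that fast rollback is to never overwrite a published build: each stable build lands in its own dated directory and the art tools follow a "current" symlink, so rolling back is just re-pointing the link. A minimal sketch, with entirely hypothetical paths:

```shell
#!/bin/sh
# Hypothetical internal-publish layout: one directory per stable build,
# with a "current" symlink that the art team's tools always follow.
set -e
root=$(mktemp -d)
mkdir -p "$root/builds/2011-02-27" "$root/builds/2011-02-28"

# Publishing a new build = pointing "current" at the newest directory.
ln -s "$root/builds/2011-02-28" "$root/current"

rollback() {
    # Find the second-newest build and re-point the symlink at it.
    prev=$(ls -d "$root"/builds/* | sort | tail -n 2 | head -n 1)
    ln -sfn "$prev" "$root/current"   # -n: replace the link, don't descend
}

# Bad build shipped? One command and artists are back to work:
rollback
readlink "$root/current"
```

Because the old build directory is still intact on disk, the rollback takes seconds, which is exactly the "15 minutes, not hours" property the post asks for.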

Obviously you do want pretty quick turnaround to get new features out; in some cases your artists/designers don't want to wait for the next stable internal publication. I believe this is best accomplished by having a mini working team where the coder making a feature and the designer implementing it pass builds directly between each other. That way if the coder makes bugs in their new feature it only affects the one designer who needs that feature immediately. The rest of the team can wait for the official internal publication to get those features.


Anonymous said...

We got internal publishing fairly right on the last project I worked on. The artists got a special build with cheats activated and certain play restrictions loosened, menu to select levels (etc.), and a model viewer that our Maya plugin could talk to directly.

It was actually pretty easy to sort out, once we'd actually decided to do it, and making the build was basically one part-time coder-morning per week (mostly waiting for the compiler).

No branches (in fact we generally didn't use them, more fool we) - this was just about manageable for the internal build, since problems weren't generally a big deal and the programmers used the art tools too. For the real builds, it sucked in all the ways you stated.

We had QA check the art build, but not having Maya licences they couldn't do much more than play the game part. That was useful enough, though, because it meant we could give the producer/management/etc. the art build, and they could play the game. Sad to say but on most projects I've worked on they've usually had to make do with the last milestone build (= out of date) or whatever one of the programmers had on their machine at the time (= random and full of programmer-specific tweaks).

Anyway, it looks like I went on more than I was expecting, so I hope you liked my cool story. The game still utterly sucked to work on, looking back. Half-OK art tools process can't solve everything ;)

Carsten Orthbandt said...

We used three branches for this, pretty much as you describe, with great success.
There was a "dev" branch that was supposed to work at all times but of course got broken now and then. This was where the coders hacked away.
Then there was a "rel" branch that only had stable code. This was given to art and content teams and QA. If something went bad in this branch you could be pretty sure it was bad content (e.g. level scripts).
Finally, there was a public branch that would represent the most recent public version.
As a refinement we generally kept all public branches around and named them after their purpose (E32011Demo and the like).

This worked like a charm with virtually no lock-down time whatsoever. Perforce made this model pretty easy.
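[For readers unfamiliar with Perforce: the dev-to-rel integ path in a setup like Carsten's is typically captured in a branch spec, something like the following - depot paths here are made up for illustration, not taken from his project:]

```
Branch: dev_to_rel
View:
        //depot/game/dev/...    //depot/game/rel/...
```

With a spec like that, promoting code is `p4 integrate -b dev_to_rel`, then `p4 resolve -am` and `p4 submit`, which is why the model needs so little lockdown time.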
