By the Books: Solid Software Engineering for Games

GDC roundtable by Noel Llopis and Brian Sharp



Thanks to everybody who attended the roundtable. The sessions were packed, and the participation was great (about 120 people across three roundtables, with some people attending multiple sessions). Here’s a summary of what was covered in all three sessions.


We have tried to include links to most of the relevant web sites and references. If you have any other great links that you’d like us to include, email me at and I’ll add them for everybody’s benefit.



Resources

Several people asked about resources dealing with software engineering. These are some of the resources that were proposed:

·        Software Project Survival Guide by Steve McConnell. Microsoft Press 1997. A great book to get started. If you think your project is in trouble and you need to change things now, this is the book for you. The subtitle is amusingly appropriate: How to Be Sure Your First Important Project Isn't Your Last.

·        Rapid Development by Steve McConnell. Microsoft Press 1996. The companion book to the Survival Guide. This book goes into detail about why you should be doing something, and explains the different options. One particularly interesting feature is that it covers many different best practices, and for each one it gives the probability of success, the difficulty of implementation, and so on.

·        After the Gold Rush by Steve McConnell. Microsoft Press 1999. Unfortunately out of print. Very appropriate to the game industry (maybe more so than to the software industry as a whole). Read it as our gold rush is ending (or has ended already?).

·        The Pragmatic Programmer by Andrew Hunt and David Thomas. Addison-Wesley 1999.

·        Large-Scale C++ Software Design by John Lakos. Addison-Wesley 1996. This is not a light read: lots of dense information, and it rewards re-reading. Particularly interesting is the theme throughout the book of reducing coupling and dependencies (both logical and physical). Are your compile times too long? This is the book for you.

·        Extreme Programming Explained by Kent Beck. Addison-Wesley 1999.

·        Software Development magazine. Deals with software development in general. Lately there have been lots of articles on agile methodologies.

·        Joel on Software. Random musings on software development. Controversial and amusing.

·        Jamie on Game Development. Like Joel’s site, but specifically about game development.


And last but not least, there’s the sweng-gamedev mailing list. Mail to join. It deals with software engineering in the game industry in particular, and it’s very much in the same spirit as the GDC roundtable.



Languages

This was an easy one. Almost everybody was using C++, with three people using straight C, one person straight assembly (yes, it was a PC title!), and three using Java. This is definitely a change from other years, when straight C was much more prevalent.


Of those using C++, only about 20 people were using the STL. Interestingly, about half the people using the STL were doing PC development and the other half console development. Only a few people had written their own allocators (and most of those were doing console development).


STLport was mentioned as a good cross-platform version of the STL that can be easily ported to work on some consoles. One person went as far as rewriting all of the STL from scratch to suit their purposes better.



Development methodologies

The overwhelming majority of the participants were using “code and fix” (a.k.a. hack and slash). All of them wanted to learn what other people were doing, with the intention of changing.


One person was successfully using the Rational Unified Process (RUP) (and yes, it was on a game project). Nobody else was using anything close to a “heavy” process.


A few other people (5%) were using something they would describe as the waterfall or iterative waterfall model.


Agile methods/Extreme programming (XP)

Extreme programming drew a lot of discussion in all the roundtables, so it gets its own section instead of being lumped in with the rest of the methodologies.


Some XP links:





Only 5% (4 out of 80) of the people were doing XP to some extent. Most seemed familiar with the XP concepts though, and several people were intrigued by it. Perhaps one of the barriers to XP is the “all or nothing” commitment that some people claim it requires.


Interestingly, several other people were doing pair programming outside of XP (although not all the time). Interaction styles ranged from the “paper monkey” approach (you just need to say your thoughts out loud to anybody) to fully interactive programming between equal partners who swap drivers.


Some of the major advantages mentioned about XP were:

·        Dissemination of knowledge within the team

·        In particular learning IDE and debugging techniques

·        Mentoring less experienced programmers

·        It forces people to check their egos at the door


Process change

Everybody agreed that they wanted to change their development process to one extent or another. The main question was, how exactly do you go about changing it?


Making small changes one at a time was suggested as a good way to ease into a new process. Others suggested trying a technique for a short period of time and seeing how it works. One of the major problems is people’s expectations: if someone is convinced that a certain technique is going to fail, it most likely will. And if it fails for some other reason, the new technique will be quickly blamed. The opposite is also true for people convinced that something is going to work.


Another phenomenon that can hamper process change is the “boy who cried wolf” phenomenon.  An employee who convinces his coworkers to try a new process will lose credibility in their eyes if the process is not successful.  Since it’s hard to know ahead of time whether a process will succeed or not, this poses a difficult problem to the programmer at a company resistant to change.  One attendee said he had luck promising his coworkers that they could drop a new process if it didn’t yield results after two weeks.  Others countered, pointing out that some good processes take longer than two weeks to show results.


The general consensus seemed to be that the processes easiest to implement are those that can be broken into small chunks and slipped unobtrusively into the status quo. Coworkers with a conservative mentality about process – “what I’m doing now seems to work, why change to something that might not?” – can be won over if they begin using a new process without realizing it, and are later made aware of the change. For example, to institute pair programming, one attendee made a point of stopping by every coworker’s office periodically and asking for programming help on something. The resulting productivity boost made it hard to argue later against formalizing the process, and so it stuck.


One thing that everybody agreed on is that mandating a new process from above just doesn’t work. People have to buy into it. Peer pressure is a good way to get everybody on the same wavelength once a process has started.


Code reviews can be used to make sure everybody is doing things correctly (although that assumes you’re doing code reviews—more on that later).


This is a very company culture-dependent thing. Some companies are always evaluating their development process and trying to improve it (that sounds like extreme programming applied at a higher level).


Finally, somebody mentioned the role of quality in the development process, and how higher efficiency can be achieved through higher quality. This is related to the broken-windows principle from The Pragmatic Programmer.


Coding standard

Only about 50% of the attendees had anything resembling an official coding standard in their companies, and only about half of those actually followed them. Even fewer enforced them.


The Thursday session was an exception only insofar as 90% of the attendees had a coding standard at their company.  Nonetheless, only 50% of those said it was enforced to any substantial degree, so the implication is the same.


This was a topic where there was a lot of division. Some people felt it was a necessary requirement for effective team development (team being the key word). Others thought it was a waste of time. The topic of automated formatting tools was brought up, but other than the default IDE formatting, nobody was using a specific tool.


It was mentioned that it’s important that the coding standard not be too restrictive, and that it address the main issues that make code more consistent and readable (naming conventions, architectural issues).


One side effect of using a coding standard that was brought up is that it might help rein in people’s egos. Coming into a new project and having to change your coding style could set a good precedent for adopting new ideas.


Design

Only about 17-20% of the people were doing any sort of design before jumping into coding. Of those, most used UML in one form or another. Some did their rough class layout that way; others used it mostly for its sequence diagrams.


Some people went all the way and used tools that would generate header files to start from, others just wanted the diagramming capabilities of the tools, and others would simply sketch some ideas on a whiteboard and move on. People usually did not try to keep their UML diagrams up to date as implementation progressed, using them mostly as a design tool.


Some of the UML tools that people used were:

·        Visio. Just for diagrams.

·        ArgoUML. Free, open-source, ostensibly for code generation, but it currently only supports Java.

·        Rational Rose. Expensive. Some people didn’t have good results with it.



Documentation

Another controversial topic, and one that nobody is happy with. A few people said that all they wanted was self-explanatory code. Everybody else wanted documentation to some extent, but nobody was happy with how it worked out.


The main problem is the documentation getting out of sync with the implementation itself. To address that, people try to put all the documentation as comments in the code itself. Even though that helps, it can also fall out of date as soon as developers feel the pressure. One company had a dedicated person (a technical writer) to keep the documentation up to date.


Most everybody had some sort of design document at the start of a project (although some people didn’t!). Only a few people (10-15) actually wrote some kind of technical design document covering the major technologies they would go after, major risks, proposed solutions, etc. As with code documentation, these documents quickly get out of date. However, some people felt they served their purpose early in the development cycle, and that maintaining them would be a waste of time.


One good tool many people suggested for documentation is Doxygen. It’s free, cross-platform, and extremely powerful. Even if you don’t write a single comment, it will still generate some form of program-structure documentation, hyperlinking variables to their declarations, etc. Over half the attendees of the Thursday session used Doxygen, many of them for very different purposes; as a tool, it seems to solve many problems well.



Testing

Everybody has some form of QA (Quality Assurance) team. But how many people are actually writing test programs, especially unit tests for specific modules? Not many: only about 18% of the attendees. Of those, not everybody runs them periodically with a script. Only a few attendees said they wrote automated tests – tests that run every time the executable is run.


Some people just write custom test programs. Others use CppUnit, a generic unit-testing framework that makes it easy to create and run pass/fail unit tests for C++ classes. One team found it too restrictive, so they built their own unit-testing framework using their FUBI technology to easily export functions for testing.


Several people brought up that testing in games is hard. There are some things you just can’t test for (is it fun? does it look good? are the units balanced?). So testing is usually limited to some parts of the code, especially the underlying technology libraries (math, collision detection, physics, etc).


Attendees acknowledged that unit tests are a hard process element to make “fun.” Furthermore, it’s hard to break testing into small pieces for adoption, and it’s certainly not obvious how a programmer might get his coworkers to start writing tests without realizing it. Everyone who used unit tests agreed that they helped quite a bit, but those people were few and far between – a testament to the difficulty of adoption.


Tools: Source control

Unlike other years, just about everybody was using source control. The only person who wasn’t using it had a team of three people.


Most people, however, were using only the most basic functionality of source control: preventing people from overwriting previous versions, and keeping a history of past versions. Only four people were doing branching and merging when releasing stable versions or working locally. None of those people were using SourceSafe either.


The source control programs that people were using were:

·        Visual SourceSafe. Has gotten some bad press, but some people were using it without any problems. Binary assets are a bit more iffy though. Not very fast, and branching/merging support is terrible. About half the attendees were using SourceSafe.  On Thursday, roughly 80% were using VSS.  Only 10% of those using it said they liked it.

·        CVS. Offers a different model than SourceSafe, and has quite a following. Free is always good too. Roughly the other half were using CVS.  On Thursday, 10% were using CVS, and all said they liked it.

·        Perforce. Not free, but very robust and fast. Good branching support. Only four or five people were using it.  On Thursday, 10% were using Perforce, and all said they liked it.

·        BitKeeper. Nobody was using it, but it was brought up as an alternative.


(There has recently (as of 4/1/2002) been an extensive discussion of source control products on the sweng-gamedev mailing list.  Those interested would be well advised to browse the archives.)


Tools: Defect tracking

Everybody agreed that having a tool for defect tracking was essential for the project, especially towards the end. Several people mentioned that it was particularly useful to have a defect tracking database that the publisher can also access to enter bugs directly.


Unlike with other tools or practices, there was a huge variety of tools being used, with no clear winner. These are some of the tools mentioned:

·        PRTracker

·        Perfect Tracker

·        Rational ClearQuest

·        Filemaker Pro

·        Bugzilla

·        FogBUGZ

·        MS Access

·        SourceSafe (?)

·        Intranet web board (for small teams only)


Tools: Other

In addition to all the tools listed so far, these are some of the other development tools mentioned during the roundtables:

·        PC-lint. An invaluable tool for checking for potential problems. Its main drawback is the sheer number of warnings you will get the first time you run it. It will take a while to configure it so you only get what you really want. Some people recommended using it from the very beginning of a project; otherwise there is too much to fix.

·        PREfast and PREfix. Some people claim they are a good alternative to PC-lint. (Unfortunately I haven’t been able to track these down other than mentions on the MS Research site.)

·        NuMega DevPartner Studio. A suite of three products:

o       BoundsChecker. Runtime memory analysis of your C++ programs.

o       TrueTime. Runtime speed profiling.

o       TrueCoverage. Runtime code-coverage analysis.