
In this chapter of Object-Oriented Game Development, Julian Gold covers iterative development techniques, asking in what order programming tasks should be performed and when a game should be considered finished.

Julian Gold

July 20, 2005


Introduction

In this chapter, we discuss two important questions in development and provide a single answer for both. They turn out to be fundamental not only to the logical structure of the code development process but also to the production methodology.

Here are the questions:

  • What order should tasks be performed in?

  • When is the game finished?

Prioritising tasks

The worst thing in the world as far as development is concerned is to be writing system-critical code towards the end of a project. Yet this is such a common occurrence you would think someone would have spotted it and put a stop to it. Not only will (and does) it induce huge amounts of stress in the team, it is absolutely guaranteed to introduce all sorts of problems in other systems that were considered stable.

Ideally, we would like to pick an order to perform tasks in that does not lead to this horror. Ideally, we would like to be able to know in advance which are the tasks and systems that we need to work on first and which are those that can wait a while. If we cannot attain this ‘ideal' state – and I would be foolhardy to suggest we can – we can certainly do better than writing critical-path code during Alpha or Beta phases in a project!

How long is a piece of virtual string?

Although a game is a finite piece of software, it is rather tricky to describe criteria for “completion.” It is almost universally true that the functionality and features of games that we see on store shelves are only some percentage of the development team's ambitions. More will have been designed than implemented, and not all that was implemented will have been used. Given then that games rarely reach the “all done” mark, how are we to decide if a game is releasable? What metrics are available to inform us how much is actually ‘done and dusted'?

Consider also a problem of scheduling sub-tasks: say a programmer (call her Jo) has said it'll take 10 days to write the ‘exploding trap' object, and that she's 4 days into this time. Is her task 40% complete? It's very hard to tell, especially since we cannot see the trap exploding till maybe day 9 or 10. But let's be optimistic, and suggest that Jo works hard and gets the job ‘done' in 8. A profit of +2 days is marked up, the task is flagged as complete, and everything looks hunky-dory for the project.

Later on, it turns out that the trap needs to be modified since (say) it needs to be able to trap larger objects. It's another 4 days of work for our Jo, and now we have a deficit of –2 days, and suddenly the project starts to look like it's slipping.

The point is this: most objects in a game rarely get written just once. We'll revisit them over the course of a project to fix bugs, add and remove features, optimise and maybe even rewrite them entirely. This isn't a pathological behaviour: almost all significant software systems grow and evolve over the course of time. How naïve then does the ‘4 days in, 40% complete' metric look? Pretty damn naïve, to put it politely. What we really need is a system that allows time and space for evolution without driving projects into schedule loss and the resulting state of semi-panic that characterises most development processes.

Incremental delivery

Milestones round my neck

Almost all software development (outside of research, which by its nature is open-ended) is driven by some kind of milestone system. Let me state unequivocally now that this is a good thing: the days of anarchic commercial software development should be buried and remain so. Nevertheless, its being a ‘good thing' does not mean that it comes without its own particular set of pros and cons. In particular, if we accept (for all the ‘pro' reasons) that milestone-driven development is the way to go, then we must also pay attention to the ‘con' side that will inevitably frustrate our attempts to make the process work with the efficiency we require for delivery on time, within budget.

One of the most difficult cons games developers have to deal with is that production teams and management interpret milestones and the associated schedules differently. As most of those who have worked on non-trivial software products – or indeed any large project requiring multiple bespoke, interacting components spanning a variety of disciplines – come to realise, a schedule represents a team's best guess at how the product will evolve over time.

On the other hand, management – perhaps unused to the way that schedules are produced, perhaps because they require correlation of studio funding with progress – often read the document completely differently. They see the document almost as a contract between themselves and developers, promising certain things at certain times.

This disparity between seeing a schedule as a framework for project evolution to facilitate tracking, and as a binding agreement to deliver particular features at particular times, causes much angst for both developers and managers. The former often have to work ridiculous hours under pressure to get “promised” features out. The latter have responsibility for financial balances that depend on the features being in place.

Internal and external milestones

We can see that there are some basic premises about milestones that need to be addressed:

  • Teams who do not work to milestones that mark important features becoming available in the game will not be able to deliver on time.

  • Teams who are held to unrealistic milestones will not be able to deliver on time, irrespective of how financially important or lucrative that may be.

  • Managers need to know how long the team thinks development will be and what the important markers are along the way. Without this there can be no business plan and therefore no project.

Clearly, the sort of milestones that managers need to be aware of are ‘cruder' – at a lower granularity – than the milestones that developers need to pace the evolution of the product. We can therefore distinguish between ‘external' milestones, which are broad-brush descriptions of high-level features with a granularity of weeks (maybe even months), and ‘internal' milestones, which are medium- and fine-level features scheduled in weeks and days.

Managers therefore never need to know the internal mechanisms that generate the software. To adopt a programming metaphor, the team can be viewed as a ‘black box' type of object with the producer as its ‘interface'. There are two types of question (‘public methods', to extend the analogy) that a manager can ask of a producer:

  1. “Give me the latest version of the game”

  2. “Give me the latest (high-level) schedule”

This is an unrealistically simple example of the interaction between production and management. The latter will want to know about team dynamics, why things are running late (as they inevitably seem to), and a whole host of other project-related information. However, it draws a fuzzy – but distinguishable – line in the sand between the scheduling of features and accountability for their development.
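To push the metaphor a little further – this is purely illustrative, and the class and method names here are invented – the production ‘interface' might be declared like this:

// Illustrative only: the production team viewed as a 'black box'.
// GameBuild, Schedule and ProductionTeam are names invented for the analogy.
class GameBuild;
class Schedule;

class ProductionTeam
{
public:
  // The two 'public methods' available to management.
  virtual const GameBuild * GetLatestBuild() const = 0;
  virtual const Schedule * GetHighLevelSchedule() const = 0;

protected:
  // Internal milestones, task orderings and team dynamics live here,
  // invisible through the public interface.
};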

The breaking-wheel of progress

There is one other important sense in which management and developers perceive milestones differently. It is based on the concept of ‘visibility' and is without doubt the biggest millstone (half-pun intended) around developers' necks this side of Alpha Centauri.

Almost ubiquitously in the industry, management refuses to regard features that they cannot see (or perhaps hear) within a short time of picking up the game as importantly as those obviously visible (or audible) ones. For those of us that work on AI, physics, memory managers, scripting systems, maths, optimisation, bug-fixing and all those other vital areas of a game's innards that are not open to visual inspection, this is particularly galling. To spend weeks and months working on hidden functionality only to have the team's work dismissed as ‘inadequate' because there was no new eye-candy is an all too common occurrence.

The education of managers in the realities of development is a slow, ongoing and painful process. Meanwhile, we developers have to work with what we are given, so it remains important to – somehow! – build ongoing visible / audible progress into the development of the project.

There is an intimate relationship between the concept of visibility and of completeness. Many tasks may not become tangibly present until they are ‘complete'. Saying that something is ‘40% complete', even if that were a rigorously obtained metric, might still amount to ‘0% visible'. So we'll only be able to fully address the issue of progress when we deal later with determining ‘completeness' for a task.

Always stay a step ahead

Despite our best – though sometimes a little less – efforts, we will slip. We shall deliver a feature late or perhaps not even at all, and if the management is in a particularly fussy mood then there may be much pounding of fists and red faces. Worse than showing no visible progress would be to show retrograde progress – fewer features apparent than a previous milestone. Nevertheless it is a common and required ability for projects to arbitrarily disable and re-enable particular functionality within the code base. With the advent of version control systems, we are now able to store a complete history of source code and data, so in theory it is always possible to “roll back” to a previous version of the game that had the feature enabled.

Just because it's possible, does that make it desirable? In this case, yes. Indeed, I would argue that working versions of the game should be built frequently – if not daily, then at least weekly – and archived in some sensible fashion. When the management asks production for the latest version of the game (one of their two allowed questions from the previous section), the producer returns not the current (working) build but the one previous to that.

Why not the current working build? Because it is important to show progress, and development must ensure that to the best of their ability the game has visibly improved from one iteration to the next. If it becomes necessary – and it usually does – to spend time maintaining, upgrading, optimising or rewriting parts of the code base, then releasing the next-but-one working version gives another release with visible improvements before we hit the ‘calm' spot with no apparent progress.

From one point of view, this is a ‘sneaky manoeuvre'. It's no more sneaky than (say) insuring your house against flood[1]. Publishers and managers always want to see the ‘latest' version, and a development team itching to impress may well be tempted to show them it. Resist this urge! Remember, development should be opaque to management inspection other than through the supplied ‘interface'. Anything else is just development suicide.

Iterated delivery

So we've decided that rather than work specifically to release code at external milestones, we'll supply ‘work in progress' builds at these times. Internally we'll be working to our own schedule. How should we organise this schedule?

I'll start by assuming that there is a reasonably comprehensive design document for the game (believe me, you'd be surprised how often there isn't). This document should describe, in brief, what the game is about – characters (if any), storyline (if any), situations and rules. Step one in producing an internal schedule is to produce the Object Oriented design diagram for the game. We are not interested here in the diagram specifying interrelationships between the objects; the end goal is simply to produce a big list of all the classes that map directly to concepts in the game. Auxiliary classes such as containers and mathematical objects need not apply – we are only looking for classes that map to game-level concepts.

Once we have produced this list, it needs to be given back to the design team, as step two is really their call. They need to classify all the objects in the list (I'll use the terms ‘objects' and ‘features' interchangeably in this section) into the following three groups:

  • Core: These are features that form the basis for the game. Without them there is only a basic executable shell consisting of (some of): startup code, rendering, memory management, sound, controller support, scripting support, resource management, etc. Should any of these ‘non-game' systems require engineering then they should be added to the core group, which will otherwise contain the most fundamental objects. For definiteness, consider a soccer game. The most fundamental objects are:

    • player (and subclasses)

    • stats (determining player abilities)

    • ball

    • pitch (and zones on the pitch)

    • goal

An executable that consists of working versions of these objects (coupled to the non-game classes) is generally not of playable, let alone releasable quality.

  • Required: This group of features expands the core functionality into what makes this game playable and unique. Often these features are more abstract than core features. They will embody concepts such as NPC behaviour, scoring systems and rules. Also they will pay some homage to the particular genre the game will fit into, because rival products will dictate that we implement features in order to compete effectively. To continue the soccer example, we might place the following features in this group:

    • AI for Player subclasses.

    • Referee (either a visible or invisible one that enforces rules)

    • Crowd (with context-dependent sounds and graphics)

    • Knockout, league and cup competitions.

A game consisting of core and required features will be playable and releasable. Nevertheless, it should be considered the minimal amount of content that will be releasable, and still requires work if the game is to be near the top of the genre.

  • Desired: These are features that provide the ‘polish' for the game. This will include such things as visual and audio effects, hidden features and levels, cheats. Features in this group will not alter gameplay in significant ways, though they will enhance the breadth and depth of the playing experience and (as with required features) the competition may dictate their inclusion.

    Depending on the type of game, they may be game-related objects. For example in the soccer game, having assistant referees would be a desired feature, as the game will function just fine without them.

The end result is a list of features that is effectively sorted in terms of importance to the product. It is tempting to say that the optimal order of tasks is then to start at the top – the most important ‘core' tasks – and work our way down. We carry on completing tasks until we run out of time.

Well it's close, but there's no cigar for that method. There are fundamental problems in organising work this way. There is little evidence of anything that resembles ‘continual progress'. In the pathological case, the game is in bits for the entire development cycle until just before the end when the bits are pulled together and – hopefully! – fit. This is guaranteed to have producers and management biting their knuckles with stress. Furthermore, the most likely outcome is that components do not work together or have unforeseen side-effects that may involve radical re-design very late on in the project.

Clearly it is correct to do the most important tasks first and the superficial tasks last (time allowing). But if we wish to show continual improvement of the product, we shall need to be a little smarter. So we shall progress to the third phase of the Iterated Delivery method (the actual ‘iterated' part). We'll start again with the list of features which, because an Object Oriented design process generated them, map directly to classes.

Consider just one of these classes. How does it start off its life? Usually something like this:

// File Player.hpp
class Player
{
public:
  Player();
  ~Player();

private:
};

// File Player.cpp
#include "Player.hpp"

Player::Player()
{
}

Player::~Player()
{
}

Over the course of the product development, much will be added and much will also be removed, but generally the object evolves. This evolution can occur in one of two ways. It can start with zero functionality and end up fully implemented – possible, but not very common. More realistically, the object is fully or partially rewritten over the duration to give more complex, more robust or more efficient behaviour.

So far, so obvious. But consider the formalisation of the principle that objects evolve: instead of evolving the feature from zero functionality at the start to full functionality at the end, consider writing versions of the full object functionality. We define the following four versions of the feature:

  1. The null version: This is the initial version of the object interface with no implementation (empty functions). Note that this gives a complete project that can be compiled, linked and run, albeit without doing anything useful.

  2. The base version: This has a working interface and shows ‘placeholder' functionality. Some of the required properties may be empty, or have minimal implementation. For example, a shadow may be represented by a single grey sprite; a human character may be represented by a stick-man or a set of flat-shaded boxes. The intent is that the object shows the most basic behaviour required by the design without proceeding to full implementation, and therefore integration problems at the object level will show up sooner rather than later.

  3. The nominal version: This iteration of the feature represents a commercially viable object that has fully implemented and tested behaviour, and is visually acceptable. For example: the shadow may now be implemented as a series of textured alpha-blended polygons.

  4. The optimal version: This is the ultimate singing-and-dancing version, visually state-of-the-art and then some. To continue the shadow example, we may be computing shadow volumes or using projective texture methods.

We'll refer to the particular phase an object is in at any point in the project as the level of the class. A level 1 object has a null implementation; a level 4 object is optimal.
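If the level is to appear in code as well as on the schedule, an enumeration along these lines will do. This is only a sketch; the level_XXX names are chosen to match those used in the factory example later in this chapter:

// A minimal sketch: one constant per implementation level.
enum Level
{
  level_NULL = 1,   // interface only: empty implementation
  level_BASE,       // placeholder functionality
  level_NOMINAL,    // commercially viable, fully tested
  level_OPTIMAL     // all singing, all dancing
};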

Some points to note: first of all, some objects will not naturally fit into this scheme. Some may be so simple that they go straight from null to optimal. Conversely, some may be so complex that they require more than four iterations. Neither of these scenarios presents a problem for us, since we aren't really counting iterations per se. We're effectively tracking implementation quality. In the case of an apparently simple object, we can only test it effectively in the context of any associated object at whatever level it's at. In other words, systems and subsystems have a level, which we can define slightly informally as:

L(subsystem) = min_j L(object_j)

L(system) = min_i L(subsystem_i)

with L( ) denoting the level of an object, subsystem or system. Applying this idea to the application as a whole,

L(application) = min_k L(system_k)

or in simple terms, the application's level is the smallest of its constituent object levels.
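In code, a composite level is just a running minimum. Here is a minimal sketch, assuming a hypothetical Component type that can report its own level (and using the Level enumeration sketched earlier):

#include <algorithm>
#include <vector>

// Sketch: a system's level is the minimum of its components' levels.
// Component and GetLevel() are assumptions, not code from this chapter.
struct Component
{
  Level GetLevel() const { return m_eLevel; }
  Level m_eLevel;
};

Level SystemLevel( const std::vector<Component> & components )
{
  Level eLevel = level_OPTIMAL;   // an empty system is vacuously optimal
  for ( size_t i = 0; i < components.size(); ++i )
  {
    eLevel = std::min( eLevel, components[ i ].GetLevel() );
  }
  return( eLevel );
}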

Now we need to put the ideas of level and of priority together to get some useful definitions, which form the basis of Iterated Delivery.


Definitions

An application is defined as of release quality if and only if its required features are at the nominal level.

An application is referred to as complete if and only if its desired features are at the optimal level.

 

From these definitions, we see that there is a sliding scale that starts from a barely releasable product all the way up to implementing and polishing every feature the design specifies. The product just gets better and better, and – provided that the tasks have been undertaken in a sensible order – can be released at any time after it becomes of release quality.
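These definitions translate directly into a progress check. The following is a sketch with invented types (Feature, Priority and the two tests): tag each feature with its priority and current level, and the product is of release quality when nothing at core or required priority sits below the nominal level.

#include <vector>

// Sketch only: illustrates the definitions above with invented types.
enum Priority { priority_CORE, priority_REQUIRED, priority_DESIRED };

struct Feature
{
  Priority m_ePriority;
  Level m_eLevel;   // the Level enumeration sketched earlier
};

bool IsReleaseQuality( const std::vector<Feature> & features )
{
  for ( size_t i = 0; i < features.size(); ++i )
  {
    const Feature & f = features[ i ];
    if ( f.m_ePriority != priority_DESIRED && f.m_eLevel < level_NOMINAL )
    {
      return( false );   // a core or required feature is below nominal
    }
  }
  return( true );
}

bool IsComplete( const std::vector<Feature> & features )
{
  // 'Complete': the desired features (and, by the sweep ordering,
  // everything scheduled before them) have reached the optimal level.
  for ( size_t i = 0; i < features.size(); ++i )
  {
    if ( features[ i ].m_eLevel < level_OPTIMAL )
    {
      return( false );
    }
  }
  return( true );
}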

The second point to note is that Object Oriented development is perfectly suited to a level-based scheme (and conversely procedural development does not adapt as easily). For example, consider our shadow code. An object that has a shadow may declare:

class Shadow;

class AnObject
{

public:
  // Interface…

private:
  Shadow * m_pShadow;
};

Each level of the shadow object can be implemented in a separate subclass:

// File Shadow.hpp
class Shadow
{
public:
  // Virtual destructor, since shadows are deleted through Shadow pointers.
  virtual ~Shadow() {}

  // Interface only: null implementation
  virtual void Compute( SCENE::Frame * pScene ) = 0;
  virtual void Render( REND::Target * pTarget ) = 0;
};

// File ShadowBasic.hpp
class ShadowBasic : public Shadow
{
public:
  // Base implementation.
  virtual void Compute( SCENE::Frame * pScene );
  virtual void Render( REND::Target * pTarget );

private:
  Sprite * m_pSprite;
};

// File ShadowPolygonal.hpp
class ShadowPolygonal : public Shadow
{

public:
  // Nominal implementation.
  virtual void Compute( SCENE::Frame * pScene );
  virtual void Render( REND::Target * pTarget );

private:
  Polygon * m_pPolygons;
};

// File ShadowProjected.hpp
class ShadowProjected : public Shadow
{
public:
  // Optimal version.
  virtual void Compute( SCENE::Frame * pScene );
  virtual void Render( REND::Target * pTarget );

private:
  Texture * m_pProjectedTexture;
};

Within our ‘AnObject' class, polymorphism allows us to control which available implementation of shadows we use:

m_pShadow = new ShadowProjected();

We can even use a so-called ‘factory' pattern to create our shadow objects:

// AnObject.cpp
#define SHADOW_LEVEL level_NOMINAL
// …
m_pShadow = Shadow::CreateShadow( SHADOW_LEVEL );

// Shadow.cpp
/*static*/
Shadow * Shadow::CreateShadow( int iLevel )
{
  Shadow * pShadow = 0;
  switch( iLevel )
  {
    case level_BASE:
      pShadow = new ShadowBasic();
      break;

    case level_NOMINAL:
      pShadow = new ShadowPolygonal();
      break;

    case level_OPTIMAL:
      pShadow = new ShadowProjected();
      break;
  }

  return( pShadow );
}
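Routing creation through the factory makes an object's level a single point of control: moving the whole game from nominal to optimal shadows (or dropping back to nominal when the optimal version slips) is a one-line change, and no call site needs to know which subclass it is actually holding.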

Waste not, want not

Does this mean that we have to write similar code three times? Yes it does, but all the process has done is highlight the fact that (by and large) this is what we do anyway. It just so happens that when we do things piecemeal in an ad hoc order we're less aware of the partial and complete rewrites during development. By developing in the above fashion, we are to some extent duplicating work, but that duplication does not go to waste. Firstly, we shall acquire a degree of experience when writing our basic implementations that will be useful when we write the more complex ones. If we're clever (and we are) then we shall write a number of support functions, systems and objects that will make implementation of the nominal and optimal versions considerably simpler.

Furthermore, by writing an object in a suitably object-oriented fashion, we may end up with a reusable component. Once we've written a basic shadow then it can be used as a base implementation in any game that requires them. That means we can get instant functionality at the logical and visual level. It is the author's experience that it is harder to transpose nominal implementations and more difficult still to move optimal code in this fashion: the idea is to get placeholder functionality in as quickly as possible.

Ordering using priorities and levels

Returning to the two classification schemes, we can see that there is at least one sensible order to perform tasks in that fulfils our goals: we would like to perform the important – core – tasks first, primarily because we shall be writing some of the major systems and subsystems that later layers will depend upon. Then we would perform the required tasks, then the desired ones[2]. Whilst this is a laudable attempt at doing the things that matter first, we can do much better by integrating each object's level.

So our next attempt at task ordering will be this:

  • Starting with the core tasks, then proceeding to the required tasks, then the desired tasks, create the null implementation of each object. Once this has been done (most projects can get to this state in a week or two), the project should build and run without doing very much.

  • Now go back to the core tasks and write the base implementations, carrying on through the required and desired tasks so that the whole game shows placeholder behaviour. Then sweep again at the nominal level: once the core and required tasks are nominal, the code is – by our definition – releasable. We can then carry on getting the desired features to their nominal status.

  • Finally we repeat the sweep from core to desired at the optimal level, until we either run out of tasks or are stopped in our tracks by external factors.

This – breadth-first – approach is much better than a single sweep from Core to Required to Desired with no reference to level. It is a universe better than the “let's do the cool bits first” approach! It shows near continual growth throughout the development cycle, and makes sure we focus our attention early on in the places where it is most required. We have a handle on how “complete” our product is and it is now considerably simpler to create meaningful internal and external milestones. However, a couple of problemettes arise in its day-to-day implementation:

1. In a nutshell, it is not clear that it is more advantageous to undertake base level desired features than nominal level core features, or to write nominal level desired features in preference to optimal level core features. There are a number of factors that will determine whether it is or not.

2. Although progress is continuous, it isn't smooth.

Consider a project with 12 features to implement (labelled F1 to F12). Assuming for the moment a single programmer, we may order tasks as shown in Table 9.1 (the numbers representing the ordinal number of the task).

Table 9.1: naïve programmer scheduling (the numbers are the ordinal positions of the tasks)

Features      Null   Base   Nominal   Optimal
Core
  F1             1     13        25        37
  F2             2     14        26        38
  F3             3     15        27        39
  F4             4     16        28        40
Required
  F5             5     17        29        41
  F6             6     18        30        42
  F7             7     19        31        43
  F8             8     20        32        44
Desired
  F9             9     21        33        45
  F10           10     22        34        46
  F11           11     23        35        47
  F12           12     24        36        48
Tasks are undertaken in priority order. When we've finished the pass at the current level, we start at the top again at the next level. This is fine – indeed, it's better than most hit-and-miss attempts – but it does suffer from the disadvantage that after the base level, no new functionality appears: the existing stuff just improves. Whilst this is fine from a purely theoretical standpoint, it does make it difficult to impress. Since we always want to be able to keep one step ahead of the demands put on development, it remains prudent to tweak the order a little. Consider the ordering in Table 9.2.

Table 9.2: a better ordering of tasks. [Table lost in extraction: the same Features × Level grid as Table 9.1, with the base-level passes for the low-priority features F10–F12 deferred to around the second half of the project.]
Here, we've deferred implementing the lower-priority tasks F10, F11 and F12 at the base level till around the second half of the project. This places a greater emphasis on getting the most important games systems to a releasable state. It also means we can spoil ourselves a little and get to work on one or two of the flash bits early on, and from week to week, we can see our game both grow and improve.
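The sweep itself is mechanical enough to sketch. The following is illustrative only – Task and NaiveOrdering are invented names – and generates the level-major ordering of Table 9.1; producing an ordering like Table 9.2 amounts to moving the low-priority base tasks later in the resulting list.

#include <vector>

// Illustrative sketch: enumerate the naive level-major sweep of
// Table 9.1 for features F1..F12, which are assumed to be numbered
// in priority order (core first, desired last).
struct Task
{
  int m_iFeature;    // 1..12
  Level m_eLevel;    // the Level enumeration sketched earlier
};

std::vector<Task> NaiveOrdering( int iNumFeatures )
{
  std::vector<Task> tasks;
  for ( int iLevel = level_NULL; iLevel <= level_OPTIMAL; ++iLevel )
  {
    for ( int iFeature = 1; iFeature <= iNumFeatures; ++iFeature )
    {
      Task task = { iFeature, static_cast<Level>( iLevel ) };
      tasks.push_back( task );
    }
  }
  return( tasks );   // the nth task of Table 9.1 is tasks[ n - 1 ]
}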

So much for one-programmer teams; they are the exception, not the rule. The concept readily extends to multi-programmer teams, and Table 9.3 shows an ordering of tasks for two programmers, derived from the previous ordering. Notice, however, that there is a slight problem with the allocations: looking at task F1, programmer A implements the null and base level functionality of the feature, but it just so happens that programmer B ends up doing the nominal level and A is finally scheduled to implement the optimal level.

Now it really depends on your point of view whether this is a problem or not. One school of thought suggests that it is bad policy to put ‘all your eggs in one basket.' If there is only one programmer who can do a set of tasks, what happens when (not if) they get ill, or (heaven forbid) they leave? Without someone else who knows the system, there is a significant loss of momentum whilst programmer C is brought in to fill A's shoes and learn their ways.

Table 9.3: naïve ordering for two programmers A and B. [Table lost in extraction: the grid of Table 9.2 with each task assigned to programmer A or B; for feature F1, A does the null and base levels, B the nominal level and A the optimal level.]
The other school of thought is that a Jack of all trades is a master of none: programmers who work on many systems spend a lot of time getting into a paradigm, only to spend a short while doing it and then starting another one. It's scarily easy to forget even the stuff that seems obvious when you don't actively use it for a while, and if the code has been sloppily implemented or the paradigm is complex and/or undocumented, there is again a loss of momentum whenever programmers change task. Although the changes are probably smaller, it can happen several times over the course of the project, and the damage is cumulative.

On the other hand, it is reasonable – indeed vital! – to recognise and effectively utilise the basic skill sets of your team. If you have a renderer specialist on board, it seems a bit of a waste having him or her write AI if there are graphics tasks to be done.

There is no simple answer to this dilemma. The author suggests that communication is vital: all programmers should know what other programmers are doing via code reviews; code should be clear and informatively documented (either via meaningful commenting or actual paper or electronic documentation); and systems should be engineered to be as self-contained and maintainable as is humanly possible.

Assuming that we wish to keep the same programmer with (basically) the same task types, Table 9.4 shows the improved two-programmer itinerary:

Table 9.4: schedule for two programmers accounting for skills. [Table lost in extraction: the same grid rearranged so that each feature stays with the same programmer across all levels.]
Scheduling with an Iterated Delivery system

The Iterated Delivery system shifts the focus of production from “when will it be finished?” to “when will it be good enough?” Since there is no meaningful definition of ‘finished', but we have provided a definition of ‘good enough', we have established a metric by which to measure the progress of our project. Consequently, Iterated Delivery solves a number of difficulties that arise in the scheduling of work.

The major mistake developers make when estimating schedule times is to perform a top-down analysis of the problem, breaking the large, complex tasks into smaller, more manageable ones, and then estimating the times for those. The task time is then the sum of the sub-task times, with some amount of contingency added. Usually no account is taken of learning curves, code revision or the assembly of components. Is it any wonder that tasks almost ubiquitously over-run?

I am not suggesting that top-down analyses are wrong – it is nigh impossible to schedule without them – but they miss out important information that is an integral part of the software development process. Iterated Delivery puts that information back in. The developers still do a top-down analysis, they still estimate sub-task times and add them up to get task times, and risk-analysis / buffering[3] contingency times still need to be accounted for. The important difference is this:

The time scheduled for a task is the sum of the times for the null, base, nominal and optimal levels.

I'm hoping that you didn't recoil too much from that assertion. What it sounds like on a naïve level is that you are taking a task time, multiplying it by four – once for each level – then delivering within the allotted time which has been grossly exaggerated, thus earning some kudos for finishing early. If you think that I'm suggesting that, let me reassure you that I am not. The statement is to be read as “account in the schedule for the fact that you rewrite significant portions of code when you understand more about the problem. Schedule the less important tasks so that rewriting them, if at all, occurs after the important ones.”
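To make that concrete with the earlier example (the per-level figures here are invented for illustration): Jo's exploding trap might be scheduled as 1 day for the null version, 3 for the base, 5 for the nominal and 3 for the optimal – 12 days rather than a bare 10-day guess. The 4 days of ‘unexpected' modification now fall inside the estimate, because the estimate admitted from the outset that the trap would be written more than once.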

Thus, Iterated Delivery becomes its own risk management and contingency system. The harder, more time-consuming (and therefore riskier) tasks are deferred to times when they cannot hurt the project if they do not come in on time (assuming that they are at least at nominal level towards the alpha date). We simply go with the nominal version if the optimal version is slipping and we must ship tomorrow. Or else we are granted more time to complete the task because it would make a visible improvement (though one ought to ask why it had such a low priority if this were so).

End Notes

[1] As I write this section, areas of Britain have been submerged as rain falls near-continually and rivers burst their banks. Property owners will bless their house insurance.

[2] Typically, it would be nice if game development followed this scheme because at least it makes some commercial sense. The usual approach is to attempt all the optimal-level desired tasks first in order to make an impression and then somehow backstitch a game into the demo!

[3] Buffering adds on a fixed proportion of schedule time to allow for illness, holiday, meetings and other unforeseeable but likely eventualities.

--

This article is excerpted from Pearson Education's Object-Oriented Game Development, (ISBN 032117660X) by Julian Gold.


About the Author

Julian Gold


Julian Gold is a developer with Microsoft Research Cambridge, working on the use of Bayesian methods and Machine Learning in video games. An industry “veteran” – ten years – he has worked in the past for Sony, Sega, Argonaut and SixByNine Ltd, and has a prior academic career in scientific computing. Object Oriented Game Development is published by Addison-Wesley, ISBN 0-321-17660-X, and is available from all good bookstores. And probably some bad ones, too.
