Computer programming is typically viewed as an act of solving a problem and capturing the solution in an algorithm. This engineer-centric attitude dominates in most professional settings, including the video game industry. It manifests itself as a focus on function over architecture in software design: as long as the code does what is required, it doesn’t matter too much how it is written. Many programmers do have an instinctive or pragmatic urge to write “clean” code for their own sanity and that of their co-workers, but the intensive prototyping and time pressures involved in game development often put a stop to such urges. “We don’t have time to refactor this,” the argument goes, “and we’re not even sure if we’ll ever need it again, anyway.”
But there is more to it than meets the eye. If we take a step back, we might notice that the software a game development team builds is not just a means to an end. It doesn’t just determine the product being built; it also affects the team that is building it. On the surface, this is obvious: of course the engine determines what is possible and therefore what game design decisions are made. If a new feature is needed, programming time is committed to make it happen. Yes, but more often than not, that new feature will be defined in terms of what’s already in the engine. For example, if the engine already has an affordance system (picture The Sims) and the feature request is for the enemies to be able to dig trenches, then a natural solution might be to simply add a new affordance object that invites an NPC to dig at its location. Notice how this solution comes with strings attached: the level designer needs to place these objects on the map, so the NPC can only dig where the designer chooses. This may or may not be desirable.
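To make the trade-off concrete, here is a minimal sketch of such an affordance-driven feature. The names `Affordance` and `choose_dig_spot` are my own illustration, not any particular engine’s API:

```python
import math
from dataclasses import dataclass

@dataclass
class Affordance:
    # An object the level designer places on the map; it advertises
    # an action ("dig", "sit", ...) at a fixed location.
    action: str
    position: tuple  # (x, y)

def choose_dig_spot(npc_pos, affordances):
    # The NPC can only dig where a "dig" affordance was placed:
    # the engine's existing concept quietly constrains the feature.
    spots = [a for a in affordances if a.action == "dig"]
    if not spots:
        return None
    return min(spots, key=lambda a: math.dist(npc_pos, a.position))
```

The constraint is visible right in the code: if the designer placed no “dig” affordance nearby, the NPC simply cannot dig.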
What I’m getting at is that the example illustrates how the software determines the mental paths of the team. In fact, it forms an integral part of the team’s mindset, a common framework of concepts with which ideas are communicated and solutions invented. Like language, it can both facilitate and hinder team productivity. It is therefore crucial that the framework offers an effective representation of both the problem and the solution.
What is an effective representation? Obviously, it needs to capture all the important aspects of the concept it pertains to, so it can actually do its job. However, in order to facilitate communication it should also be intuitive, if not outright predictable. For the sake of illustration, imagine you are writing a FIFO (First-In, First-Out) task queue. An intuitive implementation would put the push and pop methods on the queue class, to emphasise a mental image where you add and remove tasks to and from the queue. A less intuitive yet still perfectly complete one would attach these methods to the tasks themselves. Notice how the latter compels everyone to think of tasks as having fixed predecessors and successors. It makes them more likely to say “let’s make the AI pass a precursor of the physics pass” than “let’s make sure that AI runs before physics.” I can imagine scenarios where such an implementation is actually quite intuitive, but I doubt it should still be called a FIFO queue.
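The two representations can be sketched side by side; this is an illustrative Python sketch, not production code:

```python
from collections import deque

class TaskQueue:
    # The intuitive representation: push and pop live on the queue
    # itself, matching the mental image of adding and removing tasks.
    def __init__(self):
        self._tasks = deque()

    def push(self, task):
        self._tasks.append(task)

    def pop(self):
        return self._tasks.popleft()  # first in, first out

class LinkedTask:
    # The complete but less intuitive representation: each task knows
    # its successor, nudging everyone to think in terms of fixed
    # predecessor/successor chains rather than queue order.
    def __init__(self, name):
        self.name = name
        self.next = None

    def then(self, successor):
        self.next = successor
        return successor

def run_chain(head):
    # Walk the chain in order; equivalent output, different mindset.
    order = []
    while head is not None:
        order.append(head.name)
        head = head.next
    return order
```

Both produce the same execution order; the difference lies purely in which mental model they promote.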
Promoting a wrong intuition about a concept or a system is a very real pitfall. One example is the idea that a shader program can be effectively designed as a block diagram. I would hazard a guess that no complicated program can be effectively designed in this fashion, but at least in the case of shaders I have witnessed first-hand how this approach led to serious inefficiencies at one studio where I worked. With the visual shader editor, we paid the opportunity cost of building the editor and developing shaders in it, until we realised they had to be rewritten procedurally in a shader language. But the consequences of ignoring the impact software has on how someone approaches a problem affect us every day. They manifest themselves in the high cost of maintaining, reusing and debugging “spaghetti” code; in the cost of bugs arising from poorly communicated assumptions; or even in the “not invented here” syndrome leading to repeatedly reinventing the wheel.
None of this will come as a surprise to an experienced programmer, but I will argue that too few of us take it far enough to look at code literally as a communication channel. When we code, we are not only telling a machine what to do; we are also talking to other programmers. Sooner or later someone will come along (and with enough time it may well be ourselves) who will have to add to, fix, or borrow something from your code. If they are adding or fixing something, are they likely to miss something important? If they are using your code as an example for implementing a new feature, will they take away the right idea? Or is it a hack meme that is going to spread through the codebase like cancer? People have a tendency to treat written text as official, even if it has “HACK” written all over it. It may even escape the programming domain and spread to the team’s mindspace, like that debug function you added to force-reset the scene after a cinematic, because you couldn’t make it happen automatically, and now everyone is using it, whether it’s needed or not.
Effective communication boils down to constructing a coherent mental model in somebody else’s head. This is why in code reviews I tend to focus more on readability than correctness. In the limited amount of time usually given to these affairs, there is no way I can understand the problem better than the person whose code I’m reviewing (and if I did, that probably wouldn’t speak well of their abilities), but if I can make sure that the algorithm is clear and doesn’t trip anyone up with obscure assumptions, I’m effectively securing more thorough reviews in the future. If a problem crops up in testing, any number of people should be able to read, understand and debug the code. In my ideal world, a moderately experienced programmer should be able to read any code like a book. Just as you can open a book on any page and understand what it says, so you should with computer code: all the information needed to understand it should be in there, either in the code or in the comments.
So, is writing pretty code the be-all and end-all of good software engineering? Certainly not. There is more we can do to make software development easier, faster and cheaper, especially in video game development which, compared to other areas of software engineering, is still a bit of a cottage industry. What I mean by that is that in a typical game project, a considerable amount of work is spent recreating functionality that has already been built countless times, not only in other studios but often even in the same studio. A major cause of this is a lack of standardisation which, thankfully, is finally being addressed by the growing use of off-the-shelf engines. It is easy to see how standards help the creation of reusable software and the economies of scale that make such software much cheaper to buy than to make. What is perhaps less immediately obvious is that standards also establish an ontology like the one we talked about earlier. They give programmers an intuition of how things are put together, where to look for things and what to expect of any piece of code found in the codebase. It is hard to overstate how dramatically this reduces the time required to make changes.
Teams that use well-written engines already benefit from this, but they can extend the same benefits by standardising their internally-developed code. An area typically neglected in this regard is gameplay, because it is considered too custom to a particular game to be worth making reusable. As it turns out, this is far from the case. Games are boringly predictable in the kind of realities they model. They always have a number of objects located in a 2- or 3-dimensional Euclidean space. The objects are always assemblies of sub-objects that stick together and rarely move apart (NPC dismemberment being a notable exception). This fact was indeed noticed and reflected in the typical class hierarchy employed by games, with a class containing an object’s transform sitting somewhere near the top. Unfortunately, a semantic network, which is what a class hierarchy technically is, is not ideal for modelling game objects. In fact, it forces programmers to make arbitrary choices in how they implement their objects, and “arbitrary” means unintuitive and unpredictable.
This problem and a solution to it were described in my GDC Canada talk in 2009. Simply put, the solution postulates that instead of inheriting properties and behaviours, we should be aggregating them. It is often called component-based architecture, although these days I like to call it the “generic objects methodology.” I took the approach from Ian Millington, with whom I and a few others at Mindlathe Ltd created one of the first AI SDKs in the industry. Interestingly, back in 2000 it seemed more elegant than practical. Only later, after I re-implemented it in Radical Entertainment’s Titanium engine, did I realise what its real benefit was. Once it was introduced, the codebase became orders of magnitude clearer. All gameplay code was organised into interchangeable units with extremely narrow interfaces. One didn’t have to fumble about in spaghetti code anymore, trying to guess where a function is implemented and how it’s called. Not only that; the methodology gave everyone an intuition for how to implement any new functionality, which, in a virtuous circle, made it easier for others to refer to it, and so on. This translates directly into shorter development times and fewer bugs.
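A minimal sketch of the idea, with hypothetical component names of my own choosing (the actual Titanium implementation was of course far richer):

```python
class Component:
    # The base class only defines the narrow interface; concrete
    # components never inherit from one another.
    def update(self, obj, dt):
        pass

class Transform(Component):
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

class Velocity(Component):
    def __init__(self, vx, vy):
        self.vx, self.vy = vx, vy

    def update(self, obj, dt):
        # Behaviour reaches sibling components through the object.
        t = obj.get(Transform)
        t.x += self.vx * dt
        t.y += self.vy * dt

class GameObject:
    # A generic object is just a bag of components; behaviour comes
    # from aggregation rather than from a class hierarchy.
    def __init__(self, *components):
        self._components = list(components)

    def get(self, kind):
        return next(c for c in self._components if isinstance(c, kind))

    def update(self, dt):
        for c in self._components:
            c.update(self, dt)
```

Usage is correspondingly predictable: a moving object is simply `GameObject(Transform(), Velocity(1, 2))`, while a static one omits the `Velocity` component.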
Although these days the merits of the generic objects methodology are unquestionable, it is still surprisingly easy to apply it incorrectly or for the wrong purpose. It may be tempting to see it as a “cool” new architecture that is more powerful, more expressive, or simply more elegant than the old C++ class hierarchy. But if this is all you see in it, you are likely to miss out on the significant benefit described in this article: establishing a clear and universal way of thinking about your game. To establish such a vision, you not only need to enable it, you also have to enforce it. If your architecture endorses alternative ways of implementing functionality, inevitably both will be used and the choice between them will be arbitrary. For example, if you permit inheritance between components, the message of aggregation over inheritance will become muddled and the benefits of aggregation ultimately reduced, as programmers switch randomly between paradigms. Similarly, I’m strongly against exposing the composition of generic objects to the rest of the game, as this leads to increased coupling, in other words reduced portability of components.
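One way to avoid exposing composition, sketched here under my own hypothetical names (`Entity.send`, the `"damage"` message), is to route all outside communication through messages, so callers never name a concrete component type:

```python
class Health:
    def __init__(self, hp):
        self.hp = hp

    def handle(self, msg, payload):
        # A component declares which messages it consumes.
        if msg == "damage":
            self.hp = max(0, self.hp - payload)
            return True
        return False

class Entity:
    def __init__(self, *components):
        self._components = list(components)

    def send(self, msg, payload=None):
        # The rest of the game talks to the entity, never to a named
        # component, keeping coupling low and components portable.
        # Returns True if some component consumed the message.
        return any(c.handle(msg, payload) for c in self._components)
```

A caller writes `entity.send("damage", 3)` without knowing, or caring, whether health lives in one component or several.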
Enforcement of such conventions is bound to stir controversy in many a team, as many will argue that it stifles creativity or the freedom to pick the optimal solution. Yet, as many of us are discovering, that freedom comes at the price of spending ever more time reading ever larger and ever more complex codebases. Increasingly, we have to contend with the limitations of the human brain and take them into account when organising our code. Fortunately, it doesn’t take a cognitive scientist to do this; all we need to do is remember that the compiler is not the only audience of a computer program.