
Hardware and technology are changing rapidly, and game development teams and budgets have consequently grown to keep pace. However, the methods for creating game technology have not kept up. By using his own Sammy Studios as an example, Keith tells us why this needs to change.

Clinton Keith, Blogger

August 1, 2003


It's not every day a studio gets to reinvent the game development wheel by setting up a brand-new technology group. Combining awareness of tomorrow's trends with the knowledge gained from game development's past, Sammy Studios chose to invest in a new technical infrastructure.

Hardware and technology are changing rapidly, and game development teams and budgets have grown to keep pace. However, our methods for creating game technology often have not. We waste millions of dollars and years of time to create games that don't meet the gameplay or visual quality bar that consumers expect.

In many cases, this situation is due to the methodology we use to create game technology. Ad hoc methods that once sufficed to support a few artists or tune a simple game mechanic are not suitable for teams of 30 people or more. Likewise, given the growing size of programming teams, ad hoc leadership and team communication styles are no longer effective.

The most apparent effect of these problems is an increasing amount of thrashing, or wasted effort, within a team. Content creators (artists and designers) are often blocked waiting on key technology; even with a working pipeline, they are slowed by the long iteration time between changing an asset and seeing it in the game. On the programming side, we wind up chasing frequently changing goals and conducting death-march efforts to keep pace with schedules created in some dim, optimistic past.

These are not the best conditions for creating the games we wish to create. Such conditions leave little time for exploration of what will make the game fun. At worst, they sap the passion of the developers to make a great game.

The game development industry needs to mature. It needs to develop technologies and methods for freeing up creative and content roadblocks. It needs to keep schedules realistic and provide time for refactoring our code and updating our assumptions.

To that end, this article describes building a technology foundation for a new game development studio. It's about taking all the team members' accumulated experience and trying to get it right from the start. It's about addressing specific technical infrastructure problems that can prevent us from making the best games possible. Although it is far easier to start with a new group and a blank slate, every one of the problems described here can be addressed by any existing programming team. The solutions presented are based on our collective experience over our careers and may not be ideal for your organization; the goal is to present a starting point. Many of our lessons have come from failure as well as success.

The Goals

Our goal is simply to make our development teams as productive as possible. We want to give the content creators (designers and artists) fast and intuitive control through tools which allow them to discover gameplay and resolve production problems as early as possible. We want to establish a methodology that will allow the programmers to work in an effective team environment. Our focus is to create the technology and processes that will support these goals and provide the basis for a number of development projects running in parallel. The approach we've taken is to invest heavily in a technology infrastructure from the start. This meant creating a sizable Engine and Tools Group from the beginning.

The following sections describe the decisions we've made regarding technology, tools, and methodology: our solutions to common problems, given the opportunity to start from nothing but a commitment by Sammy Studios to invest in a technical infrastructure.

Technology

Technology is the foundation for development. We want to architect this foundation to make it both flexible for prototyping and robust for production.

Data-driven design. Game design requirements are very dynamic, and our technology needs to be designed to handle this. Game behaviors and tuning parameters must be iterated on frequently to produce the best results. Game engines often do not support this approach. A common practice is to embed the behavior of the game's entities too deeply in code; as a result, a programmer ends up spending a great deal of time making small code changes and building new versions of the game for a designer. To address this, programmers might create simple text-format files for storing frequently changed parameters, but fail to make the parsers robust enough to handle format changes and backward compatibility.

Another problem is depending too much on object hierarchies for behavior. Anyone who has written a large object hierarchy knows that moving object behavior around the hierarchy can cause a great deal of problems in the long run. An example is shuffling AI behaviors around the hierarchy until you end up with AI code in base classes or a mass of cut-and-pasted code. Either way, the result is a fragile code base that becomes increasingly difficult to maintain.

A data-driven design can solve these problems. The system that we created is called an Actor Component System. This system allows groups of components, or basic objects of behavior, to be aggregated together to form actors within our games. The components that make up actors are driven by XML data files which the designers or artists tune with a Maya plug-in editor. Components and actors communicate with each other through a messaging system that allows the data contained in the components to be loosely coupled.

For example, say you have a locked door. The designer may want the door to unlock when a specific class of NPC comes into view, which would require adding an "eye" component to the door. When the eye component "sees" an NPC it recognizes, it broadcasts a message to its parent actor indicating that the door should unlock and open. The benefit of this approach is that not all door objects have to contain eyes, and eyes don't have to know what they are attached to. A simple scripting system glues the logic together (for example, seeing a particular NPC triggers a door-open action). Making this change in an object-hierarchy behavior model would be far more challenging.
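
To make the example concrete, here is a minimal C++ sketch of how such a component system might be wired together. All of the names (Component, Actor, Message, EyeComponent, DoorComponent) are illustrative assumptions for this article, not our actual interfaces, and the XML loading and script glue are omitted.

    #include <cstddef>
    #include <string>
    #include <vector>

    class Actor; // forward declaration

    // A message passed between loosely coupled components.
    struct Message {
        explicit Message(const std::string& t) : type(t) {}
        std::string type; // e.g. "Unlock"
    };

    // Base unit of behavior. Concrete components would be
    // configured from XML-defined parameters at load time.
    class Component {
    public:
        virtual ~Component() {}
        virtual void Update(float /*dt*/) {}
        virtual void OnMessage(const Message& /*msg*/) {}
        void SetOwner(Actor* owner) { mOwner = owner; }
    protected:
        Actor* mOwner;
    };

    // An actor is an aggregate of components plus a dispatcher
    // that forwards messages to every component it owns.
    class Actor {
    public:
        void AddComponent(Component* c) {
            c->SetOwner(this);
            mComponents.push_back(c);
        }
        void Update(float dt) {
            for (std::size_t i = 0; i < mComponents.size(); ++i)
                mComponents[i]->Update(dt);
        }
        void Broadcast(const Message& msg) {
            for (std::size_t i = 0; i < mComponents.size(); ++i)
                mComponents[i]->OnMessage(msg);
        }
    private:
        std::vector<Component*> mComponents;
    };

    // The "eye" from the example: when it recognizes an NPC, it
    // broadcasts to its parent actor. A door component on the same
    // actor reacts without either one knowing about the other.
    class EyeComponent : public Component {
    public:
        virtual void Update(float) {
            if (SeesRecognizedNPC())
                mOwner->Broadcast(Message("Unlock"));
        }
    private:
        bool SeesRecognizedNPC() const { return false; } // stub sensor
    };

    class DoorComponent : public Component {
    public:
        DoorComponent() : mLocked(true) {}
        virtual void OnMessage(const Message& msg) {
            if (msg.type == "Unlock") mLocked = false; // open-door logic here
        }
    private:
        bool mLocked;
    };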

There are problems to be aware of with a data-driven design. It is easy to give the designers too much control, or too many controls to adjust, which can result in unforeseen combinations of behaviors and a flood of bugs. We address this issue by having programmers combine components into actor templates ahead of time. You don't want to create a system that attempts to remove the programmer from the design loop.



Figure 1. Three-level tool hierarchy based on iteration time, depth of data manipulation, and interface complexity.

Middleware. As a new studio, middleware was an obvious choice for us. Halfway through the current console cycle is not the best time to be creating a new technology base. Creating your own core technology requires a time-consuming process of hiring specialists in graphics, physics, audio, and all the rest, for every platform. The amount of time it takes to create a core engine adds a great deal of risk to development.

We chose to leverage mature middleware wherever possible, which has accelerated our prototype development. Middleware vendors provide plug-ins and exporters for Maya or Max, allowing us to focus programmers familiar with the SDKs for these programs on extending functionality for our own use.


TIP: Focus your interface development in areas that will be used the most. A value that is rarely set can be given a text field. A frequently tuned value may deserve a custom slider control.

 

Middleware must be carefully evaluated. We've rejected some middleware packages after our evaluations determined we could not meet our goals with them. Middleware that does not come with a source code license adds a great deal of risk, and this has been a major reason for not using certain libraries. Middleware that has not shipped in a published game is also a risk. Such risk might be acceptable if you were replacing existing technology, but in our case we had no technology to fall back on. Also, some middleware can be suited to prototyping but not to production.

Engine design. People are often confused by our effort to develop an engine after we have chosen to use middleware. Such confusion stems from misunderstanding what a game engine really is. It's not a renderer, but rather a framework and wrapper for the various subsystems (including middleware) in the application. It unifies resource management, control, sound, networking, and gameplay with common interfaces that allow them to work well together.

Engine design is often neglected, which can lead to problems. When middleware and platform implementations are not well insulated, replacing them can create major headaches. Subsystems that are not insulated from one another create a web of cross-dependencies that builds up during development and takes more and more of the programmers' time to maintain. When subsystem interfaces are created independently, it becomes anyone's guess whether the systems will work together properly.

The solution is to architect the engine and framework as early as possible. For us this began with an agreement on coding standards and project structure. A design phase defined the top-level design, the interfaces, and several use cases describing the top-level flow of a game built on the framework. Our framework consists of a number of subsystems that inherit an identical interface. This interface defines each phase of a subsystem's operation: startup, reset, simulation update, rendering, and shutdown, among others. These subsystems are treated as tasks with their own priorities. This framework allows us to control game flow at the highest level of code rather than requiring lower-level systems to "know" about one another.
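
As a sketch of what such a framework might look like in C++ (the names here are hypothetical, not our production code), each subsystem inherits one interface and the framework drives every phase in priority order:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Common interface every engine subsystem inherits. Because the
    // framework drives each phase, no lower-level system needs to
    // "know" about any other.
    class ISubsystem {
    public:
        virtual ~ISubsystem() {}
        virtual bool Startup()        = 0;
        virtual void Reset()          = 0;
        virtual void Update(float dt) = 0; // simulation phase
        virtual void Render()         = 0; // rendering phase
        virtual void Shutdown()       = 0;
        virtual int  Priority() const = 0; // task ordering
    };

    class Framework {
    public:
        void Register(ISubsystem* s) { mTasks.push_back(s); }

        bool StartupAll() {
            std::sort(mTasks.begin(), mTasks.end(), ByPriority);
            for (std::size_t i = 0; i < mTasks.size(); ++i)
                if (!mTasks[i]->Startup()) return false;
            return true;
        }
        void Tick(float dt) { // one frame of game flow
            for (std::size_t i = 0; i < mTasks.size(); ++i) mTasks[i]->Update(dt);
            for (std::size_t i = 0; i < mTasks.size(); ++i) mTasks[i]->Render();
        }
    private:
        static bool ByPriority(const ISubsystem* a, const ISubsystem* b) {
            return a->Priority() < b->Priority();
        }
        std::vector<ISubsystem*> mTasks;
    };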


TIP: Having a single interface where everyone on the team can control the version of their assets and launch the game from one place is very useful. In the past I have written independent tools to do this, but we were able to integrate this tool into the Alienbrain client. Using the Alienbrain instant messaging system can bring everyone on the team immediately up-to-date with any changes to the assets, executable, or exporters. Links to new files can be sent and automatically updated.

 

Insulating the higher-level code from the lower levels is important. This includes creating wrappers or defines for middleware-specific types and isolating platform specifics behind common interfaces. Proper interfaces are the key to solid engine design.
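
A small illustration of the insulation idea (the vendor name and header below are invented for this example): higher-level code includes one engine-owned header and never names the middleware directly, so swapping vendors becomes a one-header change.

    // EngineMath.h -- the only file that knows which vendor is in use.
    #if defined(USE_VENDOR_A)
        #include <vendorA/math.h>            // hypothetical middleware header
        typedef VendorA::Vector3 EngineVector;
        typedef VendorA::Matrix4 EngineMatrix;
    #else
        // Engine-native fallback types with the same interface.
        struct EngineVector { float x, y, z; };
        struct EngineMatrix { float m[4][4]; };
    #endif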

Generic networking libraries. Online networking is a popular feature these days, and it's important to address it early, because it's not easy. Leaving network development until later in the project forces a great deal of refactoring of your game object behavior. Game objects need to be developed and tested with the networking technology in place.

We created a generic network layer very early and have benefited in many ways. It allowed us to test new behaviors in the networking environment as soon as they were written and to fix problems while the code was still fresh in the author's mind. There were a few surprising benefits as well: by allowing early network play, our designers gained early insights into potential AI behavior. In addition, we have fully leveraged this technology for our tools, creating robust tools that run on the PC and communicate with the game running on the consoles.
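
One plausible shape for such a layer, sketched here with invented names: a transport interface that both game code and PC-side tools link against, so a tool on a PC and a game on a console exchange the same packets.

    // Minimal transport abstraction. Concrete implementations
    // (PC sockets, or the console SDK's network stack) live
    // behind this interface.
    struct Packet {
        unsigned short channel;  // e.g. gameplay vs. tool traffic
        unsigned short size;     // valid payload bytes
        unsigned char  data[512];
    };

    class ITransport {
    public:
        virtual ~ITransport() {}
        virtual bool Connect(const char* address, int port) = 0;
        virtual void Send(const Packet& p) = 0;
        virtual bool Receive(Packet& out) = 0; // false if nothing pending
        virtual void Disconnect() = 0;
    };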

Tools and Pipeline

Our goal is to give content creators fast and simplified access to the technology. The more they can iterate, and the less time they spend waiting for fixes, exports, and programming changes, the better the game will be.

Tools. Tools for development are essential, but their development can easily be mishandled. Poorly developed tools limit content quality and production flow, often because too few resources are dedicated to tool production early on. Tool development can also be too ambitious, producing complex, deep tools that fail to meet the expectations of professional artists used to mature interfaces. Tools can place a major burden on users by introducing complex steps or latency between creating assets and seeing them in the game. They might depend on parallel functionality in the game that is changing rapidly, and so require heavy maintenance. At worst, an asset that works in the tool might not work at all in the game once it is imported.

Our approach to tools is to create them at three levels. Each level reflects how tightly the assets and data a tool manipulates are coupled to the game (and how quickly they change) and how deep the user interface needs to be. Figure 1 shows the relationship among the three types.

The top level consists of plug-ins and extensions to Maya and other commercial tools. Maya has hundreds of man-years of development in its user interface, there is a rich pool of talent that knows how to use it, and its interface is extremely customizable and extensible. This is what the artists and designers use to perform the large-scale operations of creating levels and geometry and setting up gameplay. They spend most of their time in this environment, and so their tools need to be solid.

The mid-level tools are MFC applications linked to our engine. An example is the tool we use to create and tune our animation finite state machines (FSMs). FSMs are often defined in code or in obscure text files that designers and artists cannot manipulate. A UI toolkit such as MFC allows your tool programmers to create capable interfaces rapidly. The artist graphically manipulates the FSM and its parameters and sees immediate progress within the game view. Nothing is lost in translation, and no code is duplicated between the engine and the tool, because they are linked together.
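
As a sketch (with invented names), the data such a tool edits might boil down to structures like these. Because the MFC tool links against the engine, the tool and the game share the very same definitions:

    #include <string>
    #include <vector>

    // One outgoing edge of the FSM, fully tunable in the tool.
    struct AnimTransition {
        int         targetState;  // index into AnimStateMachine::states
        std::string triggerEvent; // e.g. "jump_pressed"
        float       blendTime;    // cross-fade duration in seconds
    };

    struct AnimState {
        std::string clipName;      // animation clip to play
        float       playbackSpeed; // tunable parameter
        std::vector<AnimTransition> transitions;
    };

    struct AnimStateMachine {
        std::vector<AnimState> states;
        int currentState;
    };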

The lowest level of tools we develop are those that run directly on the console. These tools manipulate data that depends on the hardware. An example is our mip-map tuner, which allows an artist to select a texture within the game running on a PS2 and tune the mip-map (L and K) settings in real time. The networking layer allows this tool to run on a PC on the same network as the PS2. Once the artist is happy with the settings, he or she saves the parameters out to the asset pipeline, which uses those values for all subsequent PS2 exports.

Another important goal is to keep the interfaces as uniform as possible, for example, making all your camera controls work the same as Maya's. Keeping a dozen or so tools under proper control creates problems: properly versioning the tools and the assets they create is a major requirement for the pipeline, which I'll address next.


Renderware Graphics 3.5 introduced a platform-independent XML format (RF3) and export templates that control how RF3 files are converted to platform-dependent assets. This is a great example of an intermediate file format that makes the pipeline far more extensible.

 

Asset pipeline. Channeling the flow of thousands of assets (source and exported) through a system that maintains many revisions can be a major challenge. The problems are too numerous to list individually, so I'll generalize them:

  • Maintaining revision control not only of the assets but of the executables. When everyone has different versions of the game, it's hard to track down problems but easy to lose tuning improvements.

  • Old assets that are no longer useful clutter the system long after they should be retired.

  • Numerous paths for adding assets exist. No permission system exists to protect the data.

  • No meta-data exists to control the asset export. For instance, what would you do if you needed to change the scale of every exported asset?

  • Bad data (assets that can crash the game, for example) needs to be caught before it goes out to the rest of the team.

The first step in creating an asset pipeline is to visualize what you want it to do. We flowchart the path assets take through each system we want to create and work with the artists and designers to develop case studies of how specific areas of the pipeline will work. The goal of this flowchart is to identify and remove the bottlenecks that keep artists from creating scenes and seeing them in their final form.

Many developers have created custom asset management tools that required major investments. The impact on budgets and schedules due to bad asset pipelines certainly justified the expense. However, there are some recently released commercial applications that make such an investment in homebrewed solutions no longer necessary. We chose Alienbrain as our base asset management system. Alienbrain came with Maya integration built-in and an extensive COM-based interface that allowed us to integrate it with our engine and tools.



Figure 2. The asset pipeline, with an intermediate XML file format exported by the tools.

One other key element of the system is the use of an intermediate XML file format exported by the tools (Figure 2). This intermediate format is an additional file exported into the pipeline that contains all the data you could potentially be interested in. It gives us two major benefits:

First, assets can be re-exported from an automated system if we wish to change some basic value. For example, when we wanted to rescale our geometry, we changed one float in one template and hit one button to re-export everything.

The other benefit is that exported assets can be deleted and regenerated every night. Together with meta-data-driven asset tracking, this is a useful system for culling old assets that are no longer used.
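
A hedged sketch of the one-button re-export (all names here are assumptions for illustration, not our pipeline code): an automated pass walks every intermediate asset, applies the current template values, such as the global scale, and regenerates the platform-specific output.

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical export template: project-wide values applied on export.
    struct ExportTemplate {
        float globalScale; // change this one float to rescale every asset
    };

    // Stand-in for data parsed from an intermediate XML file.
    struct IntermediateAsset {
        std::string name;
        std::vector<float> vertices; // x,y,z triples
    };

    // Apply template values, then write the platform-specific output
    // (the write is stubbed out here as a printf).
    void ExportAsset(IntermediateAsset asset, const ExportTemplate& tmpl) {
        for (std::size_t i = 0; i < asset.vertices.size(); ++i)
            asset.vertices[i] *= tmpl.globalScale;
        std::printf("exported %s at scale %.2f\n",
                    asset.name.c_str(), tmpl.globalScale);
    }

    // The "one button": re-export everything from intermediate data.
    void ReexportAll(const std::vector<IntermediateAsset>& assets,
                     const ExportTemplate& tmpl) {
        for (std::size_t i = 0; i < assets.size(); ++i)
            ExportAsset(assets[i], tmpl);
    }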


The major ongoing issue with an asset pipeline is that it is constantly changing. As tools are added during development, it is easy to introduce problems and pathways that make the pipeline harder to use. Revisiting the state of the pipeline and fixing problems is a must; asset pipelines are never truly "finished."


The Best Practices Document

During development, many improvements are made to the technology and methods to make life easier. How do you track these improvements? A single document that collects descriptions of them is a great help.

Think of what you would want to hand to new programmers joining your team. You want them to come up to speed on your team's practices as efficiently as possible. If they need to know it, it should be in the best practices document.

Methodology

The effectiveness of a programming team is determined by how well they are organized and how well they work together. A team that is not moving toward a commonly understood goal and sharing the same practices is not going to be very productive.

Shared practices. Creating a game is a team effort that should be supported by certain practices. Such practices include sharing tool improvements and improving the ability for programmers to understand each other's code. Code that is hard to read, poorly documented, or full of bugs hinders efforts to streamline programmer productivity. Improvements to the technology and development tools need to be shared widely enough to benefit all programmers on the team.

To help solve this, we use a best practices document. This document is a collection of all the standards and practices that have been established. This document is constantly updated to include improvements or refinements to the system.

The best practices document includes coding standards, setup instructions, naming conventions, documentation requirements, commit practices, and descriptions of useful utilities and tools for the programmers. Revisions happen continually; whenever someone sends out a useful debugging macro, I'll have them add it to the document.

A best practices document alone is not sufficient, however. Code reviews and pair programming help ensure that the entire team continually follows the document and that good habits spread. When these practices are followed, everyone's code quality improves and maintenance is reduced. Programmers write better code when they know more people are going to look closely at it.


You can build a great deal of automation around commit practices, making the task of committing and sending mail to the team as painless as possible.

 

Commit practices. Source version control tools are essential, but they can introduce as many problems to a project as they solve. It's very easy for programmers in a rush to commit untested code changes that bring the entire project to a halt. It's not uncommon to see teams spending half their time fixing problems related to this issue.

We've set aside a PC as our commit test target. Before making a commit, a programmer first reserves this machine. Following the commit, the test PC retrieves the changes and rebuilds all configurations of the game. When all the builds succeed, the target PC is released and the programmer sends a note to the team list describing the changes. This catches most committed problems, but not all of them; daily build tests catch many of the rest.

Nightly builds. A common problem occurs when you're not sure what version of the game or assets is being used by members of your team. An artist might have a crash problem on his or her machine, but the problem cannot be replicated on a development system. Trying to figure out such puzzles wastes a large amount of time.

Earlier I mentioned that we re-export all of our assets overnight. This is done on the PC used as the commit test target. The tool that creates these builds also embeds version numbers in the executables and the game data (for run-time version testing). Each morning the assistant producer runs the game that was regenerated overnight and goes through a regression test. Any problems must be solved immediately. Once a working set of assets and executables is identified, it is copied up to a network drive, and everyone on the team is informed (via Alienbrain instant messaging) that they can update to these versions.
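
A small illustration of the run-time version test (the mechanism and names below are assumptions, not our exact implementation): the nightly build tool stamps the same build number into the executable and the exported asset set, and the game refuses a mismatch at startup rather than producing unreproducible bugs later.

    #include <cstdio>

    // Stamped into the executable by the nightly build tool.
    static const unsigned int kExecutableBuild = 1042; // example value

    // Would read a version file packaged with the exported assets.
    unsigned int ReadAssetBuildNumber() { return 1042; } // stub

    // Called at startup: mismatched code and assets are rejected
    // up front, with a message a tester can act on.
    bool VersionsMatch() {
        const unsigned int assetBuild = ReadAssetBuildNumber();
        if (assetBuild != kExecutableBuild) {
            std::printf("Version mismatch: exe %u vs. assets %u\n",
                        kExecutableBuild, assetBuild);
            return false;
        }
        return true;
    }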

The benefit of this is that the team can copy known working assets and executables to their local drives and start making changes. If an artist introduces a new asset that breaks their local copy of the game, they know they caused the problem and that they cannot commit the new asset. The same goes for programmers changing code. In such a situation the artist is encouraged to seek out a programmer to solve the problem.

Leadership. Programming teams are often led by someone who does not yet understand how to lead. That person has shown great talent for programming and was probably promoted with no instruction on how to fill the lead programmer role. This situation can lead to disaster for the team, because the lead will continue to focus on programming rather than on leading the team.

Leads need to spend half their time managing the effort: dealing with problems that are affecting the team, planning to avoid future problems, and making sure everyone is working toward the same goal. During milestone crunch times, they will need almost all of their time free for putting out fires. As a result, leads should not assign themselves key tasks around critical milestone deliverables. Leads should focus on mentoring and on taking a global view of the technology being developed by the entire team. There is no way a lead programmer alone can create enough useful technology to offset the benefit of having someone focus on team issues.

From Investment to Returns

Many of the problems described here are common to every developer. Our solutions were developed based on our current circumstances and collected experience. These same solutions may not apply to you, but the problems still need to be addressed. Creating and justifying the expense for infrastructure can be an uphill battle with management; the value added by infrastructure cannot easily be tracked by counting games sold. A solid infrastructure does not ensure a hit game; rather it reduces the number of obstacles that get in the way of creating a hit game.

 

Suggested Reading:

Brown, William J., and others. AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis. John Wiley & Sons, 1998.

Cerny, Mark, and Michael John. "Game Development Myth vs. Method." Game Developer vol. 9, no. 2 (June 2002): pp. 32-36.

Game Programming Gems, vols. 1-3. Charles River Media, 2000-2002.

McConnell, Steve. Rapid Development. Microsoft Press, 1996. (Or any of McConnell's other books.)

Meyers, Scott. Effective C++, 2nd Ed. Addison-Wesley, 1997.

 


About the Author(s)

Clinton Keith

Blogger

Clinton Keith was a video game developer for 15 years and introduced the game industry to agile/Scrum/lean development. He is now an independent agile coach and certified Scrum trainer who conducts workshops at studios and in public. He is the author of "Agile Game Development with Scrum" which will be published in early 2010. His website is www.ClintonKeith.com. He will also be speaking at the GDC 2010.
