
Building a Mindset for Rapid Iteration Part 2: Some Patterns to Follow and Pitfalls to Avoid

Following his initial take (http://www.gamasutra.com/view/feature/3645/building_a_mindset_for_rapid_.php), EA veteran and Emergent VP Gregory completes his look at rapid iteration by examining methods for seeing asset changes swiftly in your games.

David Gregory, Blogger

May 15, 2008


[Following his initial take, EA veteran and Emergent VP Gregory completes his look at rapid iteration by examining patterns that can help development teams rapidly make game changes and see them reflected in the playable product.]

In Part 1 of this series, we discussed the reasons why rapid iteration is so critical to your chances of success in building fun into your game, and some of the contributors to increasing iteration times as teams, projects and toolsets grow ever larger.

Content transformation "expense" was defined as time elapsed before the change can be seen in the appropriate medium, usually a game engine or an engine-derived viewer.

With the goal of increasing efficiency on a game project, we started by looking at content transformations as the first optimization point. Now let's dive into the details of where you can squeeze significant time out of your processes: the development team's tools and practices.

Patterns to Follow

If the whole is equal to the sum of the parts, then the iteration rate for each individual developer on the team makes a big difference in your overall iteration rate. Make sure that each developer is working in the best environment possible.

Get a Handle on Your Development Workspace(s)

Maximizing productivity is a lot about the details of a developer's day. Minimizing disruptions is important, be they attendance at unnecessary meetings, or just interruptions that break the flow of concentration in the middle of a task.

For example, you need to be able to context-switch between development workspaces quickly, on the same machine. You may be working on a feature and, at a moment's notice, be asked to fix a bug you aren't set up for. How do you minimize the interruption?

The development workspace is the collection of data, software, tools and utilities that achieve a number of transformations on data. For instance, your compile workspace includes source code, compiler, linker, environment variables, registry entries, project files, solution files, etc.

Your artist workspace includes digital content creation tools, the last known good pipeline tools for your game team, and the last known good target environment for you to check out your work in.

A lot of workspaces are set up as global singletons, making it impossible to switch workspaces on a single machine. That makes it very hard to set up your machine to debug a problem from another branch, and equally hard to keep your build process the same on your local machine and the build farm.

Notorious "problem child" software includes anything requiring an installation procedure, or anything setting up registry entries or global environment variables. This includes anything installing itself in the global assembly cache. Any configuration with hardcoded drive letters, or absolute paths, is generally a no-no.

The best configurations are usually file-based and script-based; they can easily be moved from one base directory to another, and they can be distributed simply by syncing from a source or content control repository, or from another distribution mechanism if required.
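As a rough sketch of what a file-based, relocatable workspace can look like -- the workspace.json layout and directory names here are hypothetical, not a prescription -- consider:

import json
from pathlib import Path

def load_workspace(root):
    """Read workspace.json from a base directory and resolve every
    tool/data path relative to that directory -- no registry entries,
    no global environment variables, no hardcoded drive letters."""
    root = Path(root).resolve()
    config = json.loads((root / "workspace.json").read_text())
    return {name: root / rel for name, rel in config["paths"].items()}

# Two workspaces (say, mainline and a bugfix branch) can live side by
# side on one machine; switching is just a matter of which root you use.
main_ws = load_workspace("work/mainline")
bugfix_ws = load_workspace("work/bugfix_branch")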

Plan for Team-Wide Change

Game development is all about dealing with change. Development teams will often customize anything they can to achieve the desired result. There are some things you can do to try to limit the risk of making those changes.

For one, you should reduce your variables when introducing new functionality in the pipeline. Change as few things at a time as possible. That means staging your changes and lining up test plans for each change.

For example, you should not introduce a new texture compression method to your texture packing tool at the same time you're adding cube maps. Let one get tested and introduced to the team before the other is rolled out.

You should also set up a way of reliably testing new pipeline tools without affecting the entire team. Your content build farm should be able to run with last known good tools, or with untested tools. Results of an untested tool content build should not be released to anyone other than the person(s) testing the results.

Remember that not everyone is using the same versions of data and code. You should introduce data versioning and serialization capability into your non-shipping application.

Train your engineering staff to maintain version compatibility at least one version back from the last known good build, or whatever range you are comfortable with. This can be achieved without affecting the performance of the shipping application.
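Here is a hedged Python sketch of that kind of version-tolerant serialization; the "light" record and its fields are invented for illustration, but the pattern -- write a version number, have the reader accept at least one version back and fill in defaults -- is the point:

import struct

CURRENT_VERSION = 3

def write_light(stream, color, intensity, casts_shadows):
    # Current-format record: version tag, RGB color, intensity, shadow flag.
    stream.write(struct.pack("<I3ffB", CURRENT_VERSION, *color, intensity, casts_shadows))

def read_light(stream):
    (version,) = struct.unpack("<I", stream.read(4))
    color = struct.unpack("<3f", stream.read(12))
    intensity = struct.unpack("<f", stream.read(4))[0]
    if version >= 3:                      # field added in version 3
        casts_shadows = bool(stream.read(1)[0])
    else:                                 # older data: fall back to a sensible default
        casts_shadows = True
    return {"color": color, "intensity": intensity, "casts_shadows": casts_shadows}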

Follow Good Configuration Management Practices

Your pipeline should have a good Audit Trail, with logs, metrics and content lineage. You must be able to easily tell the difference between the data you build locally and the data you get from a build system. Don't commingle. Compartmentalize your data.
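As one hedged example of what an audit trail can look like in practice, here is a small Python sketch that stamps each build output with a sidecar manifest recording who built it, where, and from what. The manifest layout is invented for illustration, not a standard:

import getpass
import json
import platform
import time
from pathlib import Path

def write_manifest(output_path, source_files, built_on_farm=False):
    """Write a <output>.manifest.json next to a build output so farm-built
    and locally-built data can always be told apart."""
    manifest = {
        "output": str(output_path),
        "sources": [str(s) for s in source_files],
        "built_by": getpass.getuser(),
        "machine": platform.node(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "origin": "build_farm" if built_on_farm else "local",
    }
    Path(str(output_path) + ".manifest.json").write_text(json.dumps(manifest, indent=2))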

Reduce Compiling and Linking Times Where Possible

Compiling and linking times are huge on projects the size of a modern game development effort. Do what you can to shorten these times.

There are lots of tools available to engineers to make things go faster: pre-compiled headers, incremental linking, and distributed build systems, to name a few. (Note the pitfalls below.)

Perform the Smallest Number of Transformations Possible

Understand the roles on your team, and what build results each person MUST see to determine if their change worked. For example, do you need animators to run your lighting build?
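One lightweight way to encode that understanding, sketched here in Python with purely illustrative role and step names, is a simple mapping from each role to the minimum set of pipeline steps that role actually needs to run locally:

# Each role pays only for the steps it needs to validate its own changes.
ROLE_PIPELINES = {
    "animator":        ["export_animation", "build_skeleton", "deploy_to_target"],
    "environment_art": ["export_mesh", "build_collision", "bake_lighting", "deploy_to_target"],
    "audio":           ["encode_audio", "deploy_to_target"],
}

def steps_for(role):
    # Animators never pay for bake_lighting; audio never pays for mesh export.
    return ROLE_PIPELINES[role]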

Transform Only What You Change

Transform the minimum amount of data that you can get away with. If you edit a texture, transform the texture, and nothing else. In fact, if you only need to change the alpha channel of the texture, make sure your pipeline supports outputting only the alpha channel (assuming you store it separately).

Do the minimum amount of data transformation (measured by time elapsed) that will still put the data in a form that the target application can read. Decide what you actually need to see on the target. Is real lighting important for every change made? If you eliminate some steps, can you still do your work?

Don't do dependency-based content builds during host-target iteration. It will not scale. Large amounts of content will take forever to check. It's better to intelligently build only what has changed. Your "do nothing" build should literally be, do nothing.

Note that you likely want a dependency-based build in a build farm, where you are building lots of content, have all of the intermediate data (hopefully local to the machine that needs it), and when an iteration time measured in seconds is not required.
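To illustrate the difference, here is a minimal Python sketch of a change-driven local build, assuming a hypothetical dirty list that the content tools append to on save; the "do nothing" build really does nothing, because the list is empty:

import json
from pathlib import Path

DIRTY_LIST = Path("dirty_assets.json")

def mark_dirty(asset_path):
    """Called by the editor/exporter whenever it saves an asset."""
    dirty = set(json.loads(DIRTY_LIST.read_text())) if DIRTY_LIST.exists() else set()
    dirty.add(str(asset_path))
    DIRTY_LIST.write_text(json.dumps(sorted(dirty)))

def build_dirty(transform_asset):
    """Local iteration build: transform exactly the assets that changed,
    with no dependency-graph scan of the whole content set."""
    dirty = json.loads(DIRTY_LIST.read_text()) if DIRTY_LIST.exists() else []
    for asset in dirty:
        transform_asset(Path(asset))
    DIRTY_LIST.write_text("[]")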

Location Independence and Format Flexibility

Your game engine (in non-shipping form) should be able to read data in a shipping/optimized form and many non-shipping/non-optimized forms. It also should be able to read it from different physical locations.

For example, optimized stream creation can be very expensive and require tons of source content and/or intermediate data to be available. It will always be better to be able to read the data without ever doing the stream creation.

Be able to retrieve records from several locations on the target and from remote locations, and have this be transparent to the upper layers of game code.
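A hedged sketch of what that transparency might look like: a resolver that tries a list of providers in order -- loose local files, a network share, then a packed archive -- so the upper layers of game code never know which one answered. The provider order and names are assumptions for illustration:

from pathlib import Path

class ResourceResolver:
    def __init__(self, search_roots, packed_archive_reader=None):
        self.search_roots = [Path(r) for r in search_roots]
        self.packed = packed_archive_reader   # callable: name -> bytes, or None

    def load(self, name):
        for root in self.search_roots:        # loose, unoptimized data wins
            candidate = root / name
            if candidate.exists():
                return candidate.read_bytes()
        if self.packed is not None:           # fall back to shipping-style data
            return self.packed(name)
        raise FileNotFoundError(name)

# resolver = ResourceResolver(["local_overrides", "//buildfarm/latest"],
#                             packed_archive_reader=my_archive.read)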

Load Only What You Need

Game engine load times are long, especially during development, and viewing tools sometimes have long load times too. Pay for the load once, and then load only the data that has changed.

In addition, to make it easier to "try things out", get a network connection going between the host and target, and set up a reflection mechanism so you can change data in running objects on the target without changing what's on the hard drive. This is great for particle system tuning, sound tuning, and in lots of other situations.
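Here is a deliberately simplified sketch of such a reflection channel, assuming a line-delimited JSON protocol and a name-to-object registry on the target; a real implementation would need threading, error handling and security, but the shape is the same:

import json
import socket

live_objects = {}   # the target registers tweakable objects here by name

def handle_tweaks(port=5555):
    """Apply (object, property, value) messages from the host to live
    objects in the running game, without touching the data on disk."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", port))
    server.listen(1)
    conn, _ = server.accept()
    for line in conn.makefile("r"):
        if not line.strip():
            continue
        msg = json.loads(line)                # {"object": ..., "property": ..., "value": ...}
        target = live_objects.get(msg["object"])
        if target is not None:
            setattr(target, msg["property"], msg["value"])   # e.g. a particle emission rate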

Keep the Data Moving

Always enforce a preference of memory over disk over network, but above all, measure!

For example, don't write data out to disk with one custom tool, only to read it back in with another custom tool -- when the two tools might have been architected to work together and act on the data entirely in memory.

Sometimes this is not possible because an external tool is not architected to allow the data to move in this manner. However, when you're building your own tools and are in control of your own destiny, make sure you don't "leave money on the table," so to speak, by overlooking these important opportunities to optimize.
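As a toy illustration of the in-memory alternative, here are two pipeline stages written as plain functions and chained directly, with trivial stand-in logic; the point is simply that the intermediate result never touches the disk:

def generate_mips(pixels):
    """Stage 1: build a mip chain by repeatedly halving (stand-in logic)."""
    chain = [pixels]
    while len(chain[-1]) > 1:
        chain.append(chain[-1][::2])
    return chain

def pack_for_platform(mip_chain):
    """Stage 2: pack the chain into one blob (stand-in logic)."""
    return b"".join(bytes(level) for level in mip_chain)

def build_texture(source_pixels):
    # The intermediate mip chain stays in memory between the two stages,
    # instead of being written out by one tool and re-read by another.
    return pack_for_platform(generate_mips(source_pixels))

packed = build_texture([255, 128, 64, 32, 16, 8, 0, 0])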

Only Compress and Compile When You Must

Compression and compiling can be time-intensive functions. Make sure that your target application can handle an uncompressed or uncompiled version.

If loading the raw form of data into the target application is shorter than the compression or compiling time, plus the time to load the compressed or compiled data, then you're better off not compressing or compiling in the first place. This goes hand in hand with format flexibility, as mentioned above.
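Written out as a rule of thumb, with timings you would measure on your own pipeline rather than constants:

def should_compress(raw_load_time, compress_time, compressed_load_time):
    """Compress during iteration only when it actually saves time overall."""
    return (compress_time + compressed_load_time) < raw_load_time

# e.g. 0.8s to load raw vs 2.5s to compress + 0.3s to load compressed:
# should_compress(0.8, 2.5, 0.3) -> False, so iterate on the raw form.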

Reduce Complex Dependencies

The more data you require to perform a transform on your machine, the more likely it is that you will have to retrieve or build that data to perform the transform.

This is sometimes unavoidable, but you should try to minimize it wherever possible. This comes into play particularly in data packing steps, where many records are packed into one big file. Avoid data packing steps during host-target iteration.

There's another way to look at the same thing, from a hard drive perspective:

Size of data written = Size of data actually related to change + Size of incidental data

There's no hard-and-fast rule, but if you have to write a ton of data to the hard disk, and only some of it is actually related to the change you made, you have an inefficient pipeline.
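One simple way to put a number on this, with an invented helper name and a made-up example:

def pipeline_efficiency(bytes_related_to_change, total_bytes_written):
    """Fraction of written data that the edit actually required (1.0 is ideal)."""
    return bytes_related_to_change / total_bytes_written

# Editing one 4 MB texture but rewriting a 512 MB packed archive:
# pipeline_efficiency(4 * 2**20, 512 * 2**20) -> ~0.008, a red flag.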

Potential Pitfalls

Beware the Quick Fix

Treat the problem, not the symptom. Your process change may have unintended side effects.

For example, there is a general Configuration Management guideline that build and automation engineers will tell you: Try to keep your local build environment as close as possible to, if not identical to, your build farm environment.

The pipeline installed on your local machine should be the same as it is on the build farm. The process by which the game and tool code is compiled should be identical on your local machine and the build farm. If those things diverge, you lose the ability to perform the "Find" step above effectively.

These days, distributed build systems are a very well-regarded optimization. They seem quite reasonable on the surface. However, those systems do break down periodically, so they may not be the best variable to introduce into a build farm, which needs reliability and repeatability above all.

Also, they don't produce exactly the same executable that a standard compile and link would. That means that if you introduce them into a local desktop build, and you don't introduce them into your build farm, you have now broken the Configuration Management guideline mentioned above.

There is the potential to run across some doozy of a bug in the build produced by the build farm, only to find that you have trouble reproducing it because your local executable is built differently.

So, use your best judgment here. There is no absolute right answer.

Beware Inserting Complex Process into Local Iteration

In Part 1, we mentioned the complex check-in "gates" that a developer might have to work through to get his or her change into the game. In many cases, teams will apply the build and test farms to this problem. You can do this, but make sure that your build farm is ready to scale to be part of every engineer's workflow.

You need to plan for the heavy usage that you'll undoubtedly experience near major project milestones. If you don't plan for peak usage, you will iterate the slowest when you want to iterate the fastest, because everyone will be checking in at the same time and efficiency will plummet.

Don't Let One Person Cripple the Team

This one seems obvious. But this situation can occur a lot if you don't plan ahead. If it's not possible for someone to check in a mistake and fix it without taking down the team, you are asking for this problem. Everyone will have to wait while this person fixes his or her mistake.

WYSIWYG is Great, But Don't Try to Do Too Much

The closer you get to WYSIWYG across your entire range of tools, pipeline and runtime combined, the better off you'll be. That is for sure.

However, some teams have introduced workflows that have created an untenable situation, resulting in loss of productivity rather than gain.

For instance, in some cases, to shorten iteration time, teams have tried to recreate much of the look of the game in an environment outside the game, typically some sort of viewer application.

They do this for what seems like a good reason: the time to load the game and advance to a place where the content can be seen is dreadful, especially during development.

But there is no way they will be able to keep up -- the game will introduce a new feature, and that same feature will be missing from the viewer application. It will always lag, and will always be a source of frustration and pain. (See Plan for Team-Wide Change, above.)

Build your tools, pipeline, and engine so that they can work together to carry the change into the target environment as quickly as possible, in a form useful for the person making the change, following all the principles above.

Each tool and each person on the team is a potential contributor to -- or improvement of -- the overall project iteration rate. By focusing on the factors that add time to your pipeline, you can increase your efficiency and more quickly and reliably produce a fun game.


About the Author

David Gregory

Blogger

David is VP of Technology and Chief Architect at Emergent Game Technologies. Prior to Emergent, David spent 11 years at Electronic Arts in various roles from Software Engineer to Director of Software Development. He was one of the key players in launching the expansion pack model for The Sims franchise, and led the engineering team for the critically acclaimed and multi-million selling title, The Sims 2. Prior to Electronic Arts, he was employed at Software Toolworks/Mindscape, and Bethesda Softworks. He holds a Bachelor of Science in Computer Engineering from Virginia Tech.
