IGDA Wisconsin held a panel discussion on sound design in video games. The meeting, held in the basement dining hall of The Roman Candle Pizza in Middleton, WI, featured sound designers from High Voltage Software, Raven Software, and Human Head Studios, along with an independent designer currently working on a project for Frozen Codebase. The discussion centered on how sound design shapes the gameplay experience and how development teams can best use the talents of their sound designers.
Since the previous speaker of the evening had already introduced iPhone development, the panel segued naturally into sound design on the iPhone. The platform's audio technology is similar to that of PSOne development: only a single stream of .wav files can play at one time. This keeps things simple, but it makes layering sounds more challenging, and developers have noted a framerate hit while streaming audio, so the constraints are fairly tight.
The next talking point was the use of third-party middleware versus in-house proprietary toolsets. In other words, is it easier, or better, to work with a ready-made audio toolset than with a program built specifically for a given studio? The opinions of the panel were mixed. While some have used or are successfully using middleware such as Vicious Engine 2, FMOD Ex, and Audiokinetic Wwise, others have used proprietary software with mixed success. Proprietary software can be crude, but it can also be tailored to the team and a pleasure to work with.
It would seem to depend on the software and the company. FMOD, for its part, exposes sound and access properties to anyone with scripting access, and it features its own plug-in capability for ease of use. Audiokinetic Wwise, used in games such as Assassin's Creed 2, integrates a complete audio authoring tool, giving designers a wider array of options than other toolsets. Tools such as Unreal Engine, meanwhile, rely on embedded sound files, which are limiting and more difficult to work with.
It was pointed out that the game industry lacks any real set of standards for sound development. This is significant because industry standards create growth and direction. Middleware gets us closer to a standard in much the way that programs such as Maya and 3ds Max have for 3D graphics. It is also essential that sound design move toward wider dynamic ranges, such as those demonstrated in Grand Theft Auto IV.
In GTA IV, conversations can be heard clearly on the street, yet a missile flying past the player's head and exploding can knock them out of their seat. That kind of dynamic range is necessary to keep the mix from collapsing into white noise, where the player strains to hear instructions over music and sound effects. Ducking is one method designers use to create range: ambient sounds are temporarily diminished so that voice instruction audio comes through clearer and crisper.
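Ducking can be sketched as a simple sidechain gain on the ambient bed: follow the voice signal's level and fade the bed down while the voice is active. The sketch below is a minimal illustration of the idea only, not any panelist's implementation; the function name, threshold, and smoothing values are all arbitrary choices for the example.

```python
import numpy as np

def duck(ambient, voice, threshold=0.02, duck_gain=0.3, smoothing=0.001):
    """Mix voice over ambient, fading the ambient bed down while voice is active."""
    gain = np.empty_like(ambient)
    g = 1.0    # current ambient gain
    env = 0.0  # envelope follower on the voice signal
    for i, v in enumerate(voice):
        env = max(abs(v), env * (1.0 - smoothing))      # track the voice level
        target = duck_gain if env > threshold else 1.0  # duck while voice is audible
        g += smoothing * (target - g)                   # one-pole fade, no clicks
        gain[i] = g
    return ambient * gain + voice

# A constant ambient bed with a burst of "voice" in the middle:
sr = 44100
ambient = np.full(sr, 0.5)
voice = np.zeros(sr)
voice[10000:25000] = 0.4
mix = duck(ambient, voice)
```

The one-pole smoothing on the gain is what makes this a fade rather than a hard mute, which is the difference between ducking that goes unnoticed and ducking that calls attention to itself.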
Great sound design offers game developers a multitude of benefits. Games that sound good improve the player experience and make the game feel more polished and even more graphically impressive. The right sound effects can actually make assets in the game stand out. This is an important bit of information for artists.
Working closely with the sound design team can make the asset an artist has spent so much time on "pop" on-screen. Dynamic sound that more closely resembles players' daily lives is more immersive; it demands less suspension of disbelief, so the experience is less jarring. Bad sound effects, on the other hand, can easily pull the player out of the experience because they tend to be distracting.
The one message the panel conveyed most clearly is that every member of the team should communicate early and often with the sound designers. Give them time to help make an animation or level stand out, tell them what "feel" is desired from a sound effect or moment in the game, and trust them to help make that vision a reality.