Audio Implementation Practices to Ease Development Stress
Sound effects, music, and dialogue, oh my! Every game has so many audio assets and so little time to program them before launch. But that's where a smart technical sound design approach can mean the difference between a total development nightmare and a dream come true.
Game audio implementation may not sound super exciting, but technical sound designers actually hold an incredible amount of influence over how smoothly a game comes together. We're not just audio traffic cops moving assets from point A to point B; we're planners and systematic thinkers who can prevent massive development nightmares before they occur, provided we get the time and opportunity to put smart processes into place. That alone deserves a serious discussion early in any development cycle.
The tips I'm about to share can spare you a massive amount of stress and steer your implementation in a strong and confident direction before and after your big launch.
Structure Your Project With Handoff in Mind
Organization is the heart of implementation, and even if you're a team of one at the start, it's a good idea to approach how you structure your project with a shifting scale and scope in mind. Maybe you'll end up growing your team along the way, or your hard work may become someone else's responsibility to port to new systems down the road.
Whatever the case, being able to provide some form of documentation on the methods that keep your implementation running smoothly can help bring any new contributors or collaborators up to speed and aligned from the start. No more mid-production or post-launch scrambles to reconcile mismatched approaches.
Set Clear Routing Expectations
Having a clear and communicable system in place can help more than just other technical sound designers: your sound engineers, voice directors, and programmers can benefit too! Strong internal routing procedures help ensure all contributors understand what the next role in the pipeline needs from them and how to deliver it, so everyone has a stake in supporting the next person in the development line while keeping key decision makers, like audio directors, in the loop.
For sound designers and composers, this could mean aligning on the file format, bitrate, and other specifications the implementation expert needs. And no matter how that receiving technical sound designer prefers to prototype audio mixes in Blueprints, they should already know whether the developers anticipate using Blueprints differently so they can implement to the devs' advantage in the game engine.
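For instance, those expectations can live in a small, machine-readable delivery spec checked in alongside the project. The categories, bit depths, and sample rates below are placeholder assumptions rather than recommendations; a minimal sketch in Python:

```python
# Hypothetical delivery spec: every value here is a placeholder to agree on with
# your own team, not a prescribed standard.
DELIVERY_SPEC = {
    "dialogue": {"bit_depth": 16, "sample_rate_hz": 48000, "channels": 1},
    "sfx":      {"bit_depth": 24, "sample_rate_hz": 48000, "channels": 1},
    "music":    {"bit_depth": 24, "sample_rate_hz": 48000, "channels": 2},
}

def check_delivery(category, bit_depth, sample_rate_hz, channels):
    """Return a list of mismatches between a delivered file and the agreed spec."""
    spec = DELIVERY_SPEC[category]
    problems = []
    if bit_depth != spec["bit_depth"]:
        problems.append(f"expected {spec['bit_depth']}-bit, got {bit_depth}-bit")
    if sample_rate_hz != spec["sample_rate_hz"]:
        problems.append(f"expected {spec['sample_rate_hz']} Hz, got {sample_rate_hz} Hz")
    if channels != spec["channels"]:
        problems.append(f"expected {spec['channels']} channel(s), got {channels}")
    return problems

print(check_delivery("dialogue", bit_depth=24, sample_rate_hz=44100, channels=1))
```

A check like this takes seconds to run on an incoming batch and turns "the files were wrong" arguments into a plain list everyone can act on.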
Standardize Descriptive File Naming
There's no one right way to name game audio files, but a consistent and systematic structure to file naming can create some significant advantages for any project. While today's interactive audio middleware like Wwise does offer robust search functions that take file metadata into account, you can help foolproof your searches by elevating some of the key details into the file names themselves.
One of my past projects saved an enormous amount of time and stress simply by including whether audio was mono or stereo in the file name. Stereo may sound especially great when you're working on audio in a studio, but our implementation required audio to be in mono when all was said and done. By including the channel info in the file name, we were able to differentiate our working files instantly and without error.
A smart and descriptive file naming structure can help limit the risks of relying on metadata to fulfill your asset searches. I've found that I can categorize and include a lot of important detail by using CamelCase capitalization (e.g., HeroNameAction1Stereo…), which has the added benefit of eliminating disruptive, space-hungry underscores and making sure every character counts.
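As one possible way to keep that structure honest, a tiny script can parse names and flag anything that breaks the convention. The fields below mirror the hypothetical HeroNameAction1Stereo example rather than a real project standard:

```python
import re

# Made-up convention: Character + Action + VariationNumber + Mono/Stereo.
# Swap these fields for whatever your team actually agrees to standardize.
ASSET_NAME = re.compile(
    r"^(?P<character>[A-Z][a-z]+)"
    r"(?P<action>[A-Z][a-z]+)"
    r"(?P<variation>\d+)"
    r"(?P<channels>Mono|Stereo)$"
)

def parse_asset_name(stem):
    """Return the name's fields as a dict, or None if it breaks the convention."""
    match = ASSET_NAME.match(stem)
    return match.groupdict() if match else None

print(parse_asset_name("HeroFootstep3Mono"))
# {'character': 'Hero', 'action': 'Footstep', 'variation': '3', 'channels': 'Mono'}
print(parse_asset_name("hero_footstep_03"))  # None: catch it before it hits the project
```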
Avoid Big Sweeping Changes
Bulk actions may seem convenient, but they should be used sparingly and with an understanding by your top creative decision makers that a significant degree of control over your end product may be relinquished to a program or preset.
Consider this horror story. While preparing a game to be ported to the Nintendo Switch, an implementation team ticked a box to compress all game audio to 8-bit/24kHz without notifying the audio director. Needless to say, the project was not prepared with that level of fidelity loss in mind, and the results were disastrous. In another unfortunate case, a team compressed an entire project's library to 44.1kHz and lost the entire top end of their painstakingly designed audio. It's enough to make you scream.
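For a sense of why the first story stings so badly, remember the Nyquist limit: a given sample rate can only represent frequencies up to half its value, so anything above that ceiling simply disappears in the conversion. A quick sketch:

```python
# The Nyquist limit: a sample rate can only represent content up to half that rate.
for sample_rate_hz in (24_000, 48_000):
    nyquist_hz = sample_rate_hz // 2
    print(f"At {sample_rate_hz} Hz, nothing above {nyquist_hz} Hz survives the conversion")
# At 24000 Hz, nothing above 12000 Hz survives the conversion
# At 48000 Hz, nothing above 24000 Hz survives the conversion
```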
To get the time-saving benefits of bulk actions while maintaining some control over your output, I recommend grouping your audio with different conversion settings or presets already in mind. It's far more sustainable to assign conversion settings to targeted subsets of your audio than to expect dialogue, SFX, and music to all accept the same sweeping preset equally well.
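One way to express that grouping is a simple category-to-preset map in your conversion tooling, so no single checkbox can touch everything at once. The categories, codec, and values below are illustrative placeholders rather than real Wwise or engine settings:

```python
# Illustrative per-category conversion presets; the codec names and numbers are
# placeholders, not real middleware settings.
CONVERSION_PRESETS = {
    "dialogue": {"codec": "vorbis", "quality": 5, "max_sample_rate_hz": 32000},
    "sfx":      {"codec": "vorbis", "quality": 8, "max_sample_rate_hz": 48000},
    "music":    {"codec": "vorbis", "quality": 9, "max_sample_rate_hz": 48000},
}

def preset_for(asset_path):
    """Pick a preset from the asset's top-level category folder, e.g. 'sfx/ui/Click1Mono.wav'."""
    category = asset_path.replace("\\", "/").split("/")[0].lower()
    # Unknown categories fall back to the gentlest preset instead of silently degrading audio.
    return CONVERSION_PRESETS.get(category, CONVERSION_PRESETS["music"])

print(preset_for("dialogue/hero/HeroGreeting1Mono.wav"))
```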
Use HDR Systems Wisely
HDR (high dynamic range) systems are a super useful tool for compressing a broad range of natural audio within the limited broadcast capabilities of speaker systems, but they come with a cost. As with any compression, HDR systems compromise the integrity of your audio by collapsing the full possible range of roughly 190 decibels into a span of less than 96 (about the dynamic range of 16-bit audio), resulting in less weighty file sizes but lower overall fidelity, with loss at both your top end and your floor.
As I see it, HDR systems are most useful once your audio has already been fully mixed, and largely for dialogue and music. By engaging HDR, you relinquish a good deal of control to a program or algorithm that remaps your audible sound at any given moment based on the loudest volume it detects. If you've implemented three volume changes on one complex sound, it's better to use makeup gain within your project to achieve your ideal implementation and then make more holistic decisions about whether to use HDR in the end.
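As a back-of-the-envelope illustration with made-up numbers, decibel offsets stack by simple addition, so a single makeup gain can restore your intended level inside the project before HDR ever enters the picture:

```python
def db_to_linear(db):
    """Convert a decibel offset to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

stage_offsets_db = [-6.0, -3.0, +2.0]   # three separate volume changes on one sound
net_offset_db = sum(stage_offsets_db)   # dB offsets simply add: -7.0 dB
makeup_gain_db = -net_offset_db         # one +7.0 dB makeup gain restores the original level

print(f"net change: {net_offset_db:+.1f} dB (x{db_to_linear(net_offset_db):.3f} amplitude), "
      f"makeup gain needed: {makeup_gain_db:+.1f} dB")
```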
Where HDR systems shine is when expediency is of high importance, such as creating a port for systems with less processing power, and even then mostly for dialogue and music.