What 2D game developers fear about surround sound and why you should do it anyway
Why do 2D games seldom feature surround sound output? Should they? How can it be done correctly? We look deeper into those questions and provide some useful answers for the aspiring game developer.
What 2D game developers fear
In 2D game development it’s common practice not to use surround sound. In this article I will try to explain why that is, in contrast to the film industry, which uses surround sound with great success; why you should try it anyway; and I will also share some solid guidelines on how to do it the right way. So, if you are a game developer and you have second thoughts, or you never really thought about it, now is the perfect time to create your game featuring surround sound. Apart from making your game far more exciting for the players, it can really help separate your game from the rest of the competition.
Let’s take it from the start.
For years now, both in online game development forums and in conversations I have had with many game developers in person, it has struck me that they all fear surround sound. I know “fear” is a powerful word, but I use it because that is exactly what they express.
What they feel is fear of the unknown, and not because they don’t understand the technology or lack a good idea of how their game should play sound. Their decision not to “go there” with their game is a logical one; it’s a production decision, where experienced people with good judgment decide not to include this feature in their final product.
The problem is that while more channels surely offer more immersion, as we know from engineering, more features also mean more issues, problems, and ways for things to go wrong. There is a well-known saying in computer programming that by adding 1 feature to your program, you introduce 2 bugs. Funny, and true.
The common 7.1 surround setup
Before we continue, I think it’s a good idea to first define the common surround setup that we will base our logic upon for the rest of this article.
I have chosen the 7.1 surround setup (see picture below), as this is the configuration most commonly offered by sound card outputs, and from that format we can easily derive the lesser ones.
For this setup, we use 7 speakers, usually all full-range, and one sub-woofer that plays back the low frequency effects and sometimes the low end frequencies of music.
The first speakers are the front left and right, followed by one in the center. Those 3 are located in front of the player. Then 2 surround satellites are located in the back and 2 more on the sides of the player, making up 4 satellites for the extra surround content. Finally, there is the sub-woofer that carries the low frequencies.
In the last part of this article series, I give specific instructions on what each speaker should play, but first we must define the common problems that come with using surround in your 2D games, putting the fears of any developer in concrete terms.
A common format many sound cards support is 7.1 surround. The common setup is 3 speakers in the front, 2 in the back, and 2 more on the sides, with the sub-woofer usually located in the front too, but not necessarily, as humans cannot easily pinpoint low frequencies in space. So the sub can be placed anywhere, as long as the location does not compromise the quality of the sound.
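The layout described above can be captured as a small lookup table. Here is a minimal sketch in Python; the angles are indicative approximations only (exact placements vary between recommendations and rooms), and the channel abbreviations are simply a common shorthand, not tied to any particular engine.

```python
# A sketch of the common 7.1 layout as approximate speaker azimuths in
# degrees (0 = straight ahead of the player, negative = left side).
# The exact angles vary between recommendations; treat these as indicative.
SPEAKERS_7_1 = {
    "FC": 0,      # front center
    "FL": -30,    # front left
    "FR": 30,     # front right
    "SL": -90,    # side surround left
    "SR": 90,     # side surround right
    "BL": -140,   # back surround left
    "BR": 140,    # back surround right
    "LFE": None,  # sub-woofer: low frequencies are hard to localize,
                  # so its position is not critical
}
```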
Common issues found in games with surround sound
To understand what happens in the real world, we can look at 3 of the most common issues found in games with surround sound:
Differences in user’s LFE setup
Many games have a specific implementation for the LFE (Low Frequency Effect, aka “sub-woofer” or simply “sub”) channel. That creates an inconsistency for almost half of the players, who have a different routing type in their system. So half the time, a player will not hear the low frequency effect correctly, if it is present at all. The problem gets deeper if the developers didn’t use the LFE channel consistently: some sounds play their sub frequencies from the LFE, while others play them back from the front left and right speakers, assuming that those speakers are full range. That creates more issues with the sound feedback from the game to the player and has a negative impact on the game’s audio consistency, and therefore on its perceived quality and immersion strength.
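One way to avoid that inconsistency is to pick a single bass-management policy and apply it to every sound in the game. Here is a minimal sketch in Python; the function name and the idea of expressing sends as linear per-channel gains are my own illustrative assumptions, not any engine’s API.

```python
# Hypothetical sketch: one consistent bass-management policy for every
# sound. If the player's setup reports a sub-woofer, low-frequency
# content goes to the LFE channel; otherwise it is folded into the front
# left/right speakers, which we assume are full range.
# Gains are linear (1.0 = unity).

def bass_routing(has_subwoofer: bool) -> dict:
    """Return the send gains for a sound's low-frequency content."""
    if has_subwoofer:
        # Dedicated LFE channel carries the lows; the mains stay clean.
        return {"LFE": 1.0, "FL": 0.0, "FR": 0.0}
    # No sub: split the low end equally between the front mains so the
    # effect is not lost (0.5 linear, about -6 dB per channel).
    return {"LFE": 0.0, "FL": 0.5, "FR": 0.5}
```

The point is not the specific gains, but that the same rule is applied to every sound, so all players hear a consistent low end no matter how their system is routed.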
Center speaker internal routing
Other games have similar problems with the center speaker. In some cases there is no sound from it at all. In other cases, the voice doesn’t play from the center speaker but certain GUI sounds do. The center speaker is a problematic one, as developers don’t seem to agree on how to use it in combination with dialog, ambience, and the reverberation output of the level.
Side channels in 7.1 configurations
Many developers have only a 5.1 setup and forget to include the side speakers in some of the sound events their game features. That creates strange mixes for players who use the extra 2 side speakers of the 7.1 configuration. For example, level ambience doesn’t render from those speakers, yet standalone 3D sounds do, creating bizarre soundscapes that reflect neither the nature of the game’s environment nor the movement of the player’s camera (and the listener object).
The bad results
All those issues, and many others that appear with surround sound, can easily break immersion. Even worse, when the player loses because of such inaccuracies in sound feedback, she becomes angry and the psychological flow breaks. These issues occur independently of the budget of the production and the size of the team; they are equally distributed across every level of game production.
Regarding 2D games
The issues mentioned above show us that surround sound can be a pain to implement in your game, especially for indie developers and small teams. And even if you do implement it, there are scenarios in which it will work against the experience.
Now imagine how much of a task this can be for a small indie development team that wants to create a 2D game. It’s a difficult task if you don’t have someone on your team working exclusively on that solution, along with development tools or game logic providing a framework that translates well to various gaming setups.
Add to that the fact that, for some unknown reason, when someone develops a 2D game, they never think about surround sound anyway. As if surround sound could only benefit a 3-dimensional game. Yet, in all the history of entertainment media, we have countless examples showing that there is no such link. Take film, for example. A traditionally 2D medium, with a linear display of 3 dimensions, but no real option for the viewer to change angles, turn the camera, or explore the environment. Furthermore, films have no need for surround sound to guide the viewer or give feedback for decision making and acting upon the environment. No! Movies are all about immersing the viewers in the story, and they use surround sound to do it right. Any educated sound designer or music composer will tell you that if you use surround sound correctly, you will achieve great immersion for your audience.
In the film industry, the immersive quality of sound was utilized early on, and we all now enjoy the sound surrounding us while watching our favorite films. So we see that in movies, surround sound has been combined with a simple screen-based experience from the beginning. We can understand why this is not happening today with 2D games, but at the same time, we can see the potential benefits of using surround sound for them. - Image by Krists Luhaers on Unsplash.
So there are two common issues that we face when we try to set up surround sound playback for our game.
The first is that there are no hard rules about how to use surround channel configurations. That is a good thing, as it allows artistic expression to come out of the medium, but it leaves a lot of uncovered ground for mistakes to be made.
The second is that surround sound mixing usually requires specific management for different groups of sounds within one project, while game authoring tools and frameworks usually offer only a generic way of mixing surround, or even worse, nothing out of the box.
Why you should do it anyway
The benefits of using surround sound output in your 2D game depend on your game’s aesthetics in general. Only you, the creator, can decide how you will use it and what will play through it. But I will give you some major reasons why you should do it anyway, to show you the bigger picture.
Unique selling proposition
A unique selling proposition (aka USP) is a marketing term referring to the unique features a product has in order to differentiate it from similar products offered in the market. One of the highlights of your game, if you will. Imagine having that in the “back-of-the-box” description of your game, for your happy players to discover while browsing for games to play. Not to mention the curiosity it creates: your players thinking “What? A 2D game with surround sound? This must be special!” and clicking the “buy” button.
Fantasound (1938–1941) was a stereophonic sound reproduction system developed by engineers of Walt Disney studios and RCA for Walt Disney’s animated film Fantasia, the first commercial film released in stereo. The neat feature of Fantasound was that under the stage, operators assigned to each movie character panned the sound according to the location of the character on the movie screen, in real time, following a timed script. When Mickey moved to the right of the screen, the audience heard Mickey’s singing move along with him in stage-space. That was one of the commercially successful beginnings of immersive sound in the history of entertainment media. - Image by Wikimedia Commons.
Surround sound can be a strong feature that sets your game apart from the competition.
Extra interface feedback
Those extra surround channels can be useful. Sending extra feedback to the player is always worth exploring, as it builds better multimodality into your game. Some ideas:
Move a sound from the speakers near the screen to all the speakers, as something comes towards the player.
Big gameplay events play back from all over the place. You know, the orchestral piece that signifies the end of a level with great success, or the hungry sounds of zombies that finish off the player right before the dreaded game over screen. Yikes!
Do you have any fancy ways to transition between levels? Portals, light-speed traveling, time-jumps? Play with sounds that transition between speakers and amaze your players.
Super powers or power ups that just got picked up or gained? Magnify the effect by playing the sounds around the players.
Spice up your game’s environment outside the screen
Oh, there are many things that you can do here. It’s up to you really, but here are some ideas:
Is the player’s character entering a cave? Reverb that plays back the reflections of all the sounds and surrounds the player will put the player in that same cave, augmenting, in a way, the player’s physical environment.
Did the player blow up something on the screen? The debris should fall all around the player’s room.
Do you have any elements in the level that are supposed to be all around the place? Birds, ghosts, gunfire? Surround the player with them.
Do you have 2 environments simultaneously within the game? A space battle simulator with sounds coming from outside the spaceship and sounds coming from inside the spaceship’s cockpit? Maybe a car racing game with events happening both inside and outside of the vehicle? Use reverb to separate them and deliver in multichannel to surround the player.
Separate diegetic versus non-diegetic sounds with ease
Use different reverb for diegetic versus non-diegetic sounds and pass it through all the speakers. That will immediately inform the player that the sounds are outside the on-screen action. Very useful for:
Narrator voices.
Graphical user interfaces located outside the game’s world, such as on-screen energy bars, counters, scores, etc.
Graphical user interfaces located in the game’s world but supposed to carry feedback outside of the on-screen action. Like a strategic command room’s HUD in a tower defense game, which should feature a different reverb effect, not the same as the environment where the game’s action takes place.
And my favorite: the player’s internal monologue. Always a great way to deliver information, progression tips, and great humor, for the game creator who uses this feature wisely.
That powerful low frequency effect
This can be used every time you introduce a world-altering event in the game. Big earthquakes, meteor showers, world destruction, end-of-days stuff.
You can also drive your players mad with anticipation by slowly raising the volume of a low frequency loop as something evil this way comes. Drive the loop’s volume with the player-to-evil-entity distance to make the player explore the level in fear.
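That distance-to-volume mapping can be sketched in a few lines. This is an illustrative linear ramp, not any engine’s API; the function name and the `min_dist`/`max_dist` tuning parameters are assumptions you would adapt to your own level scale.

```python
def dread_loop_gain(distance: float,
                    min_dist: float = 2.0,
                    max_dist: float = 30.0) -> float:
    """Map the player-to-threat distance to the volume of a low
    frequency loop: full volume at min_dist, silent at max_dist,
    and a linear ramp in between."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    return (max_dist - distance) / (max_dist - min_dist)
```

You would call this every frame with the current distance and feed the result to the loop’s volume; an exponential or perceptual (dB-based) curve may feel better than a linear one, which is a matter of tuning.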
After the release of several free and paid utilities that render any surround sound to a fully spherical binaural experience, your game’s surround sound output can also be enjoyed by players with stereo headphones. The two most famous utilities are Windows Sonic for Headphones and Dolby Atmos for Headphones. More on that in the third part of this article series. - Image by Stem List on Unsplash.
Surround sound gives you the power to create awesome gameplay with clear user feedback and unique user interface mechanisms through sound.
How to do it right
By adhering to some simple rules, you can create a solid surround sound experience for your 2D game that translates well across all possible speaker configurations and platforms. Those rules protect your game’s sound from the common pitfalls we discussed earlier in this article.
To do so, we need an organized grouping of the sounds based on their functionality, which is always my preferred way of organizing sounds in a project.
The grouping I have chosen for our example is one of the most common ones, where the sounds are grouped as:
Common sounds
Big events
Alerts
Music
Noise prints
Let’s take them one by one.
By the way, the game screenshot used in this article’s audio routing schematics is from “R-Type”, one of my favorite side-scrollers.
Common sounds
Common sounds in a 2D game are all the sound effects that are part of the action of the scene. Those do not include music, noise prints, alerts, or big world-altering events.
Common sounds make up a big part of the game’s audio content. I’m talking about the sound effects of weapons, footsteps, pickups, jumps, and all those sounds linked with common events of the game.
So an easy way to distinguish them from other sounds is to think about which event they play in sync with. If the event is common, then the sound should probably be considered a common one.
As we see from the image above, common sounds would be safe if:
Played back from the main front left and right channels.
A percentage of them is routed through the reverb that simulates the acoustic environment of the location (level, room, or landscape) where the action takes place.
The reverb output (sometimes also referred to as “wet”) is routed to the front speakers.
The reverb output is routed to the back and side channels as well, to give the reflections a way to surround your players.
As you may notice, the common sounds are not routed at all to the front center or sub channels. That is by design, to ensure the same experience across all playback scenarios.
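The routing rules above can be written down as a pair of send-gain tables. This is a sketch under an assumed 7.1 channel naming of FL/FR/FC/LFE/BL/BR/SL/SR; a real mixer would process buffers rather than single samples, and the gain values are illustrative.

```python
# Send gains for common sounds, as described above: the dry signal goes
# only to the front left/right mains, while the environment reverb's
# "wet" output also wraps around through the side and back channels.
# Front center and LFE stay silent by design.
COMMON_DRY = {"FL": 1.0, "FR": 1.0, "FC": 0.0, "LFE": 0.0,
              "BL": 0.0, "BR": 0.0, "SL": 0.0, "SR": 0.0}
COMMON_WET = {"FL": 1.0, "FR": 1.0, "FC": 0.0, "LFE": 0.0,
              "BL": 1.0, "BR": 1.0, "SL": 1.0, "SR": 1.0}

def route_common(dry_sample: float, wet_sample: float) -> dict:
    """Mix one dry/wet sample pair of a common sound into 7.1 channels."""
    return {ch: dry_sample * COMMON_DRY[ch] + wet_sample * COMMON_WET[ch]
            for ch in COMMON_DRY}
```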
Big events
Big events are sound effects that play an important role in your game. Earthquakes, boss-going-down explosions, a major power-up just picked up. Anything that makes a difference in the world or story.
As you can read in the caption of the picture above, those are sounds coming from events that play a major role in the world, story, or general gameplay. Usually they signify that something big just happened, or that something big is about to happen. That last one is my favorite: used wisely, this kind of sound can create unforgettable anticipation and raise the player’s adrenaline. I still remember the sound of the spider approaching in Limbo, which reminds me what a great game it is. So there you have it: use the big-event tactic wisely together with sound and you will not regret it.
As we see from the image above, big events will be safe if:
Played back from all the channels except the front center.
A percentage of them is routed through the reverb that simulates the acoustics of the environment where the action takes place.
The reverb output is routed to all the channels except the front center.
As you may notice, the big events are not routed at all to the front center. That is again by design, to leave space for the sounds that will be routed to the front center channel. More on this later.
Alerts
Alerts are a special category. It contains all the sounds that should be elevated above the rest of the audio content, as they carry very useful information for the player. Although some common sounds work as alerts, remember our definition of common sounds and you will easily distinguish between them.
I consider alerts a special category that fits many kinds of sounds. By definition they are sounds that should alert the player to something. But, if you think about it, every sound does just that. The footstep sound informs the player that the game character just made a step. Sounds from picking up power-ups inform the player that the pickup succeeded.
Alerts are sounds coming from events that are paramount to pass on to the user, events the user should really get clear feedback about. In a shooting game, if the ammunition runs out and the player should reload the weapon, this is usually of paramount importance (unless by design it shouldn’t be). These sounds also don’t vary much, are given directly, and require immediate action.
Voice should also be included in this category; that is why I don’t have a voice category in the proposed organization. With voice it is easy to separate: if the voice is there to inform the player, it is an alert; if the voice is just a sound effect, it is a common sound. There is also common ground we should mention here: when the voice is an alert that also has to carry localization information to the player, like the voice of a zombie that the player doesn’t see but that is approaching. In this case we have a hybrid, in which the sound is both a common sound and an alert, so a middle solution should be used, based on your game design and tests.
As we see from the image above, alerts would be safe if:
Played back from the front center channel in order to be delivered as clearly as possible to the player.
A good percentage of them is also routed to the front left and right channels, as a safety net for when the front center channel is not available or, in some bad cases, mis-configured. For example, when the speaker holds the balcony door open during summer, or serves as a permanent side-stand for the bookshelf above the desk. :-P
In the schematic above you can see that I also use reverb to process a percentage of the alert sounds and route that to the front, back, and side channels as well. But in the schematic I also have a question mark at the send-bus to the reverb. This is because you should choose whether that happens depending on whether the sound is diegetic or non-diegetic.
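That question mark can be sketched as a function that takes a diegetic flag. The function name and the gain values here are illustrative assumptions, not taken from any specific middleware:

```python
def alert_sends(diegetic: bool, fallback_gain: float = 0.5) -> dict:
    """Send gains for an alert sound.

    The dry signal goes to the front center at full level, with a safety
    copy on front left/right in case the center speaker is missing or
    mis-configured. The reverb send is the "question mark" in the
    schematic: it is active only for diegetic alerts, which belong to
    the scene and should share its acoustics.
    """
    return {
        "dry": {"FC": 1.0, "FL": fallback_gain, "FR": fallback_gain},
        # 0.7 is an arbitrary example send level, tuned per game.
        "reverb_send": 0.7 if diegetic else 0.0,
    }
```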
Diegetic
What “diegetic” means (and it doesn’t have anything to do with how many calories the sound contains per 100 grams) is that the sound takes place within the scene of action, originating from the environment where the game’s action takes place. In that case, the sound should also pass through the reverberation simulation of the environment. That will make it fit with the rest of the game’s environment content.
Non-diegetic
On the opposite side, “non-diegetic” means that the sound does not originate from the environment where the game’s action takes place, like the narrator’s voice we hear from “somewhere else” in various games and movies. In that case, the alerts should not pass through the game environment’s reverb simulation. In fact, these sounds should have their own separate reverb, to highlight their origin in a world beyond.
This extra reverb output could be routed to all the channels and could be made to reflect the special case of the alert that we are hearing.
Here are some examples, just to give you an idea:
Narrator in a high fantasy setting: Longer reverb as coming from a king’s throne hall, or short as coming from a cozy bedtime story setting.
Player character’s inner voice (thoughts): Reverb with minimal reflectivity for detective kinds of stories, or reverb with long echoes if the character is reflecting on the story from a future point in time. More reflection in the narrative, more reflection in the room reverb; kinda funny how things work sometimes, right?
Alerts coming from a control center far away: Reverb that simulates the room of a high-end military control center, passed through a distortion module to add that walkie-talkie corruption we all love.
Music and noise prints
Music is self-explanatory, and it’s one of the best ways to convey emotions and enhance the drama or the action of your game. Noise prints are one of the main methods to psychoacoustically immerse the brain in the game’s environment/world. If you are not familiar with the term “noise print”, you can read more in this article.
Music
Music is a very important part of any storytelling effort, no need to introduce or explain this content group.
As we see from the image above, music would be better if:
Played back from all the channels except the front center and sub. That way we ensure that it surrounds the player.
Has no relation to the environment’s reverb, unless it’s diegetic, in which case a percentage should go to the environment’s reverb and be routed to the same surround channels as the original (dry) content.
Of course, the music should be produced to fit this philosophy. Music tracks are usually produced in stereo, which is not good for higher channel counts, as you can’t just route the stereo channels to the rest and expect miracles. Music should also be produced with reverberation that fits the game’s aesthetic. There are some tricks you could use, but that’s a big subject and will be covered in a future article.
The music libraries we produce at my company, SoundFellas Immersive Audio Labs, actually include a quadraphonic surround version of every music track in each library. That makes them ready for use in any surround channel configuration with a simple drag-and-drop, and if you want to go to higher channel counts, you can route the rear channels to the side channels and it will work nicely, as we mix them specifically with game audio in mind.
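That rear-to-side trick is essentially a one-line channel mapping. Here is a sketch expanding one frame of a quadraphonic mix (FL, FR, BL, BR) into a 7.1 bed, keeping the center and sub silent as the music routing rules above suggest; the function name is my own, for illustration.

```python
def quad_to_7_1(fl: float, fr: float, bl: float, br: float) -> dict:
    """Expand one frame of a quadraphonic music mix to a 7.1 bed by
    duplicating the rear channels into the side channels. Front center
    and LFE stay silent by design, leaving them free for alerts and
    low frequency effects."""
    return {"FL": fl, "FR": fr, "FC": 0.0, "LFE": 0.0,
            "BL": bl, "BR": br, "SL": bl, "SR": br}
```

Depending on taste you might attenuate the duplicated side channels slightly to avoid over-emphasizing the rear field, which is again a tuning decision.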
Noise prints
Noise prints are treated the same as music, with a small difference: because they belong to the environment where the action takes place, they should be:
Played back from all the channels except the front center and sub. That way we ensure that it surrounds the player.
Played back through the environment’s reverb processing, with the wet output routed to all the channels except the front center and sub.
We include noise prints extracted from our environmental recordings in all the ambience libraries we produce at SoundFellas; you should check them out.
Surrounding the player with sound is always a good idea and can elevate any kind of game.
Conclusion
You may develop a game for devices that don’t support surround output, like smartphones and tablets. Or you may believe that most people play games using headphones. Either way, those two statements are not arguments against using surround sound output in your games. On the contrary, they are arguments for it. With the popularity of immersive audio that virtual reality brought to the digital entertainment markets, some old but solid technologies were revitalized that allow surround sound rendering through simple stereo headphones. Enter binaural audio rendering and its core algorithm, the HRTF (Head-Related Transfer Function).
Simply put, there are many software frameworks that allow developers to deliver their surround content to simple stereo headphones while keeping the surround output, by rendering a psychoacoustically calculated image that translates to a fully immersive format when delivered directly to the ears of the listener. The most popular at the time of writing are Google Resonance Audio, Steam Audio, and Oculus Audio. Binaural auralization is also present in much game audio middleware, in tools like Fabric, FMOD, and Wwise, to name a few. Many game authoring solutions also support virtualized rendering of multichannel audio to the binaural format. You should read the documentation available for all those software solutions to find the one that fits your business model and your project’s needs.
Even if you don’t support surround rendering for stereo headphones from within your game, there are operating system plugins that let players render your game’s surround sound output to binaural stereo and enjoy it through their stereo headphones. For example, at the time of writing, the two most popular for the Windows operating system are Windows Sonic for Headphones and Dolby Atmos for Headphones.
You can hear how immersive binaural stereo renders can be at the Google Resonance Audio SDK for the Web examples page.
With the power that most gaming devices feature, the optimization that current technologies offer, and some extra work from yourself, there is no excuse for your players not to enjoy a full surround sound experience. Whether playing on a desktop with a 7.1 speaker setup or on a mobile device with headphones, your players can enjoy a fully immersive experience, with the power of sound.
Surrounding the player with sound is always a good idea and can help you:
Create more immersive experiences without depending on visuals.
Make the audio layers of your game distinguishable: e.g. environment / alerts / narration.
Gain devoted players that will come back for more.
Personally, together with my team, I utilize any technique that makes sound more immersive; that is why we produce and deliver our ambience and music libraries in stereo, binaural, and surround formats, targeted for drag-and-drop use in scenarios like the ones described in this article.
I believe that, with a little work at the beginning of production, every developer can create surround sound experiences that translate easily, and which, together with new standards (e.g. Dolby Atmos) and the ways you can link external devices (e.g. Philips Hue) with your games, can offer a new level of high-quality entertainment for the players in any gaming environment.
I hope this article inspires 2D game developers and motivates them to use surround sound in their games.
Be it a full 7.1 living-room gaming setup, an enhanced subwoofer equipped gaming laptop, or a smartphone with headphones, your players will surely thank you!
So, what do you believe about all that? What was your own experience while developing a game? Was it easy to create multichannel sound or not? Do you have any ideas or experiences to share? Let me know in the comments below. I will be happy to hear from the vibrant community of game development!