Audio Postmortem: Scarface: The World is Yours
Just what went into the $2.5 million audio budget for Sierra's Scarface: The World is Yours? Audio director and frequent Gamasutra contributor Rob Bridgett discusses the ups and downs in this extensive audio postmortem.
Introduction
A video game based on the Scarface license was always going to be a huge challenge to get right. The license has a devoted following, not only among the original fans of the movie, but among a newer, more urban audience who identify directly with the character of Tony Montana in terms of their own experience of rising from nothing, particularly within the hip hop community.
A great deal of expectation surrounded the game all the way through production, and audio was certainly no exception to that pressure and scrutiny. Everyone involved, being huge fans of the license, kept themselves under enough pressure to get this game sounding the best it could throughout the project.
While we knew we wanted to create a very ‘cinematic’ experience for the game, the creative vision for the game from a design viewpoint was always to ‘Be Tony Montana.’ This meant not only getting the main character right in terms of a vocal performance, but also all the things that surrounded and reflected Tony’s personality such as the score, the licensed music, the sound design and the final mix.
All these elements had to represent Tony Montana’s point of view, and allow the player to feel as though they really were this character. It is fair to say from the outset that this was a character-driven audio direction and that this gave us our cinematic approach.
Scarface: The World is Yours
Developer: Radical Entertainment
Publisher: Vivendi Games
Development Time: Three years
Sound Director: Rob Bridgett
Sound Programmer: Rob Sparks
Sound Designer: Randy Thom
Sound Mixer: Juan Peralta
Composer: Marc Baril
Sound FX Editorial: Mac Smith, Roman Tomazin, Cory Hawthorne
Voice Direction: Eric Weiss, Rob King, Chris Borders
The entire development team varied in size over the course of the project, at one point swelling to around 100 people. On the audio side we had a Sound Director, Sound Implementer, Sound Programmer and the support of our Advanced Technology Group, who created and maintained all the audio tools required for our work. Our sound department also comprises an internal recording engineer, a sound effects designer, a Foley team, editors and an in-house composer.
That said, even with all those internal resources, there was a great need for outsourcing on the project for editing, music licensing, voice casting and voice direction in order to deal with the sheer quantity and global scale of this audio production.
What Went Right
1. Dialogue
Designing a flexible and reactive dialogue system that immersed the player was a huge challenge, and one of the core game features we had to get right. The dialogue had to be a cohesive part of the Scarface universe, so inevitably there needed to be a certain amount of foul language and a great deal of humor.
Designer involvement with the dialogue system was needed from day one of the project and we got this support and involvement in the form of the project’s design lead, Pete Low. Design was therefore involved in script and character development for each and every character, particularly in establishing the emotional range of dialogue that would be required from Tony himself.
Each character that was designed had around 10 categories of reaction, and for each of those categories they had around 10 line variants that could be played each time one of those events occurred. This meant that each character had around 100+ lines, not to mention all the cinematic lines and mission-specific dialogue that were required. A great deal of the additional dialogue for the in-game characters was written by writers local to us in Vancouver, who churned out a huge quantity of situational one-liners for hundreds of characters, resulting in over 33,000 individual lines.
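A reaction system like the one described, roughly 10 categories per character with roughly 10 interchangeable variants each, can be sketched as follows. This is an illustrative reconstruction, not Radical's actual code; the category names and the no-immediate-repeat rule are assumptions.

```python
import random

class CharacterDialogue:
    """Per-character reaction lines: ~10 categories, ~10 variants each."""

    def __init__(self, lines_by_category):
        # e.g. {"taunt": [...], "hit_reaction": [...], "scared": [...]}
        self.lines = lines_by_category
        self.last_played = {}  # remember per-category to avoid back-to-back repeats

    def react(self, category):
        variants = self.lines[category]
        # Filter out whichever variant played last time this event fired.
        choices = [v for v in variants if v != self.last_played.get(category)]
        line = random.choice(choices or variants)
        self.last_played[category] = line
        return line

# Usage: a pedestrian reacting to nearby gunfire
ped = CharacterDialogue({"scared": [f"scared_{i:02d}" for i in range(10)]})
first = ped.react("scared")
second = ped.react("scared")
assert second != first  # immediate repeats are filtered out
```

With ten variants per category, a player can trigger the same event repeatedly without hearing the identical line twice in a row, which is the practical payoff of recording that much coverage per character.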
2. Recording
Getting the character of Tony Montana right was the core goal of the dialogue; if this couldn’t be done, then we had no game. A grueling process of auditions commenced in 2004, which encompassed the whole of North America. There were only two or three sound-alikes that could have fit the bill and these auditions were collated and sent off to Al Pacino for final selection.
By far the finest, and the one who was selected in the end, was Andre Sogliuzzo, who, as fortune would have it, turned out to have been Pacino's driver some ten years earlier, which gave him all the knowledge he needed to accurately mimic the character of Tony Montana.
With Tony cast, attention turned to assembling as strong a supporting cast as we could get. Names like James Woods, Michael York, Cheech Marin, Robert Loggia, Steven Bauer and Al Israel soon mounted up into what was eventually a great cast.
The recording itself was split into several phases based on the three different locations for production and the three different times we needed to carry out the recordings. Recording took place in our studio in Vancouver, at Technicolor Studios in Burbank, and at Vivendi Games' own LA studio early in 2005. We also had to record at various other locations around the world depending on where the voice artists were, such as The Sound Company in London, where both Ricky Gervais and Lemmy were recorded; J.A. Castle Studios in Syracuse, where Richard Roundtree was recorded; and Sony Studios in NY, where Huey Morgan et al were recorded. Either I or one of the VO directors (Eric Weiss, Rob King or Chris Borders) would be on site to direct and record b-roll interviews, and often the whole thing was done via ISDN where applicable.
One of the great things we were able to do was have all our session notes entered directly into the script via laptop, straight into Excel, and exported digitally to HTML files. This meant we could upload the unedited sessions and a digital version of our session notes to our dialogue editors, who could begin editing right away. There were no hard-to-read handwritten notes and no need for the editors to be near a fax machine, which meant our dialogue editors could be anywhere in the world.
The digital format also meant that for file naming the editors could simply copy and paste the filenames from the digital script, rather than having to manually enter the filenames from a printed page.
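The copy-and-paste workflow above is easy to take one step further: because the script lives in a structured digital format, filenames can be generated mechanically from its columns rather than pasted one at a time. The sketch below is hypothetical; the column names and naming convention are assumptions, not the project's actual schema.

```python
import csv
import io

# Hypothetical script export: one row per recorded line.
script_csv = """character,category,variant,note
tony,taunt,01,take 3 preferred
tony,taunt,02,use alt read
"""

rows = csv.DictReader(io.StringIO(script_csv))
# Build the asset filename for each line straight from the script columns,
# so editors never retype names from a printed page.
filenames = [f"{r['character']}_{r['category']}_{r['variant']}.wav" for r in rows]
print(filenames)  # ['tony_taunt_01.wav', 'tony_taunt_02.wav']
```

Deriving names from the script rather than by hand also eliminates a whole class of typo bugs: a file the game can't find because one character in its name was mis-keyed.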
3. Score
The score went through several incarnations at the beginning of the project. Initially, the composer Marc Baril and I played with the idea of having more up-to-date ‘game’ style music rather than copying the DNA of the original movie. We looked to similar scores from movies such as ‘Blow’ by Cliff Martinez as a guide for the style, using predominantly Cuban rhythms and percussion ideas.
Eventually, after trying a score in the game that was totally loyal to the style of Giorgio Moroder, we decided that this was 100% Scarface and nothing else would do, even if it sounded a little cheesy to our ears at first. A Jupiter 8 synthesizer, in particular its arpeggio capabilities, formed the basis of nearly all the music that was written, and the new game score is remarkably loyal to the original film score.
Augmenting the new score is the original score itself, which is used for nearly all the cut scenes in the game. This was obtained from the original tape reels at Universal. At first, our ears had to get used to the retro factor, but it didn't take long. Any true fan of the Scarface movie will instantly feel at home when they hear the score for the game.
4. Licensed Music
Licensed music is used in two ways in the Scarface game. 'Diegetic' or 'source' positional streams play radio music throughout the world from various locations such as The Babylon Club, The Venus Bar and Coco's Bar. There is also a playlist-based music player menu, which replaces the non-diegetic music on the game's soundtrack.
The concept and development of the licensed music went from being in-game radio positional streams only, to the use of a fully fledged playlist of tracks that could be customized via a mix-tape feature, meaning songs could be played at any time in the game, whether in a vehicle or on foot. Because the licensed music was beginning to play a large part in our game, we needed a solution that allowed the user to have as much control over the soundtrack as possible.
This was also born out of concerns that the people playing this game would have radically different tastes. Our audience is a mix: hardcore fans of the original movie, who would expect only the Giorgio Moroder score and the songs used in the film; the newer urban hip hop audience, who would expect more up-to-date hip hop and rock music; and people who wanted to experience Tony Montana's new storyline as authentically as possible, who would choose the period Cuban and Latin music as well as the 1980s material we used to define the time and place of the game's story.
Technically, this presented a great deal of challenges in managing and balancing the music player and the score. Positional radio streams, score and ambience needed to be switched off each time the tape player was switched on. This helped reduce streaming issues and, more importantly, avoided the cacophonous result of two pieces of music playing at the same time.
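The arbitration rule described, tape player wins and everything else is silenced, can be sketched as a small state check. This is a minimal illustration under assumed names; the actual game logic lived in Radical's engine, not in code like this.

```python
class MusicArbiter:
    """Ensure only one music source plays: the tape player trumps
    score, ambience and positional radio streams."""

    def __init__(self):
        self.tape_on = False

    def active_sources(self, near_radio):
        if self.tape_on:
            # Mute everything else to avoid two pieces of music clashing
            # and to reduce simultaneous streaming load.
            return {"tape"}
        sources = {"score", "ambience"}
        if near_radio:
            sources.add("radio")  # positional stream, e.g. The Babylon Club
        return sources

arbiter = MusicArbiter()
assert arbiter.active_sources(near_radio=True) == {"score", "ambience", "radio"}
arbiter.tape_on = True
assert arbiter.active_sources(near_radio=True) == {"tape"}
```

Centralizing the decision in one place, rather than having each music system check the others, is what keeps a rule like this from producing the cacophony the text describes.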
We could have improved the implementation with a sound-menu option to have either the 'Moroder Score' or 'Licensed Genres' play automatically whenever you do a side mission. We set the default to licensed music, and the player can turn the tape player off on these missions to hear the Moroder score playing underneath, but it would have been nice to let the user set this preference once and for all.
5. Post Production: Sound Design and Mix at Skywalker Sound
We wanted to work with a post production sound team using a similar model to the way that movie sound is ‘post-produced’ at the end of a project. Typically in games the last month of a project is a real scramble to fix problems and to make sure everything is actually being heard; however, we wanted to bring the whole audio development environment off-site during this time so we could concentrate on quality without any of the panic and distractions that come with that crunch period at the end of the project.
Having visited several 'Hollywood' post production studios, the decision to work with Skywalker was pretty clear for what we needed. We knew they had done work on games before, but that isn't what attracted us to them. They had the staff and experience we needed to really push the game in the direction of a movie. There were two things we needed to concentrate on in our post-production: the sound design and the final mix.
Post-Production Sound Design
We had an initial week of preparation work with Randy Thom in March where we sat down, reviewed the movie and went through the Scarface game running on disc, noting all the areas we felt we could improve the sounds we had in there. We came away with a lot of ambience, weapon sounds and a stack of vehicle sounds that we then spent two months implementing into the game back in Vancouver.
The second week we spent with Randy was for real-time sound effects replacement in June, where Randy got to create sounds, have them built into the game, and then decide what needed changing about those sounds in order for them to work how we wanted. We managed to iterate relatively quickly by video game standards, and we both felt this was the only way we could have worked: in the past, video game developers have often had sound designers create sounds without seeing the game, and certainly without being able to hear how those sounds work in context after implementation and down-sampling have occurred.
The sound effects in the game quickly began to take on the personality of Tony Montana, him being a larger-than-life character. A great example is Tony's M16 in the opening mansion shootout. We worked hard on getting the enemy weapons sounding good, so good in fact that we eventually realized Tony's M16 now sounded less aggressive by comparison. We worked on Tony's M16 sound for a whole day; we even gave it the highest sample rate of any sound in the game so it would cut through in that particular scene.
The Mix
In terms of the final mix, this was something we felt had never been attempted successfully in the past in video games, both from a technology point of view and from the point of view of having the whole game be mixed by someone who specializes in film mixing.
Juan Peralta, our mixer, fit the bill perfectly: he is passionate about games and has mixed a ton of movies. Doing the mix on a sound stage with a near-field monitor set-up calibrated by THX was also the perfect way to mix for a home theatre system. It would have made little sense for us to use some of the bigger rooms available at Skywalker, as they are specially designed for theatrical releases.
The sound stage we were on, the Elia Kazan, is used for theatrical mixing, but the near-field Genelec setup we employed is how they do DVD mixing, which made it perfect for our needs on a video game. We were pretty clear that most people now have 5.1 theatre systems in their homes, primarily for watching movies, and those with consoles are of course plugging them into these systems and expecting the same quality of sound as they get from their movie experience.
The major difference with the mix on Scarface was that we were connecting the audio levels of all the sounds in the game to a software mixing console, and then connecting that to a hardware mixing console (the Mackie Control Universal and Extender). We route every sound to various busses; for example, all non-player character dialogue goes to the 'dialogue bus', all Tony's dialogue goes to the 'Tony bus', all bullet impacts and squibs go to the 'squib bus', score goes to the 'music bus', tape player music to the 'tape bus', and so on. In all we had around 20 busses. All these were mapped out in our proprietary interactive audio system, AudioBuilder, developed by our Advanced Technology Group at Radical in Vancouver.
AudioBuilder ran on a PC that connected to the game and, via MIDI, to a Mackie Control and a Mackie Extender console, so all these busses appeared on the mixing board as channels. We couldn't have mixed the game this way without that external MIDI controller functionality; all mixing prior to this was done on-screen, clumsily moving fader levels with a mouse. Moving the faders that way was difficult and counter-intuitive, and it certainly wouldn't have made sense to expect a professional film mixer to use on-screen mouse-driven faders.
Mixing is a very sensitive process, requiring sometimes microscopic adjustment of faders and hooking up the Mackie gear gave us this control. Juan was able to control the levels of all the sounds in the game via what was, for him, a familiar interface.
We essentially mix in-game by having ‘mixer snapshots’ called up by the game code and installed at various points in game play. For example, when Tony Montana is outside in the day time we have a mixer snapshot called ‘on_foot_day’ that we call every time this situation occurs, allowing us to set the levels of all the sounds for that moment. Every time Tony goes inside a building we call a ducking mixer which pulls down the ambience channel a little and makes it feel more like the player is inside that building.
We manage the levels, and even the pitch and amount of sound sent to the LFE, of every sound in this way. In Tony’s Rage Mode for example, we dropped the pitch of Tony’s weapon and also increased the volume (and sub!) of that weapon to give a really crazy powerful effect, which makes that mode feel very different to normal combat.
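The snapshot mechanism described over the last few paragraphs can be sketched in miniature. The snapshot names, bus list and dB values below are illustrative assumptions standing in for AudioBuilder's real data, but the core idea matches the text: game code installs a named snapshot, and partial snapshots (like a ducking mixer) override only the busses they name.

```python
# Hypothetical snapshot table: bus name -> gain in dB.
SNAPSHOTS = {
    "on_foot_day": {"dialogue": 0.0, "tony": 0.0, "music": -6.0,
                    "ambience": -3.0, "squib": -4.0},
    # Ducking mixer: only lists the busses it changes, so installing it
    # pulls ambience down without touching dialogue or music.
    "indoors_duck": {"ambience": -9.0},
}

class GameMixer:
    def __init__(self, busses):
        self.levels = {bus: 0.0 for bus in busses}

    def install(self, snapshot_name):
        # Called by game code at gameplay transitions
        # (e.g. day breaks, Tony walks into a building).
        self.levels.update(SNAPSHOTS[snapshot_name])

mixer = GameMixer(["dialogue", "tony", "music", "ambience", "squib"])
mixer.install("on_foot_day")
mixer.install("indoors_duck")   # Tony steps inside: ambience ducks
assert mixer.levels["ambience"] == -9.0
assert mixer.levels["music"] == -6.0   # untouched by the partial snapshot
```

The same table-driven idea extends naturally to the pitch and LFE-send changes mentioned above (Rage Mode, for instance): each snapshot entry simply carries more parameters per bus than a single gain value.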
As the game is being played you can see the mixes change on the board in real-time, and whenever we wanted to adjust something we hit the 'Record' button, which grabs the last snapshot mix that was installed and lets us edit it there and then in real-time. When we were done adjusting the levels we hit 'Play' to release the mix back into the game, where it is stored. That was all there was to the process. It was actually one of the easier and less involved parts of the whole game development process; the real skill was in the ear of the mixer.
The first thing we realized when we got the game on the stage was that everything was pushed incredibly loud, all competing for attention. Most games are pushed this way in development and are in fact only A/B compared to other games, which of course are also far too loud. Being able to compare the game to a movie at reference levels enabled us to bring everything down, and that gave us the headroom to pick out the one or two really important sounds that would otherwise get lost in the cacophony of our combat scenes.
The head-exploding squib, for example, was a sound we simply couldn't hear no matter how much we crunched the wave file, even though it was playing at full volume. We had an 'explosions bus' routed to stand above everything else, so we were able to send this one sound to that bus. The whole process raises interesting questions about calibrating game audio. There is currently no measure, or reference level, for games; the sounds you put into a console are certainly not the sounds you get out. Games have always had to be equal to or louder than other loud games. This introduces over-compression problems and a vast reduction in dynamic range, which are already great problems in games, without being exacerbated by games continually getting louder.
A lot of what we were doing at the mix stage also involved going back over our cinematic scenes: putting in new sounds, remixing the music and dialogue levels and re-bouncing those into the game. All our sessions for these ran in Nuendo, and for any additional sounds we passed the movie over to our sound effects editor Mac Smith, who would work on the additions in Pro Tools and pass back the file containing the new sounds.
I’d then be able to import those new sounds straight back into Nuendo and bounce them out as a 6 channel or Pro Logic 2 encoded file. We added a lot of sound that felt missing, from footsteps and Foley sweeteners, to door squeaks, and this made a huge difference to the quality of those final movies.
We essentially concentrated our first three days of mixing on the first three hours of game play; we wanted those first moments that the player picks up this game to be of the highest quality. After this we went through the entire game, playing through every mission and making tweaks in real-time when they were necessary.
Most of the decisions we made for those first three hours carried right through into the rest of the game. For example, all the levels of Tony’s weapons against those of his enemies are cloned any time you encounter enemies, so the daunting task of mixing a huge game like this became much easier once those early changes started to spill through the rest of the game.
After our two weeks of mixing the PS2, the difference between the version we arrived with and the one we left with was quite amazing. You can hear every sound: dialogue, music, the chaotic bullet fly-bys and the bullet impacts on the walls behind you. The old version of the game feels much weaker and sounds much muddier. The old weapons, which we had thought felt very powerful, sound very weak compared to the huge sound we now found ourselves hearing.
We deliberately spent two weeks mixing our target platform, the PS2, up front. We then migrated all those changes over to the Xbox version of the game for a final week of tweaking for the Dolby Digital 5.1 offered by the Xbox. The extra separation you get on the Xbox through the discrete surrounds is quite something, and the Xbox definitely deserves the time spent on it to get it sounding as good as it can from an audio point of view. It has many advantages over the Pro Logic 2 encoding of the PS2.
6. Working with THX
THX’s involvement in the project, particularly during post-production, proved to be highly valuable. The THX Games certification not only encompasses audio but also the visual environment in which the artists work. THX certification is designed to ensure game developers always work in highly standardized environments with calibrated equipment, whether that’s a PC workstation (for texture artists, etc.) or a large mixing studio, like the ones at Skywalker Sound.
The THX engineers visited Radical as we were entering our pre-alpha stage of production and took measurements that enabled us to calibrate all the art leads' monitors, which led to the establishment of a THX room on the game team's floor where any artist could drop by and check their work on calibrated equipment.
The THX Professional Applications Engineer, Andrew Poulain, was on site when we set up the mix stage at Skywalker to ensure the room and equipment were calibrated correctly. This again proved invaluable for our mix: we were making a lot of critical artistic decisions about the audio in that environment, and we had to know that what we heard was entirely accurate.
What Went Wrong
1. Design Changes During Production
Though unavoidable and clearly for the greater good of the game, the change of direction for the project midway through development, brought about by a six-month extension to our Alpha date, presented challenges for the dialogue system and for the flexibility of the content we had already recorded.
These changes meant that many scenes written for the story were cut completely, and although some were re-appropriated, they did not make as much coherent sense as the full scenes they once were. Many characters were also cut from the game, as well as many side missions for which very specific characters had been created, cast and recorded. Those characters now appeared in the game world only as pedestrians, which made them seem a little odd without their context.
For all the ripples the extension caused, these changes eventually led to a much more streamlined and solid product. The extra time also allowed us to plan and execute the post-production mixing, and thus gave us a huge gain in final audio quality.
2. Cinematics Production Cut Off Too Late
Production of the huge amounts of cinematics that we have in the game was eventually cut off around two weeks before we went off-site to Skywalker to mix the game’s audio. This gave us a mere two weeks to work on Foley performance, recording and editing for those scenes.
Due to the huge number of cinematic cut-scenes in the game, we had to prioritize the more important ones for full Foley, as there simply was not enough time to perform Foley for every cut scene. Our internal Foley team, Scott Morgan, Cory Hawthorne and Roman Tomazin, worked a solid week performing the Foley, then a further week editing and bouncing down the Foley mixes for integration into the sessions containing SFX and dialogue.
This left practically no time for mix-downs of the final sessions including dialogue and sound effects, so the team were put under a great deal of pressure to bounce out and mix all the cinematics for the game in both Pro Logic II encoded versions and Dolby 5.1 six-channel mixes. These were all bounced out over the course of two or three long evenings, and the intention was not to touch these mixes once we got to Skywalker.
However, once on the stage we found we needed to add more sounds and rebalance some of the cinematics, so we re-bounced them on the mix stage as we came to them. A dedicated month for Foley and for premixing the cinematics is a must for future productions of this scale.
3. Dialogue
Recording the amount of dialogue we did, in excess of 33,000 lines, was a huge undertaking. Recording wasn’t completed until March 2006, totaling almost a year and a half of VO casting, recording, editing and implementation.
One thing that contributed significantly to this amount of time was the extension to the project halfway through the first phase of our recording: the resulting new designs and ripples in the narrative meant new characters and new scenes, and a good number of callback sessions were required halfway through production.
The need for improvements to the dialogue system soon became evident once we realized the huge amount of content we had to manage; a simple, dedicated database system would have been needed to enter, sort, organize, print, edit and debug all the dialogue. Instead we used Microsoft Excel to manage the entire dialogue on this project, which, although workable, proved very hard to manage and debug, making dialogue management a full-time job.
It also proved inflexible later in development when we needed to re-appropriate lines of dialogue for new situations: our naming convention dictated the use of dialogue in the game to a great extent, meaning we had to duplicate and rename content in order to use it in new places. A flexible system, one that treats functionality independently of filename and packages the files needed per character only at build time, would help tremendously on projects of a similar scale.
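The filename-independent system argued for above amounts to an indirection table: game code asks for a (character, situation) pair, and a lookup resolves it to an asset, so one recording can serve several situations without being duplicated and renamed. The sketch below is hypothetical, with invented names, and simply illustrates that design.

```python
# Hypothetical lookup table decoupling game-side usage from filenames.
# Two situations share one asset; no duplicate, renamed copy is needed.
LINE_TABLE = {
    ("tony", "enter_babylon"): "tony_greet_03.wav",
    ("tony", "enter_venus"):   "tony_greet_03.wav",  # reused, not re-recorded
}

def resolve_line(character, situation):
    """Game code calls this instead of hard-coding a filename."""
    return LINE_TABLE.get((character, situation))

def assets_for_character(character):
    """At build time, package only the files a character actually uses."""
    return sorted({f for (c, _), f in LINE_TABLE.items() if c == character})

assert resolve_line("tony", "enter_venus") == "tony_greet_03.wav"
assert assets_for_character("tony") == ["tony_greet_03.wav"]  # one file, two uses
```

The build-time packaging step is the other half of the win: per-character asset lists fall out of the table for free, instead of being implied by a brittle naming convention.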
In Conclusion
The big qualitative win for the audio production was certainly in the post-production phase carried out over a four-week period off-site from development. This gave the audio a chance to shine and to receive the attention and polish that it had needed. Everyone involved at Skywalker gave the production a huge burst of enthusiasm at the end and made a huge contribution to the overall quality of the game’s audio.
Another huge driving force behind the whole off-site post-production period was having our audio programmer, Rob Sparks, on site with us, not only to underpin the work we were doing, but also to fix regular audio bugs as we went.
Music licensing and casting were also major contributors to the success of the audio on the title, for which Steve Goldman (music licensing) and Eric Weiss, Rob King and Chris Borders (casting) must receive mention. So too was the support offered by Vivendi's marketing department, our Executive Producer Peter Wanat, our Senior Producer Cam Weber, design lead Pete Low, and art director Michel Bowes, who all really understood the power of sound on this title.
Data Box

Staff: 4 full-time internal audio staff and around 50 contractors
Audio budget: $2.5 million
Development time: 3 years
Street date: October 2006
Platforms: PS2, Xbox, PC

Hardware (Radical Entertainment): Mackie Control Universal and Mackie Control Extenders; MOTU UltraLite soundcard; Creative Audigy2 ZS soundcard; Marantz SR4300 decoder; JBL 4408A monitors; Toshiba Satellite laptop PC

Hardware (Randy Thom): Pro Tools HD4 on a dual 2.7 GHz PowerPC G5 Mac; Wacom Cintiq tablet; Neovo 21" LCD; 60" Pioneer plasma monitor; Meyer HD-1 studio monitors (LCR); 15" custom subwoofer; M&K (Miller & Kreisel) surround speaker array; Digidesign ProControl 16-fader control surface; Soundweb 9088 networked signal processor

Hardware (Skywalker edit suite): Pro Tools HD v6.9 on a dual 2.7 GHz PowerPC G5 Mac; Blue Sky Sat 6.5 speakers

Hardware (Skywalker Mix E, the Elia Kazan stage): Dolby DP654 decoder (provided by THX); Genelec 1032As for surround monitoring, with M&K MPS-5310 subwoofers; Euphonix System 5 for summing; a long wooden board with a Scarface towel on it, for aesthetic purposes

Software (Radical Entertainment): Nuendo 3 with surround plug-ins; Sound Forge 8; Vegas Video 6; Waves Platinum bundle; AudioBuilder (proprietary engine); Max/MSP (batch-processing patches built by Scott Morgan)

Software (Skywalker Sound): Pro Tools HD; Waves Gold and Waves Platinum; Pitch 'n Time by Serato