I started playing the violin when I was four, and learned to program games when I was seven, in BASIC on my family's SpectraVideo. It was fun - I could only make text adventures, but that was enough.
Fast-forward. I'm 22 years old, and a games company wants to poach me from my job writing back-end code in XSLT for a legal publishing firm. Unfortunately they're taking too long to decide whether they want me, and the hours they want don't suit me - I'm already finding it difficult to fit my current job around staying creative and writing music in my spare time. I decide to go back to Hamilton, live in my parents' garage for a few months and write the music for my friend's film instead, using found objects from the garage as well as the usual instruments.
Later on, I get CFS. For those of you who don't know, that's chronic fatigue syndrome. So, 26 years old, my BSc in Computer Science sitting years behind me, gathering dust. I work on music instead, because it doesn't hurt my brain, which is initially fogged out by the CFS. Lo and behold, I start landing some work with a guy who writes music and does sound for computer games. Imagine my surprise when I end up doing sound work for the same company that initially wanted to hire me as a programmer, only to find they don't treat me with anything like the same respect as when I was going to be programming for them. Music, sound - that's way down the scale. Way down.
My employer gets cancer. Before he dies he reveals to me that he didn't get paid for quite a bit of work by that same company, and they cut some music of his from the game without telling him. Now before you get all judgemental, imagine if that was graphic design work. Imagine you had to cut an animation sequence that took three or four days to produce. You'd tell the person, right? You might even apologize, say hey, sorry about that. It's no big deal, but it's courtesy. But for some reason music and sound techs in particular always get the short end of the stick.
Before I finish with this games company, there's some leftover work that needs doing on the game. I'm young, headstrong. There's some overlapping sound work that needs to happen, and they want an audio solution detailed. I tell them it needs to be done a particular way - I know sound, they don't - I understand the context. They tell me, no, we're doing it this way. I say okay, and quit. My employer's dead, after all. The other guy who was working under him takes on the work as requested, and by his description it was the most difficult project he'd ever worked on in his life, because of the ham-fisted approach the developer took to the audio problem. Because they didn't listen to the people who actually knew stuff.
You might be thinking, this is one guy's experience and this is just one games company - but no. Look across the industry and you will find terrible experiences, relentlessly, from sound tech guys. You will hear stories of games companies getting burned, and of sound engineers and composers getting burned, getting treated like crap, like their work doesn't matter. Everybody I've met who has tried to make it in that side of the industry has either died or quit. And it got me thinking - why?
After that experience I decided not to do that again, or even try. I built my music studio instead, and I've been operating it for the past seven years - with some ups and downs, as musicians are moody buggers (I should know, I am one). And it turns out the anal-retentiveness and attention to detail you develop as a programmer is great for sound engineering.
But they are very different professions. Programming is essentially mental problem solving at a high level, and that's the carrot - you solve the problem, you get the reward, which is the feeling of "I'm good and smart", a bit of a self-esteem kick. There's an aesthetic and structural side to it too, of course, but if you don't enjoy solving mental problems, you're not going to be a very effective programmer. I recently got back into programming and have spent my spare time over the past year developing a high-level game engine in SDL/C++, which I plan to release for free.
Sound engineering is a far less cognitive, far more intuitive art form - no less difficult, just difficult in a different way. Do you ever have to listen to the same piece of music 100 times in a day? A sound engineer does. The carrot is getting something that is aesthetically pleasing, or that fits the brief the client has asked for. It requires skills that programming doesn't give you, skills that take years to learn and which, for the most part, cannot be picked up intellectually.
There are a lot of terms you need to understand and work with as a programmer - things like RAII, POD, classes, overloading, etc. It's the same for audio engineering. There's EQ, flanging, ducking, emulation, chorus, reverb and about 100 other concepts that you not only have to understand, but be able to work with effectively. Getting into music terminology and understanding is even more of a headache. I have never understood why sound typically gets the shortest shrift in games, when working with it is so complex and skilled. I'm here to change that, or attempt to, by showing you some alternate ways to look at music and sound, and different ways to communicate with your respective musicians and sound techs.
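To make just one of those concepts concrete for the programmers: ducking means automatically lowering one signal (say, the music) whenever another (say, the dialogue) is active. Here's a minimal sketch in Python - the function name, thresholds and numbers are all my own invention for illustration, not from any real engine or library:

```python
# Minimal sidechain-ducking sketch: lower the music whenever dialogue is loud.
# All names and numbers here are illustrative, not from any real audio engine.

def duck(music, dialogue, threshold=0.1, reduction=0.3, smoothing=0.01):
    """Return a new music buffer, ducked under the dialogue signal.

    music, dialogue: equal-length lists of samples in [-1.0, 1.0].
    threshold: dialogue level above which ducking kicks in.
    reduction: gain applied to the music while ducked.
    smoothing: how fast the gain moves per sample (a crude
               attack/release control rolled into one knob).
    """
    gain = 1.0          # current music gain, smoothed over time
    out = []
    for m, d in zip(music, dialogue):
        # Pick a target gain based on whether the dialogue is audible.
        target = reduction if abs(d) > threshold else 1.0
        # Glide toward the target instead of jumping, to avoid clicks.
        gain += (target - gain) * smoothing
        out.append(m * gain)
    return out
```

In a real mixer the envelope follower and the attack/release times would be separate controls, but the shape is the same: a control signal derived from one channel modulating the gain of another. That's one concept out of a hundred, and a sound tech is expected to hear when it's wrong.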
It weirds me out substantially when I read articles on Gamasutra discussing very basic sound things - utilising volume dynamics, for example - as if they're revelatory. That signifies to me that sound teaching in the industry is currently at a low ebb, because I consider these things common public knowledge, not even remotely sound-engineer-specific knowledge. There was someone writing a while back about how one game kept the volume down until there was a 'principal moment' where the emotion peaked. Seriously, has that guy never watched a film before?
Film is very similar to games in relation to sound, in many ways. For example, in games sound was the easiest thing to solve in terms of realism. We got realistic sound very quickly, whereas realistic graphics took a long, long time to develop - we still don't have graphics perfect, but they've reached the 'good enough' level that sound hit with SoundBlaster 16 sound cards maybe 20 years ago now. So sound quickly became a less interesting problem to solve, and like I said earlier, a good programmer likes solving problems.
If the problem of implementing sound on the programming side is simple, there's an unconscious line of assumption that the problem of developing that sound content and its attributes must also be simple - which is of course not the case. If you genuinely think that - if you think there's no depth and complexity there - then you're going to give it short shrift, and you'll lose that depth and complexity in that area of your game. No question about it.
As I said, in terms of aesthetic content, film is very similar to games. Visuals are very important in film and are the most difficult challenge to solve. Directors are typically highly visual people, and that makes sense - we live in a highly visual culture, and we notice sight more than sound. We are more selective about hearing sounds and we've evolved this ability to suppress sounds that we don't consider important, at least consciously.
But we don't suppress sounds Unconsciously, and that's where the majority of sound work - and in particular, music work - takes hold. Music is the emotional backbone of film, and sound the grounding element, more so than visuals. Here's a test you can do for yourself to prove this. Grab any random film - since I'm from NZ I'm going to say one of the Lord of the Rings films, as they have music in almost every scene - turn off the sound and watch a scene using subtitles. Give it 5 minutes. Then turn the sound back on, the subtitles off, and watch the scene again. The more emotional or action-packed the scene, the better.
Notice the difference?
A good composer (and sound engineer) can think outside the box to bring together the disparate emotional and content themes of a film or game, and unify the experience unconsciously. A good example would be Alien 3. Regardless of how you feel about it as a film, the composer (Elliot Goldenthal) brought together themes exploring its multidimensional nature - the spiritual aspect, the biological animalistic side, the mechanical, and the human. He expressed that in the music in varying degrees where appropriate throughout the film to cement the emotional undertones of each scene. Obviously we can't always do that in games, because the agency of the player means the emotional context of the game is dynamic, unpredictable. But we can do better than we're currently doing.
Back in the era of silent films, music was played on piano at each theatre, essentially providing musical wallpaper that occasionally followed the action of the film, if the player was up to it. But obviously things have progressed since then. Themes aside, there are four ways of scoring a scene. You can do what the piano players of the silent movie era did and follow the action. This is most obvious in action movies or battle scenes, where the throes of conquest dictate the emotional narrative. However you also get it in comedy - particularly cheesy comedies, as a way of cementing the joke.
The next way is to score the character. This is usually a way of communicating the underlying emotion of the characters where that emotion is not openly displayed on their faces or through their actions. You'll find this in all sorts of styles, but particularly in drama scenes. The third way is to use diegetic music, i.e. music which actually occurs within the world of the game or film. An obvious use of this is a radio or a band playing, and often composers and directors will do clever blends where diegetic music bleeds into non-diegetic music and vice versa.
The final way is to score the narrative - that is, to use identifiable themes or moods to communicate the emotion of what the story is communicating overall. This is more or less telling the audience how they should feel in the context being described. This can include musical foreshadowing and other subtle hints to the viewer as to what is going to happen next, or what may happen next. There are some great films which actually play against the action of the film to score the underlying meaning of the film. This usually works well because it provides such a contrast.
So why are games still stuck in the silent movie era for music? Now, I don't mean within cutscenes, which are at best a reward for gameplay - those typically have a more cinematic quality and hence follow standard film rules. I mean in-game, and those scenarios, at the moment, follow two rules: either follow the action or create emotional wallpaper. We tend to use the most brute-force tactics, the heavy-handed-est approach, to scoring and sound work for games. It's because games have for a long time been action-oriented, because human elements like subtlety, drama and softer emotions are harder to achieve in game form - and it's taken the industry a long time to figure out how to achieve those.
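There are gentler options than "follow the action or wallpaper". One common adaptive-audio technique - I'm naming it myself here, it isn't from any specific game above - is vertical layering: the score is authored as stems that fade in and out with the game's emotional state, instead of hard-switching tracks. A rough sketch, with the stem names and intensity bands invented purely for illustration:

```python
# Vertical-layering sketch: mix pre-authored music stems by game 'intensity'.
# The stem names and intensity bands are invented for this example.

def layer_gains(intensity):
    """Map a 0.0-1.0 intensity value to a gain per stem.

    Each stem fades in over its own band of the intensity range,
    so the mix thickens gradually rather than switching tracks wholesale.
    """
    bands = {
        "pads":       (0.0, 0.2),   # ambient bed, fades in first
        "percussion": (0.3, 0.5),   # enters as tension builds
        "brass":      (0.6, 0.9),   # reserved for real peaks
    }
    gains = {}
    for stem, (start, full) in bands.items():
        if intensity <= start:
            gains[stem] = 0.0       # stem not yet audible
        elif intensity >= full:
            gains[stem] = 1.0       # stem fully in the mix
        else:
            # Linear fade across the stem's band.
            gains[stem] = (intensity - start) / (full - start)
    return gains
```

The point is that the emotional state drives a continuous mix rather than a binary "combat music on/off" switch - which is one way a score can play the character or the narrative in-game, not just the action.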
Typically the decisions are made by the game's creative director, who may or may not have a talent for musical identification, but in my experience generally does not understand the finer details of musical and sound communication. As a result, decisions tend toward the most obvious and superficial calls - on the musical side that equates to snare rolls for a military scene, piano for a touching emotional scene, four-on-the-floor techno for anything energetic, and orchestral strings for... well, everything, really.
This is why, I believe, Jon Blow (developer of Braid and The Witness) remarked that he didn't feel he could find the kind of music he was looking for from regular games composers - most are so used to these tropes (most of which the film industry retired by the late 1980s) that it's become habit to churn out hackneyed crapola. There are exceptions of course, and most of them are coming out of the indie scene nowadays. Here are a few examples of games that broke the mold a little:
Star Control 2. Instead of the traditional approach to procuring music, the makers of Star Control 2 ran a competition in the fan community: you submitted a song, and if yours was selected, you got a prize. The result was a diverse and eclectic variety of styles and forms to match the eclectic and diverse range of creatures in the game. It added both hilarity and diversity to the experience, and managed to pull off a true 'playing the character' approach to games music while also playing the action at points.
Half-Life 1 & 2. The first truly popular modern games to use music sparingly, as an added touch for specific scenes. In movies, the near-absence of music reads to an audience as making a scene less fantastical, more gritty and realistic. That approach certainly worked here.
Knytt Stories. This game uses the same 'incidental' approach to music, but in a different way. The music is more often used to create a mood for a particular area, rather than describing the action.
Grand Theft Auto. Obviously the later games in this series use diegetic music, i.e. the radio, like a lot of driving games. While in a sense this works, in another sense it's sort of blasé. If the player gets to choose their own music, (a) there's no sense of authorship on the part of the game, i.e. no emotional mood being communicated to the player, and (b) the music doesn't do anything more interesting than it can do in real life. However, for many players this was an enjoyable aspect of the game.
Tiny and Big. A more recent indie effort, and sorely underappreciated, this game uses what one might call crossover jazz/samba-infused pop to lend a quirky, saturated emotional feel to the game. Rather than being purely musical wallpaper, the game uses music tapes - new groups of songs - as rewards for finding certain objects or completing certain quests. Okay, so this is essentially non-diegetic music - but with a twist. Using music as a reward is an interesting use of music that isn't possible in any other medium. But it only really works if the music is interesting and stands out from the game, rather than sitting in the background like most games music. This idea is used even more pointedly in the following game.
Savant: Ascent. A collaboration between a small game studio and an aspiring dubstep composer, this highly innovative little 2D rail-shooter uses game progression as a way of unlocking new tracks by the composer, which neatly tie into the new power that you also receive. Now, this may not work if you don't like dubstep - but the whole appeal of the game is that it appeals not just to those who like quirky indie titles but also those who like the composer in question. It is, to my mind, the closest thing to a music video that the games industry has come up with.
I'm not suggesting for a moment that games should be in the service of music; I'm simply pointing out places where, along the way, people managed to break away from the default three-flavour delivery of looping rock, techno or orchestral - our Neapolitan approach to games music.
So: different directions, different ways of thinking about music. But you still need a way to communicate what you want, and the problem I find is that most people outside the sound business use incredibly unspecific terms to describe what they want - and this is in no way limited to games. Think about it like this: what if I were programming in C++, and someone came along and said, "yeah, what you're doing is good, but I just need it to be... um... a little more... robust-ish"?
That's pretty much useless. I mean, I can dig through my code and try to find areas where it's not as solid as it should be, but if what the person actually meant underneath that high-level description was "I want your code to follow RAII rules and also not leak memory here and here", they could've saved me tons of time by saying exactly that. And exactly the same thing happens with sound and music.
So suppose you're making a video game about mushrooms. You're making the music, and the director comes to you and says "okay, so this is a game about mushrooms, so I want a tune that sounds like a mushroom", and you go, okay, I think I know what a mushroom sounds like, and you go away and make a mushroomey sort of a song. But that doesn't work. When you're working with a top-level descriptor, it's very subjective - what you think of as a mushroomey sound is more likely than not completely different from what I think of as mushroomey.
And the art director comes back to you and says "what is this? Don't you know what a mushroom sounds like? A mushroom sounds like deetoodeetoodee, not oompahpah." Well, it's their own fault, because high-level terms, when it comes to music or sound, are just a form of interpretive dance. When you say "mushroomey", or "middle-of-the-road", or "energetic", what you're really doing is goddamned jazz-hands.
What you need is more specific terms. Let's start with an easy example that everybody can follow - music genre.
Say you know you want electronic music in a game. That's not useful. That's interpretive dance again.
You need to break that down into subgenres. What's under electronic music? There's trance, house, psy-trance, ambient, melodic electronica, glitch, techno, 80's industrial, dubstep and so forth. When you give one of those labels to a composer, sound designer or whoever, you're giving a very clear-cut description of the area within which you want the sound produced. How do you know what those sound like?
Well, you listen to a bunch of it. You find stuff from those genres and you explore. You musically Educate yourself. How can you communicate with a Lisp programmer if you don't know Lisp? How the heck are you going to communicate with your sound tech if you don't know sound tech? Don't get me wrong - you don't have to be Good at sound tech; that takes way more years than you have time for. I get it. You just have to be able to Communicate.
This is going on a bit too long, so I'm going to sum up:
(a) Don't assume that your sound and music guys are dumber than you just because they don't program. Nobody likes an arrogant programmer. Programming is hard, sound and music are hard. They're hard in different ways. If you don't think they're hard, you've never had to produce something genuinely good in those fields.
(b) Learn to communicate with sound and music techs. Give yourself a basic education in the terminology, genres, ideas and background and understand what those things mean and how they affect the way things sound. Don't worry, being good enough to actually do it to any level of subtlety is a whole 'nother thing.
(c) Just because you don't value audio as much as other aspects of the medium doesn't mean your sound and music people don't. They care about those things more than visuals and gameplay, or they wouldn't be in the field. If you value their work the way they value it - i.e. the way most people value visuals or gameplay - you will get reciprocal respect, unless they're also idiots.
(d) Everybody is stupid and uneducated in some respect. I can't bake well, garden effectively or rewire a fridge to save my life, and I suck at car maintenance - but I can program a bit and do sound work. If you assume you're just 'generally smart', you won't recognise the things you're genuinely stupid at until it's too late: your cake is flat, your spinach is dead, the meat in the chiller has gone off and you're stranded in the middle of the Sahara Desert. Don't let that happen to your game audio.