Audio often gets the short end of the stick during the mobile game development cycle, but researchers at Nokia are looking at ways to start from music and audio to create compelling mobile game content. Jukka Holm, of the Music Technologies Group at the Multimedia Technologies Laboratory at Finland's Nokia Research Center, presented examples of how game designers could use music as the starting point for video game content creation.
Holm began the talk by discussing music-controlled games. Now that modern handsets have processing power to spare, they can analyze music and audio data on the device. The analysis yields control information that is sent to the game engine and mapped onto game parameters. Even as handsets become more powerful, Holm explained, MIDI and wave audio analysis must remain separate problems, but roughly the same paradigm works in both realms.
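As a rough illustration of that pipeline, the sketch below maps incoming MIDI note events to generic game parameters. The function names, the lane/intensity mapping, and the parameter ranges are all assumptions for illustration, not Nokia's actual API.

```python
# Hypothetical sketch of the analysis-to-control pipeline described above:
# MIDI events are analyzed and the results are mapped to game parameters.

def analyze_event(note, velocity):
    """Map one MIDI note event to simple game-control parameters."""
    # The pitch class (0-11) could select a lane or enemy type;
    # the note velocity (0-127) could scale speed or intensity.
    return {
        "lane": note % 12,
        "intensity": velocity / 127.0,
    }

def drive_game(midi_events):
    """Convert a stream of (note, velocity) events into control messages."""
    return [analyze_event(note, velocity) for note, velocity in midi_events]
```

In a real music-controlled game this loop would run as the sequencer plays, so the same events drive both the audio output and the on-screen action.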
He offered up three game ideas which, unfortunately, we were not able to see due to time constraints. The first was Bug Man, in which the music drives the movement of the game's insects. In CDEFGABC, the player must collect falling blocks into a container; the positions the blocks fall toward are chosen by the musical notes in the incoming MIDI data. The last idea garnered a number of laughs: the Striptease game features a dancer who moves between states of undress as the player chooses which music files she dances to.
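One plausible way CDEFGABC's note-to-position mapping could work is to snap each incoming MIDI note to one of the seven white-key pitch classes and use that as a column index. This is a minimal sketch under that assumption; the game's actual mapping was not described in detail.

```python
# Assumed mapping from MIDI notes to block columns for a CDEFGABC-style game.
WHITE_KEYS = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of C D E F G A B

def note_to_column(midi_note):
    """Snap a MIDI note to the nearest white key at or below it,
    returning a column index from 0 (C) to 6 (B)."""
    pitch_class = midi_note % 12
    candidates = [pc for pc in WHITE_KEYS if pc <= pitch_class]
    return WHITE_KEYS.index(candidates[-1])
```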
He then presented a demo video of AudioAsteroids, which showed the game progressing through five different levels, each with its own MIDI music. From Thus Spake Zarathustra to fast-paced drum and bass, the relationship between enemy entrances and individual instrumental activity was very clear.
This was all done using software from Nokia called Virtual Sequencer. Available for Symbian 6.1 and 7.0, Virtual Sequencer provides real-time MIDI analysis, driving both the game's control parameters and the music playback engine simultaneously. Games using digital audio fare less well than MIDI-driven games, but some of the same content creation techniques still apply. In the wave audio version of AudioAsteroids, Holm explained, the control parameters were generated from the music offline rather than in real time. Pitch detection is also much more difficult for wave audio, especially given the compression often used in today's pop music, and the demo showed a much weaker relationship between the music and the generated game content. "Eventually, we want heavy metal to be more difficult than 70s funk," Holm explained.
The last part of Holm's presentation introduced the idea of Shared Sound Worlds. Rather than treating each player as part of a single sound stage, or as two distinct sound stages, components are shared between the individual handsets in a single world of sound data. In an example showing a volleyball game involving four people with individual handsets, the volume on each handset reflects that player's distance from the ball, while the pitch reflects the ball's altitude. The player closest to the ball must "hit" it back up when its pitch falls within a certain range.
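The volleyball example's audio mapping can be sketched as two simple functions: volume falling off with a player's distance from the ball, and pitch rising with the ball's altitude, with a "hittable" band at the low end of the pitch range. All constants and ranges here are illustrative assumptions, not values from the demo.

```python
# Sketch of the Shared Sound Worlds volleyball mapping described above.

def handset_audio(player_distance, ball_altitude,
                  max_distance=10.0, max_altitude=5.0):
    """Return (volume 0-1, pitch in Hz) for one player's handset.
    Volume fades with distance from the ball; pitch rises with altitude."""
    volume = max(0.0, 1.0 - player_distance / max_distance)
    pitch_hz = 220.0 + (ball_altitude / max_altitude) * 660.0  # 220-880 Hz
    return volume, pitch_hz

def can_hit(pitch_hz, low=220.0, high=330.0):
    """The closest player may 'hit' the ball while its pitch is in this
    band, i.e. while the ball is low enough to reach."""
    return low <= pitch_hz <= high
```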
Holm also showed off a tennis demo that used a cell phone with a built-in accelerometer. As the tennis ball came closer, the pitch of the ball's tone became lower, and when it was low enough, the person in the video swung his phone like a tennis racket to hit the ball away.
There are still other uses for audio in mobile devices, Holm said after the talk, and developers will soon have the tools to bring these more novel ideas to market. It's clear that some intriguing research is going into the hitherto largely unexplored possibilities of mobile audio.