Waltham, MA-based Blue Fang likes working with animals. The developer of the top-selling Zoo Tycoon franchise for Microsoft is now working on World of Zoo. The passionate team has developed an interesting collaborative tactic to help bring its animals to life, and is finishing up World of Zoo for an expected end-of-year release on Wii, DS, and PC.
In this interview, we sit down with AI lead Steve Gargolinski, senior scientist Bruce Blumberg, and animation director Lee Hepler to discuss their unique philosophy on AI, the relationship between their departments, and how their teamwork means more believable characters.
You guys have said you're proud of your interdisciplinary collaboration. You overlap AI and behavior with animation -- how does that work?
Lee Hepler: We [build the behavior graphs] as an integrated part of animation, so that all those things sort of inform each other. Tons of animations end up on the cutting room floor, so to speak, because they work well as animations but they don't fit the actual behavior we're attaching them to.
Bruce Blumberg: Yeah, we are focused on the end user experience, and that end user experience is that... we want them to feel as if they're interacting with sentient beings.
And so at the end of the day, animators are all about convincing people that the characters that they're seeing have an inner mental life. So, they're the experts [on] that. Our job on the AI side is to support art in communicating that inner world, if you will, of the characters.
And Steve has a great line: AI is really about providing the opportunity to show sweet animation -- because it's really the animations and how they move that are communicating, "Oh, I want something from the player," or "I'm sad," or whatever.
Steve Gargolinski: Yeah. One of my main philosophies on working on this is just to get out of Lee's way as much as possible, and let the cool animations come through. Especially with our target demographic, when making game AI, there's always a lot of temptation to do complicated stuff that the user might not notice.
We really try to focus in on the stuff that kids and the more casual market would be interested in. So of the three main things we're really focused on, the first is showing kids sweet animations of these animals -- letting them see these animals doing cool things.
The second thing would be making sure that things always respond to the player. We really put a lot of work into making sure that when you're doing something, the animal will pay attention to you, so it never feels like it's ignoring you or didn't see you, or anything like that.
And then the third thing is to just make sure that they don't look stupid. No circling, no blatantly dumb stuff that takes away their organic credibility, which was... I know Lee likes to talk about this, all the illusion-of-life stuff and how organic credibility lets players build their...
Lee Hepler: Well, yeah... the basic mechanics of the world are constantly reminding you that this is just a jumping, fake animal. You can have that lead into the coolest animation in the world, but if the walk cycle and the locomotion through the world and the little second-to-second responses [aren't there]... you've ruined the illusion before that can have an effect.
So it's really, really important to have the animation working -- to get a really solid palette of clean, well-functioning basic AI that the player doesn't question or even think about, because it's so well-realized.
Bruce Blumberg: I think one of our real philosophies from day one was the notion that our job is really to provide scaffolding so that the player can build these rich models about what's going on in the character.
And so they're the ones who are creating the story, and what we're doing is providing the hints -- the scaffolding that allows them to do that. So we've really gone, just as Lee says, for: if it's not readable, don't do it. If the player can't see it, or if we can't communicate it, don't build the machinery under the hood.
Speaking of machinery, how do you build that?
Steve Gargolinski: The challenge we had in figuring out what system we needed to support all this stuff is that we're dealing with eleven very different animals in eleven very different environments, and we need to give the player intimate interactions with all of them.
So our approach was... We implemented something called the Animal X Engine, where we decided what we wanted the high-level play patterns to be -- the base logic that all the animals would share -- so that a player moving from animal to animal would be able to identify some things that they have in common.

The ones who are hungry get happy when you feed them. The ones who like to play get happy when you toss a ball into the exhibit, things like that. And then we used Havok as our middleware. Using Havok Behavior, we were able to make a pretty serious abstraction between the game logic I was just talking about and the animations we actually use to act out that logic.
So a setup like that gave Lee a lot of flexibility in how he could fill out different sections of the graph, giving a lot of variety to a lot of different animals without changing the base logic -- so that we wouldn't lose organic credibility by having something broken happen at some point along the line.
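The split Gargolinski describes -- base logic shared by every animal, with a per-animal animation layer deciding how each abstract action actually looks -- might be sketched like this. This is a hypothetical Python sketch; the class names, methods, and clip names are illustrative, not Blue Fang's actual API or Havok Behavior's.

```python
# Hypothetical sketch of shared game logic vs. per-animal animation.
# All names here are illustrative assumptions, not Blue Fang's code.

class AnimalLogic:
    """High-level play patterns every animal shares."""
    def __init__(self):
        self.mood = "neutral"

    def on_fed(self):
        # Hungry animals get happy when fed, regardless of species.
        self.mood = "happy"
        return "eat"      # abstract action, not a specific clip

    def on_ball_tossed(self):
        self.mood = "happy"
        return "play"

class AnimationGraph:
    """Per-animal layer: maps abstract actions to concrete clips."""
    def __init__(self, clips):
        self.clips = clips

    def act_out(self, action):
        # Fall back to an idle clip for any action this animal
        # doesn't have a bespoke animation for.
        return self.clips.get(action, "idle")

# Same base logic, different animation fill-ins per animal:
lion = (AnimalLogic(), AnimationGraph({"eat": "lion_gnaw", "play": "lion_pounce"}))
panda = (AnimalLogic(), AnimationGraph({"eat": "panda_chew", "play": "panda_roll"}))

for logic, anim in (lion, panda):
    print(anim.act_out(logic.on_fed()))
```

The point of the abstraction is visible in the last loop: the logic layer only ever emits abstract actions, so an animator can swap or extend the clip table for one animal without touching the code that decides when animals eat, play, or react.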
How does your cross-collaboration between animation and AI work simply from a practical standpoint?
Bruce Blumberg: One of the things that has worked really well at Blue Fang is the AI team, which is Lee and the artists and all, are all in what we call the pit, so we're right next to each other.
One of the things that was really great is we would often find artistic solutions to technical problems, and then technical solutions to artistic problems.
Did you go from, say, waterfall to agile? Or did you just decide to be collaborative and open, and this methodology grew naturally out of that?
Lee Hepler: I think a big part of it is we started working pretty well together, and we started getting results. Quite frankly, at the beginning, it was like an R&D project from the get-go. We were figuring out stuff on the fly, minute-to-minute, and there was just really no way to schedule in a traditional way what we were trying to do.
We started to work together, and we worked well together. We promised and delivered big chunks of work, as opposed to broken-down [tasks]... We were delivering the overall functionality. On the production side, there was a lot of trust involved in allowing us to do that. I think we were all really into it, which is what made it work.
I have to feel fine about just going up to Steve and saying, "Steve, this is totally broken. This looks like junk."
Steve Gargolinski: He's usually right. [laughs] He's always right.