
Interview: Mixamo On The Adjustable Approach To Game MoCap

VC-funded online animation provider Mixamo is trying a new approach to mocap game characters, with adjustable sliders and tweaks built into their animations - Gamasutra talks to them about their approach.

Christian Nutt

December 16, 2009

8 Min Read

Earlier this year, Mixamo debuted its online character animation service. The web-based service contains a library of motion-captured character animations that can be dynamically altered via sliders that control different aspects of the character -- both broad concepts like emotionality and fine concepts like speed of movement -- and applied to character models you upload. These animations are then downloaded and used in games -- and tweaked and massaged by animators to add personal touches. The company supports BVH, FBX, and Collada formats, and will record new animations at the request of clients.

To find out more about the service, we met with the company's founders, Stefano Corazza, CTO, and Nazim Kareemi, president and CEO, got a demo of the service, and asked about its options, limitations, and future plans.

Where did the animations that are on the service come from? What is the process by which they are created and end up on the site for people to access?

Stefano Corazza: First of all, we track every single motion search that users do on our site, so we know exactly what customers are looking for. Then we have a process where, usually within a week's turnaround, we will capture the motions they are looking for. And we capture them in a new way. We don't capture just what they ask for; we capture across a wide range of speed, exaggeration, and style. We give these high-level controls to the user, to create the motions that they really want.

You have the sliders on the site that affect the behaviors of the animation. You take motion capture, but then you procedurally modify it.

SC: Yeah.
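The article doesn't detail how the sliders work internally, but the simplest version of "procedurally modifying" captured motion is to interpolate between captured variants of the same clip. Below is a minimal sketch under that assumption; the clip and joint names are illustrative, not Mixamo's actual data model, and a production system would use quaternion interpolation rather than per-channel lerp.

```python
def blend_clips(clip_a, clip_b, slider):
    """Blend two captured motion clips frame by frame.

    clip_a, clip_b: lists of frames; each frame maps joint name -> (x, y, z) rotation.
    slider: 0..100, where 0 returns clip_a's motion and 100 returns clip_b's.
    """
    t = slider / 100.0
    blended = []
    for frame_a, frame_b in zip(clip_a, clip_b):
        frame = {}
        for joint, rot_a in frame_a.items():
            rot_b = frame_b[joint]
            # Per-channel linear interpolation; real pipelines would slerp
            # quaternions to avoid gimbal artifacts.
            frame[joint] = tuple(a + t * (b - a) for a, b in zip(rot_a, rot_b))
        blended.append(frame)
    return blended

# Hypothetical use: a "sad walk" capture at slider 0, a "happy walk" at 100,
# and the slider picking any of the 100 steps in between. With three such
# independent sliders, 100 * 100 * 100 = 1,000,000 combinations, as Kareemi
# notes later in the interview.
```

This is only the degenerate two-clip case; the "mathematical models" Corazza describes presumably learn a richer parameterization from many captured variations.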
We're very data-driven, so we capture a bunch of different variations of that motion, and then we use mathematical models to learn how -- in this case, a human -- is walking, for example, in a happy way, versus a brutal way, versus a sad way, with speed and stride. We use mathematical models to learn from this particular set of motion capture data.

Do you pay per animation, or do you have a subscription?

Nazim Kareemi: The business model is very simple. When you upload your character, we'll retarget [the animation] toward your character, and then when you download it, you just pay per second of animation. We have simplified it a little bit. For a very simple motion, it doesn't cost you that much. If it's a complex motion and difficult to do, then we charge a little bit more.

NK: One unique aspect here is that if you look at the sliders, each slider has 100 values. There are three sliders -- a hundred times a hundred times a hundred -- so you have a million possible combinations, and the motion you're getting is most likely to be very unique. And on top of that, when you upload your character, we retarget it to your character. So when you download it, you have really customized motion applied to your character. And then on top of that, once you go to Maya and Max, you can do your own personal touches and add your own creativity. So by the time you're done, it's completely unique to you.

Say someone buys a particular motion from you and they use it in one game. Can they just perpetually use it in as many games as they want for the rest of time? Once they own it, it's theirs?

NK: Yeah. Once they download it, it's theirs, and they can do whatever they want with it. The only requirement we have is that they don't compete with us and they don't resell it on the web. Other than that, it's theirs.

The fact is, you can suddenly build a library of motion-captured animations that a lot of people wouldn't have access to otherwise.
And since they're editable, they can also tweak them.

NK: Yeah, I think the big difference that Stefano mentioned is that we watched someone move, and from there, we learn how a person is moving. Based on advanced mathematical models, we can allow you to customize and recreate the motions the exact way you want. Some of it is based on deep mathematics developed at a bunch of different universities, including Stanford, the Max Planck Institute in Germany, and the University of Toronto. We have people from all of these universities. Either they're at Mixamo working for us, or they're working as advisers and consultants.

SC: And our process is 100 percent data-driven. Sometimes you see procedural engines trying to make some good motion, but it will never be as good as the original data. That was the big philosophical decision we made. Also, we make the work of animators a lot easier by being able to apply those motions not only to the skeleton of their characters -- on an FK type of rig -- but also on the control rigs. So on the IK skeleton in 3ds Max, we support that. This is the first time where, within a minute, you can bake motion into your control rigging through Max from a large catalog like ours.

Who do you see as the target of this? Is it everyone, from large studios on down? Is it a cost reduction? What's the advantage?

NK: The idea is that the advantage is time. Today, animators work all day long. In a day, one animator produces three seconds of animation. If you go to places like Pixar, they take a whole week to produce three seconds of animation. But here, you can come to our website and get three seconds of animation in a matter of minutes, rather than spending a whole day. Time is the main advantage, and since you save time, you also save cost. This allows animators to focus on the creative aspects, so that the monotonous part is taken away. That's what we are doing.
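The retargeting step Kareemi and Corazza keep returning to -- applying captured motion to a user-uploaded character whose skeleton differs from the capture skeleton -- can be sketched at its simplest as remapping joint rotations by bone name. The bone names and data layout below are illustrative assumptions; real retargeting also compensates for differing proportions, bone rolls, and root-motion scale.

```python
def retarget(frames, bone_map):
    """Remap each frame's joint rotations onto a target skeleton.

    frames: list of dicts, source joint name -> rotation tuple.
    bone_map: source joint name -> target joint name; source joints with
              no mapping (e.g. props) are dropped.
    """
    out = []
    for frame in frames:
        out.append({bone_map[j]: rot for j, rot in frame.items() if j in bone_map})
    return out

# Hypothetical example: the mocap skeleton names a bone "LeftUpLeg" while the
# uploaded rig calls the same bone "thigh_L".
bone_map = {"Hips": "pelvis", "LeftUpLeg": "thigh_L"}
```

The interview suggests Mixamo does this server-side on upload, so the downloaded FBX/BVH/Collada data already matches the user's rig.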
Cost savings are a huge aspect of what everyone's trying to do, and there are a lot of solutions everyone has been looking for to save money. This generation, costs have ramped up. But a lot of people are finding that those solutions aren't as easy as they'd hoped they would be.

NK: But in our case, you can come to our site and try it out and realize that it's very simple. It's as simple as moving sliders up and down. Also, the real breakthrough of what we have done is we've taken the process of creating animation to a very high level of abstraction. Today, what you do is one keyframe at a time. Here, instead of doing a keyframe at a time, you're working at a director's level of abstraction. Just as a director will tell an actor to run fast, run slow, look scary, look relaxed, or look happy, that's what we're doing. You can see the productivity is at a very high level.

SC: That's why we are basically democratizing the process of creating animation. You don't need to spend five years learning Maya. You can just do it in a very intuitive way that everybody can have access to.

So if you wanted to make an XBLA game, a user could get animations from you guys and use the preexisting avatars as the character. Suddenly, you're off the ground a lot faster than you ever would've been able to be a year ago.

NK: And at the same time, the animations are very compelling, because all of this information is coming from motion capture data. It's just as high-quality as something that EA would do at their own motion capture labs. If you're a second-tier studio and you can't afford to do your own motion capture, you come to Mixamo and you can get motion capture data very quickly and very cost-effectively.

You talked about how you're going to update the database based on requests and searches. Does there have to be a critical mass of a particular animation or search before you'll update it?

NK: No. We're talking to game developers.
If they're developing a particular kind of game, they can tell us what kind of motion they want, and within a week, we can put it up on the website. We've done a bunch of different games. Someone has done a zombie game, so we have a huge number of zombie moves. Then another person is doing a first-person shooter, so we have a bunch of first-person shooter moves. As people start developing different kinds of games and we start covering them, we will have all of those motions up here.

SC: Right now, we have close to 300 motion models there. In the future, we are planning to expand to quadrupeds -- so all kinds of animals and creatures.

Do you have your own motion capture studio? Do you actually have and control one to allow you to do the fast turnover?

NK: Yes, that's right. We actually have two motion capture studios. One is a state-of-the-art markerless system. In a markerless system, you don't need to put markers on the subject, so we can capture animals.

No ping-pong balls.

NK: No ping-pong balls. We also have a marker-based system if you want to capture your fingers and things like that.

SC: And we have top-notch people there. The motion capture director of The Polar Express and Beowulf works for us and helps us with the marker-based pipeline for the top quality possible. The markerless system that Nazim was talking about is the first one in the world where we don't need a blue screen and can capture things like animals, which you just can't put markers on.

NK: And you can go outdoors and capture tennis tournaments. You can take this thing out, not put any markers on anyone, and see the real players playing.

About the Author(s)

Christian Nutt


Christian Nutt is the former Blog Director of Gamasutra. Prior to joining the Gamasutra team in 2007, he contributed to numerous video game publications such as GamesRadar, Electronic Gaming Monthly, The Official Xbox Magazine, GameSpy and more.

