Lessons learned developing for depth camera interfaces
By Tal Raviv, Head of Production, Side-kick
Over the past three years we have developed games and demos for depth camera interfaces.
We grew with the technology's capabilities and learned by testing interface ideas, including ones that had been written off for years as "bad".
We are now working on multiple projects that focus on the depth camera as their interface.
The new motion interface is not a gimmick; it is a major revolution that has to be an integral part of the game for it to succeed. With the right design, players should feel natural using it even the first time they play.
We wanted to share a few lessons learned, production-wise, in the hope that they will help establish a knowledge pool for everyone who is going to develop for these exciting new motion-based interfaces.
The Challenges of Producing Gesture-based Games
I’ve spent the last 17 years trying to bring order into the chaos of the videogame productions I was responsible for, with relative success. While it never became easy to create games that fully meet their quality, budget and timeline goals, I gained a decent level of confidence in my ability to do a reasonable job of it, and kept my butt from getting kicked.
Then came gesture games.
Platform changes occur every several years and they always have an impact on how games are produced. However, I don’t recall any new platform that has brought up so many new challenges to deal with, and so many changes that need to be made in the production process.
For example, the development process of gesture games includes lots of trial and error – many game elements, from the core gameplay mechanic that drives the entire game, to small features such as power-ups, simply don’t work as planned (on paper). This constantly happens to us even with the aggregated years of experience that we have with gesture games design and development.
Another example is office space. Even if your office was perfectly suitable for ‘traditional games’ development, it will now present lots of issues – developers and game/level designers need much more space to test their work, especially with games that utilize full body gestures or 2-player multiplayer (or both, which is of course the worst, space-wise). In addition, the work space needs to be free of anything that introduces ‘noise’ to the system – people sitting opposite each other, laptops, doors, etc. This basically means reorganizing the entire office, adding more space, or even moving to a new office if your current one has no spare room.
There are critical QA issues – since a QA tester cannot continuously test any of our games for more than half an hour before becoming completely exhausted, more QA staff is needed and the QA work needs to be split wisely between actual testing, writing bug reports, assisting developers in debugging, etc. Realizing that QA testers (well, most of them anyway) represent actual players in terms of their physical endurance led us to a game design understanding that our gesture games must offer a well-balanced combination of hectic action and more relaxed sections where the player can get some rest.
A big new headache is the continuous need for focus groups. The fact that different people will perform any gesture (even the most obvious one) in endless variations means that you cannot really validate a gesture control without testing it on many people with varied backgrounds and levels of exposure to gesture games (at least as varied as your target demographic group). Every new gesture must be tested this way. These ‘focus groups’ are therefore an essential tool throughout the development lifecycle, and they happen very frequently. Unfortunately, over time it becomes harder to find ‘fresh’ people, and we end up taking up the time of people who have already been exposed to gesture games.
Over the past year we have adopted several methods that help us bring some order into this new chaos of gesture games development. First and foremost is the fanatical use of the Scrum project management model throughout the project teams. While I believe that Scrum is generally a better methodology for games development than waterfall, this is even more evident for gesture games development, and here is why:
1. Documentation – the chance of a monolithic game design document to become completely irrelevant is much higher (and it happens much sooner) than in traditional games. Scrum allows the designers and the developers to progress in small steps while constantly adapting the general design and developing the detailed design according to the continuous realization of what works and what doesn’t.
2. One of the things that Scrum is good at is getting the entire team involved with all aspects of the game development, on a daily basis. This is proving to be an effective tool for getting feedback from as many people as possible (functioning as a sort of focus group) regarding what works and what doesn’t.
3. The Scrum principle of having a ‘potentially shippable’ build at the end of every sprint forces us to break the long development process into many small milestones that allow us to test and verify each small increment to the game, thus avoiding situations where a lot of development happens before finding out that a critical game element does not work. (In gesture games a lot of game elements are related to gestures.) We have found that 1-week sprints work best for us in this respect, while not carrying much more planning and retrospective overhead compared to 2-week and even 3-week sprints.
4. Switching to gesture games development completely changes the velocity of the team, even if it’s a team that’s very experienced in traditional games development. Scrum allows us to measure and adjust to the changes in velocity much faster than with waterfall development.
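To illustrate the last point, here is a minimal sketch (in Python, with hypothetical function names – not code from our pipeline) of why short sprints surface a velocity change quickly: a rolling average over recent sprints reacts within a few weeks when the team's throughput drops after the switch, and the re-forecast of remaining sprints updates with it.

```python
# Hypothetical sketch: rolling velocity and sprint forecasting.
# All names and numbers are illustrative, not from a real project.

def rolling_velocity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points, completed_points, window=3):
    """Forecast remaining sprints from recent velocity (rounded up)."""
    velocity = rolling_velocity(completed_points, window)
    if velocity <= 0:
        return float("inf")
    # Ceiling division: a partially used sprint still costs a full one.
    return -(-backlog_points // velocity)

# Example: throughput drops after moving to gesture games; with 1-week
# sprints the rolling window reflects the new pace after three sprints.
history = [30, 28, 14, 12, 13]          # points per 1-week sprint
print(rolling_velocity(history))        # 13.0
print(sprints_remaining(65, history))   # 5.0
```

With 3-week sprints the same three data points would take nine weeks to accumulate, which is the adjustment lag the short sprints avoid.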
Another highly useful habit we adopted is prototyping first, even for small gameplay elements, and even when you are deep into the production cycle. We have found that there is a significant advantage to building almost every new gameplay element as a prototype before finalizing it with art, sound, animations, etc. In most cases it provides about 90% confidence (in the ability of that feature to work) for ~50% of the work.
It is also very important to take into account that some percentage of these prototypes will not work the first time and will require another attempt (or two) before reaching a satisfactory state. This requires an adjustment in the high-level planning of the project: simply realize that more trial and error means more development time. This time is well justified, though, considering the alternative – continuing to invest time and effort in a game that is not progressing in the right direction and may fail entirely because of it.
Depth cameras and motion sensing devices are a new control scheme for players. As game producers and developers we have to respect that revolution, and mix innovation and creativity into an introductory phase where simpler mechanics are tested and introduced.
The opportunity is to create new experiences and ultimately new genres of gameplay that are controller specific and effective in creating new and engaging fun factors.
Side-kick is a production house specializing in video game systems that use human gestures as the game controller. The company was created by veteran games industry entrepreneurs and the technical team behind key strategic game demos used in presentations for the PrimeSense camera, the leading “depth camera” technology.
Side-kick’s close ties with motion control technology pioneers provide the company with unique access to new features, faster development cycles and the enhanced ability to create ground-breaking games. Side-kick is backed by Wekix, Kima Ventures, Jasmine Group and private investors. For more information, please visit: http://www.sidekick.co.il