In this reprint from the May issue of Game Developer magazine, Nate Ralph takes us through this year's newest tools and middleware, straight from the Game Developers Conference show floor.
The Game Developers Conference is a great opportunity for tools and middleware developers to show off their latest and greatest to dev studios large and small. Here's a taste of some of this year's offerings.
Havok’s middleware technologies, particularly Havok Physics, have been a mainstay in video games for years. The company is determined to take its technology further still, by giving it away to smaller teams interested in developing for mobile platforms.
Havok's calling its new initiative Project Anarchy: It's a full-fledged game engine that will be available for download this spring. It's capable of deploying games to iOS and Android devices, completely free of any cost or commercial restrictions. The tool offers access to Havok's suite of middleware, including physics, animation, and AI tools. Complete game samples (and their source code) will be included to give budding developers a taste of the tool’s capabilities, and tips on how to get started.
Project Anarchy has a few features meant to ease the pain of cross-platform mobile game development, including viewports that mimic the size and resolutions of a number of different kinds of hardware, and remote input so devs can push their project to a mobile device and test it with a real-time debugger. Havok also plans to launch a Project Anarchy community site, complete with Anarchy’s source code, and will encourage the community to develop extensions and share their customizations with other developers.
Great facial motion capture can add life to a game, but the added time and expense involved can be daunting. Faceware's latest update to its Retargeter application adds a few features aimed at speeding up an animator’s workflow.
The Faceware facial-capture process starts with Analyzer, the company's tracking and analyzing software. Load a video into the application, assign virtual tracking points to key elements of the face—spots like the eyes, eyebrows, and mouth—and Analyzer will create a file with motion-capture data that you can use in Retargeter to tweak your character rig.
New to Retargeter 4.0 are the Expression Sets and Autosolve features. Expression Sets are "default" facial animation poses—looking left and right, or frowning, for example. You’ll need to mimic these default expressions on the character rig you’ve assembled in an animation studio like Maya. Here’s where the magic happens: The new Autosolve feature combines that data collected from Analyzer and the Expression Set you create, and automatically creates an animation that matches your actor’s performance. From this new "starting point," animators can make tweaks or determine what needs to be redone with minimal fuss and loads of time saved—the entire process should only take a few minutes.
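The general idea behind pose-based facial retargeting can be sketched in a few lines: tracked per-frame weights for each expression are used to blend the corresponding rig poses together. This is an illustrative sketch of the concept only, not Faceware's actual algorithm or API; the function, pose names, and data shapes here are all hypothetical.

```python
# Illustrative sketch of pose-based facial retargeting: blend a set of
# rig poses (an "expression set") by tracked per-frame weights.
# All names and data shapes are hypothetical, not Faceware's API.

def retarget_frame(expression_poses, weights):
    """Blend rig poses (dicts of control -> value) by tracked weights."""
    blended = {}
    for name, pose in expression_poses.items():
        w = weights.get(name, 0.0)
        for control, value in pose.items():
            blended[control] = blended.get(control, 0.0) + w * value
    return blended

# Two hypothetical expression-set poses for a simple face rig:
poses = {
    "smile": {"mouth_corner_L": 1.0, "mouth_corner_R": 1.0},
    "brow_raise": {"brow_L": 1.0, "brow_R": 1.0},
}
# Weights for one frame, as a tracker might report them from video:
frame_weights = {"smile": 0.5, "brow_raise": 0.25}
print(retarget_frame(poses, frame_weights))
```

In a production pipeline the blended control values would drive the character rig frame by frame, giving animators the automatic "starting point" described above.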
Faceware believes that its tech will ease some of the pain and expense of implementing facial motion capture, by requiring less effort than typical rigging situations and allowing for much faster iteration should you need to make corrections. The company also claims that its software will work with footage from any video capture source, though it does sell head-mounted cameras. It will also work within your workflow: Faceware's Analyzer and Retargeter can be downloaded for free from the company’s site, so you can take it for a spin yourself.
Luxology's latest update to its modo 3D modeling and animation suite is all about speed. There's less in the way of raw new tools, but Luxology is convinced that modo fans and neophytes alike will be smitten by the performance bumps, new, cleaner layouts, and refined workflows.
Modo 701 does offer a few new tricks. The revamped dynamics simulation layer allows animators to create realistic animations with accurate physics modeling. At the show, an artist emulated rubble masonry by creating individual stones, and "pouring" them into a door frame by starting the animation and letting gravity do the work. Modo now also offers the ability to sculpt and direct particles at will, creating some rather impressive effects with little effort.
Performance has been improved dramatically over modo 601, from the selection tools and animation playback down to interacting with schematics or firing up the preview renderer window. It’s all aimed at saving time; all of those seconds spent waiting for objects and scenes to load add up, after all. Modo 701 also serves up a revamped interface focused on click reduction: The hundreds of potential keybinds users are already familiar with have been bolstered with new layout-selection tools, pop-up menus that allow you to collapse most of the UI for increased viewport space, and a few customizable workspace options borrowed from applications like Photoshop.
Donya Labs's Simplygon aims to take some of the grunt work out of building level-of-detail (LOD) models out of textured, detailed assets. Its automated LOD-building tools cut and simplify polygons, creating entirely new textured, low-resolution meshes without requiring hours of an artist’s valuable time. And Simplygon 5.0 is all about speed and accuracy, introducing new tools that improve Simplygon's automation processes. For example, the new Vertex Reposition function adjusts an LOD model's vertices to get a tighter silhouette and improved texture reproduction, resulting in less jarring differences between LOD models and the original assets. The new Smart Improve feature compares original assets to the LOD Simplygon creates, and automatically tweaks the geometry to reduce visible differences between the two. LOD models created in Simplygon 5.0 can also take advantage of symmetry-aware polygon reduction, which aims to maintain symmetry in a model across a user-defined axis.
Simplygon's LOD tools have also been updated to offer skinning support, preserving animation and skinning data from applications like Maya and 3ds Max when creating low-poly LODs. Donya Labs believes this will be especially useful for maintaining an animated asset's fidelity when optimizing models for mobile devices, or rendering large crowds.
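Once a chain of LOD meshes exists, the runtime side is straightforward: a game picks the mesh that matches the camera's distance (or projected screen size). A minimal sketch of that selection logic, with made-up thresholds and mesh names for illustration:

```python
# Minimal sketch of runtime LOD selection: pick the mesh whose distance
# threshold covers the camera-to-object distance. Thresholds and mesh
# names are invented for illustration, not tied to any engine's API.

def select_lod(distance, lod_chain):
    """lod_chain: list of (max_distance, mesh_name), sorted ascending."""
    for max_dist, mesh in lod_chain:
        if distance <= max_dist:
            return mesh
    return lod_chain[-1][1]  # beyond every threshold: use coarsest mesh

chain = [(10.0, "hero_lod0"), (30.0, "hero_lod1"), (100.0, "hero_lod2")]
print(select_lod(5.0, chain))    # nearby: full-detail mesh
print(select_lod(250.0, chain))  # far away: coarsest mesh
```

Tools like Simplygon automate the expensive part of this pipeline (producing the lower-detail meshes in the chain), which is why tighter silhouettes and symmetry preservation matter: they make the switch between levels less visible.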
Hansoft has gone social. The project management system's 7.0 update (released in November 2012) is the company's biggest yet, revamping the user interface and adding a slew of features that will seem familiar to anyone who’s spent time on a social networking site. These social features are designed to improve efficiency: As teams grow, they lose the ability to "get together" in a single shared space. A lack of effective communication can lead to headaches, and Hansoft hopes the new update will make coordinating and communicating more efficient, without forcing large teams to rely on a multitude of tools.
Collaboration is central to Hansoft 7.0, and the application now features a news feed (not unlike Facebook's). Teams using Hansoft can keep track of projects and one another by "subscribing" to the users and groups they're interested in, which essentially filters updates. Here's an example: An executive producer might subscribe to updates from all of her team leaders, glancing at and commenting on their progress without necessarily being inundated with updates on every project their teams are embroiled in. The application has also added chat functionality (with optional conversation logging). All of the application's modules can also be popped out of the main interface and placed anywhere on your PC's desktop, should you prefer to keep chat conversations or just the news feed visible at all times while you work in other applications.
Up next for Hansoft are Dashboards, which will offer a bird's-eye view for project leads and the like. Dashboards will be capable of monitoring all the data that’s entered into Hansoft, and should offer an idea of a project's total progress and milestones in simple, digestible chunks. Also coming down the pipe are Mac and Linux versions of the software, so developers who'd rather not use Windows systems can also get on board.
Geomerics might not be a household name, but there's a chance you're familiar with its work; the company's dynamic lighting technology, dubbed Enlighten, was used in EA DICE's Battlefield 3. At this year's Game Developers Conference, Geomerics announced that Enlighten would be integrated into both the Frostbite 3 engine (for Battlefield 4) and Unreal Engine 4. As of the latest Enlighten update, Geomerics has extended runtime support to include just about everything under the sun, including the upcoming Sony PlayStation 4, Windows, Mac, and Linux PCs, current-generation consoles, and both iOS and Android mobile devices.
Also new is support for a broader range of lighting models. We were given a tour through an artist’s rendition of an ancient ruin to see the effects firsthand: Light spilled into rooms and caverns as the sun rolled across the sky, and crept back out casting shadows all the while — an ample demonstration of how Enlighten tackles static and dynamically lit environments. As befitting a video game-inspired ruin, brightly lit torches burned seemingly without purpose; knock them about, and cobblestones glowed from the reflected light, bouncing indirectly off cavern walls. This served as a case study for the new dynamic specular effects, a computationally expensive process that Geomerics claims it's been able to achieve in real time on mobile devices. On the production side of things, a new plug-in for Maya adds real-time previews of Enlighten's lighting tools into Maya's viewport, so designers can experiment and get a feel for how their models will appear in-game before making any commitments.
Geomerics has also expressed interest in working with more mobile developers to bring Enlighten-powered lighting effects to small screens, but remarked that it isn't ready to make its technology and support staff available to smaller independent developers quite yet, as some training is required to implement Enlighten into a game developer's tool chain. We were told that the first licensees creating content for mobile devices are actually developers already creating content for consoles and PCs, since they're already familiar with the tools.
Perceptual Computing SDK
Intel's Perceptual Computing initiative went into beta in fall 2012, with an SDK release that called on developers to create applications and games that made use of voice recognition, close-range hand and finger tracking, and facial analysis. During this year's Game Developers Conference, Intel announced the release of the Perceptual Computing SDK 2013. The crux of the update is that there's no longer a beta tag, and software can now be developed for commercial purposes.
The Perceptual Computing SDK intends to change the way we interact with our PCs, but roadblocks abound: The SDK only works with Intel CPUs on Windows 7 and Windows 8 PCs, and the gesture-tracking controls are currently only supported by a $150 gesture-tracking camera (sold separately). That said, the ability to develop applications using the SDK for commercial use could prove promising. The SDK can be downloaded for free, and provides a few sample use cases and tools for developing applications capable of monitoring speech, or tracking faces and hand gestures. The standalone camera will be available to consumers in stores later this year, and Intel plans to implement the technology from the standalone camera into Ultrabooks at a later date. That $150 could prove to be a sound investment, should consumers be quick to embrace Intel’s vision of a finger- and face-friendly future of computing.
Motion-capture specialist OptiTrack's latest offering, the Prime 17W motion-capture camera, offers a 70- by 51-degree field of view, with a generous 50-foot camera-to-marker range. It can capture 360 frames per second at a 1.7 megapixel resolution, but costs $3,700. That's a hefty sum, and the costs will only continue to climb as you add more cameras for capturing useful motion data in 3D spaces, or if you opt to use OptiTrack's Motive:Body software.
Should you choose to take the plunge, OptiTrack's powerful hardware and software could potentially streamline your mocap workflow. Setup is simple: Actors strap on their markers and assume a neutral pose, and the OptiTrack system will create and calibrate a trackable skeleton. On the GDC show floor, the company demonstrated the Prime 17W's precision with a steady stream of actors dancing for the cameras, capturing precise footwork and finger tracking that uses a minimal number of markers to approximate hand gestures and the like.
Extreme Reality aims to slash the cost of developing and releasing motion-friendly games by eliminating one of the most expensive components: hardware. The company's Extreme Motion software uses 2D cameras with resolutions as low as 0.3 megapixels to create accurate skeletal models of players, and insert them into games that utilize Extreme Reality's tech. At GDC, the company gave demonstrations of the software's speed and accuracy — even in a setting with subpar lighting — on meager laptop webcams. Assume a neutral pose with your arms raised in the air, and the software will quickly recognize your skeleton and map your movement data to whatever developers see fit — from games to adding simple gesture controls to user interfaces. Extreme Reality's software also works with the front-facing camera of Apple's iPad, with plug-in support for Unity3D on iOS. The software currently supports Windows and iOS devices; Android support will be coming later this year.
Nate Ralph is an aspiring wordsmith fascinated by games, hardware, and most everything in between. You can find more of his musings in 140-character chunks at @nateralph.