Speech Graphics unveils lip synch service for game characters

UK-based startup Speech Graphics has announced new technology that analyzes audio to predict how facial muscles move to produce speech sounds, then syncs that motion with the animation of a game's 3D characters.

Eric Caoili, Blogger

February 9, 2012

1 Min Read

UK-based startup Speech Graphics has announced new technology that analyzes audio to predict how facial muscles move to produce speech sounds, then syncs that motion with the animation of a game's 3D characters. The lip synch solution will initially be offered as a service, and is meant to save development time and resources.

As part of its voice-based animation service, the company takes facial models and audio assets provided by clients and produces synchronized animation curves, delivered for Maya or 3ds Max. Speech Graphics notes that the technology uses a universal physical model that works across all languages, and it intends to demonstrate the software in English, German, French, Spanish, Japanese, Korean, and Russian at March's Game Developers Conference (Booth #1843).

The startup aims to deliver better results with its technology than can currently be achieved with motion capture, and adds that pricing for the service will scale with the amount of dialogue in a game.
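The announcement describes the pipeline only at a high level: audio in, predicted muscle movement, synchronized animation curves out. As a rough illustration of what an audio-to-animation-curve step can look like, here is a minimal Python sketch that maps a smoothed loudness envelope to a single hypothetical "jaw_open" channel sampled at an animation frame rate. This is not Speech Graphics' method (the company describes a universal physical muscle model); the file name, channel name, and amplitude-based mapping are all assumptions for illustration, and the sketch relies only on NumPy and the standard-library wave module.

```python
# A crude, hypothetical illustration of audio-driven facial animation curves.
# NOT Speech Graphics' approach: their system predicts muscle movement from
# speech; this sketch just maps a smoothed loudness envelope to one
# "jaw_open" channel sampled at an animation frame rate.

import wave
import numpy as np


def load_mono_audio(path):
    """Read a mono 16-bit PCM WAV file; return (samples in [-1, 1], sample rate)."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
    return samples, rate


def jaw_open_curve(samples, rate, fps=30, smoothing_frames=3):
    """Map a smoothed RMS loudness envelope to a 0..1 'jaw_open' value per frame."""
    hop = rate // fps                       # audio samples per animation frame
    n_frames = len(samples) // hop
    rms = np.array([
        np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))
        for i in range(n_frames)
    ])
    # Moving-average smoothing so the mouth doesn't jitter frame to frame.
    kernel = np.ones(smoothing_frames) / smoothing_frames
    smoothed = np.convolve(rms, kernel, mode="same")
    peak = smoothed.max() or 1.0
    return smoothed / peak                  # normalized animation curve


if __name__ == "__main__":
    samples, rate = load_mono_audio("line_of_dialogue.wav")  # hypothetical asset
    curve = jaw_open_curve(samples, rate)
    # Emit frame,value pairs that a DCC tool could key onto a rig control.
    for frame, value in enumerate(curve):
        print(f"{frame},{value:.4f}")
```

In a production pipeline, curves like these would be keyed onto rig controls inside Maya or 3ds Max rather than printed as text, which is the form of delivery the company describes.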

About the Author(s)

Eric Caoili

Blogger

Eric Caoili currently serves as a news editor for Gamasutra, and has helmed numerous other UBM Techweb Game Network sites, all now long dead, including GameSetWatch. He is also co-editor of the beloved handheld gaming blog Tiny Cartridge, and has contributed to Joystiq, Winamp, GamePro, and 4 Color Rebellion.
