Tooling Around: Pathfinding With Kynogon's Kynapse

Gamasutra's latest middleware firm profile talks to Pierre Pontevia, CEO of Kynogon, developers of the pathfinding AI SDK Kynapse, as used in major games such as Real Time Worlds' Crackdown, Turbine's Lord Of The Rings Online, and Lionhead's Fable 2.

Alistair Wallis, Blogger

March 30, 2007

11 Min Read

For the next part in Gamasutra's 'Tooling Around' feature, which profiles and interviews middleware and tools developers about their products, today's interview is with Pierre Pontevia, CEO of Kynogon, developers of the 'large scale AI' pathfinding SDK Kynapse. The product grew out of the French company's experiences working with Criterion on the RenderWare AI plug-in. Now on version 4.2, Kynapse offers a number of features for 3D pathfinding, such as path planning and smoothing, as well as a number of dynamic features, such as object avoidance and spatial reasoning. It is also designed to aid in the creation of large scale AI, and includes hierarchical 3D pathfinding, automatic generation of that hierarchy, and perception mechanisms for large numbers of characters. As well as the standalone SDK, Kynapse is currently available integrated into VR-Forces, as a plug-in for Virtools, and as part of Epic's Unreal Engine 3. Recently, it has been used for Real Time Worlds' Xbox 360 title Crackdown, and will be featured in the upcoming Lionhead Studios title Fable 2, amongst others.

We spoke to Pontevia about Kynapse, the current state of AI for video games, and the company's working relationships with Epic, Sony, Nintendo and Microsoft.

When and why was Kynogon formed?

Kynogon was formed in 2000 by Jacques Gaubil and myself. First, we thought there was room for improvement in AI: the gaming industry had made tremendous efforts on rendering, and we thought that offering better interactivity, mainly physics and AI, would one day become a focus for the industry. I think this is happening with this new generation of consoles. We also believed that the core expertise of a game developer was not developing complex physics solvers or good pathfinding, but rather telling stories. So the industry needed technology experts who would help game developers focus on their core business.
As an analogy, the car makers that succeed are the ones that are able to design nice cars and integrate best-in-class technologies, not the ones that manufacture complex injection or braking systems.

What were the aims and goals of the company at this time?

Our goal has always been the same: to provide AI technology to the game development community. The only thing that has evolved is that at the very start of the company we thought we would offer services together with the technology, mainly for integration. We realized in 2002 that we were able to develop a generic architecture that made integration smooth, and services were not relevant any more. We then offered remote support, as it better suited our clients' expectations.

How did you realize the need for a product like Kynapse?

Our vision is that it is impossible for a game developer to maintain best-in-class technical expertise in all the domains that are required to develop a game. Not only does a game developer have to offer strong content but, from a technology perspective, he also has to be at the top in very complex domains such as physics, rendering, AI, audio, and so on. On top of that, the hardware platforms go through radical changes every five or six years! We thought that game developers would need help.

How would you describe the product?

We differentiate "high level AI" from "low level AI". High level AI has to do with decision making: why is a character doing this or that? Low level AI is more about how the character will do what has been decided. We believe high level AI to be more gameplay specific. Kynapse focuses on what we call low level AI, with key components such as:

- 3D pathfinding supporting very dynamic terrains, very large maps, a high number of characters, etc.

- Spatial reasoning: the pathfinding tells you how to go from A to B; spatial reasoning tells you why B is an interesting destination.
We dynamically analyze the 3D topology to identify interesting locations such as hiding places, access ways, threatening zones, etc.

- Team coordination.

- Automated production chain: AI data are automatically generated; they can be hierarchical to handle very large terrains. They can also be streamed at runtime or take advantage of next-gen hardware architectures.

What was the development time on the product, and what challenges did you run into in preparing the product for industry use?

Kynapse is the result of more than seven years of development. The first challenge was to be able to offer a product, and not just pieces of code. It took us three years to have a first version and an additional year to have a mature technology. At the beginning we were working with Criterion Software as the developer of RenderWare AI, and they helped us acquire expertise on what a product is. It is not only a set of features; it is also tools, installers, code examples, documentation, tutorials, performance, multiplatform capabilities, and so on. Another challenge was to properly understand the market's needs. AI is a misleading word. People very often understand it to mean complex decision making mechanisms (artificial life, neural networks, etc.), while what the industry needed was 3D pathfinding and spatial reasoning. The third challenge was, and always is, to reach a great performance level on game consoles. CPU and memory have always been scarce resources, and we constantly strive to offer the greatest performance to our clients.

How has the product developed over the time you've been producing it?

First, I would say that Kynogon is a client-focused company, and that is what has made our success. We have built strong support teams in multiple locations and we are very reactive to our clients' requests. We also constantly talk with game developers. For example, all our engineers have visited a client team on site at least once, and we keep them exposed to the way our clients work, to their concerns, and so on.
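The 3D pathfinding component described above is, at its core, graph search over a precomputed description of the terrain. As a purely illustrative sketch (not Kynogon's actual code, and simplified to a 2D grid), a minimal A* search of the kind such middleware builds on:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* over a 2D grid; grid[y][x] == 1 means blocked.
    Returns a list of (x, y) cells from start to goal, or None."""
    def h(p):
        # Manhattan distance: admissible heuristic on a 4-connected grid.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]      # (estimated total cost, cell)
    g_cost = {start: 0}                 # best known cost from start
    parent = {start: None}              # for path reconstruction
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    parent[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

A production system like the one described in the interview layers much more on top of this: 3D topology, path smoothing, dynamic obstacles and streaming, but the core search is the same idea.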
Even our R&D team is regularly talking to our clients via conference calls, emails, and so on. So, we discuss a lot with our clients and prospects, and they are the ones telling us what to do and where to focus. Whenever we have an idea, we usually build a small prototype to test it. For example, our latest prototype is pathfinding in extremely destructible terrains. A path is computed over the rubble of hundreds of objects. A single object is not blocking per se; it is the rubble of many objects that creates the topological complexity. It is still a proof of concept, and we are testing the market.

How have you acted on feedback to improve Kynapse?

Again, we are very attentive to our clients' and prospects' requirements, and the main input to the Kynapse roadmap comes from them. The alchemy is then being able to integrate their input into our roadmap. We have to be flexible and reactive to market expectations, but we have to keep our roadmap consistent; otherwise there would be a risk of not delivering. It is a constant discussion between our development and marketing teams.

How does the product work on a technical level?

Kynapse relies on its unique capability of automatically computing a 3D topological description of the world, called the "pathdata", out of the collision world, and using it efficiently at runtime. Once a non-player character (NPC) is aware of its 3D surroundings, it becomes possible to provide it with 3D pathfinding and 3D spatial reasoning capabilities. More specifically, on 3D pathfinding, Kynapse provides a very strong framework able to deal with very dynamic terrains, streamed data, computations spread over several processors, flying NPCs, and so on. Finally, as it is an SDK, Kynapse also gives the user a lot of freedom to customize parts of the code, called the modifiers, to perfectly match the specific requirements of the game.

What does your status as a middleware partner with Microsoft and Sony involve?

It is key for us to provide multiplatform technology.
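The two ideas Pontevia pairs here, pathdata derived from the collision world, then spatial reasoning run over it, can be sketched in miniature. Everything below is hypothetical and invented for illustration (a 2D grid standing in for 3D geometry, a toy "hiding place" score), not the Kynapse API:

```python
def generate_pathdata(is_blocked, width, height):
    """Offline 'pathdata' pass, hypothetical and simplified to a 2D grid:
    sample the collision world once and record which cells are walkable."""
    return [[not is_blocked(x, y) for x in range(width)] for y in range(height)]

def hiding_spots(pathdata, threat, max_spots=3):
    """Toy spatial reasoning: rank walkable cells that sit next to blocking
    geometry (cover) and are far from the threat's position."""
    h, w = len(pathdata), len(pathdata[0])
    scored = []
    for y in range(h):
        for x in range(w):
            if not pathdata[y][x]:
                continue
            # A cell offers cover if at least one 4-neighbour is blocked.
            cover = any(
                0 <= x + dx < w and 0 <= y + dy < h and not pathdata[y + dy][x + dx]
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            )
            if cover:
                dist = abs(x - threat[0]) + abs(y - threat[1])
                scored.append((dist, (x, y)))
    scored.sort(reverse=True)  # prefer cover cells farthest from the threat
    return [cell for _, cell in scored[:max_spots]]
```

The point of the split is the same one the interview makes: the expensive topological analysis happens in a production step, so the runtime queries ("where can I hide from this threat?") stay cheap.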
For example, we had a PS3 launch title with Sega, and we had to be very reactive every time a new version of the hardware came out. Our clients target multiplatform titles to optimize revenues, so we have to provide multiplatform technology. Microsoft and Sony help us get the best out of their hardware. This new generation of consoles required a lot of work, not only on the algorithmic side, to make all our key algorithms "batchable", but also on the data side, to be compatible with the memory constraints of the PS3, for example.

Why has the partnership with Nintendo not gone beyond the GameCube?

We have a very good relationship with Nintendo, and I hope we will have Kynapse on the Wii soon.

How has the integration of Kynapse into Unreal Engine 3 affected the development of the product?

First, Kynapse has been designed to be easily integrated. We have had a lot of different experiences with various Kynapse integrations; some of them, in serious gaming, have been quite strange. On the other hand, Unreal Engine 3 is an open engine that readily welcomes external modules and customization. So integrating Kynapse into Unreal Engine 3 was pretty straightforward and did not require massive work. We did not have to completely reshuffle our roadmap! In fact, it helped us: it offered us a very good new testing environment, which is something we are always looking for. Although I must say that we also frequently use Trinigy's Vision engine for our examples and tutorials.

Do you feel your product works best when combined with other middleware?

Kynapse is a generic AI engine. It has been designed for specific hardware consoles, not for specific environments. Some of our clients use proprietary engines, others use Unreal; some use Havok, others PhysX, and so on. We are middleware-agnostic. More generally, Kynapse has been designed for easy integration. A good example of this is the integration of Kynapse within Unreal Engine.
We are part of Epic's Integrated Partner Program, and we offer an integration of Kynapse within Unreal Engine. A development team can leverage Kynapse's full set of functionalities and tools very quickly. There are three levels of integration: low level integration, data generation integration, and integration in the game editor.

Low level integration is where we exchange information with the game engine, the physics engine and the animation system. The interfaces are clearly identified, and the glue is very easy to put in place; it usually takes less than a day to do this low level integration. For data generation integration, Kynapse offers automatic generation of the AI data, the data that will be used by our pathfinding or spatial reasoning algorithms. This is a service that can be integrated into the game production tool chain. In Unreal, data are generated directly within Unreal Editor, and they can be manipulated as packages like any other Unreal data. At the game editor level, the integration really depends on how the developer works and what he wants to achieve. For example, within Unreal Editor, the developer can call Kynapse seamlessly, for example using Kismet or the "MoveToward" function.

What are some of the more notable examples of the product's use?

Alone in the Dark next-gen, Battlefield 2: Modern Combat, Black, Cell, Crackdown, Fable 2, Fall of Liberty, Rogue Warrior, Sacred 2, Sonic the Hedgehog next-gen and The Lord of the Rings Online: Shadows of Angmar.

Who is currently using the product?

Activision, Bethesda Softworks, Digital Illusions CE, Electronic Arts, Lionhead Studios, Sega, Spark Unlimited and Turbine.

What do you see as the next evolution of Kynapse?

From a pure AI perspective, we have been working in three directions:

- Multicore/multithread: we have been working a lot, and keep on working, on Kynapse algorithms and data so that they are optimal for next generation hardware.
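The "low level integration" described above essentially means handing the AI SDK a small set of callbacks into the host engine's subsystems. A hypothetical sketch of what such glue might look like (all names invented; this is not the actual Kynapse or Unreal Engine API, and "MoveToward" here is only a toy stand-in):

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class EngineGlue:
    """Hypothetical adapter an AI SDK might ask the host engine to supply,
    one hook per subsystem the interview mentions."""
    get_position: Callable[[int], Vec3]            # game engine: where is the entity?
    raycast_blocked: Callable[[Vec3, Vec3], bool]  # physics: is the segment obstructed?
    play_move: Callable[[int, Vec3], None]         # animation: start moving the entity

def move_toward(glue: EngineGlue, entity_id: int, target: Vec3) -> bool:
    """Toy move request: only hand off to animation if physics says the
    straight line to the target is clear. Returns whether a move was issued."""
    start = glue.get_position(entity_id)
    if glue.raycast_blocked(start, target):
        return False
    glue.play_move(entity_id, target)
    return True
```

Because the SDK only touches the engine through these few clearly identified hooks, wiring it into a new engine is mostly a matter of implementing the adapter, which is consistent with the "less than a day" claim for this integration level.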
Offering the best performance to our clients is a never-ending task.

- AI in dynamic terrains: the tremendous progress that has been made on the physics side requires advanced AI. Spatial reasoning and pathfinding must be able to handle very dynamic, destructible terrains, with rubble for example.

- Large scale AI: for true immersion of the player, worlds need to be alive. Cities, malls, metro stations and so on need to be populated with sometimes thousands of credible pedestrians, cars, buses, animals, etc.

I also believe that a tighter integration between physics, AI and animation is the future. As an example, if you want to properly animate a character, you need to use all three techniques: animation to give a style to your character; physics to take into account the Newtonian forces that can affect the character's skeleton (if a ball hits the character on the shoulder or in the belly, the effect will not be the same); and AI to include intention in the animation (the character may move his hand to catch the ball).
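The "large scale AI" direction leans on the hierarchical AI data mentioned earlier in the interview: route first through coarse sectors, so fine-grained search only ever runs inside a bounded region. A minimal, purely illustrative sketch of that top level (invented example; not the Kynapse implementation):

```python
from collections import deque

def sector_route(sector_graph, start_sector, goal_sector):
    """Top level of a hierarchical pathfinder: breadth-first search over a
    coarse graph of terrain sectors. The detailed search then only needs to
    run between adjacent sectors on this route, which keeps per-frame cost
    bounded even on very large, streamed terrains."""
    frontier = deque([start_sector])
    parent = {start_sector: None}
    while frontier:
        s = frontier.popleft()
        if s == goal_sector:
            route = []
            while s is not None:
                route.append(s)
                s = parent[s]
            return route[::-1]
        for neighbour in sector_graph.get(s, ()):
            if neighbour not in parent:
                parent[neighbour] = s
                frontier.append(neighbour)
    return None
```

With thousands of characters, this two-level split is also what makes the crowds Pontevia describes plausible: most agents only ever pay for the cheap coarse query.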

About the Author(s)

Alistair Wallis


Alistair Wallis is an Australia-based freelance journalist and games industry enthusiast. He is a regular contributor to Gamasutra.

