While these are just Unity's suggestions, they may help jumpstart some useful conversations within game development teams about how and why advanced AI tools and applications are used.

Alex Wawro, Contributor

November 28, 2018

The team at Unity published a brief blog post today outlining six principles they believe should guide the development of "ethical" artificial intelligence tools and systems.

While these are just suggestions, they may help jumpstart some useful conversations within game development teams about how and why advanced AI tools and applications (like machine learning and natural language processing) are used. 

However, Unity's main focus seems to be on AI used outside the game industry, in fields like healthcare, engineering, and media. That makes sense given Unity's ongoing efforts to push its game development tools into the hands of creatives in other industries, as well as the comparative scarcity of cutting-edge AI techniques in game dev, where AI is typically designed not to excel at its task but to foster a good experience for players.

"These principles are meant as a blueprint for the responsible use of AI for our developers, our community, and our company," reads an excert of the blog post. "We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology."

Without further ado, here are Unity's six guiding principles for AI:

  • Be Unbiased.
    Design AI tools to complement the human experience in a positive way. Consider all types of human experiences in this pursuit. Diversity of perspective will lead to AI complementing experiences for everybody, as opposed to a select few.

  • Be Accountable.
    Consider the potential negative consequences of the AI tools we build. Anticipate what might cause potential direct or indirect harm and engineer to avoid and minimize these problems.

  • Be Fair.
    Do not knowingly develop AI tools and experiences that interfere with normal, functioning democratic systems of government. This means saying no to product development aimed at the suppression of human rights, as defined by the Universal Declaration of Human Rights, such as the right to free expression.

  • Be Responsible.
    Develop products responsibly and do not take advantage of your products’ users by manipulating them through AI’s vastly more predictive capabilities derived from user data.

  • Be Honest.
    Trust the users of the technology to understand the product’s purpose so they can make informed decisions about whether to use the product. Be clear and be transparent.

  • Be Trustworthy.
    Guard the AI derived data as if it were handed to you by your customer directly in trust to only be used as directed under the other principles found in this guide.

This comes well over a year after Google DeepMind formed its own research group, the DeepMind Ethics & Society unit, to explore and address the big questions posed by cutting-edge AI development. That effort continues even as DeepMind trains AI agents to excel at games like Quake III Arena, StarCraft, and Go.
