
GamerGPT: What to Consider When Considering Generative AI in Gaming

We briefly cover how AI tech and law intersect with generative AI in video games, starting with a survey of issues related to the right of publicity, intellectual property (“IP”), and privacy.

Andrew Tibbetts, David I. Schulman, and Stephanie Perron

November 15, 2023


As generative AI continues gaining prominence, video game studios are assessing how to leverage it. What these tools can do and how they fit into existing workflows are key considerations. However, new opportunities can be accompanied by legal implications that need to be weighed and managed. Below, we briefly cover how AI tech and law intersect with generative AI in video games, starting with a survey of issues related to the right of publicity, intellectual property (“IP”), and privacy.

The rapid and ongoing innovation in generative AI has left legal analysis rushing to catch up. Multiple currently pending lawsuits are shaping new law on generative AI. Terms of service are shifting as companies adapt to user behavior. We provide here a “snapshot” of issues in the current U.S. landscape, based on terms of service for several generative AI tools frequently used in gaming, and on legal circumstances in the U.S., both current as of mid-2023.

Generative AI Tool Capability and Functionality

Given popular misconceptions, it may be helpful to cover first what generative AI tools are and how they operate. Generative AI tools include one (or more) models trained using pre-existing data. Most tools then accept input data that is statistically analyzed with the models to generate a particular output. Depending on the content and format of the input data, and the statistical analysis that is undertaken, different outputs are generated.

The output of generative AI programs is thus often a result of a probability process run on the inputs and based on the training data. Like any statistical analysis, sometimes the result is correct—and sometimes it is not. Generating a correct result depends on complete and correct input, as well as properly formatting the input for the tool. An imperfect input can generate incorrect output. Further, generating a correct output, even for a correct input, also depends on the system having been trained on a sufficient amount of high-quality examples that correspond to the particular task the user is looking to perform. With insufficient training data, even for a correct input, an incorrect or undesirable output may be generated. Accordingly, though generative AI tools will often generate some output for a given input, the output may not be correct or align with the user’s goal.
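The probabilistic process described above can be illustrated with a deliberately tiny bigram model (all names and the toy corpus below are our own illustration, not any particular tool; real systems use vastly larger models, but the principle that output is sampled from probabilities learned from training data is the same):

```python
import random
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which in the training sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, prompt_word, length=5, seed=0):
    """Sample a continuation word by word from the learned probabilities."""
    rng = random.Random(seed)
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            # No training data for this context: the model cannot continue.
            break
        choices, weights = zip(*followers.items())
        word = rng.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

corpus = ["the player opens the door", "the player draws the sword"]
model = train(corpus)
print(generate(model, "the"))
```

Note that a prompt the model has never seen (say, a word absent from the corpus) produces no continuation at all, and every continuation it does produce is stitched together from patterns in the training data: a miniature version of why training data coverage and quality drive output quality.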

As an initial matter, then, it is important to understand what training data was used to train a tool, and what analysis the tool is able to perform, to avoid misusing the tool or choosing the wrong tool for a task. Additionally, it is important to note that while AI often gives good results, it also often gives bad ones and shouldn’t be used without review and oversight.

Companies should also be cautious with generative AI with respect to functional impact, in order to avoid potential defects and substandard product performance based on the use of generative AI. AI-generated audio/visual (“A/V”) content could include hard-to-detect errors. Importantly, content derived from generative AI may not be as efficient as content from other sources, such as efficiency in file size for A/V content, or efficiency in execution speed or execution memory usage for code. When a large amount of A/V content is to be generated, source files may become too large. Code generated by AI may not be adapted to run in particular target environments. A good rule of thumb is to use AI content as a starting point, not a finished product.

Right of Publicity

The right of publicity should be top of mind with gaming companies given this year’s actor and writer strikes. Some AI tools have the capability of replicating the voice of a person, or creating A/V content that may look like or mimic a person. Doing so may raise right of publicity questions. Companies should therefore obtain explicit permission if they are going to mimic a person’s visual or audible appearance in any way. If a stock photo is to be used, the stock photo license should be reviewed to determine how the license relates to usage of a generative AI tool.

Intellectual Property

Generative AI tools raise a variety of IP concerns. Below, we briefly cover different categories of concern.

Rights in Data You Provide the Tool

For each tool and each input to be provided, you should understand what rights you are granting and how that affects your rights, and the tool provider’s rights, in the input. Terms of service for generative AI tools typically identify the rights that the user providing input (a prompt, perhaps together with initial data) and the tool provider receiving that input will each have with respect to it. If a tool can retain your input and receives a broad license to it, you may lose the ability to control how your data is used. This is a particular concern if the terms give the tool provider proprietary rights. There is also risk if the tool can use inputs for training, since any training data could potentially be output in response to a future input, which could result in input data being provided to competitors. The risk may be small in many cases, but it is a particular concern if trade secret data is input to a tool, because doing so may jeopardize the data’s “secret” status. In addition, rights grants to tool providers may run afoul of agreements with the sources of the input data.

Your Rights in Data Outputs

Before relying on a tool, companies should understand their rights with respect to output. When a generative AI tool produces output, the tool’s terms of service will control ownership and licenses for that output. Ideally, a company would retain full ownership of any output it generates via the tool. While some tools offer full ownership, others grant only a license to the output. One tool grants ownership unless the user’s company has gross revenue over US$1M, in which case it grants only a noncommercial license unless a paid account is obtained. It is also important to know whether output usage rights (ownership or license) are exclusive, particularly if the intent is to commercialize the output: the ability to stop others from using outputs may be limited if the tool grants only non-exclusive rights.

Importantly, there could be questions regarding the ability to protect IP rights in output. U.S. tribunals have held that copyright only extends to expressive content created by a human. Where a human was not involved in creation, that content may not be copyrightable. If output content cannot be copyrighted, others may be free to use it. As such, to limit risk for key game content, limit AI use or clearly document how a human provided creativity to the work done by AI.

Patent questions may be less likely to arise. Patents cover functionality, and inventorship rights attach to the humans who conceive of the inventive aspects of the functionality. Even if generative AI were used to create functional aspects of a game (e.g., software code), arguably that code would be merely implementing a task already identified by an individual and input to a model, and thus the inventive work would have been done by that individual. Nonetheless, it is a risk to be aware of.

Tool Provider Rights in Data Outputs

Terms of service will also control rights a tool provider receives or can grant to others with respect to output data. Some tools receive rights to use the output solely for managing the tool and quality assurance, while others use it in training, and still others may share output with third parties. It is important for companies to understand the tool providers’ rights and the rights they may provide to others, particularly if it is a company’s intent to use generative AI to produce content that will be used commercially. As mentioned above, if the tool provider has rights to the applicable content, that may compromise the commercial value. That may particularly be the case when the tool can grant third parties rights in the output, which can give rise to the possibility of competitive products.

Third-Party Rights in Data Outputs

Generative AI tools often rely on training data obtained from third parties, and generative AI output may mimic that training data either by duplicating it or mirroring some aspects. This raises concerns if the output includes copyrighted training data. This question has not been squarely addressed by existing law and is the subject of multiple pending lawsuits in the U.S. in which plaintiffs allege that an entity used copyrighted data in the training of a model, and that the trained model will output data that mirrors or is akin to that copyrighted data. Cases are blazing new law over whether unlicensed training of a model with copyrighted data is activity subject to copyright (e.g., whether it’s not copyright-relevant activity, or falls within fair use, or is copyright infringement, and so on), whether any outputs from such a model may also be subject to the original copyright (e.g., as derivative works), and whether use of any such outputs should require a license from the original author. Cases have arisen with respect to model training around software code, around artwork, and more, and have not yet been decided. It is uncertain how the courts will rule on these issues. However, as a potential means of mitigating these risks, some generative AI tools make available “filtering” tools that detect whether output matches training data. Companies should use these tools if available. This risk exists primarily when the output of generative AI mimics training data. This risk may be lower in scenarios where there is a large amount of relevant training data for the generative AI and the input does not ask for example content (e.g., of a particular author/artist) to be replicated.
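Provider-side filters vary and their internals are not public; as a rough sketch of the underlying idea only (the function names, the reference list, and the n-gram threshold below are our own assumptions, not any provider’s actual filter), a simple check flags generated output that shares long word sequences with known reference text:

```python
def ngrams(text, n=5):
    """All word n-grams in a text, for overlap checking."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_overlap(output, reference_texts, n=5):
    """Return reference texts sharing any n-word run with the output.

    Real filters are more sophisticated and compare against the actual
    training set; this only illustrates screening generated content
    against reference material before commercial use.
    """
    out_grams = ngrams(output, n)
    return [ref for ref in reference_texts if out_grams & ngrams(ref, n)]

reference = ["once upon a time in a kingdom far away"]
print(flag_overlap("the hero lived once upon a time in a kingdom far away", reference))
print(flag_overlap("a completely new adventure begins today", reference))
```

Even a crude screen like this reflects the point above: the risk is concentrated where output reproduces identifiable runs of source material, so checking for such runs (by whatever mechanism a provider offers) is a sensible mitigation step.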

Privacy, Security and Confidentiality

When using generative AI tools where data is to be provided as input, companies should confirm that the sharing of the data to the tool complies with law and is consistent with the company’s applicable data sharing policies. Particular sensitivity should be given to sharing personally identifiable information, financial information, and other individual information.

Companies should also be aware of how the information provided to a generative AI tool will be shared and retained by the tool, including whether it is encrypted when communicated or stored and who may access the information while it is stored by the tool.

Inputting information into an AI tool should be treated similarly to disclosure to any third party, and the same steps should be taken to ensure that confidential or trade secret information is shared only on restrictive legal terms and only when necessary. Absent these precautions, there is a risk of disclosure of confidential information and loss of trade secret status.

Next Steps

Generative AI will play an important role in many industries in coming years, including in gaming. Game studios, if they aren’t already, will look at how to build this tech into their workflows and weigh how best to leverage it. In doing so, they should account for the legal and commercial risks that will arise, to avoid an opportunity turning into a problem down the line.

Author Bios

Andrew (A.J.) Tibbetts is a shareholder in the Intellectual Property & Technology practice group in Greenberg Traurig’s Boston office. He leverages prior experience as a software engineer to provide practical IP strategy counseling on matters related to computer- and electronics-implemented tech across a range of industries, including in healthtech, life sciences AI, computational biology, medical records analysis/coding, medical devices, and more. He can be reached at [email protected].

David I. Schulman is co-chair of Greenberg Traurig’s Video Games & Esports practice group and a senior attorney in the firm’s Technology, Media & Telecommunications practice. David’s practice encompasses a wide array of corporate legal services, and includes representation of publicly and privately held businesses, entrepreneurs, venture capitalists and lenders in all aspects of their commercial transactions. He can be reached at [email protected].

Stephanie Perron, of counsel in Greenberg Traurig’s San Francisco office, focuses her practice on transactional intellectual property matters in a wide variety of industries, including the technology, life sciences, online gaming, and consumer product industries. She has experience drafting and negotiating software and trademark license agreements, professional service agreements, technology development agreements, joint development agreements, promotion and marketing agreements, video game publishing and distribution agreements, and nondisclosure and proprietary information agreements. She can be reached at [email protected].
