In a recent opinion article entitled The Problem Of Choice, James Portnow posited that there are two types of decisions a player can be faced with in a game: "problems" and "choices".
The former is something that involves a "right answer" such as a mathematically optimal solution. Therefore, theoretically, it can be solved. We are all familiar with such challenges in games -- especially when designers make them all too transparent. The other type of decision is the "choice".
These are a little more amorphous in that there is no "right answer". Games such as BioShock (i.e. the Little Sisters) have these elements, but others such as Black & White and the two Fable games are rife with them. In the latter games, in fact, the entire game mechanic is built upon the idea of "make a choice and change the whole experience."
While I agree with the excellent points that James made, I believe that this same mentality can be extended to the realm of AI as well. In fact, I made this point in my lecture, Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters at the AI Summit at GDC a few weeks ago.
Specifically, I suggested that the incorporation of differences between characters can enable game design choices for us as developers which, in turn, enables gameplay choices for our audience. However, it is not simply the incorporation of personality, mood, and emotion that does this. It is often even simpler than that.
As programmers, we deal in a world of algorithms. An algorithm is, by definition, a series of steps designed to solve a particular problem. Even the ubiquitous yet humble A* pathfinding algorithm is sold as guaranteeing to "return the *shortest* path to a goal if a path exists." The emphasis is mine. It returns the shortest path -- the best decision.
Now that we are using A* for other purposes, such as sifting through planning algorithms to decide on non-path-related actions, we are subscribing to the same approach. What is the best action I can take at this time? Unfortunately, that leads our AI agents down the same path as the player... "how can I solve this game?" The simple fact that our agents are looking for the one solution necessarily limits the variety and depth that they are capable of exhibiting.
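To make that mindset concrete, here is a minimal sketch of the usual "best decision" pattern: score every available option, then always commit to the single highest-scoring one. The actions and utility scores are hypothetical illustrations, not from any particular game.

```python
def pick_best(actions, score):
    """Always return the single highest-scoring action."""
    return max(actions, key=score)

# Hypothetical utility scores for an agent's current options.
scores = {"attack": 0.9, "flank": 0.7, "take_cover": 0.5, "taunt": 0.1}

action = pick_best(scores.keys(), scores.get)
# -> "attack", every single time.
```

Given identical scores, every agent built this way "solves" the decision identically -- which is exactly the predictability problem described above.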
The irony involved here is that, in designing things this way, we cause our agents to approach something that should be a choice (as defined by Portnow) and turn it into a problem (i.e. something that can be solved). Whether there is any "best" decision or not, our agents believe that there is... "belief" in this case coming in the form of whatever decision algorithm we happened to design into their little brains.
The solution to this is not necessarily technical. It is more of a willingness by designers and AI programmers to allow our agents to either
a) not make the "best" decision all the time, or
b) include decisions in the design to which there is no "best" solution at all.
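One simple way to get option (a) without abandoning our utility scores is weighted random selection: strong options are chosen most often, but they stop being inevitable. This is a sketch under the same hypothetical scores as before, not a prescription for any specific engine.

```python
import random

def pick_weighted(scores):
    """Choose an action with probability proportional to its score,
    so better options dominate without becoming guaranteed."""
    actions = list(scores)
    weights = [scores[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

# Hypothetical utility scores for an agent's current options.
scores = {"attack": 0.9, "flank": 0.7, "take_cover": 0.5, "taunt": 0.1}

action = pick_weighted(scores)
# "attack" is still the most likely pick, but two agents with
# identical scores can now diverge -- and so can two playthroughs.
```

The design knob here is the shape of the weights: squaring the scores before weighting pulls behavior back toward "best", while flattening them pushes it toward pure variety.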
Unfortunately, we have established a sort of industry meme that "we can't allow our agents to do something that is not entirely predictable". We are afraid of losing control. Here's a startling tip, however... if we can predict what our agents are going to do, so can our players! And I nominate predictability as one of the worst air leaks in the tires of replayability.
One of the quotes that I used in that lecture and in my book on behavioral AI is from Sid Meier, who suggested that a game is "a series of interesting choices". It is a natural corollary that in order for the player to make interesting choices, he needs interesting options (not math problems). One of the ways that we can present those interesting options is to allow our agents to make interesting choices (not solve problems) as well.