Pawn Captures Wyvern: How Computer Chess Can Improve Your Pathfinding
There have been numerous papers published recently on how to improve the A* pathfinding algorithm. Some of these papers have come from ex-computer chess practitioners, who have been dealing with optimizing different search algorithms for the last thirty years. This paper will attempt to summarize some of these enhancements to A*, and show why you should consider a "computer chess"-style approach the next time you have to implement A*.
Editor's note: This paper was originally published in the 2000 Game Developers Conference proceedings.
1. Introduction
Most of you with Computer Science training have probably been through the typical Artificial Intelligence lecture on search and planning. You are shown A*, with a trivial example (so your professor doesn't get lost while working through it) that shows all of the various parts of A*. You've also sat through the proof of why A* generates an optimal solution when it has an admissible heuristic. If you're really lucky, you get to implement A* in Lisp or Prolog in an assignment, and solve a puzzle involving sliding tiles.
Jump ahead a few years, and you've been given the task of implementing a pathfinding algorithm for your game. You sift through your notes, trying to remember why you need a CLOSED list, and how to translate all the car() and cdr() instructions from Lisp into something that your lead programmer won't bring up during your next performance review. You study web sites on AI and pathfinding, try a few enhancements, and eventually come up with a solution that behaves in a very similar manner to the A* algorithm from your notes.
In an alternate universe, there are academics and hobbyists that concentrate on computer games of thought, such as chess, checkers and Othello. There are regular tournaments between programs, and the two main ways to outplay your opponent and win the game involve outsearching your opponent and having a smarter (but still computationally fast) evaluation of positions. I have heard each of these statements while chatting with other Othello programmers during tournaments. Do these statements sound like anything you've heard a programmer in your company mention?
"I don't trust C++ to generate fast code, so I'm still using ANSI C."
"I coded the inner loop in assembly. It took me two months of work, but it speeds up the program by 10%, so it was worth it."
"I've had about eight hours of sleep in 72 hours, but I've improved the performance."
Computer chess programmers have been dealing with a search algorithm (cryptically called αβ, or alpha-beta) for the last 25 years. They have a library of standard enhancements that they can use to enhance αβ and improve the performance of their programs without having to resort to learning MIPS processor machine language, or trying to acquire knowledge about what sort of positions their program handles poorly.
Academics involved in the field often cited the desire to beat the World Chess Champion in a game of chess to justify their research funding. However, IBM and Deep Blue brought the funding train to a screeching halt. Most have moved on to games that are significantly harder for the computer to do well at, such as Poker, Bridge and Go. However, others realized that A* search really is not all that different from αβ.
When we cast aside the superficial differences between the two algorithms, we quickly discover that A* and αβ are actually remarkably similar, and that the standard search enhancements from the typical computer chess program can be used in a pathfinding algorithm. We will be describing the subset of the computer-chess-based search enhancements that we use in our pathfinding code at BioWare.
Section 2 will quickly review the standard A* algorithm (so you do not have to dig out your AI lecture notes again). Section 3 will discuss the anatomy of a computer chess search algorithm, and Section 4 shows you how to put the search enhancements into A*.
2. A Really Brief Review of A*
A* [Hart 1968, Nilsson 1971] is one of the preferred methods of dealing with the pathfinding problem. A* search starts with the initial state in a main data structure known as the OPEN list. The CLOSED list represents the positions that the algorithm has already examined, and is initially empty. For each node within the OPEN and CLOSED lists, A* maintains two heuristic values: g(n), the minimum cost found so far for travelling from the initial state to n, and h(n), an estimate of the cost of travelling from n to a goal state. Thus, the best node to examine at any point in the algorithm is the one with the lowest estimated total cost: f(n) = g(n) + h(n).
The A* algorithm is an iterative process. In each step, A* takes the best state s from the OPEN list and moves it to the CLOSED list. The successors of the best state, si, are generated and examined in turn. If a successor si does not appear in either the OPEN or CLOSED list, then si is added to the OPEN list. However, if si already appears in either list, we must check to see if the minimum cost g(n) has decreased. If g(n) decreases, the node si must be deleted from its current location and reinserted into the OPEN list.
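To make the bookkeeping concrete, here is a minimal sketch in C of that successor-processing step. The Node layout, the array-based lists and the Find() helper are illustrative assumptions; a production implementation would use a priority queue for OPEN and a hash table for CLOSED.

typedef struct Node {
    int x, y;            /* hypothetical grid coordinates identifying the state */
    int g, h;            /* cost so far and heuristic estimate; f = g + h */
    struct Node *parent; /* predecessor, for recovering the final path */
} Node;

#define MAX_NODES 4096
static Node *openList[MAX_NODES];   static int openCount = 0;
static Node *closedList[MAX_NODES]; static int closedCount = 0;

/* Linear scan for a node with the same state; real code would hash. */
static int Find(Node **list, int count, const Node *n) {
    int i;
    for (i = 0; i < count; i++)
        if (list[i]->x == n->x && list[i]->y == n->y) return i;
    return -1;
}

/* The successor step described above: insert brand-new nodes into
   OPEN, and re-open any node just reached by a cheaper path. */
static void ProcessSuccessor(Node *succ) {
    int io = Find(openList, openCount, succ);
    int ic = Find(closedList, closedCount, succ);
    if (io < 0 && ic < 0) {
        openList[openCount++] = succ;        /* never seen before */
    } else if (io >= 0 && succ->g < openList[io]->g) {
        openList[io] = succ;                 /* cheaper path to an OPEN node */
    } else if (ic >= 0 && succ->g < closedList[ic]->g) {
        closedList[ic] = closedList[--closedCount]; /* remove from CLOSED */
        openList[openCount++] = succ;        /* and reinsert into OPEN */
    }
}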
The heuristic h(n) is critical for the performance of the A* algorithm. h(n) is said to be admissible if the heuristic never overestimates the cost of travelling to the goal state. This is important because if h(n) is admissible, A* is guaranteed to generate the least cost or optimal solution the first time the goal node is generated. In the case of the typical pathfinding algorithm, h(n) is the straight line distance between the current point n and the target point.
Some of the performance information referenced in this paper refers to the sliding-tile puzzle instead of pathfinding, since this has been the most popular test in academic circles for studying A*. An example of the sliding-tile puzzle can be found in Figure 1. In the sliding-tile puzzle, the Manhattan distance (the sum of the vertical and horizontal displacements of each tile from its current square to its goal square) is an admissible and effective heuristic for use in A* search.
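As a sketch (assuming a representation where tiles[t] holds the square currently occupied by tile t on the 4x4 board), the Manhattan distance can be computed as follows:

/* Manhattan distance for the 15-puzzle. tiles[t] and goal[t] give the
   square (0..15) that tile t occupies now and in the goal position. */
int ManhattanDistance(const int tiles[16], const int goal[16]) {
    int t, dx, dy, h = 0;
    for (t = 1; t <= 15; t++) {  /* tile 0 is the blank; it is not counted */
        dx = tiles[t] % 4 - goal[t] % 4;  /* horizontal displacement */
        dy = tiles[t] / 4 - goal[t] / 4;  /* vertical displacement */
        h += (dx < 0 ? -dx : dx) + (dy < 0 ? -dy : dy);
    }
    return h;
}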
3. The Anatomy Of A Chess Program
Now that we have quickly reviewed A*, let us deal with a computer chess search algorithm.
Games such as chess, checkers and Othello belong to a broad group of games called two-player zero-sum games with perfect information. Zero-sum implies that if one player wins, the other player loses. Perfect information implies the entire state of the game is known at any time. Scrabble has hidden tiles, and is defined as a game of imperfect information.
Two-player zero-sum games with perfect information are well known to game theoreticians [von Neumann 1944]. In any position for a game in this category, an optimal move can be determined via the minimax algorithm. In its matrix formulation, however, a game like chess would require a matrix estimated to contain more entries than there are molecules in our entire planet! All hope is not lost, since there are alternative formulations of the minimax algorithm that involve searching a game tree.
3.1 Game Trees And Minimax Search
The root of a game tree represents the current state of the game. Each node in the tree can have any number of child nodes. Each child of a node represents a new state after a legal move from the node's state. This continues until we reach a leaf, a node with no child nodes, in the game tree. We assign a payoff vector to each leaf in the game tree. In a generalized game tree, this payoff vector represents the utility of the final position to both players. In general, winning a game represents a positive utility for a player, while losing a game represents a negative utility. Since the game is a two-player zero-sum game, the utility for the first player must equal the negative of the utility for the second player. To save space, only the utility for the side to move at the root of the tree is usually given.
In Figure 2, an example of a game tree for a game of Naughts and Crosses (or Tic-Tac-Toe) is given. Note that the two players take alternating turns at different levels of the tree. X moves at the root, while the opponent, O, moves at the first level below the root. A position is normally categorized by how many levels down in the game tree it is located. The common term for this is ply. The root is said to be at ply 0, while the immediate successors of the root are said to be at ply 1, et cetera.
Naughts and Crosses, like chess and checkers, has only three possible outcomes for a player: win, loss or draw. Normally, we assign the payoff of +1, 0 and -1 to a win, draw or loss for the player to move, respectively. These payoffs are given in Figure 2 at the bottom of each leaf position, with respect to the player with the crosses.
We will give names to each player to simplify our discussion. Let us call the player to move in the initial position Max and the opponent Min. At each node in the tree where Max has to move, Max would like to play the move that maximizes the payoff. Thus, Max will assign the maximum score amongst the children to the node where Max makes a move. Similarly, Min will minimize the payoff to Max, since that will maximize Min's payoff. The maximum and minimum scores are taken at alternating levels of the tree, since Max and Min alternate turns.
In this way, all nodes in the tree can be assigned a payoff or minimax value, starting from the leaves of the tree and moving up the tree towards the root. In Figure 3, we give minimax values for all nodes in our Naughts and Crosses game tree (Figure 2). These minimax values tell us what the best possible outcome for Max is in any position within the game tree, given that Min will do its best to foil Max's plans.
Once the root of the game tree has been assigned a minimax value, a best move for Max is defined as a move which leads to the same minimax value as the root of the tree. We can trace down the tree, always choosing moves that lead to the same minimax value. This path of moves gives us an optimal line of play for either player, and is known as a principal variation. Note that in our game of Naughts and Crosses, the side playing the Crosses will draw the game, but only if an X is played in the lower central square. Playing to either square in the top row can lead to a loss for the Crosses, if the opponent plays the best move.
To compute the minimax value of a position, we can use any algorithm that searches the whole game tree. A depth-first search uses less memory than a best-first or breadth-first tree search algorithm, so it is preferred in current game-tree search programs. In Listing 1, we show two C functions which are the basis of a recursive depth-first search of a game tree. By calling Maximize with a position p, we will get the minimax value of position p as the output of the function after the entire game tree has been searched.
In Listing 1, we have left out some of the details. For example, we have not defined what a position is, since this is game-dependent. There are three additional functions that would be required to implement the minimax search: (1) EndOfGame, which determines whether the game is over at the input position, returning TRUE if the game is over; (2) GameValue, which accepts a position as a parameter, determines who has won the game, and returns the payoff with respect to the player Max; and (3) GenerateSuccessors which generates an array of successor positions (p.succ[]) from the input position, and returns the number of successors to the calling procedure.
Note that Maximize() and Minimize() recursively call one another until a position is reached where the EndOfGame() function returns TRUE. As each successor of a node is explored, gamma maintains the current assessment of the position, based on all of the moves that have been searched so far. Once all successors have been examined, the minimax value for that position has been computed and stored in gamma, which can be returned to a higher level within the tree (please refer to Listing 1).
Listing 1. The Code For The Minimax Algorithm
/* max() and min() helpers, assuming they are not provided elsewhere */
#define max(a,b) ((a) > (b) ? (a) : (b))
#define min(a,b) ((a) < (b) ? (a) : (b))
int Maximize(position p) {
int numOfSuccessors; /* total moves */
int gamma; /* current maximum */
int i; /* move counter */
int sc; /* score returned by search */
if (EndOfGame(p)) { return GameValue(p); }
gamma = -4; /* below any possible payoff; acts as -infinity */
numOfSuccessors = GenerateSuccessors(p);
for (i=1; i <= numOfSuccessors; i++) {
sc = Minimize(p.succ[i]);
gamma = max(gamma, sc);
}
return (gamma);
}
int Minimize(position p) {
int numOfSuccessors; /* total moves */
int gamma; /* current minimum */
int i; /* move counter */
int sc; /* score returned by search */
if (EndOfGame(p)) { return GameValue(p); }
gamma = +4; /* above any possible payoff; acts as +infinity */
numOfSuccessors = GenerateSuccessors(p);
for (i=1; i <= numOfSuccessors; i++) {
sc = Maximize(p.succ[i]);
gamma = min(gamma, sc);
}
return (gamma);
}
The minimax algorithm can also determine which move yields the score gamma, and return that up the tree as well. However, there is only one place we are interested in the move choice: the root of the game tree. We could write a special version of Maximize that returns a best move and the minimax value.
This formulation requires exactly the same amount of work as the matrix formulation did, but further pruning can be done on this tree. The αβ (alpha-beta) algorithm [Knuth 1975] improves on the typical minimax algorithm by passing bounds down throughout the tree, and can prune off branches that can be shown to have no relevance to the minimax value of the game tree. With the αβ algorithm, one can search the optimal (minimal) number of terminal positions required to determine the minimax value (if one always searches the best move first at every node)! In practice, that doesn't happen (why would you need to search if you already knew the best move?), so there are variants of αβ such as NegaScout [Reinefeld 1983] and MTD(f) [Plaat 1996] that have been shown to be significantly better than αβ in practice.
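To make the pruning concrete, here is a hedged sketch of an αβ version of Maximize() and Minimize() from Listing 1, reusing the same helper functions. The bounds alpha and beta are the best scores already guaranteed for Max and Min respectively; since the payoffs here lie in {-1, 0, +1}, the initial call would be MaximizeAB(root, -2, +2).

int MinimizeAB(position p, int alpha, int beta); /* forward declaration */

/* Maximize() from Listing 1, extended with alpha-beta bounds. */
int MaximizeAB(position p, int alpha, int beta) {
    int numOfSuccessors, i, sc;
    if (EndOfGame(p)) { return GameValue(p); }
    numOfSuccessors = GenerateSuccessors(p);
    for (i = 1; i <= numOfSuccessors; i++) {
        sc = MinimizeAB(p.succ[i], alpha, beta);
        if (sc > alpha) { alpha = sc; }
        if (alpha >= beta) { break; }  /* cutoff: Min will avoid this line */
    }
    return alpha;
}

int MinimizeAB(position p, int alpha, int beta) {
    int numOfSuccessors, i, sc;
    if (EndOfGame(p)) { return GameValue(p); }
    numOfSuccessors = GenerateSuccessors(p);
    for (i = 1; i <= numOfSuccessors; i++) {
        sc = MaximizeAB(p.succ[i], alpha, beta);
        if (sc < beta) { beta = sc; }
        if (alpha >= beta) { break; }  /* cutoff: Max will avoid this line */
    }
    return beta;
}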
However, it would still take a modern computer millions of years to evaluate the full game tree for the game of chess if one had to go all the way to the terminal nodes. How can we control the size of the search?
3.2 Iterative Deepening
One method that most practitioners employ is to search the tree to a fixed depth, k ply from the root node, and use an approximate evaluation of the nodes at that depth in place of the true minimax value. However, the nature of the pruning algorithms (such as NegaScout and MTD(f)) yields game trees that can vary widely in size at the same nominal depth. Computer chess is played under real time limits, and if one exceeds those time limits, the game is lost, so having an algorithm that can produce a rational decision at any time is very important. Thus, a technique called iterative deepening [Scott 1969] is used.
The idea is that the algorithm should be limited to exploring a small search depth k by forcing evaluations of nodes once they reach that depth. Once that search is done, the limit k can be moved forward by a step s, and the search can be repeated to a depth of k+s. In chess programs, k and s usually equal 1. Thus, the program does a 1-ply search before doing a 2-ply search, which occurs before the 3-ply search et cetera.
Scott noted that there is no way of predicting how long an αβ search will take, since it depends heavily on the move ordering. However, by using iterative deepening, one can estimate how long a (k+1)-ply search will take, based on the length of the preceding k-ply search. Unfortunately, the prediction may still be far from the actual value. In some cases, a real time constraint (such as a time control in a chess game) may necessitate aborting the current search. Without iterative deepening, if a program has not finished a search when the time constraint interrupts the search, the program may play a catastrophic move. With iterative deepening, we can use the best move from the deepest search that was completed.
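A minimal sketch of such a driver follows; SearchRootToDepth() and OutOfTime() are hypothetical helpers, not functions from any particular program.

int SearchRootToDepth(position root, int depth); /* hypothetical depth-limited search */
int OutOfTime(void);                             /* hypothetical time check */

#define MAX_DEPTH 64
int bestMove; /* move from the deepest fully completed iteration */

void IterativeDeepening(position root) {
    int depth, move;
    for (depth = 1; depth <= MAX_DEPTH; depth++) {
        move = SearchRootToDepth(root, depth);
        if (OutOfTime()) { break; }  /* discard the aborted iteration */
        bestMove = move;             /* keep the last completed result */
    }
}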
Other benefits were explored by Slate and Atkin in their Chess 4.5 program [Slate 1977]. They discovered that there were many statistics that could be gathered from a search iteration, including the principal variation. The principal variation of a k-ply search is a good starting place to look for a principal variation of a (k+1)-ply search, so the principal variation from the k-ply search is searched first at depth (k+1). This improves the ordering of the moves in the (k+1)-ply search. Usually, the number of bottom positions explored for all of the searches up to depth d with iterative deepening is significantly smaller than attempting a d-ply search without iterative deepening.
3.3 Transposition Tables
Specific information about a search can be saved in a transposition table [Greenblatt 1967].
In the minimax algorithm given in Listing 1, all of the information about a node can be accumulated, including the best score, the best move from that position, and the depth to which it was searched. All of this information is commonly stored in one transposition table entry. Transposition tables are normally constructed as closed hash tables, with hashing functions that are easy to update (such as a number of XOR operations) as one traverses the tree. The transposition table information can be used in two main ways: duplicate detection and move ordering.
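The XOR-based hashing mentioned above is commonly implemented as Zobrist hashing. The sketch below shows one possible entry layout and the O(1) incremental hash update; the field names and table size are illustrative assumptions, not taken from any particular program.

#include <stdint.h>

/* One entry in the transposition table (a closed hash table). */
typedef struct {
    uint64_t key;   /* full hash key, to verify the stored position */
    int score;      /* best score found for the position */
    int bestMove;   /* best move found (encoding is game-specific) */
    int depth;      /* depth to which the position was searched */
} TTEntry;

#define TT_SIZE (1 << 20)   /* illustrative size; must be a power of two */
static TTEntry table[TT_SIZE];

/* Zobrist-style hashing: one random 64-bit code per (piece, square)
   pair, filled with random bits at startup. Making a move XORs out
   the moving piece's old square and XORs in its new one. */
#define PIECE_TYPES 12  /* six piece types per side in chess */
#define SQUARES     64
static uint64_t zobrist[PIECE_TYPES][SQUARES];

uint64_t HashAfterMove(uint64_t hash, int piece, int from, int to) {
    return hash ^ zobrist[piece][from] ^ zobrist[piece][to];
}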
Why would we need to detect duplicates in a game tree? In reality, the game tree is a graph; some of the positions appear in multiple places within the tree. Thus, it makes sense that each position should only be explored once if the information obtained is sufficient to terminate the search. The transposition table assists in finding and eliminating these duplicated positions.
The same position in the game will always hash to the same location in the transposition table. What if the entry stored in the table corresponds to the current position, and the stored result comes from a search at least as deep as the one we are attempting to execute? If we have an exact minimax value in the hash table from a search at least as deep as the one to be executed, we can use the result from the hash table and prune the entire search.
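Using the TTEntry layout sketched above, the probe looks roughly like this (the exact-versus-bound score bookkeeping of a real chess program is omitted for brevity):

/* Probe before searching: if this exact position was already searched
   at least as deeply as we intend to, reuse the stored result. */
int ProbeTable(uint64_t hash, int depthRemaining, int *score) {
    TTEntry *e = &table[hash & (TT_SIZE - 1)];
    if (e->key == hash && e->depth >= depthRemaining) {
        *score = e->score;
        return 1;   /* hit: the caller can prune this entire subtree */
    }
    return 0;       /* miss: the subtree must be searched */
}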
Most of the time, the duplicate detection will fail to completely eliminate the search, and we can exploit the transposition table to improve our move ordering. In the games we are studying, the best move from a previous search depth is likely to be the best move at the current search depth. Thus, we can obtain the previous best move from the transposition table, and search the previous best move before all others. In general, the move ordering benefits of combining iterative deepening and the transposition table are at least as important to the node count as the duplicate detection property, depending on the application chosen.
3.4 The History Heuristic
The transposition table only offers move ordering information about a single move in the move list. The history heuristic [Schaeffer 1989] is a useful technique for sorting all other moves. In the game of chess, a 64 by 64 matrix is used to store statistics. Each time a move from a square startsq to a square endsq is chosen as a best move during the search, a bonus is stored in the matrix at location [startsq, endsq]. The size of this bonus depends on the depth at which the move was successful. A bonus that varies exponentially based on the depth of the subtree under that position has been found to work well in practice. Moves with higher history values are more likely to be best moves at other points in the tree; thus, moves are sorted based on their current history values. This provides a dynamic ordering for all possible legal moves in cases where no other ordering information exists.
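A sketch of the bookkeeping in C follows; the exponential weighting 1 << depth is one common choice, assumed here for illustration.

/* 64x64 history table: history[from][to] accumulates bonuses each
   time the move from 'from' to 'to' is chosen as a best move. */
static int history[64][64];

void RecordBestMove(int from, int to, int depth) {
    history[from][to] += 1 << depth; /* deeper subtrees earn larger bonuses */
}

/* When ordering moves, sort in descending order of history[from][to]. */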
In the programs that the author is aware of, both move ordering techniques are used. The transposition table move is always used first, since it yields specific information about that node from a previous search. Once the transposition table move has been searched, the remaining moves are sorted by the history heuristic.
3.5 Another Method Of Searching A Game Tree - SSS*
Stockman [1979] introduced the SSS* algorithm, an alternative to the depth-first search algorithms for determining the minimax value. Initially, it was believed that SSS* dominated αβ, in the sense that SSS* will never search a node that αβ did not search. A problem with SSS* is that a list structure (the OPEN list) must be maintained, which could grow to b^(d/2) elements, where b is the branching factor and d is the depth of the tree to be searched. At the time, this space requirement was considered to be too large for a practical chess-playing program. Even if the space requirement was not a problem, maintaining the OPEN list slowed the algorithm down enough to make it slower than αβ in practice.
Although versions of SSS* eventually managed to become faster than αβ for game trees [Reinefeld 1994a], it has been recently discovered that SSS* can be implemented as a series of null-window αβ calls, using a transposition table instead of an OPEN list [Plaat 1996]. This research showed that the classic drawbacks of SSS* no longer hold. However, it is also important to note that the benefits disappear as well: SSS* is not necessarily better than αβ when dynamic move reordering is considered. When all of the typical enhancements are used, SSS* can be outperformed by NegaScout and MTD(f).
In game-tree search, a depth-first search algorithm generates results faster than a best-first search algorithm. A* is also a best-first search algorithm. Is there a better single-agent search algorithm than A*, that uses a depth-first iterative deepening formulation?
4. Reimplementing A*
The first stage of the plan is to reimplement A* as a depth-first search algorithm. The second stage is to implement the two move ordering techniques that we described in Section 3: transposition tables and the history heuristic.
4.1 IDA*
Korf [1985] was the first researcher to emphasize why one would want to use depth-first iterative deepening in any search framework, including heuristic searches such as A*.
The implementation will be described first. In Listing 2, we see some sample code that could be used to implement IDA*. There are similarities between this formulation and Maximize (from Listing 1). This implementation, like Minimax, is simple for a search domain that has discrete steps. For each position p, we check to see if we are at a goal node, or if the search should be cut off immediately. We use the typical condition from A* search, provided that we have an admissible heuristic for estimating the distance to the goal. If the depth (number of steps taken, g in the code and referred to as g(p) in the literature) plus the heuristic estimate to the goal (h or h(p)) is greater than the cut off for this search (fmax), then we stop searching at this node. If we are not at a goal node, nor should the search be pruned, we search all successors recursively. It is important to note that we increase the distance traveled by a step at each ply of the tree.
Listing 2. The Code For The IDA* Algorithm
int fmax; /* global search bound; set by ComputeNumberOfMoves() in Listing 3 */
int IDAStar(position p, int g) {
/* g = number of steps taken to reach position p */
int numOfSuccessors; /* total moves */
int done; /* finished search? */
int i; /* move counter */
int h; /* heuristic distance to goal */
/* Use admissible heuristic to estimate distance */
/* to the goal nodes. */
h = EstimateDistanceToGoal(p);
/* Are we at a goal node? */
if (h == 0) { return TRUE; }
/* Can we make it to the goal node in time? */
if (g+h > fmax) { return FALSE; }
/* Can't cut off search, search all children */
numOfSuccessors = GenerateSuccessors(p);
for (i=1; i <= numOfSuccessors; i++) {
done = IDAStar(p.succ[i], g+1);
if (done == TRUE) { return TRUE; }
}
return FALSE;
}
How does one determine fmax? A second routine usually sets this value, calling IDA* over and over again until a goal node is found. ComputeNumberOfMoves(), shown in Listing 3, is a sample driver routine for IDA*. When we pass a position into the ComputeNumberOfMoves() function, we expect to get back the minimum number of steps required to reach a goal node. The algorithm starts with fmax = h(startposition), and then calls the IDAStar() function, incrementing fmax until the IDAStar() function returns TRUE.
Listing 3. The Code For The Algorithm That Drives IDA*
int ComputeNumberOfMoves(position startposition)
{
/* fmax is the global search bound declared in Listing 2 */
int bFoundPath = FALSE;
fmax = EstimateDistanceToGoal(startposition);
while (bFoundPath == FALSE)
{
bFoundPath = IDAStar(startposition, 0); /* zero steps taken so far */
if (bFoundPath == FALSE) { fmax += 1; }
}
return fmax;
}
The idea of IDA* is not new, and it should not be new to you either. A previous article in Game Developer [Stout 1996] mentions this in passing as something you may want to consider to improve the speed of your pathfinding.
Now, you are probably wondering whether IDA* is any worse than A*. After all, A* expands each node at most once during a search, while IDA* re-expands the nodes near the top of the tree many times. A* and IDA* have both been shown mathematically to search c * b^d nodes, where b is the typical number of alternatives at each node, d is the depth of the closest solution and c is a constant. The only difference is that the constant c is a little larger for IDA*.
IDA* is a little slower, but what do you gain? Well, have you seen any mention of sorting OPEN positions on a list, or inserting entries into the CLOSED list? When you use a depth-first iterative deepening approach, you don't have to store either list. IDA* uses O(d) memory, while A* uses O(b^d) memory. This makes IDA* a good choice where memory is at a premium. Also note that because you have very little state information during a search, IDA* is very easy to save and restore if the AI time slice is up.
4.2 Transposition Tables
If you are using IDA*, you have lost the CLOSED list. Unlike the OPEN list, the CLOSED list serves a purpose we still need: its primary function in A* is the ability to detect duplicate positions within the tree. If the same node is reached by two separate paths, IDA* will blindly search through the node both times. When the first path to the node is shorter than the second path, we have wasted search effort. We would like a technique that allows us to detect duplicates, and store information about the previously attempted depth at a given node. Thus, we want to apply transposition tables to IDA*.
A transposition table in a computer chess program is implemented as a closed hash table where older and less relevant data can be overwritten with newer and more relevant data. By taking the position p and computing a hash function hash(p), one can store the fact that the node was reached in g steps and searched to a total path of size f unsuccessfully. Whenever a node is first examined in IDA*, we can quickly look in the table and determine whether or not the node has been previously searched. If it has, we can compare the stored number of steps to reach p with the current number of steps taken to reach p. If the current path to reach p is longer than the stored path, we do not search the successors of this node. This information about g for various positions is also preserved between iterations of fmax, so whenever we reach a position via a non-optimal path, we can immediately prune the search in all future iterations as well.
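A sketch of how this duplicate detection might look inside IDAStar() follows, assuming a hash key has already been computed for the position; the entry layout and table size are illustrative, and the collision handling is deliberately simplistic (the replacement policy is discussed below).

#include <stdint.h>

typedef struct {
    uint64_t key;  /* hash of the position */
    int g;         /* fewest steps seen so far to reach the position */
} PathEntry;

#define PATH_TT_SIZE (1 << 16)   /* illustrative size */
static PathEntry pathTable[PATH_TT_SIZE];

/* Returns TRUE if this position was already reached by a strictly
   shorter path, so the current duplicate search can be pruned. */
int DuplicateDetected(uint64_t key, int g) {
    PathEntry *e = &pathTable[key & (PATH_TT_SIZE - 1)];
    if (e->key == key) {
        if (e->g < g) { return TRUE; }  /* reached before by a shorter path */
        e->g = g;                       /* remember the improved path */
        return FALSE;
    }
    e->key = key;  /* empty (or colliding) slot: store the new position */
    e->g = g;
    return FALSE;
}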
The transposition table does more than detect duplicate positions: for each position, it can also tell the algorithm which successor had the most promise during a previous iteration. One measure of promise is the successor leading to the smallest h value during the current level of search. The stored move is searched first on subsequent iterations, in the same manner that we search the stored move from the transposition table first in a computer chess program.
For the typical pathfinding algorithm, depending on the structure of your nodes, you may not need a large hash table. One can derive a large portion of the benefit from a transposition table that is only 2-5% of the size of the typical CLOSED list generated by an A* search on the same problem.
A key component of the transposition table is the entry replacement scheme. To replace entries, one would overwrite an entry in the hash table if the node in the hash table has a longer path from the start position than the node we want to write in (store the lowest g(p) instances). We want to do this because cutoffs higher up in the tree save more nodes than cutoffs lower down in the tree. Other recent research in the computer chess community has dealt with two-stage replacement schemes for transposition tables. In one experiment [Breuker 1996], half of the transposition table was reserved for the nodes closest to the root, and the other half of the transposition table was reserved for the most recently visited nodes regardless of depth. This yielded an improvement to search size at a minimal performance cost.
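A hedged sketch of that two-level idea, reusing the PathEntry layout from the previous sketch; the layout and the empty-slot test are assumptions for illustration, not the exact scheme from [Breuker 1996].

/* Two-level entry: one slot favours nodes closest to the start
   (small g, i.e. near the root), the other is always replaced. */
typedef struct {
    PathEntry depthPreferred;
    PathEntry alwaysReplace;
} TwoLevelEntry;

void StoreTwoLevel(TwoLevelEntry *e, PathEntry fresh) {
    if (e->depthPreferred.key == 0 || fresh.g < e->depthPreferred.g) {
        e->depthPreferred = fresh;  /* closer to the root: keep it */
    } else {
        e->alwaysReplace = fresh;   /* otherwise overwrite the recent slot */
    }
}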
How much does adding a transposition table to IDA* save us? Experiments have shown that a transposition table can reduce the size of standard 15-puzzle searches by nearly 50% [Reinefeld 1994b], with the cost of storage and access being O(1), in comparison to O(log n) searches on a data structure for a CLOSED list that doesn't lose information.
4.3 The History Heuristic
We usually sort moves in IDA* based on a static move ordering: the moves that have the lowest h values are searched first. Can we do better with a dynamic move ordering? For IDA*, move ordering doesn't really do us a lot of good, except in the last (and largest) iteration, where we find the solution. If we can find the solution early in the last iteration, we can avoid searching a large number of nodes. Thus, we want to acquire information over the course of a search that will help us bias the last search towards the solution.
A sliding-tile experiment [Reinefeld 1994b] described a history heuristic for the 15-puzzle. The history heuristic was stored in a 3-dimensional array. The three dimensions were the tiles (16), the square each tile occupies (16), and the direction of movement (4). To install information, the experiment counted the number of times that a move led to the deepest subtree (i.e. attained the smallest h value for an examined node within its subtree). The experiment met with some success: the IDA* algorithm searched approximately 6% fewer nodes when the history heuristic was used instead of a static move ordering.
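A sketch of that table in C; the array layout follows the description above, though the exact bookkeeping in [Reinefeld 1994b] may differ.

/* History for the 15-puzzle: tile (16) x current square (16) x move
   direction (4). Counts how often a move led to the deepest subtree. */
static int puzzleHistory[16][16][4];

void RecordDeepestSubtree(int tile, int square, int direction) {
    puzzleHistory[tile][square][direction]++;
}

/* Successors with higher counts are searched first on later iterations. */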
We could use both the static move ordering and the dynamic information gathered by the history heuristic to generate a hybrid heuristic for ordering successors. This type of hybrid heuristic could improve the ordering of moves more than either technique in isolation.
5. Conclusions
We have implemented the techniques described above, and we are currently using them to plot paths for our creatures in our soon-to-be-released role-playing game, Neverwinter Nights.
There are many caveats to using these techniques, and it is important to be able to understand the drawbacks. The speed improvements that these techniques yield will vary depending on your application (they vary dramatically when implementing them in chess, Othello and checkers programs!) ... but you now have some new enhancements that can help you search more efficiently.
To summarize the utility of adding standard enhancements to search algorithms, let us examine another problem: finding push-optimal solutions for Sokoban problems. If you have never seen the game Sokoban, a picture of one of the 90 positions is given in Figure 4. The goal is for the little worker to push all of the round stones into the goal squares (the goal squares are shaded with diagonal lines). On the surface, this may seem as easy as pathfinding, and an easy application for A*. However, any pathfinding "mistake" can be undone by retracing the path, while one wrong push of a stone could leave you in a state where you are unable to complete the task. Thus, the need to plan the path of all stones to the goal squares is paramount.
IDA* is incapable of solving any of the 90 puzzles within a limit of 20 million searched nodes. If we enhance IDA* with the transposition table and the move ordering techniques, 4 of the puzzles can be solved [Junghanns 1997]. If we extend the limit to one billion nodes, only 6 of the 90 puzzles can be solved using IDA*, transposition tables and move ordering. If we use all of the domain-dependent techniques the researchers developed (including deadlock tables, tunnel macros, goal macros, goal cuts, pattern search, relevance cuts and overestimation), the program Rolling Stone can solve 52 of the 90 problems within the billion-node limit for each puzzle [Junghanns 1999]. Pathfinding is a relatively trivial problem in comparison to finding push-optimal solutions for Sokoban puzzles, and I am happy to say my bosses at BioWare haven't asked me to solve Sokoban in real time.
There's a lot of very good academic information on single-agent search (including a special issue of the journal Artificial Intelligence later this year which will be devoted to the topic), and I would encourage everyone to look up some of these references. If you have any further questions on any of the reference material, please feel free to e-mail me.
Mark Brockington is the lead research scientist at BioWare Corp. His email address is [email protected]
References
[Breuker 1996] D. M. Breuker, J. W. H. M. Uiterwijk, and H. J. van den Herik. Replacement Schemes and Two-Level Tables. ICCA Journal, 19(3):175-179, 1996.
[Greenblatt 1967] R. D. Greenblatt, D. E. Eastlake, and S.D. Crocker. The Greenblatt Chess Program. In Proceedings of the Fall Joint Computer Conference, volume 31, pages 801-810, 1967.
[Hart 1968] P. E. Hart, N. J. Nilsson, and B. Raphael. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics, SSC-4(2):100-107, 1968.
[Junghanns 1997] A. Junghanns and J. Schaeffer. Sokoban: A Challenging Single-Agent Search Problem, Workshop Using Games as an Experimental Testbed for AI Research, IJCAI-97, Nagoya, Japan, August 1997.
[Junghanns 1999] A. Junghanns. Pushing the Limits: New Developments in Single-Agent Search, PhD Thesis, Department of Computing Science, University of Alberta, 1999. URL: http://www.cs.ualberta.ca/~games/Sokoban/papers.html
[Knuth 1975] D. E. Knuth and R. W. Moore. An Analysis of Alpha-Beta Pruning. Artificial Intelligence, 6(3):293-326, 1975.
[Korf 1985] R. E. Korf. Depth-First Iterative Deepening: An Optimal Admissible Tree Search. Artificial Intelligence, 27:97-109, 1985.
[Nilsson 1971] N. J. Nilsson. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill Book Company, New York, NY, 1971.
[Plaat 1996] A. Plaat, J. Schaeffer, W. Pijls, and A. de Bruin. Exploiting Graph Properties of Game Trees. In AAAI-1996, volume 1, pages 234-239, Portland, Oregon, August 1996.
[Reinefeld 1983] A. Reinefeld. An Improvement to the Scout Tree-Search Algorithm. ICCA Journal, 6(4):4-14, 1983.
[Reinefeld 1994a] A. Reinefeld. A Minimax Algorithm Faster than Alpha-Beta. In H.J. van den Herik, I.S. Herschberg and J.W.H.M. Uiterwijk, editors, Advances In Computer Chess 7, pages 237-250. University of Limburg, 1994.
[Reinefeld 1994b] A. Reinefeld and T. A. Marsland. Enhanced Iterative-Deepening Search. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-16(7):701-710, 1994.
[Schaeffer 1989] J. Schaeffer. The History Heuristic and Alpha-Beta Search Enhancements In Practice. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-11(11): 1203-1212, 1989.
[Scott 1969] J. J. Scott. A Chess-Playing Program. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages 255-265. Edinburgh University Press, 1969.
[Slate 1977] D. J. Slate and L. R. Atkin. Chess 4.5 - The Northwestern University Chess Program. In P.W. Frey, editor, Chess Skill in Man and Machine, pages 82-118. Springer-Verlag, New York, 1977.
[Stockman 1979] G. C. Stockman. A Minimax Algorithm Better than Alpha-Beta? Artificial Intelligence, 12:179-196, 1979.
[Stout 1996] W. B. Stout. Smart Moves: Intelligent Path-Finding. Game Developer, pp. 28-35, Oct./Nov. 1996.
[von Neumann 1944] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton Press, Princeton, NJ, 1944.