
Search Algorithm

Path Finding

posted Nov 17, 2016, 5:04 AM by Javad Taghia

Smart Move: Intelligent Path-Finding

https://drive.google.com/file/d/0BxxliwIPgBT3NndlZTZuY3lXTmM/view?usp=sharing

http://www.gamasutra.com/view/feature/3317/smart_move_intelligent_.php?print=1


By Bryan Stout

Of all the decisions involved in computer-game AI, the most common is probably path-finding: looking for a good route for moving an entity from here to there. The entity can be a single person, a vehicle, or a combat unit; the genre can be an action game, a simulator, a role-playing game, or a strategy game. But any game in which the computer is responsible for moving things around has to solve the path-finding problem.

And this is not a trivial problem. Questions about path-finding are regularly seen in online game programming forums, and the entities in several games move along less than intelligent paths. However, although path-finding is not trivial, there are some well-established, solid algorithms that deserve to be known better in the game community.

Several path-finding algorithms are not very efficient, but studying them serves us by introducing concepts incrementally. We can then understand how different shortcomings are overcome. 

To demonstrate the workings of the algorithms visually, I have developed a program in Delphi 2.0 called "PathDemo." It is available for readers to download at https://drive.google.com/file/d/0BxxliwIPgBT3NndlZTZuY3lXTmM/view?usp=sharing. The article and demo assume that the playing space is represented with square tiles. You can adapt the concepts in the algorithms to other tilings, such as hexagons; ideas for adapting them to continuous spaces are discussed at the end of the article.


Path-Finding on the Move

The typical problem in path-finding is obstacle avoidance. The simplest approach is to ignore the obstacles until the entity bumps into one. The algorithm would look something like this:

while not at the goal
    pick a direction to move toward the goal
    if that direction is clear for movement
        move there
    else
        pick another direction according to an avoidance strategy
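
As a concrete illustration, here is a minimal C++ sketch of this loop on a square tile map, paired with the random-direction avoidance strategy discussed below; the isBlocked(x, y) map query and the one-tile-per-step movement are assumptions of the sketch, not part of the article.

    #include <cstdlib>

    bool isBlocked(int x, int y);   // assumed map query

    void moveTowardGoal(int &x, int &y, int goalX, int goalY)
    {
        while (x != goalX || y != goalY) {
            // Step in the direction that closes the distance to the goal.
            int dx = (goalX > x) - (goalX < x);    // -1, 0, or +1
            int dy = (goalY > y) - (goalY < y);
            if (!isBlocked(x + dx, y + dy)) {
                x += dx; y += dy;                  // the way is clear: move
            } else {
                // Avoidance strategy: try a random neighboring tile instead.
                int rx = std::rand() % 3 - 1;
                int ry = std::rand() % 3 - 1;
                if (!isBlocked(x + rx, y + ry)) { x += rx; y += ry; }
            }
        }
    }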

This approach is simple because it makes few demands: all that needs to be known are the relative positions of the entity and its goal, and whether the immediate vicinity is blocked. For many game situations, this is good enough. 

Different obstacle-avoidance strategies include:

  • Movement in a random direction. If the obstacles are all small and convex, the entity (shown as a green dot) can probably get around them by moving a little bit away and trying again, until it reaches the goal (shown as a red dot). Figure 1A shows this strategy at work. A problem arises with this method if the obstacles are large or concave, as seen in Figure 1B: the entity can get completely stuck, or at least waste a lot of time before it stumbles onto a way around. One way to avoid this: if a problem is too hard to deal with, alter the game so it never comes up. That is, make sure there are never any concave obstacles.
  • Tracing around the obstacle. Fortunately, there are other ways to get around. If the obstacle is large, one can do the equivalent of placing a hand against the wall and following the outline of the obstacle until it is skirted. Figure 2A shows how well this can deal with large obstacles. The problem with this technique comes in deciding when to stop tracing. A typical heuristic may be: "Stop tracing when you are heading in the direction you wanted to go when you started tracing." This would work in many situations, but Figure 2B shows how one may end up constantly circling around without finding the way out.
  • Robust tracing. A more robust heuristic comes from work on mobile robots: "When blocked, calculate the equation of the line from your current position to the goal. Trace until that line is crossed again. Abort if you end up at the starting position again." This method is guaranteed to find a way around the obstacle if there is one, as seen in Figure 3A. (If the original point of blockage is between you and the goal when you cross the line, be sure not to stop tracing, or more circling will result.) Figure 3B shows the downside of this approach: it will often spend more time tracing the obstacle than is needed, making it look pretty simple-minded, though not as simple as endless circling. A happy compromise would be to combine both approaches: always try the simpler heuristic for stopping the tracing first, but if circling is detected, switch to the robust heuristic. A sketch of the geometry behind the robust test follows.
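
Here is a small sketch of that geometry; the Point type and function names are illustrative, not from the article. The sign of a cross product tells which side of the start-goal line a point lies on, so tracing can stop once that sign flips (the line has been crossed again).

    struct Point { int x, y; };

    // Negative, zero, or positive depending on which side of the line
    // from start to goal the point p lies.
    long sideOfLine(Point start, Point goal, Point p)
    {
        return (long)(goal.x - start.x) * (p.y - start.y)
             - (long)(goal.y - start.y) * (p.x - start.x);
    }

    // True when one tracing step moved the entity across the start-goal
    // line; remember to keep tracing if the original blockage still lies
    // between the entity and the goal, to avoid endless circling.
    bool crossedLine(Point start, Point goal, Point before, Point after)
    {
        long s1 = sideOfLine(start, goal, before);
        long s2 = sideOfLine(start, goal, after);
        return (s1 > 0 && s2 <= 0) || (s1 < 0 && s2 >= 0);
    }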

Looking Before You Leap

Although the obstacle-skirting techniques discussed above can often do a passable or even adequate job, there are situations where the only intelligent approach is to plan the entire route before the first step is taken. In addition, these methods do little to handle the problem of weighted regions, where the difficulty is not so much avoiding obstacles as finding the cheapest path among several choices where the terrain can vary in its cost. 

Fortunately, the fields of Graph Theory and conventional AI have several algorithms that can be used to handle both difficult obstacles and weighted regions. In the literature, many of these algorithms are presented in terms of changing between states, or traversing the nodes of a graph. They are often used in solving a variety of problems, including puzzles like the 15-puzzle or Rubik's cube, where a state is an arrangement of the tiles or cubes, and neighboring states (or adjacent nodes) are visited by sliding one tile or rotating one cube face. Applying these algorithms to path-finding in geometric space requires a simple adaptation: a state or a graph node stands for the entity being in a particular tile, and moving to adjacent tiles corresponds to moving to the neighboring states, or adjacent nodes. 

Working from the simplest algorithms to the more robust, we have:

  • Breadth-first search. Beginning at the start node, this algorithm first examines all immediate neighboring nodes, then all nodes two steps away, then three, and so on, until a goal node is found. Typically, each node's unexamined neighboring nodes are pushed onto an Open list, which is usually a FIFO (first-in-first-out) queue. The algorithm would go something like what is shown in Listing 1.

    Listing 1: Breadth-first search.
    queue			Open
    
    BreadthFirstSearch
      node   n, n', s
    	s.parent = null
    	// s is a node for the start
    	push s on Open
    	while Open is not empty
    		pop node n from Open	
    		if n is a goal node 
    			construct path 
    			return success
    		for each successor n' of n
    			if n' has been visited already,
    				continue
    			n'.parent = n
    			push n' on Open
    	return failure  // if no path found

    Figure 4 shows how the search proceeds. We can see that it does find its way around obstacles, and in fact it is guaranteed to find a shortest path (that is, one of several paths that tie for the shortest in length) if all steps have the same cost. There are a couple of obvious problems. One is that it fans out in all directions equally, instead of directing its search towards the goal; the other is that all steps are not equal: at least the diagonal steps should be longer than the orthogonal ones. A C++ sketch of this search on a tile grid follows.
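
    A minimal, self-contained sketch of Listing 1 in C++ on a square tile map; the fixed grid size, the blocked-tile vector, and the row-major tile indexing are conventions assumed for the sketch, not part of the article.

        #include <queue>
        #include <vector>

        const int W = 16, H = 16;                // assumed map dimensions

        // Returns tile indices from goal back to start, or empty on failure.
        std::vector<int> breadthFirstSearch(const std::vector<int> &blocked,
                                            int start, int goal)
        {
            std::vector<int> parent(W * H, -1);  // -1 marks "not visited yet"
            std::queue<int> open;                // the FIFO Open list
            parent[start] = start;
            open.push(start);
            while (!open.empty()) {
                int n = open.front(); open.pop();
                if (n == goal) {                 // construct path via parents
                    std::vector<int> path;
                    for (int t = goal; t != start; t = parent[t]) path.push_back(t);
                    path.push_back(start);
                    return path;
                }
                int x = n % W, y = n / W;
                for (int dy = -1; dy <= 1; ++dy)      // all eight neighbors
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                        int s = ny * W + nx;
                        if (blocked[s] || parent[s] != -1) continue;  // visited
                        parent[s] = n;
                        open.push(s);
                    }
            }
            return {};                           // no path found
        }
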
  • Bidirectional breadth-first search. This enhances the simple breadth-first search by running two simultaneous breadth-first searches, one from the start node and one from the goal node, and stopping when a node in one search finds a neighboring node marked by the other search. As seen in Figure 5, this can save substantial work compared with simple breadth-first search (typically by a factor of two), but it is still quite inefficient. Tricks like this are good to remember, though, since they may come in handy elsewhere.
  • Dijkstra's algorithm. Edsger Dijkstra developed a classic algorithm for traversing graphs with edges of differing weights. At each step, it looks at the unprocessed node closest to the start node, looks at that node's neighbors, and sets or updates their respective distances from the start. This has two advantages over breadth-first search: it takes a path's length or cost into account, and it updates the goodness of nodes if better paths to them are found. To implement this, the Open list is changed from a FIFO queue to a priority queue, where the node popped is the one with the best score; here, that is the one with the lowest-cost path from the start. (See Listing 2.)
    Listing 2: Dijkstra's search algorithm.
    priority queue	Open
    
    DijkstraSearch
      node   n, n', s
    	s.cost = 0
    	s.parent = null   // s is a node for the start
    	push s on Open
    	while Open is not empty
    		pop node n from Open  // n has lowest cost in Open
    		if n is a goal node 
    			construct path 
    			return success
    		for each successor n' of n
    			newcost = n.cost + cost(n,n')
    			if n' is in Open
    			 and n'.cost <= newcost
    				continue
    			n'.parent = n
    			n'.cost = newcost
    			if n' is not yet in Open
    				push n' on Open
    	return failure  // if no path found
    

    We see in Figure 6 that Dijkstra's algorithm adapts well to terrain cost. However, it still has the weakness of breadth-first search in ignoring the direction to the goal. A sketch of this algorithm using the C++ standard library follows.
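
    This sketch keeps the grid conventions of the previous sketch. std::priority_queue cannot change the score of an entry in place, so the sketch pushes a fresh entry instead and discards stale ones as they are popped, which has the same effect as the update in Listing 2. The cost(from, to) terrain query is an assumption, not part of the article.

        #include <queue>
        #include <utility>
        #include <vector>

        const int W = 16, H = 16;
        int cost(int fromTile, int toTile);      // assumed terrain-cost query

        // Fills in the cheapest-path parent of every reachable tile.
        std::vector<int> dijkstra(const std::vector<int> &blocked, int start)
        {
            const int INF = 1 << 30;
            std::vector<int> dist(W * H, INF), parent(W * H, -1);
            // Pairs of (cost so far, tile); greater<> pops the cheapest first.
            std::priority_queue<std::pair<int,int>,
                                std::vector<std::pair<int,int>>,
                                std::greater<>> open;
            dist[start] = 0;
            open.push({0, start});
            while (!open.empty()) {
                auto [d, n] = open.top(); open.pop();
                if (d > dist[n]) continue;       // stale entry; better path known
                int x = n % W, y = n / W;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                        int s = ny * W + nx;
                        if (blocked[s] || s == n) continue;
                        int newcost = d + cost(n, s);
                        if (newcost >= dist[s]) continue;   // not an improvement
                        dist[s] = newcost;
                        parent[s] = n;
                        open.push({newcost, s});
                    }
            }
            return parent;
        }
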
  • Depth-first search. This search is the complement of breadth-first search; instead of visiting all of a node's siblings before any children, it visits all of a node's descendants before any of its siblings. To make sure the search terminates, we must add a cutoff at some depth. We can use the same code for this search as for breadth-first search, if we add a depth parameter to keep track of each node's depth and change Open from a FIFO queue to a LIFO (last-in-first-out) stack. In fact, we can eliminate the Open list entirely and instead make the search a recursive routine, which would save the memory used for Open. We need to make sure each tile is marked as "visited" on the way out, and is unmarked on the way back, to avoid generating paths that visit the same tile twice. In fact, Figure 7 shows that we need to do more than that: the algorithm can still tangle around itself and waste time in a maddening way. For geometric path-finding, we can add two enhancements. One would be to label each tile with the length of the cheapest path found to it yet; the algorithm would then never visit it again unless it had a cheaper path, or one just as cheap but searching to a greater depth. The second would be to have the search always look first at the children in the direction of the goal. With these two enhancements enabled, one sees that the depth-first search finds a path quickly. Even weighted paths can be handled by making the depth cutoff equal the total accumulated cost rather than the total distance.
  • Iterative-deepening depth-first search. Actually, there is still one fly in the depth-first ointment: picking the right depth cutoff. If it is too low, the search will not reach the goal; if too high, it will potentially waste time exploring blind avenues too far, or find a weighted path which is too costly. These problems are solved by iterative deepening, a technique that carries out a depth-first search with increasing depth cutoffs: first one, then two, and so on until the goal is found. In the path-finding domain, we can improve on this by starting with a depth equal to the straight-line distance from the start to the goal. This search is asymptotically optimal among brute-force searches in both space and time.
  • Best-first search. This is the first heuristic search considered, meaning that it takes domain knowledge into account to guide its efforts. It is similar to Dijkstra's algorithm, except that instead of the nodes in Open being scored by their distance from the start, they are scored by an estimate of the distance remaining to the goal. Unlike the scores in Dijkstra's algorithm, this score does not need updating once assigned. Figure 8 shows its performance. It is easily the fastest of the forward-planning searches we have examined so far, heading in the most direct manner to the goal. We also see its weaknesses. In 8A, we see that it does not take the accumulated cost of the terrain into account, plowing straight through a costly area rather than going around it. And in 8B, we see that the path it finds around the obstacle is not direct, but weaves around it in a manner reminiscent of the hand-tracing techniques seen above.

The Star of the Search Algorithms (A* Search)

The best-established algorithm for the general searching of optimal paths is A* (pronounced "A-star"). This heuristic search ranks each node by an estimate of the best route that goes through that node. The typical formula is expressed as: 

f(n) = g(n) + h(n)

where:

f(n)  is the score assigned to node n
g(n)  is the actual cheapest cost of arriving at n from the start
h(n)  is the heuristic estimate of the cost to the goal from n

So it combines the tracking of the previous path length, g(n), from Dijkstra's algorithm with the heuristic estimate, h(n), of the remaining path from best-first search. The algorithm proper is seen in Listing 3. Since some nodes may be processed more than once (when better paths to them are found later), we use a second list called Closed to keep track of them.

A* has a couple of interesting properties. It is guaranteed to find the shortest path, as long as the heuristic estimate h(n) is admissible; that is, it is never greater than the true remaining distance to the goal. It makes the most efficient use of the heuristic function: no search that uses the same heuristic function h(n) and finds optimal paths will expand fewer nodes than A*, not counting tie-breaking among nodes of equal cost. In Figures 9A through 9C, we see how A* deals with situations that gave problems to the other search algorithms. A sketch of the full algorithm in C++ follows.
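
This minimal A* sketch reuses the grid conventions from the earlier sketches; the heuristic shown is the minimum-moves estimate discussed in the next section, and cost(from, to) is again an assumed terrain query rather than part of the article.

    #include <algorithm>
    #include <cstdlib>
    #include <queue>
    #include <utility>
    #include <vector>

    const int W = 16, H = 16;
    int cost(int fromTile, int toTile);        // assumed terrain-cost query

    int heuristic(int tile, int goal)          // admissible if each step costs >= 1
    {
        int dx = std::abs(tile % W - goal % W), dy = std::abs(tile / W - goal / W);
        return std::max(dx, dy);               // fewest orthogonal+diagonal moves
    }

    std::vector<int> aStar(const std::vector<int> &blocked, int start, int goal)
    {
        const int INF = 1 << 30;
        std::vector<int> g(W * H, INF), parent(W * H, -1);
        std::priority_queue<std::pair<int,int>,
                            std::vector<std::pair<int,int>>,
                            std::greater<>> open;     // ordered by f = g + h
        g[start] = 0;
        open.push({heuristic(start, goal), start});
        while (!open.empty()) {
            auto [f, n] = open.top(); open.pop();
            if (n == goal) {                           // rebuild goal-to-start path
                std::vector<int> path{goal};
                for (int t = goal; t != start; t = parent[t]) path.push_back(parent[t]);
                return path;
            }
            if (f - heuristic(n, goal) > g[n]) continue;  // stale Open entry
            int x = n % W, y = n / W;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    int s = ny * W + nx;
                    if (blocked[s] || s == n) continue;
                    int newg = g[n] + cost(n, s);
                    if (newg >= g[s]) continue;        // no better path found
                    g[s] = newg;                       // doubles as the Closed check
                    parent[s] = n;
                    open.push({newg + heuristic(s, goal), s});
                }
        }
        return {};                                     // no path exists
    }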


How Do I Use A*?

A* turns out to be very flexible in practice. Consider the different parts of the algorithm.

The state would often be the tile or position the entity occupies. But if needed, it can represent orientation and velocity as well (for example, for finding a path for a tank or almost any other vehicle; their turn radius gets worse the faster they go).

Neighboring states would vary depending on the game and the local situation. Adjacent positions may be excluded because they are impassable or otherwise invalid. Some terrain can be passable for certain units but not for others; units that cannot turn quickly cannot reach all neighboring tiles.

The cost of going from one position to another can represent many things: the simple distance between the positions; the cost in time or movement points or fuel between them; penalties for traveling through undesirable places (such as points within range of enemy artillery); bonuses for traveling through desirable places (such as exploring new terrain or imposing control over uncontrolled locations); and aesthetic considerations-for example, if diagonal moves are just as cheap as orthogonal moves, you may still want to make them cost more, so that the routes chosen look more direct and natural. 

The estimate is usually the minimum distance between the current node and the goal multiplied by the minimum cost between nodes. This guarantees that h(n) is admissible. (In a map of square tiles where units may only occupy points in the grid, the minimum distance would not be the Euclidean distance, but the minimum number of orthogonal and diagonal moves between the two points.) 
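
One way to compute this estimate, as a sketch: count the minimum number of diagonal and orthogonal moves between the two tiles, and multiply each count by the cheapest cost such a move can have anywhere on the map. The two cost constants here are illustrative assumptions.

    #include <algorithm>
    #include <cstdlib>

    int minMoveEstimate(int x1, int y1, int x2, int y2)
    {
        const int MIN_ORTHO_COST = 10;   // cheapest straight step on the map
        const int MIN_DIAG_COST  = 14;   // cheapest diagonal step (about 10 * sqrt(2))
        int dx = std::abs(x2 - x1), dy = std::abs(y2 - y1);
        int diagonal = std::min(dx, dy);             // moves advancing on both axes
        int straight = std::max(dx, dy) - diagonal;  // remaining straight moves
        return diagonal * MIN_DIAG_COST + straight * MIN_ORTHO_COST;
    }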

The goal does not have to be a single location but can consist of multiple locations. The estimate for a node would then be the minimum of the estimate for all possible goals. 

Search cutoffs can be included easily, to cover limits in path cost, path distance, or both.

From my own direct experience, I have seen the A* search work very well for finding a variety of types of paths in wargames and strategy games.

The Limitations of A*

There are situations where A* may not perform very well, for a variety of reasons. The more or less real-time requirements of games, plus the limitations of the available memory and processor time in some of them, may make it hard even for A* to work well. A large map may require thousands of entries in the Open and Closed lists, and there may not be room enough for that. Even if there is enough memory for them, the algorithms used for manipulating them may be inefficient.

The quality of A*'s search depends on the quality of the heuristic estimate h(n). If h is very close to the true cost of the remaining path, its efficiency will be high; on the other hand, if it is too low, its efficiency suffers badly. In fact, with h trivially zero for all nodes, A* reduces to Dijkstra's algorithm (which behaves like breadth-first search on uniform-cost maps); this certainly underestimates the remaining path cost, and while the search will find an optimum path, it will do so slowly. In Figure 10A, we see that while searching in expensive terrain (shaded area), the frontier of nodes searched looks similar to Dijkstra's algorithm; in 10B, with the heuristic increased, the search is more focused.

Let's look at ways to make the A* search more efficient in problem areas.


Transforming the Search Space

Perhaps the most important improvement one can make is to restructure the problem to be solved, making it an easier problem. Macro-operators are sequences of steps that belong together and can be combined into a single step, making the search take bigger steps at a time. For example, airplanes take a series of steps in order to change their orientation and altitude. A common sequence may be used as a single change-of-state operator, rather than using the smaller steps individually. In addition, search and general problem-solving methods can be greatly simplified if the problem is reduced to sub-problems whose individual solutions are fairly simple. In the case of path-finding, a map can be broken down into large contiguous areas whose connectivity is known. One or two border tiles between each pair of adjacent areas are chosen; the route is then first laid out by a search among adjacent areas, and within each of those areas a route is found from one border point to the next.

For example, in a strategic map of Europe, a path-finder searching for a land route from Madrid to Athens would probably waste a fair amount of time looking down the boot of Italy. Using countries as areas, a hierarchical search would first determine that the route would go from Spain to France to Italy to Yugoslavia (looking at an old map) to Greece; and then the route through Italy would only need to connect Italy's border with France, to Italy's border with Yugoslavia. As another example, routes from one part of a building to another can be broken down into a path of rooms and hallways to take, and then the paths between doors in each room. 

It is much easier to choose areas in predefined maps than to have the computer figure them out for randomly generated maps. Note also that the examples discussed deal mainly with obstacle avoidance; for weighted regions, it is trickier to assign useful regions, especially for the computer (and it may not be very useful, either).


Storing it Better

Even if the A* search is relatively efficient by itself, it can be slowed down by inefficient algorithms handling the data structures. Regarding the search, two major data structures are involved. 

The first is the representation of the playing area. Many questions have to be addressed. How will the playing field be represented? Will the areas accessible from each spot-and the costs of moving there-be represented directly in the map or in a separate structure, or calculated when needed? How will features in the area be represented? Are they directly in the map, or separate structures? How can the search algorithm access necessary information quickly? There are too many variables concerning the type of game and the hardware and software environment to give much detail about these questions here. 

The second major structure involved is the node or state of the search, and this can be dealt with more explicitly. At the lower level is the search state structure. Fields a developer might wish to include in it are:

  • The location (coordinates) of the map position being considered at this state of the search.
  • Other relevant attributes of the entity, such as orientation and velocity.
  • The cost of the best path from the source to this location.
  • The length of the path up to this position.
  • The estimate of the cost to the goal (or closest goal) from this location.
  • The score of this state, used to pick the next state to pop off Open.
  • A limit for the length of the search path, or its cost, or both, if applicable.
  • A reference (pointer or index) to the parent of this node-that is, the node that led to this one.
  • Additional references to other nodes, as needed by the data structure used for storing the Open and Closed lists; for example, "next" and maybe "previous" pointers for linked lists, "right," "left," and "parent" pointers for binary trees.
    Another issue to consider is when to allocate the memory for these structures; the answer depends on the demands and constraints of the game, hardware, and operating system.

    On the higher level are the aggregate data structures-the Open and Closed lists. Although keeping them as separate structures is typical, it is possible to keep them in the same structure, with a flag in the node to show if it is open or not. The sorts of operations that need to be done in the Closed list are:

  • Insert a new node.
  • Remove an arbitrary node.
  • Search for a node having certain attributes (location, speed, direction).
  • Clear the list at the end of the search.
    The Open list does all of these and, in addition, will:

      • Pop the node with the best score.
      • Change the score of a node.

    The Open list can be thought of as a priority queue, where the next item popped off is the one with the highest priority-in our case, the best score. Given the operations listed, there are several possible representations to consider: a linear, unordered array; an unordered linked list; a sorted array; a sorted linked list; a heap (the structure used in a heap sort); a balanced binary search tree. 

    There are several types of binary search trees: 2-3-4 trees, red-black trees, height-balanced trees (AVL trees), and weight-balanced trees. 

    Heaps and balanced search trees have the advantage of logarithmic times for insertion, deletion, and search; however, if the number of nodes is rarely large, they may not be worth the overhead they require.
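
    As one illustration, here is a sketch of an Open list built on a binary heap kept in a std::vector with the standard heap algorithms; "change the score of a node" is handled lazily, by pushing a fresh entry and discarding stale ones as they are popped, which avoids an expensive search into the middle of the heap. The entry layout is an assumption of the sketch.

        #include <algorithm>
        #include <vector>

        struct OpenEntry {
            float score;          // f = g + h; lower is better
            int   tile;           // or a pointer/index to the full search node
            bool operator<(const OpenEntry &rhs) const
                { return score > rhs.score; }   // reversed: lowest score on top
        };

        class OpenList {
            std::vector<OpenEntry> heap;
        public:
            void push(float score, int tile) {          // O(log n) insert
                heap.push_back({score, tile});
                std::push_heap(heap.begin(), heap.end());
            }
            bool empty() const { return heap.empty(); }
            OpenEntry pop() {                           // best score, O(log n)
                std::pop_heap(heap.begin(), heap.end());
                OpenEntry e = heap.back();
                heap.pop_back();
                return e;
            }
            void clear() { heap.clear(); }              // end-of-search cleanup
        };

    The caller must compare each popped entry's score against the node's current best score and skip entries that no longer match; that check stands in for the "change the score" operation.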


    Fine-Tuning Your Search Engine

    There are also ways of tweaking the search algorithm to help get good results while working with limited resources:

    Beam search. One way of dealing with restricted memory is to limit the number of nodes on the Open list; when it is full and a new node is to be inserted, simply drop the node with the worst rating. The Closed list could also be eliminated, if each tile stores its best path length. There is no promise of an optimal path since the node leading to it may be dropped, but it may still allow finding a reasonable path.

    Iterative-deepening A*. The iterative-deepening technique used for depth-first search (IDDFS), as mentioned above, can also be used for an A* search. This entirely eliminates the Open and Closed lists. Do a simple recursive search, keep track of the accumulated path cost g(n), and cut off the search when the rating f(n) = g(n) + h(n) exceeds the limit. Begin the first iteration with the cutoff equal to h(start), and in each succeeding iteration, make the new cutoff the smallest f(n) value which exceeded the old cutoff. As IDDFS is among brute-force searches, IDA* is asymptotically optimal in space and time usage among heuristic searches. A sketch follows.
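
    This compact IDA* sketch reuses the grid conventions, cost(), and heuristic() assumed in the earlier sketches; the only state it keeps is the current path.

        #include <vector>

        const int W = 16, H = 16;
        int cost(int fromTile, int toTile);     // assumed terrain-cost query
        int heuristic(int tile, int goal);      // as in the earlier sketches
        const int FOUND = -1, INF = 1 << 30;

        // Returns FOUND, or the smallest f value that exceeded the cutoff.
        int contour(std::vector<int> &path, int g, int cutoff, int goal,
                    const std::vector<int> &blocked)
        {
            int n = path.back();
            int f = g + heuristic(n, goal);
            if (f > cutoff) return f;           // rating exceeds the limit
            if (n == goal) return FOUND;        // path holds the solution
            int minOver = INF;
            int x = n % W, y = n / W;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    int s = ny * W + nx;
                    if (blocked[s] || s == n) continue;
                    if (path.size() > 1 && s == path[path.size() - 2])
                        continue;               // never step straight back
                    path.push_back(s);
                    int t = contour(path, g + cost(n, s), cutoff, goal, blocked);
                    if (t == FOUND) return FOUND;
                    if (t < minOver) minOver = t;
                    path.pop_back();
                }
            return minOver;
        }

        bool idaStar(std::vector<int> &path, int start, int goal,
                     const std::vector<int> &blocked)
        {
            int cutoff = heuristic(start, goal);   // first iteration's limit
            path.assign(1, start);
            while (true) {
                int t = contour(path, 0, cutoff, goal, blocked);
                if (t == FOUND) return true;
                if (t == INF) return false;        // search space exhausted
                cutoff = t;                        // smallest f over old cutoff
            }
        }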

    Inadmissible heuristic h(n). As discussed above, if the heuristic estimate h(n) of the remaining path cost is too low, then A* can be quite inefficient. But if the estimate is too high, then the path found is not guaranteed to be optimal and may be abysmal. In games where the range of terrain costs is wide (from swamps to freeways), you may try experimenting with various intermediate cost estimates to find the right balance between the efficiency of the search and the quality of the resulting path.

    There are also other algorithms that are variations of A*. Having toyed with some of them in PathDemo, I believe that they are not very useful for the geometric path-finding domain.


    What if I'm in a Smooth World?

    All these search methods have assumed a playing area composed of square or hexagonal tiles. What if the game play area is continuous? What if the positions of both entities and obstacles are stored as floats, and can be as finely determined as the resolution of the screen? Figure 11A shows a sample layout. For answers to these search conditions, we can look at the field of robotics and see what sort of approaches are used for the path-planning of mobile robots. Not surprisingly, many approaches find some way to reduce the continuous space into a few important discrete choices for consideration. After this, they typically use A* to search among them for a desirable path. Ways of quantizing the space include:

  • Tiles. A simple approach is to slap a tile grid on top of the space. Tiles that contain all or part of an obstacle are labeled as blocked; a fringe of tiles touching the blocked tiles is also labeled as blocked to allow a buffer of movement without collision. This representation is also useful for weighted regions problems. See Figure 11B.
  • Points of visibility. For obstacle avoidance problems, you can focus on the critical points, namely those near the vertices of the obstacles (with enough space away from them to avoid collisions), with points being considered connected if they are visible from each other (that is, with no obstacle between them). For any path, the search considers only the critical points as intermediate steps between start and goal. See Figure 11C.
  • Convex polygons. For obstacle avoidance, the space not occupied by polygonal obstacles can be broken up into convex polygons; the intermediate spots in the search can be the centers of the polygons, or spots on the borders of the polygons. Schemes for decomposing the space include: C-Cells (each vertex is connected to the nearest visible vertex; these lines partition the space) and Maximum-Area decomposition (each convex vertex of an obstacle projects the edges forming the vertex to the nearest obstacles or walls; between these two segments and the segment joining to the nearest visible vertex, the shortest is chosen). See Figure 11D. For weighted regions problems, the space is divided into polygons of homogeneous traversal cost. The points to aim for when crossing boundaries are computed using Snell's Law of Refraction. This approach avoids the irregular paths found by other means.
  • Quadtrees. Similar to the convex polygons, the space is divided into squares. Each square that isn't close to being homogeneous is divided into four smaller squares, recursively. The centers of these squares are used for searching a path. See Figure 11E.
  • Generalized cylinders. The space between adjacent obstacles is considered a cylinder whose shape changes along its axis. The axis traversing the space between each adjacent pair of obstacles (including walls) is computed, and the axes are the paths used in the search. See Figure 11F.
  • Potential fields. An approach that does not quantize the space, nor require complete calculation beforehand, is to consider that each obstacle has a repulsive potential field around it, whose strength is inversely proportional to the distance from it; there is also a uniform attractive force to the goal. At close regular time intervals, the sum of the attractive and repulsive vectors is computed, and the entity moves in that direction. A problem with this approach is that it may fall into a local minimum; various ways of moving out of such spots have been devised.
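
    A sketch of one potential-field step as described above; the Vec2 type, the point obstacles, and the two strength constants are assumptions for illustration.

        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        // Sum of a unit attraction toward the goal and a 1/distance repulsion
        // from each obstacle; the entity moves a small step along the result.
        Vec2 potentialFieldStep(Vec2 pos, Vec2 goal,
                                const std::vector<Vec2> &obstacles)
        {
            const float ATTRACT = 1.0f, REPULSE = 4.0f;   // tuning constants
            float gx = goal.x - pos.x, gy = goal.y - pos.y;
            float glen = std::sqrt(gx * gx + gy * gy) + 1e-6f;
            Vec2 force = { ATTRACT * gx / glen, ATTRACT * gy / glen };
            for (const Vec2 &o : obstacles) {
                float ox = pos.x - o.x, oy = pos.y - o.y;
                float d2 = ox * ox + oy * oy + 1e-6f;     // squared distance
                force.x += REPULSE * ox / d2;             // unit vector / distance
                force.y += REPULSE * oy / d2;
            }
            return force;
        }
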
Bryan Stout has done work in "real" AI for Martin Marietta and in computer games for MicroProse. He is preparing a book on computer game AI to be published by Morgan Kaufmann Publishers in 2001. He can be contacted at bstout@mindspring.com.

    A*

    posted Nov 17, 2016, 4:40 AM by Javad Taghia

    From: http://heyes-jones.com/astar.php


    A* ALGORITHM TUTORIAL


    Introduction

    This document contains a description of the AI algorithm known as A*. The downloads section also has full source code for an easy to use extendable implementation of the algorithm, and two example problems.

    Previously I felt that it would be wrong of me to provide source code, because I wanted to focus on teaching the reader how to implement the algorithm rather than just supplying a ready made package. I have now changed my mind, as I get many emails from people struggling to get something working. The example code is written in Standard C++ and uses STL, and does not do anything machine or operating system specific, so hopefully it will be quite useful to a wide audience.

    State space search

    A* is a type of search algorithm. Some problems can be solved by representing the world in an initial state, and then, for each action we can perform on the world, generating states for what the world would be like if we did so. If you do this until the world reaches the state that we specified as a solution, then the route from the start state to the goal state is the solution to your problem.

    In this tutorial I will look at the use of state space search to find the shortest path between two points (pathfinding), and also to solve a simple sliding tile puzzle (the 8-puzzle). Let's look at some of the terms used in Artificial Intelligence when describing this state space search.

    Some terminology

    A node is a state that the problem's world can be in. In pathfinding a node would be just a 2d coordinate of where we are at the present time. In the 8-puzzle it is the positions of all the tiles.
    Next, all the nodes are arranged in a graph where links between nodes represent valid steps in solving the problem. These links are known as edges. In the 8-puzzle diagram the edges are shown as blue lines. See figure 1 below.
    State space search, then, is solving a problem by beginning with the start state, and then, for each node, expanding all the nodes beneath it in the graph by applying all the possible moves that can be made at each point.

    Heuristics and Algorithms

    At this point we introduce an important concept, the heuristic. This is like an algorithm, but with a key difference. An algorithm is a set of steps which you can follow to solve a problem, which always works for valid input. For example you could probably write an algorithm yourself for multiplying two numbers together on paper. A heuristic is not guaranteed to work but is useful in that it may solve a problem for which there is no algorithm.
    We need a heuristic to help us cut down on this huge search problem. What we need is to use our heuristic at each node to make an estimate of how far we are from the goal. In pathfinding we know exactly how far we are, because we know how far we can move each step, and we can calculate the exact distance to the goal.
    But the 8-puzzle is more difficult. There is no known algorithm for calculating from a given position how many moves it will take to get to the goal state. So various heuristics have been devised. The best one that I know of is known as the Nilsson score which leads fairly directly to the goal most of the time, as we shall see.

    Cost

    When looking at each node in the graph, we now have an idea of a heuristic, which can estimate how close the state is to the goal. Another important consideration is the cost of getting to where we are. In the case of pathfinding we often assign a movement cost to each square. If the cost is uniform then the cost of each square is one. If we wanted to differentiate between terrain types we might give higher costs to grass and mud than to newly made road. When looking at a node we want to add up the cost of what it took to get here: this is simply the sum of the cost of this node and all those above it in the graph.

    8 Puzzle

    Let's look at the 8 puzzle in more detail. This is a simple sliding tile puzzle on a 3*3 grid where one tile is missing and you can move the other tiles into the gap until you get the puzzle into the goal position. See figure 1.
    Figure 1 : The 8-Puzzle state space for a very simple example 

    There are 362,880 different states that the puzzle can be in, and to find a solution the search has to find a route through them. From most positions of the search the number of edges (that's the blue lines) is two. That means that the number of nodes you have in each level of the search is 2^d where d is the depth. If the number of steps to solve a particular state is 18, then that's 262,144 nodes just at that level.

    The 8 puzzle game state is as simple as representing a list of the 9 squares and what's in them. Here are two states for example; the last one is the GOAL state, at which point we've found the solution. The first is a jumbled up example that you may start from.

    Start state: SPACE, A, C, H, B, D, G, F, E
    Goal state:  A, B, C, H, SPACE, D, G, F, E

    The rules that you can apply to the puzzle are also simple. If there is a blank tile above, below, to the left or to the right of a given tile, then you can move that tile into the space. To solve the puzzle you need to find the path from the start state, through the graph, down to the goal state.

    There is example code to solve the 8-puzzle on the google code page.

    Pathfinding

    In a video game, or some other pathfinding scenario, you want to search a state space and find out how to get from somewhere you are to somewhere you want to be, without bumping into walls or going too far. For reasons we will see later, the A* algorithm will not only find a path, if there is one, but it will find the shortest path. A state in pathfinding is simply a position in the world. In the example of a maze game like Pacman you can represent where everything is using a simple 2d grid. The start state for a ghost say, would be the 2d coordinate of where the ghost is at the start of the search. The goal state would be where pacman is so we can go and eat him. There is also example code to do pathfinding on the google code page.

    Figure 2 : The first three steps of a pathfinding state space

    Implementing A*

    We are now ready to look at the operation of the A* algorithm. What we need to do is start with the start state and then generate the graph downwards from there. Let's take the 8-puzzle in figure 1. We ask how many moves we can make from the start state. The answer is 2: there are two directions we can move the blank tile, and so our graph expands.
    If we were just to continue blindly generating successors to each node, we could potentially fill the computer's memory before we found the goal node. Obviously we need to remember the best nodes and search those first. We also need to remember the nodes that we have expanded already, so that we don't expand the same state repeatedly.
    Let's start with the OPEN list. This is where we will remember which nodes we haven't yet expanded. When the algorithm begins, the start state is placed on the open list; it is the only state we know about and we have not expanded it. So we will expand the nodes from the start and put those on the OPEN list too. Now we are done with the start node and we will put that on the CLOSED list. The CLOSED list is a list of nodes that we have expanded.

    f = g + h

    Using the OPEN and CLOSED list lets us be more selective about what we look at next in the search. We want to look at the best nodes first. We will give each node a score on how good we think it is. This score should be thought of as the cost of getting from the node to the goal plus the cost of getting to where we are. Traditionally this has been represented by the letters f, g and h. 'g' is the sum of all the costs it took to get here, 'h' is our heuristic function, the estimate of what it will take to get to the goal. 'f' is the sum of these two. We will store each of these in our nodes.
    Using the f, g and h values the A* algorithm will be directed, subject to conditions we will look at further on, towards the goal and will find it in the shortest route possible.
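
    A small sketch of the per-node bookkeeping this implies is shown below; the layout is illustrative, since the tutorial's own node type lives inside AStarSearch in stlastar.h.

        struct SearchNode {
            float g;                 // cost accumulated from the start
            float h;                 // heuristic estimate to the goal
            float f;                 // g + h, the score used to sort OPEN
            SearchNode *parent;      // node we expanded to reach this one
        };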

    So far we have looked at the components of A*; let's see how they all fit together to make the algorithm:

    Pseudocode


    Hopefully the ideas we looked at in the preceding paragraphs will now click into place as we look at the A* algorithm pseudocode. You may find it helpful to print this out or leave the window open while we discuss it.

    To help make the operation of the algorithm clear we will look again at the 8-puzzle problem in figure 1 above. Figure 3 below shows the f,g and h scores for each of the tiles.

    Figure 3 : 8-Puzzle state space showing f,g,h scores 

    First of all look at the g score for each node. This is the cost of what it took to get from the start to that node. So in the picture the center number is g. As you can see it increases by one at each level. In some problems the cost may vary for different state changes. For example in pathfinding there is sometimes a type of terrain that costs more than other types.
    Next look at the last number in each triple. This is h, the heuristic score. As I mentioned above, I am using a heuristic known as Nilsson's sequence, which converges quickly to a correct solution in many cases. Here is how you calculate this score for a given 8-puzzle state:


    Nilsson's sequence score:

      • A tile in the center scores 1 (since it should be empty).
      • For each tile not in the center, if the tile clockwise to it is not the one that should be clockwise to it, score 2.
      • Multiply this sequence total by three, and finally add the total distance you need to move each tile back to its correct position.

    Reading the source code should make this clearer, and a sketch follows below.
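
    Here is a sketch of this score in C++ for the goal layout used in this tutorial (A, B, C / H, space, D / G, F, E); the 9-character board encoding with '0' for the space is an assumption of the sketch, and the tutorial's 8puzzle.cpp may differ in details.

        #include <cstdlib>

        int nilssonScore(const char board[9])
        {
            // Board positions around the ring, clockwise from the top-left.
            static const int ring[8] = { 0, 1, 2, 5, 8, 7, 6, 3 };
            static const char goal[9] = { 'A','B','C','H','0','D','G','F','E' };

            int sequence = (board[4] != '0') ? 1 : 0;   // center should be empty
            for (int i = 0; i < 8; ++i) {
                char tile = board[ring[i]];
                if (tile == '0') continue;
                // The tile that should sit clockwise of tile: A->B, ..., H->A.
                char wanted = (tile == 'H') ? 'A' : tile + 1;
                if (board[ring[(i + 1) % 8]] != wanted) sequence += 2;
            }
            int distance = 0;       // total moves to take each tile home
            for (int pos = 0; pos < 9; ++pos) {
                if (board[pos] == '0') continue;
                int goalPos = 0;
                while (goal[goalPos] != board[pos]) ++goalPos;
                distance += std::abs(pos % 3 - goalPos % 3)
                          + std::abs(pos / 3 - goalPos / 3);
            }
            return distance + 3 * sequence;
        }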

    Looking at the picture you should satisfy yourself that the h scores are correct according to this algorithm.

    Finally look at the digit on the left, the f score. This is the sum of g and h, and by tracking the lowest f down through the state space you are doing what the A* algorithm would be doing during its search.

    Let me now look at the example source code provided with the tutorial, for although the algorithm at this stage may be clear in your mind, the implementation is a little complicated. The language of choice for this kind of algorithm is really Common Lisp or Prolog, and most universities use these when teaching; this effectively lets students focus on the algorithm rather than implementation details such as memory and data structures. For our purposes, however, I will refer to my example source code. This is in C++ and uses standard library and STL data structures.

    C++ implementation details

    If you intend to compile and run the example code then you can get it on the google code page. I have not put any project, workspace or makefiles in the archive, but compilation and linking should be straightforward; the programs run from a command line. As we will see, the A* algorithm is in a header file, since it is implemented as a template class, so to compile you need only compile one of the example files, 8puzzle.cpp or findpath.cpp.

    There are comments throughout the source, and I hope it is clear and readable. What follows is a very brief summary of how it works, and the basic design ideas.

    The main class is called AStarSearch, and is a template class. I chose to use templates because this enables the user to specialise the AStarSearch class to their user state in an efficient way. Originally I used inheritance from a virtual base class, but that led to the use of type casts in many places to convert from the base Node to the user's node. Also, templates are resolved at compile time rather than runtime, which makes them more efficient and require less memory.

    You pass in a type which represents the state part of the problem. That type must contain the data you need to represent each state, and also several member functions which get called during the search. These are described below:

    float GoalDistanceEstimate( PuzzleState &nodeGoal );
    Return the estimated cost to goal from this node

    bool IsGoal( PuzzleState &nodeGoal );
    Return true if this node is the goal

    void GetSuccessors( AStarSearch *astarsearch );
    For each successor to this state call the AStarSearch's AddSuccessor call to add each one to the current search

    float GetCost( PuzzleState *successor );
    Return the cost moving from this state to the state of successor

    bool IsSameState( PuzzleState &rhs );
    Return true if the provided state is the same as this state

    The idea is that you should easily be able to implement different problems. All you need do is create a class to represent a state in your problem, and then fill out the functions above.
    Once you have done that, you create a search class instance like this:

    AStarSearch<PuzzleState> astarsearch;

    Then create the start and goal states and pass them to the algorithm to initialize the search:

    astarsearch.SetStartAndGoalStates( nodeStart, nodeEnd ); 
    Each step (a step is getting the best node and expanding its successors) you call:

    SearchState = astarsearch.SearchStep();

    This returns a status which lets you know whether the search succeeded, failed, or is still going.

    Once your search succeeds you need to be able to display it to the user, or use it in your program. To facilitate this I have added functions to allow movement through the solution.

    UserState *GetSolutionStart();
    UserState *GetSolutionNext();

    UserState *GetSolutionEnd();
    UserState *GetSolutionPrev();

    You use these to move an internal iterator through the solution. The most typical use would be to call GetSolutionStart (which returns the start state) and then iterate through each node using GetSolutionNext. For debugging purposes, or for some problems, you may need to iterate through the solution backwards, and the second two functions allow that. A skeleton user state is sketched below.
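
    Below is a skeleton of a user state filling out the functions listed above, for a trivial problem: walking along positions 0..9 in one dimension. The AStarSearch declaration here is a stand-in so the sketch is self-contained; in real use you would include the tutorial's stlastar.h instead, and the exact signatures there may take extra parameters depending on the version.

        #include <cmath>

        class WalkState;

        class AStarSearch {                    // stand-in for the tutorial's class
        public:
            bool AddSuccessor( WalkState &state );
        };

        class WalkState
        {
        public:
            int pos;
            WalkState( int p = 0 ) : pos( p ) {}

            float GoalDistanceEstimate( WalkState &nodeGoal )    // h(n)
                { return std::fabs( (float)( nodeGoal.pos - pos ) ); }

            bool IsGoal( WalkState &nodeGoal )
                { return pos == nodeGoal.pos; }

            void GetSuccessors( AStarSearch *astarsearch )       // legal moves
            {
                WalkState left( pos - 1 ), right( pos + 1 );
                if ( pos > 0 ) astarsearch->AddSuccessor( left );
                if ( pos < 9 ) astarsearch->AddSuccessor( right );
            }

            float GetCost( WalkState *successor )                // uniform steps
                { return 1.0f; }

            bool IsSameState( WalkState &rhs )
                { return pos == rhs.pos; }
        };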

    Debugging and Educational functions

    Let's say you decide to display the OPEN and CLOSED lists at each step of the solution. This is a common debug feature whilst getting the algorithm working. Further, for the student it is often easier to see what is going on this way. Using the following calls you can display the lists during the search process...

    UserState *GetOpenListStart( float &f, float &g, float &h );
    UserState *GetOpenListNext( float &f, float &g, float &h );
    UserState *GetClosedListStart( float &f, float &g, float &h );
    UserState *GetClosedListNext( float &f, float &g, float &h );

    As you can see, these calls take references to float values for f, g and h, so if your debugging or learning needs involve looking at these values you can pass floats in to store the results. If you don't care about them, these arguments are optional.

    Examples of how you use these features are present in both the findpath.cpp and 8puzzle.cpp example files.

    I hope that at this point you will understand the key concepts you need, and by reading and experimenting with the example code (stepping through it with a debugger is very instructive) you hopefully will fully grasp the A* Algorithm. To complete this introduction I will briefly cover Admissibility and Optimization issues. 

    Admissibility

    Any graph search algorithm is said to be admissible if it always returns an optimal solution, that is, the one with the lowest cost, if a solution exists at all.
    However, A* is only admissible if the heuristic you use, h', never over-estimates the distance to the goal. In other words, if you knew a heuristic h which always gave the exact distance to the goal, then to be admissible h' must be less than or equal to h.
    For this reason, when choosing a heuristic you should always try to ensure that it does not over-estimate the distance to the goal. In practice this may be impossible. Look at the 8-puzzle, for example: with our heuristic above it is possible that we may get an estimated cost to goal that is higher than is really necessary. But it does help to be aware of this theory. If you set the heuristic to return zero, you will never over-estimate the distance to the goal, but what you will get is a simple search of every node generated at each step (breadth-first search).
    One final note about admissibility: there is a corollary to this theory called the Graceful Decay of Admissibility, which states that if your heuristic rarely over-estimates the real distance to the goal by more than a certain value (let's call it E), then the algorithm will rarely find a solution which costs more than E over the cost of the optimal solution.


    Optimization

    A good source of optimizations for A* can be found in Steve Rabin's chapters in Game Gems, which is on the books page. The forthcoming book AI Wisdom by the same publisher is going to have several chapters on optimization of A*. These of course focus on pathfinding, which is the ubiquitous use of A* in games.
    Optimizing pathfinding is a whole subject in itself, and I only want to target the A* algorithm for general use, but there are some obvious optimizations you will want to make for most problems. After testing my example code with VTune, I found the two main bottlenecks were searching the OPEN and CLOSED lists for a new node, and managing new nodes. A simple but very effective optimization was to write a simpler memory allocator than the one standard C++ new uses. I have provided the code for this class and you can enable it in stlastar.h. I may write a tutorial on it in the future if there is sufficient interest.
    Since you always want to get the node with the lowest 'f' score off the OPEN list on each search loop, you can use a data structure called a 'priority queue'. This enables you to organise your data in a way in which the best (or worst, depending on how you set it up) item can always be removed efficiently. Steve Rabin's chapter in the book above shows how to use an STL vector along with the heap operations to get this behaviour. My source code uses this technique.
    If you are interested in priority queues, follow the link above to my old A* tutorial, as I implemented one from scratch in C, and the source code has been used in public projects such as FreeCell Solver.
    Another optimization is that instead of searching the lists you should use a hash table, which prevents you having to do a linear search. A third optimization is that you never need to backtrack in a graph search. If you look at pathfinding, for example, you will never be nearer to the goal if you step back to where you came from. So when you write your code to generate the successors of a node, you can check the generated ones and eliminate any states that are the same as the parent. Although this makes no difference to the operation of the algorithm, it does make the search quicker.
    The key to optimization is not to do it until you have your code working and you know that your problem is correctly represented. Only then can you start to optimize the data structures to work better for your own problem. Using VTune or True Time, or whatever profiler you have available is the next step. In some problems checking to see if something is the goal or not may be costly, whilst in others generating the successor nodes at each step may be a significant bottleneck. Profiling takes the guesswork out of finding where the bottleneck is, so that you can target the key problems in your application.

    Report bugs and get the latest code from google project

    a-star-algorithm-implementation

    For more information


  • The original tutorial was inspired by Bryan Stout's pathfinding tutorial in Game Developer:
    http://www.gamasutra.com/features/19990212/sm_01.htm
  • I also recommend Stephen Woodcock's website:
    http://www.gameai.com
  • Books: see the books page; the AI section contains two good textbooks which include good sections on the A* algorithm.

    Other implementations on the web

    If this list is out of date, or you would like to add an implementation link, I'd be grateful for your email.

  • Sune Trudslev has written a tutorial and provided C# code
    http://www.codeproject.com/csharp/CSharpPathfind.asp
  • Thomas Grubb's Delphi Pathfinding demo
    http://www.riversoftavg.com/downloads.htm
  • Director (using Lingo) pathfinding demo
    http://www.spritelab.dk/beta/Astar2.html
  • James MacGill, Leeds University, has a useful Java based demo
    http://www.ccg.leeds.ac.uk/james/aStar
  • Lisp link
    http://yoda.cis.temple.edu:8080/UGAIWWW/resources/search-resources.html
  • Geert-Jan van Opdorp's Java A* Implementation
    http://www.gameai.com/javastar.html
  • Blair Robertson's A* pathfinder in C (Mac OS9)
    http://www.idevgames.com/fileshow.php3?showid=179