Problem Space of the Tower of London Test

 

Facts and figures

 

Eric J. Fimbel - Research results and data sets - home page 


This page replaces http://www.ele.etsmtl.ca/profs/efimbel/tol/

Citation: Fimbel, E.J. (2009) Problem Space of the Tower of London Test. Available at http://tolspace.googlepages.com/. Last retrieved: (mm/dd/yyyy)

Publication: Fimbel, E.J., Lauzon, S., Rainville, C. (2009) Performance of Humans vs. Exploration Algorithms on the Tower of London Test. PLoS ONE 4(9): e7263. doi:10.1371/journal.pone.0007263 download draft (pdf)



The Tower of London Test was designed by Shallice (1982, Philosophical Transactions of the Royal Society of London, B, 298, 199-209) to assess planning deficits in patients with lesions of the frontal lobe. The device consists of three colored beads placed on three rods that can hold 1, 2, and 3 beads, respectively. The goal is to reach a target configuration in a minimal number of moves.

Figure 1. Two tasks of the TOL. A (configurations 36 -> 25) requires 2 moves; B (configurations 36 -> 52) requires 8 moves.


 

Figure 2. Possible configurations of the TOL. The numbers follow the convention of Berg & Byrd (2002, J. of Clinical and Experimental Neuropsychology, 24(5), 586-604).


The problem space of the TOL is a graph that contains (in a broad sense) all the possible problems. The vertices (or nodes) are the configurations and the edges are the possible moves. 

A task (also called a specific problem) is defined as a pair of nodes. The solutions are the shortest paths between these nodes.

Figure 3. Problem space of the TOL

 

Facts about the problem space

  • There are 36 nodes (vertices). 
  • There are 54 connections (edges), corresponding to 108 possible moves.
  • The diameter, i.e., the maximal distance between two nodes counted in edges, is 8.
  • The branching factor, i.e., the number of possible moves from a configuration, is between 2 and 4 (average 3).
  • There is a 6th-order symmetry (Fig. 2): there are 6 physical arrangements of beads (irrespective of color), and within each of them, 6 permutations of colors.
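
These counts can be checked mechanically. The following is a minimal sketch in C (not the authors' program; the State encoding and all names such as find_or_add are my own) that enumerates the problem space by breadth-first search from one configuration and prints the number of nodes, moves, edges, and the diameter; the expected output is 36 nodes, 108 moves, 54 edges, diameter 8. The nodes, adj, and deg arrays are reused by the sketches further down this page.

#include <stdio.h>
#include <string.h>

/* Peg p (0..2) holds at most p+1 beads; beads are numbered 1..3.
   Empty slots are 0, so configurations compare with memcmp. */
typedef struct { int peg[3][3]; int top[3]; } State;

static State nodes[40];      /* discovered configurations      */
static int   n_nodes = 0;
static int   adj[40][8];     /* possible moves from each node  */
static int   deg[40];        /* branching factor of each node  */

static int find_or_add(const State *s) {
    for (int i = 0; i < n_nodes; i++)
        if (memcmp(&nodes[i], s, sizeof *s) == 0) return i;
    nodes[n_nodes] = *s;
    return n_nodes++;
}

int main(void) {
    State start;             /* all three beads on the large peg */
    memset(&start, 0, sizeof start);
    start.peg[2][0] = 1; start.peg[2][1] = 2; start.peg[2][2] = 3;
    start.top[2] = 3;

    int queue[40], head = 0, tail = 0, n_moves = 0;
    queue[tail++] = find_or_add(&start);

    while (head < tail) {                 /* breadth-first enumeration */
        int u = queue[head++];
        for (int from = 0; from < 3; from++) {
            if (nodes[u].top[from] == 0) continue;
            for (int to = 0; to < 3; to++) {
                if (to == from || nodes[u].top[to] == to + 1) continue;
                State t = nodes[u];       /* move the top bead */
                int bead = t.peg[from][--t.top[from]];
                t.peg[from][t.top[from]] = 0;
                t.peg[to][t.top[to]++] = bead;
                int before = n_nodes;
                int v = find_or_add(&t);
                if (n_nodes > before) queue[tail++] = v;
                adj[u][deg[u]++] = v;
                n_moves++;
            }
        }
    }

    int diameter = 0;                     /* longest shortest path */
    for (int s = 0; s < n_nodes; s++) {
        int dist[40], q[40], h = 0, t = 0;
        memset(dist, -1, sizeof dist);
        dist[s] = 0; q[t++] = s;
        while (h < t) {
            int u = q[h++];
            for (int k = 0; k < deg[u]; k++)
                if (dist[adj[u][k]] < 0) {
                    dist[adj[u][k]] = dist[u] + 1;
                    q[t++] = adj[u][k];
                }
        }
        for (int i = 0; i < n_nodes; i++)
            if (dist[i] > diameter) diameter = dist[i];
    }

    printf("nodes %d, moves %d, edges %d, diameter %d\n",
           n_nodes, n_moves, n_moves / 2, diameter);
    return 0;
}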

 

Facts about the tasks

  • There are 1296 different tasks (36 x 36 pairs of configurations) including the 36 trivial tasks in which the initial and final configurations are the same.
  • There are 216 groups of equivalent tasks given the 6th-order symmetry (within each group, there are 6 equivalent tasks obtained by permuting the colors).
  • The tasks require between 0 and 8 moves.

number of tasks as a function of distance 

number of moves per task

  • The tasks have between 1 and 8 solutions, i.e., paths of minimal length in the search space. For tasks of up to 2 moves there is only 1 solution; beyond that, the number of solutions increases.

average number of solutions as a function of distance

number of solutions per task
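
Assuming the nodes, adj, and deg arrays built by the sketch above, these task statistics can be reproduced by a breadth-first pass per initial configuration that also counts shortest paths: when a node is reached for the first time it inherits the path count of its predecessor, and when it is reached again at the same distance the counts add up. A sketch (task_statistics is a hypothetical helper appended to the same file):

/* Tabulate, for every ordered pair (i, f), the distance and the
   number of shortest paths (solutions). */
static void task_statistics(void) {
    int tasks_at[9] = {0};            /* tasks per distance 0..8 */
    int max_solutions = 0;
    for (int i = 0; i < n_nodes; i++) {
        int dist[40], npaths[40], q[40], h = 0, t = 0;
        memset(dist, -1, sizeof dist);
        memset(npaths, 0, sizeof npaths);
        dist[i] = 0; npaths[i] = 1; q[t++] = i;
        while (h < t) {
            int u = q[h++];
            for (int k = 0; k < deg[u]; k++) {
                int v = adj[u][k];
                if (dist[v] < 0) {            /* first visit */
                    dist[v] = dist[u] + 1;
                    npaths[v] = npaths[u];
                    q[t++] = v;
                } else if (dist[v] == dist[u] + 1) {
                    npaths[v] += npaths[u];   /* another shortest path */
                }
            }
        }
        for (int f = 0; f < n_nodes; f++) {
            tasks_at[dist[f]]++;
            if (npaths[f] > max_solutions) max_solutions = npaths[f];
        }
    }
    for (int d = 0; d <= 8; d++)
        printf("distance %d: %d tasks\n", d, tasks_at[d]);
    printf("max solutions per task: %d\n", max_solutions);
}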


Complexity facts

The TOL is a particular case of the shortest-path problem, i.e., finding a path between 2 nodes in an unknown graph.

This problem is solved by means of general exploration algorithms that:

  • receive as parameters the graph and a pair of nodes I, F at a distance N;
  • produce a solution, i.e., a path of length N from I to F.

The algorithmic complexity of this problem, i.e., the rate of growth of the execution time as a function of the distance, depends on the graph. If we only know the branching factor b of the graph, the worst case and the average case will be in O(b^N), which means that the execution time grows exponentially. For instance, with b = 3 and N = 8, up to 3^8 = 6561 paths may have to be examined.

The problem space of the TOL is known, so this coarse estimate can be refined. We now examine the execution times of two families of exploration algorithms on the TOL, depth-first and broad-first. For details, see:

Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C. (2001). Introduction to Algorithms, 2nd edition. MIT Press.


Depth-first exploration algorithms

Their strategy is to examine a path and its branching sub-paths in depth: if a solution is found, stop; otherwise, examine the next path. These algorithms require minimal memory, but they must know (or guess) in advance the distance N.

The execution time is roughly proportional to the number of paths of length N. We can omit the paths of length < N (they are also examined, but they are fewer than the paths of length N).
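
As an illustration only (this is not the paper's algorithm), such a depth-first exploration with known distance N can be written as a depth-limited search over the adj and deg arrays of the first sketch; the second variant adds the loop control discussed below by marking the nodes of the current path:

/* Depth-limited search: explore every path of length exactly n
   from u, stopping at the first one that ends on the target f. */
static int dls(int u, int f, int n) {
    if (n == 0) return u == f;
    for (int k = 0; k < deg[u]; k++)
        if (dls(adj[u][k], f, n - 1)) return 1;
    return 0;
}

/* Variant with loop control: on_path[] marks the nodes of the
   current path, so only direct paths are examined. */
static int dls_direct(int u, int f, int n, int on_path[]) {
    if (n == 0) return u == f;
    on_path[u] = 1;
    for (int k = 0; k < deg[u]; k++) {
        int v = adj[u][k];
        if (!on_path[v] && dls_direct(v, f, n - 1, on_path)) {
            on_path[u] = 0;
            return 1;
        }
    }
    on_path[u] = 0;
    return 0;
}

In the worst case, dls(i, f, N) visits every path of length N starting at i, which is why the execution time follows the number of such paths.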

average number of paths as a function of distance

number of paths per initial configuration

If you draw a trendline, you see that the number of paths increases as 3.16^N. This is above the general prediction b^N with the average branching factor b = 3.

If the algorithm has loop control, we consider only the direct paths (those that contain no loops), because the others are discarded immediately.

average number of direct paths as a function of distance

number of direct paths per initial configuration

In both cases, the execution time of depth-first exploration algorithms increases exponentially with the distance N. 

Warning: these results are only analytical predictions.

Broad-first exploration algorithms
 
Their strategy is to examine the nodes in order of increasing distance until the target node is found. When memory is unbounded, these algorithms can store every node and edge already examined, so that nothing is examined again.

Their execution time is coarsely proportional to the number of nodes and edges at a distance N or lower.

average number of nodes + edges as a function of distance 

number of nodes + edges per initial configuration
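
For comparison, a broad-first search with unbounded memory over the same arrays can mark every node it has seen, so each node and edge is examined at most once. A minimal sketch (bfs_steps is a hypothetical name; steps counts examined nodes and edges):

/* Broad-first search with unbounded memory: seen[] guarantees that
   nothing is examined twice, so steps is at most the number of nodes
   plus edges within distance N of the start. */
static int bfs_steps(int i, int f) {
    int seen[40] = {0}, q[40], h = 0, t = 0, steps = 0;
    seen[i] = 1; q[t++] = i;
    while (h < t) {
        int u = q[h++];
        steps++;                          /* one node examined */
        if (u == f) break;
        for (int k = 0; k < deg[u]; k++) {
            steps++;                      /* one edge examined */
            int v = adj[u][k];
            if (!seen[v]) { seen[v] = 1; q[t++] = v; }
        }
    }
    return steps;
}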

If you draw a trendline on the average number of nodes+edges, you see that it increases linearly with the distance.

In the TOL, the execution time of broad-first exploration algorithms with unbounded memory increases linearly with the distance N.

Warning: these results are only analytical predictions.

A trade-off between memory and execution time

In summary, if the algorithm has only bounded memory, the execution time is exponential (i.e., it grows exponentially with the distance) whatever the category of algorithm.

 B(N) ~ b^N

If the algorithm has unbounded memory, it is possible to solve the problem of the TOL in linear time (i.e., the execution time grows linearly with the distance).

U(N) ~ N

We can even do better. If we use a priori knowledge, it is possible to solve the problem in constant time. For instance, use a lookup table that contains a solution for each pair of configurations.

C(N) ~ 1
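
For a 36-node graph such a table is cheap to build in advance. A hypothetical sketch (next_hop and build_table are my own names, again over the arrays of the first sketch): next_hop[i][f] stores the first node on one shortest path from i to f, obtained by a breadth-first search from every target.

/* next_hop[i][f] = node after i on one shortest path from i to f
   (i itself when i == f). Built once by a BFS from every target. */
static int next_hop[40][40];

static void build_table(void) {
    for (int f = 0; f < n_nodes; f++) {
        int dist[40], q[40], h = 0, t = 0;
        memset(dist, -1, sizeof dist);
        dist[f] = 0; q[t++] = f;
        while (h < t) {
            int u = q[h++];
            for (int k = 0; k < deg[u]; k++) {
                int v = adj[u][k];
                if (dist[v] < 0) { dist[v] = dist[u] + 1; q[t++] = v; }
            }
        }
        for (int i = 0; i < n_nodes; i++) {
            next_hop[i][f] = i;
            for (int k = 0; k < deg[i]; k++)
                if (dist[adj[i][k]] == dist[i] - 1) {  /* one move closer to f */
                    next_hop[i][f] = adj[i][k];
                    break;
                }
        }
    }
}

Solving a task (i, f) is then a sequence of constant-time lookups: while (i != f) i = next_hop[i][f];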

Note that a priori information can be represented as pre-defined variables or embedded in the algorithm itself, i.e., an algorithm designed specifically for the TOL.

However, such ad-hoc algorithms are of little interest here: the TOL test is aimed at naive participants, so the algorithms should have no a priori information either.


Specific difficulty of each task vs. analytical results

The problem space of the TOL is bounded and presents symmetries; therefore, the execution time may differ even if the tasks have the same distance.

We use an empirical method to determine the specific difficulty of each task: we measure the average execution time of a random algorithm. Each execution of a random algorithm (also called a probabilistic algorithm) may use a different "strategy" to find a solution.

The execution time of a random algorithm averaged across repetitions is thus equivalent to the average across a set of deterministic strategies.

In practical terms, we do not measure the execution time itself. We characterize it by counting the number of steps (i.e., examined nodes and edges). This is called the barometer instruction technique. The actual execution time is almost proportional to the number of steps. For details, see:

Brassard, G., Bratley, P. (1996). Fundamentals of Algorithmics. Prentice Hall.


Random broad-first exploration algorithm

The random algorithm is broad-first and has bounded memory (somewhat similar to human working-memory limitations). Therefore, the nodes and edges that are examined cannot be marked, and the execution time is expected to grow exponentially.

The strategy is to examine the nodes in order of increasing distance until the target node is found. For each node, the possible moves are examined in a random order. The algorithm stores the solution under construction and does not examine the moves that may cause a loop.

random broad-first exploration algorithm (pdf)
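
The pdf above documents the actual algorithm. As a rough approximation of the description, and under the assumption that examining nodes in order of increasing distance with bounded memory can be rendered as iterative deepening, one run might look like the sketch below (probe and one_run are hypothetical names; steps plays the role of the barometer instruction):

#include <stdlib.h>   /* rand */

static long steps;    /* barometer: examined nodes and edges */

/* Depth-limited probe with randomized move order; path[] holds the
   solution under construction and provides the loop control. */
static int probe(int u, int f, int n, int path[]) {
    steps++;                                /* node examined */
    if (n == 0) return u == f;
    int order[8];
    for (int k = 0; k < deg[u]; k++) order[k] = k;
    for (int k = deg[u] - 1; k > 0; k--) {  /* Fisher-Yates shuffle */
        int j = rand() % (k + 1);
        int tmp = order[k]; order[k] = order[j]; order[j] = tmp;
    }
    path[u] = 1;
    for (int k = 0; k < deg[u]; k++) {
        steps++;                            /* edge examined */
        int v = adj[u][order[k]];
        if (!path[v] && probe(v, f, n - 1, path)) {
            path[u] = 0;
            return 1;
        }
    }
    path[u] = 0;
    return 0;
}

/* One run: raise the depth limit until the target is reached, i.e.,
   examine nodes in order of increasing distance with bounded memory. */
static long one_run(int i, int f) {
    steps = 0;
    for (int n = 0; ; n++) {
        int path[40] = {0};
        if (probe(i, f, n, path)) return steps;
    }
}

Averaging one_run(i, f) over many repetitions per task then yields a step count comparable to the index Id below.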

The number of steps was averaged over 2048 repetitions for each task. This provides an index of algorithmic difficulty, Id, for each task.

number of steps of a random broad-first algorithm per task

average number of steps as a function of distance

average number of steps per initial configuration

Globally, the algorithmic difficulty increases exponentially with the distance, but there may be important differences between tasks with the same distance.

If you draw a trendline on the average number of steps as a function of distance, you see that the number of steps increases as 3.06^N. This is between the general prediction 3^N and the analytical prediction for bounded memory (number of paths ~ 3.12^N).

Working data

The foregoing data is summarized:

by tasks

by distances

general excel datasheet (xls 8 mb)

Fossil software

This program implements the random broad-first exploration algorithm, algorithms to compute the distances, numbers of paths, etc., and a clumsy graphical interface for solving problems interactively.

lontow.c (zip 3mb)

The program is written in Turbo C for DOS (it was adapted from an older program that was adapted from ... etc.). The zip file contains a standalone installation for Windows XP and the like.

  • Unzip the file and go to subdirectory ../commands/
  • Execute editTol.bat for the Turbo C environment.
  • Execute runTol.bat for the executable.

Note: I expect that Turbo C for DOS is now in the public domain. Otherwise, don't download this program.

 

Further readings about Algorithmic Complexity

Hartmanis, J. (1981). Observations about the development of theoretical computer science. Annals of the History of Computing, 3, 42-51.

Johnson, D. S. (1990). A catalog of complexity classes. In van Leeuwen, J., ed., Handbook of Theoretical Computer Science, Vol A, 67-161. Amsterdam: North-Holland.

 


Copyright: (c) 2009 E.J. Fimbel, S. Lauzon, C. Rainville. This is open-access content distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.