Human decision-making under computational complexity
Peter Bossaerts @ University of Melbourne
Abstract:
We know a lot about how humans deal with one type of uncertainty, where trial-and-error (reinforcement learning) works effectively, such as in foraging, in gambling, or in repairing, tuning, and even in strategic games. But what if uncertainty is generated by computational complexity? Theoretically, one cannot deal effectively with it by means of trial-and-error. And indeed, humans follow fundamentally different strategies when faced with complexity. Yet they rarely resolve uncertainty completely. The talk will summarize fifteen years of research on human attitudes towards complexity. It will show, among others, what makes a decision difficult for humans, how the theory of computation sheds light on it, how well humans approximate correct solutions, and how social interaction through markets may help.
Bio:
Peter Bossaerts is Redmond Barry Distinguished Professor at the University of Melbourne. He pioneered the use of controlled experimentation (with human participants) in the study of financial markets. He also pioneered the use of decision and game theory in cognitive neuroscience, thereby helping establish the novel fields of neuroeconomics, decision neuroscience and computational neuropsychiatry. Recently, he has started to use computer science to study human and market behavior when uncertainty emerges because of complexity. Bossaerts graduated with a PhD from UCLA and spent most of his career at the California Institute of Technology (Caltech). He has also held positions at Carnegie Mellon University, Ecole Polytechnique Fédérale de Lausanne (EPFL), and the University of Utah, among others. Later in 2022, he will join the Faculty of Economics at Cambridge University to take up a Leverhulme Trust International Professorship. Bossaerts is elected Fellow of the Econometric Society, the Society for The Advancement of Economic Theory, and the Academy of the Social Sciences in Australia.
Summary:
The focus is on making complex decisions and the human biases associated with them
Examples:
Knapsack Problem
Given: some number of values
Optimization Task: choose a subset of the values that maximizes the objective subject to a constraint (e.g., get as close as possible to a target sum without exceeding it)
Decision Task: determine whether the constraint can be satisfied (a yes/no answer)
Visual cues are used so that participants need as little mental arithmetic as possible
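A minimal sketch of the two task types (not the experimental software from the talk), using brute-force enumeration over subsets; the item values here are hypothetical:

```python
from itertools import combinations

def knapsack_decision(values, target):
    """Decision task: can some subset of the values sum exactly to the target?"""
    return any(sum(c) == target
               for r in range(len(values) + 1)
               for c in combinations(values, r))

def knapsack_optimization(values, capacity):
    """Optimization task: the best achievable sum not exceeding the capacity."""
    return max(sum(c)
               for r in range(len(values) + 1)
               for c in combinations(values, r)
               if sum(c) <= capacity)

values = [4, 9, 3, 7, 5]                  # hypothetical item values
print(knapsack_decision(values, 16))      # True (4 + 9 + 3)
print(knapsack_optimization(values, 18))  # 18  (4 + 9 + 5)
```

Enumeration of all 2^n subsets is what makes both tasks blow up: the decision version only has to certify a yes/no answer, while the optimization version must rank every feasible subset.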
Traveling Salesperson
Variants: Euclidean or random structure, shortest path
These are NP-complete problems, so (unless P = NP) no polynomial-time algorithm solves every instance; known algorithms require exponential time in the worst case (and it is unclear what average-case complexity means for these)
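To make the exponential blow-up concrete, here is a brute-force solver for the Euclidean TSP variant (city coordinates are hypothetical); it checks all (n-1)! tours, which is already infeasible for modest n:

```python
from itertools import permutations
from math import dist

def shortest_tour(cities):
    """Brute-force TSP: fix the first city, try every ordering of the rest,
    and return the length of the shortest closed tour."""
    first, rest = cities[0], cities[1:]
    best = None
    for perm in permutations(rest):
        tour = (first,) + perm + (first,)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # hypothetical: unit square
print(shortest_tour(cities))               # 4.0 (the perimeter)
```

Swapping `math.dist` for arbitrary edge weights gives the random-structure variant mentioned above; the search space is identical, which is why both variants share the same worst-case complexity.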
Metric of the complexity of individual problem instances:
Sahni-k
Run the greedy algorithm (add items in order of value density) to fill the knapsack
If that solves the instance, done (Sahni-k = 0)
Otherwise, try forcing each subset of k items into the knapsack and completing greedily, for k = 1, 2, ...
Complexity (Sahni-k) = the smallest k for which some forced subset yields the optimal solution
Highest Sahni-k they have observed a human solve correctly: 6
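A sketch of the Sahni-k metric as described above (a brute-force check against the exact optimum, suitable only for the small instances used in experiments; the item list is hypothetical):

```python
from itertools import combinations

def greedy_fill(items, capacity, forced=()):
    """Pack the forced items first, then add the rest greedily by value density."""
    value = sum(items[i][0] for i in forced)
    weight = sum(items[i][1] for i in forced)
    rest = sorted((i for i in range(len(items)) if i not in forced),
                  key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in rest:
        v, w = items[i]
        if weight + w <= capacity:
            value, weight = value + v, weight + w
    return value

def optimum(items, capacity):
    """Exact optimum by brute-force enumeration (fine for small instances)."""
    return max(sum(items[i][0] for i in c)
               for r in range(len(items) + 1)
               for c in combinations(range(len(items)), r)
               if sum(items[i][1] for i in c) <= capacity)

def sahni_k(items, capacity):
    """Smallest k such that forcing some feasible k-item subset into the
    knapsack and filling the rest greedily reaches the exact optimum."""
    best = optimum(items, capacity)
    for k in range(len(items) + 1):
        for forced in combinations(range(len(items)), k):
            if (sum(items[i][1] for i in forced) <= capacity
                    and greedy_fill(items, capacity, forced) == best):
                return k

# Hypothetical (value, weight) items; greedy alone fails, so Sahni-k > 0
items = [(60, 10), (100, 20), (120, 30)]
print(sahni_k(items, 50))  # 2: both heavy items must be forced in
print(sahni_k(items, 60))  # 0: plain greedy already finds the optimum
```

The loop always terminates: forcing the optimal subset itself reproduces the optimum, so some k up to the subset's size succeeds.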
Success of reinforcement learning to find optimal solution
A knapsack instance with just 85 possible item combinations
After 30 trials, the algorithm has only a 5% chance of having found the optimum
~400 trials are needed for a 95% chance of success
Solution space is extremely heterogeneous
Both computer algorithms and humans explore the space along many different paths
Efforts to characterize how humans search the space work poorly because of this heterogeneity
Humans find the optimal solution ~45% of the time.
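For a sense of scale on the trial counts above, here is a memoryless baseline (uniform random guessing with replacement, not the reinforcement-learning algorithm from the study): the number of draws needed to hit the single optimal combination with a given probability.

```python
from math import ceil, log

def trials_needed(n_combinations, confidence):
    """Independent uniform draws (with replacement) needed so that the
    single optimal combination is found with the given probability."""
    p_miss = 1 - 1 / n_combinations
    return ceil(log(1 - confidence) / log(p_miss))

print(trials_needed(85, 0.95))  # 254
```

That even this naive baseline needs hundreds of draws for 85 combinations illustrates why trial-and-error scales poorly on complex instances; the RL figures quoted above are of a similar order.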
How well do they approximate?
Decision task: the theory’s predictions about which instances are easier to approximate agree with human approximation performance
Optimization task: for humans, the weighted shortest-path problem is much easier than the other problems, even though theory identifies no difference in their approximability
Does communication via markets help people solve these problems?
Market setup: people invest in securities tied to individual objects
A security pays off if its object ends up in the (optimal) knapsack
In markets, participants find the solution 27% of the time, versus 17% for individuals working alone
This is surprising because traders effectively give away information through their market activity without any direct benefit to themselves
A second market design: people invest in a security whose payoff depends on the value of the optimal solution, not the solution itself
Across a range of problem complexities, the market priced this value correctly
It appears that a few participants getting close prompts others to look more deeply
No bubbles or market instability were observed
Individuals working alone do more poorly, with many never finding the answer
They conducted a clinical trial
Cognition-enhancing drugs (e.g., methylphenidate/Ritalin) were given to people performing the above tasks
The experiments show that the drugs cause people to spend more time on the tasks but leave them less successful at finding solutions