Optimizing the Design of an Experiment using the ADOpy Package:
An Introduction and Tutorial
Tutorial Session at the 41st Annual Meeting of the Cognitive Science Society
Wednesday (July 24) at 9:00 am - noon in room 524A
Palais des Congrès de Montréal in Montreal, Canada
Introduction: Experimentation is a cornerstone of cognitive science, whether one is interested in understanding the mechanisms underlying cognitive control or the neural basis of decision-making. A productive research enterprise depends on good data, which in turn depends on a good experimental design. Unfortunately, not all experimental designs are equally informative, and as researchers well know, it can be difficult to design an experiment that produces clear-cut results. This is because the consequences of design decisions are not known in advance of data collection. Heuristic designs based on one's experience in the field, the success of past studies, or pilot experiments can assist in improving some design choices, but they rarely provide sufficient insight into experiment outcomes.
Adaptive design optimization (ADO): Advances in Bayesian statistics and machine learning offer algorithm-based ways to identify good experimental designs. Adaptive design optimization (ADO; Cavagnaro, Myung, Pitt, & Kujala, 2010; Myung, Cavagnaro, & Pitt, 2013) is one such method for improving inference by maximizing the informativeness and efficiency of data collected in an experiment. ADO originates from optimal experimental design in statistics (Atkinson & Donev, 1992; Chaloner & Verdinelli, 1995) and active learning in machine learning (Cohn, Atlas, & Ladner, 1994; Settles, 2012). ADO is a model-based algorithm that exploits the characteristic predictions of a computational model of task performance to guide stimulus selection trial after trial, attaining the experimental objective in an optimal and efficient manner. The stimulus chosen on each trial is intended to be the most informative or diagnostic with respect to the specific objective, whether parameter estimation or model discrimination. This is achieved by judiciously combining on-the-fly analysis of participant responses from past trials with model predictions. In parameter estimation, ADO chooses stimuli in the design space that are expected, in an information-theoretic sense, to reduce uncertainty about parameter values the most. In model discrimination, ADO chooses those stimuli for which the models make the most disparate predictions.
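To make the trial-by-trial loop concrete, the parameter-estimation variant of ADO can be sketched in plain Python. The logistic psychometric model, the design/parameter grids, and the simulated participant below are illustrative assumptions (not part of ADOpy): on each trial the design that maximizes the mutual information between the upcoming response and the model parameter is presented, and the posterior over parameters is then updated by Bayes' rule.

```python
import math
import random

def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli outcome with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def p_yes(intensity, threshold):
    """Hypothetical logistic psychometric function (illustrative model)."""
    return 1.0 / (1.0 + math.exp(-(intensity - threshold)))

# Grids over the design space (stimulus intensities) and parameter space (thresholds).
designs = [i * 0.5 for i in range(21)]            # 0.0 .. 10.0
thetas = [i * 0.5 for i in range(21)]
prior = {t: 1.0 / len(thetas) for t in thetas}    # uniform prior over thresholds

def best_design(belief):
    """Choose the design maximizing mutual information between response and parameter."""
    def mutual_info(d):
        p_marginal = sum(belief[t] * p_yes(d, t) for t in thetas)
        expected_cond = sum(belief[t] * binary_entropy(p_yes(d, t)) for t in thetas)
        return binary_entropy(p_marginal) - expected_cond
    return max(designs, key=mutual_info)

def update(belief, d, y):
    """Bayes' rule: posterior over thresholds after observing response y at design d."""
    post = {t: belief[t] * (p_yes(d, t) if y == 1 else 1.0 - p_yes(d, t)) for t in thetas}
    z = sum(post.values())
    return {t: post[t] / z for t in post}

# Simulate a short ADO experiment with a "participant" whose true threshold is 5.0.
random.seed(0)
for trial in range(20):
    d = best_design(prior)
    y = 1 if random.random() < p_yes(d, 5.0) else 0
    prior = update(prior, d, y)

estimate = max(prior, key=prior.get)   # posterior mode over the threshold grid
```

After a handful of trials the chosen intensities cluster around the participant's threshold, where responses are most informative, and the posterior mode converges toward the true value.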
ADOpy: ADOpy is an open-source Python package that implements ADO. The package provides high-level, modular commands so that users can apply ADO without having to understand the computational details of the algorithm. It is available on GitHub at https://github.com/adopy/adopy, with three ready-to-use experimental tasks in psychophysics, delay discounting, and risky choice.
Tutorial format: The purpose of the tutorial is to introduce ADOpy to cognitive scientists at large. This tutorial is based on a manuscript currently under review (https://psyarxiv.com/mdu23/). The first part of the tutorial provides a conceptual introduction to ADO along with examples of its application, followed by an overview of the setup and installation of ADOpy. The second part consists of hands-on training with ADOpy, including worked examples. Also demonstrated is how to convert a non-ADO task into an ADO-based task in PsychoPy. (Note: The present tutorial is related to an earlier Workshop on Optimal Experimental Design held in Summer 2015 during the 37th Annual Meeting of the Cognitive Science Society.)
Tutorial audience: The tutorial is intended for graduate students, postdoctoral researchers, and scientists who are new to ADO and have a working knowledge of Python programming and experience with cognitive modeling.
9:00 - 9:10: Welcome (Jay Myung)
9:10 - 10:00: An introduction to adaptive design optimization (ADO) & movie (Mark Pitt & Jay Myung)
10:00 - 10:30: Setup and installation of ADOpy (Jaeyeong Yang)
10:30 - 11:00: Coffee Break
11:00 - 12:00: Hands-on session on ADOpy (Woo-Young Ahn & Jaeyeong Yang)
Atkinson, A., & Donev, A. (1992). Optimum Experimental Designs. Oxford University Press.
Cavagnaro, D. R., Myung, J. I., Pitt, M. A., & Kujala, J. (2010). Adaptive design optimization: A mutual information based approach to model discrimination in cognitive science. Neural Computation, 22, 887-905.
Chaloner, K., & Verdinelli, I. (1995). Bayesian experimental design: A review. Statistical Science, 10(3), 273–304.
Cohn, D., Atlas, L., & Ladner, R. (1994). Improving generalization with active learning. Machine Learning, 15(2), 201–221.
Myung, J. I., Cavagnaro, D. R., & Pitt, M. A. (2013). A tutorial on adaptive design optimization. Journal of Mathematical Psychology, 57, 53-67.
Settles, B. (2012). Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(1), 1-114.
(Contact email: email@example.com; last updated Jul 16, 2019)