Research

Current Research Program

Note that the numbers in square brackets refer to the corresponding publications in my CV.

My research focuses on understanding human cognition through mathematical models, sitting at the intersection of cognitive psychology, computational modelling, and psychological methods. More specifically, my substantive expertise largely falls within human decision-making, and involves developing and testing models that explain response time and choice [12, 20, 21] (as well as additional sources of information, such as changes of mind [23, 37]), including extensions of the well-known and commonly used evidence accumulation framework [24]. I also have methodological expertise in developing and implementing Bayesian methods for estimating and comparing cognitive models [11, 15], as well as methods for estimating and comparing models with computationally intractable likelihood functions [16, 20]. While my research program consists of many projects ranging from sole-author work to large-team collaborations, these projects can be broadly grouped into four main research areas: (1) developing, testing, extending, and comparing models of human cognition; (2) applying cognitive models to measure constructs of latent cognitive processes; (3) creating and refining methods of estimating and contrasting cognitive models; and (4) investigating and assessing the philosophical underpinnings of how we use cognitive models.

Developing, Testing, Extending, and Comparing Models of Human Cognition

One of the key reasons for my research focus on computational modelling is my core belief that developing precise, testable theories of cognitive processes is crucial to understanding human cognition. Specifically, the precise quantitative predictions made by cognitive models provide a level of insight that is not possible with verbal theories alone, making cognitive models fundamental to creating well-defined theories of cognitive processes. However, while cognitive models have the potential to provide precise, process-level theories of human cognition that can be easily tested and compared against one another, I believe that cognitive modelling often suffers from two key limitations [24]. First, although cognitive models make precise predictions, several different models often make extremely similar predictions for the specific data of interest. Second, although cognitive models are typically proposed as accounts of a broad cognitive process (e.g., decision-making), and therefore should make predictions about any behaviour that we believe results from that process, they are often used in a restrictive manner, assessed against only very specific aspects of observable behaviour to ensure that they provide an accurate account of the assessed data.

One potential solution to both of these issues – which is a key focus of my current research – is forcing cognitive models to account for more sources of data. Importantly, by constraining cognitive models to jointly account for multiple sources of data, (1) the predictions of different models are more likely to differ over the joint data space, helping to alleviate some of the mimicry issues, and (2) the models are extended to capture an additional aspect of the cognitive process, reducing their restrictiveness. For instance, my recent research on double responding [23, 49] – where people make a second, often corrective, response after their initial response – has provided an additional constraint for evidence accumulation models, distinguishing the predictions of models with different types of inhibition that typically closely mimic one another in response time and choice data alone. In addition, double responding extends standard evidence accumulation models to account for changes of mind in decision-making, reducing the restrictiveness of the theory, as these models are now forced to explain another aspect of the decision-making process. Furthermore, my recent collaborations with experts in electromyography (EMG) recordings of thumb movements [37, 51] have resulted in the development of models that are able to explain the interaction between the decision and motor systems, as well as partial errors, where people begin to respond for one option before changing their mind and responding for the other option.

In addition to my research on changes of mind, I have also developed, tested, and extended models in a variety of other contexts. For instance, I have been heavily involved in research assessing whether people become less cautious as they spend more time on a decision [4, 9, 21, 22, 28, 32], as formalised in collapsing thresholds and urgency-gating models, with a key finding being that whether people adopt these time-dependent strategies is largely task dependent [9, 21]. My research has also involved developing, evaluating, comparing, and applying models of conflict tasks [20, 26, 42] (typically referred to as conflict diffusion models), with a key development being a model-based framework for separating facilitation and interference effects from one another without the need for neutral trials [42]. Finally, I have worked on developing, evaluating, and comparing models in several other areas of cognition research, such as task learning [6], multi-attribute choice [12, 31], memory [59], and continuous multi-agent decision-making [64].

Applying Cognitive Models to Measure Constructs of Latent Cognitive Processes

Another important feature of cognitive models is their ability to provide estimates of cognitive constructs of interest through their theoretically meaningful parameters. Specifically, while statistical models are typically designed to be atheoretical – keeping them generalisable to a range of contexts and simple enough to remain usable by a general audience – the theoretical constraints within cognitive models often result in their free parameters reflecting cognitive constructs that can help answer a variety of research questions. For example, in evidence accumulation models, the drift rate parameter determines how quickly the accumulator moves (on average) towards the threshold, but also provides a measurement of the cognitive construct of task ability. Likewise, the threshold parameter determines how much accumulated evidence is required to terminate the process and trigger a decision, but also provides a measurement of the cognitive construct of task caution. Importantly, cognitive models can therefore provide a powerful framework for directly estimating cognitive constructs of interest, rather than attempting to infer things about these constructs from summary statistics of observed data alone.
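
To make this mapping concrete, the sketch below simulates a simple two-boundary diffusion process – a canonical evidence accumulation model – in Python; the function name, parameter values, and discretisation settings are purely illustrative assumptions rather than details taken from any of the publications cited here.

    import numpy as np

    def simulate_ddm(drift, threshold, ndt=0.3, noise=1.0, dt=0.001, max_t=5.0, rng=None):
        """Simulate one trial of a two-boundary diffusion process.

        drift     -- mean rate of evidence accumulation (task ability)
        threshold -- boundary separation; evidence starts midway (task caution)
        ndt       -- non-decision time added to the first-passage time
        Returns (response time in seconds, choice), where choice 1 = upper boundary.
        """
        rng = np.random.default_rng() if rng is None else rng
        x, t = 0.0, 0.0                               # evidence starts midway between the boundaries
        upper, lower = threshold / 2, -threshold / 2
        while lower < x < upper and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()   # Euler step of the diffusion
            t += dt
        return ndt + t, int(x >= upper)

    rng = np.random.default_rng(1)
    trials = [simulate_ddm(drift=1.5, threshold=1.2, rng=rng) for _ in range(2000)]
    rts, choices = map(np.array, zip(*trials))
    print(f"mean RT = {rts.mean():.3f} s, accuracy = {choices.mean():.3f}")

In this sketch, raising the threshold slows responses but increases accuracy, whereas raising the drift rate makes responses both faster and more accurate – which is why the two parameters can be read as measures of task caution and task ability, respectively.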

My research has involved numerous applications of cognitive models – particularly evidence accumulation models – to estimate parameters and help answer a variety of theoretical and applied research questions. For instance, one of my research topics has been optimality: whether people are able to adopt optimal strategies to achieve the task goal given to them by the experimenter [1, 9, 13]. Within evidence accumulation models, participants can strategically adjust their thresholds to balance their speed-accuracy trade-off, meaning that we can assess how well participants achieve the trade-off that maximises the rewards provided by the task (often referred to as reward rate optimality). However, as with my work on collapsing thresholds models, the key finding has been that people’s ability to adopt an optimal threshold is largely task dependent [13].
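
As a rough illustration of what reward rate optimality involves, the sketch below uses standard closed-form expressions for the accuracy and mean decision time of a simple diffusion model to find the threshold (boundary separation) that maximises correct responses per unit time; the specific reward rate definition, drift rate, non-decision time, and inter-trial interval are simplifying assumptions for illustration, not the exact setups used in [1, 9, 13].

    import numpy as np

    def accuracy(v, a, s=1.0):
        """P(correct) for a diffusion with drift v, boundary separation a,
        a midway starting point, and noise s (standard first-passage result)."""
        return 1.0 / (1.0 + np.exp(-v * a / s**2))

    def mean_decision_time(v, a, s=1.0):
        """Mean decision time under the same assumptions."""
        return (a / (2.0 * v)) * np.tanh(v * a / (2.0 * s**2))

    def reward_rate(v, a, non_decision=0.3, iti=1.0, s=1.0):
        """Correct responses per second: accuracy divided by average trial duration."""
        return accuracy(v, a, s) / (mean_decision_time(v, a, s) + non_decision + iti)

    # Sweep thresholds to find the level of caution that maximises reward rate.
    thresholds = np.linspace(0.1, 4.0, 400)
    rr = reward_rate(v=1.0, a=thresholds)
    print(f"reward-rate-optimal boundary separation ≈ {thresholds[np.argmax(rr)]:.2f}")

Thresholds below the maximising value sacrifice accuracy, while thresholds above it waste time, which is precisely the trade-off participants must solve to behave optimally.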

Moreover, I have collaborated with social cognition researchers on several projects, using the parameter estimates from cognitive models to better understand gaze cueing [47], pro-social behaviour [43], ego-depletion [27], and need for closure [3]. Similarly, I have collaborated with developmental researchers on several projects, using the parameter estimates from evidence accumulation models to better understand autism [39], dyslexia [38], early life adversity [8], and ageing [26, 48]. Finally, I have used the parameter estimates of cognitive models to better understand a range of more typical cognitive psychology paradigms, such as how emphasising speed influences decision-making [35] and how multi-tasking differs from increases in difficulty in a single task [30], as well as broader research questions such as the heritability of cognitive constructs like task ability and caution [7].

Creating and Refining Methods of Estimating and Contrasting Cognitive Models

While cognitive models possess a great deal of potential utility, both as theories of cognitive processes and as measurement tools for estimating cognitive constructs, practical issues can often make it difficult or even impossible to use cognitive models in ways that best serve our needs. Specifically, due to the complexity of many cognitive models, methods that are commonly used for estimating (e.g., Bayesian [hierarchical] parameter estimation) and/or comparing (e.g., Bayes factors, cross-validation) simple statistical models can quickly become computationally intractable for cognitive models. Moreover, for some extremely complex, theoretically precise cognitive models (e.g., the leaky competing accumulator, urgency-gating models), even the likelihood function can become computationally intractable to calculate, making something as basic as fitting the model a difficult task. Furthermore, even when we are able to estimate the parameters of a cognitive model, the complex correlation structure between parameters in some models means that there is no guarantee that these estimates will be robust and reliable. My methodological goal within my cognitive modelling research has been to reduce the burden of these problems as much as possible, in order to make it easier for myself and others to achieve the greatest possible utility from cognitive modelling.

One of my key methodological focuses in cognitive modelling has been assessing the identifiability of models relative to one another (i.e., model identifiability), as well as of different parameters within a model (i.e., parameter identifiability), through simulation-based studies [15, 22, 29, 32, 52]. Importantly, if we are unable to distinguish between two models in the unlikely-yet-ideal setting where one of them is the true, data-generating model, then comparing these models on empirical data serves little purpose. Likewise, if we are unable to accurately estimate a parameter in the unlikely-yet-ideal setting where the model is the true, data-generating model, then attempting to estimate this parameter from empirical data will only serve to mislead us. For instance, one of my key findings suggested that both out-of-sample prediction methods (e.g., AIC, DIC, WAIC) and Bayesian model selection methods (e.g., BIC, Bayes factors) have great difficulty distinguishing between null and small within-subjects effects in the parameters of evidence accumulation models, though in different ways: out-of-sample methods often produced false alarms on null effects, whereas Bayesian model selection methods often missed small effects [15]. My other key findings have demonstrated parameter identifiability issues in the unconstrained linear ballistic accumulator (LBA) [52], collapsing thresholds and urgency-gating models [22, 32], and the double-pass diffusion model [29].
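
The logic of these simulation-based assessments can be illustrated with a deliberately simplified parameter recovery exercise: generate data from a model with known parameter values, fit the model back to those data, and check whether the generating values are recovered. The sketch below does this for a hypothetical single-boundary accumulator (shifted Wald) model via maximum likelihood; the model, parameter values, and sample size are illustrative assumptions, far simpler than the designs used in the studies cited above.

    import numpy as np
    from scipy.optimize import minimize

    def wald_logpdf(t, drift, threshold):
        """Log density of first-passage times for a single-boundary accumulator
        (inverse-Gaussian/Wald form, unit diffusion noise)."""
        return (np.log(threshold) - 0.5 * np.log(2 * np.pi * t**3)
                - (threshold - drift * t) ** 2 / (2 * t))

    def simulate_wald(drift, threshold, n, rng):
        """Sample first-passage times from the corresponding inverse-Gaussian distribution."""
        return rng.wald(threshold / drift, threshold**2, size=n)

    def recover(rts):
        """Maximum-likelihood recovery of (drift, threshold) from a set of RTs."""
        def nll(p):
            return np.inf if np.any(p <= 0) else -np.sum(wald_logpdf(rts, *p))
        return minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead").x

    rng = np.random.default_rng(7)
    true_drift, true_threshold = 2.0, 1.5
    rts = simulate_wald(true_drift, true_threshold, n=500, rng=rng)
    est_drift, est_threshold = recover(rts)
    print(f"true: drift={true_drift}, threshold={true_threshold}; "
          f"recovered: drift={est_drift:.2f}, threshold={est_threshold:.2f}")

In a full identifiability study this generate-and-refit cycle is repeated across many parameter settings (and across competing models, for model recovery), so that systematic biases or trade-offs between parameters become visible before the model is ever applied to empirical data.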

Another key methodological focus of my cognitive modelling research has been developing methods for fitting and comparing complex cognitive models. For instance, some of my previous work has focused on developing and implementing methods for estimating marginal likelihoods (the building blocks of Bayes factors) in cognitive models, so that researchers can utilise and compare cognitive models in a similar way to how they might compare simpler statistical models [5, 10, 11]. Another key part of my work was developing a method and framework for efficiently simulating and fitting extremely complex evidence accumulation models [16], which I then combined with one of the marginal likelihood estimation methods I had developed [11] to create pseudo-likelihood Bayes factors, which I used to compare the different conflict diffusion models on several flanker data sets [20]. Finally, I have also been part of collaborative efforts to develop different implementations of hierarchical evidence accumulation models, such as fixed/random/mixed effects models [45] and mixture models [46].
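
To give a sense of the quantity being estimated: a model’s marginal likelihood is the probability of the data averaged over the model’s prior, and a Bayes factor is the ratio of this quantity for two models. The toy sketch below estimates it by brute-force Monte Carlo over prior draws for two hypothetical Gaussian models; this naive estimator is included purely to illustrate the target quantity, and the methods developed in [5, 10, 11] are considerably more efficient and are designed for hierarchical cognitive models.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    y = rng.normal(0.3, 1.0, size=50)          # toy data set

    def log_marginal_m1(y):
        """M1: y ~ Normal(0, 1); no free parameters, so p(y) is just the likelihood."""
        return np.sum(stats.norm.logpdf(y, 0.0, 1.0))

    def log_marginal_m2(y, n_draws=50_000):
        """M2: y ~ Normal(mu, 1) with prior mu ~ Normal(0, 1).
        Crude Monte Carlo: average the likelihood over draws from the prior."""
        mu = rng.normal(0.0, 1.0, size=n_draws)
        loglik = np.sum(stats.norm.logpdf(y[None, :], mu[:, None], 1.0), axis=1)
        return np.logaddexp.reduce(loglik) - np.log(n_draws)

    log_bf_21 = log_marginal_m2(y) - log_marginal_m1(y)
    print(f"log Bayes factor (M2 over M1) ≈ {log_bf_21:.2f}")

In practice, the likelihood of a realistic cognitive model is far more expensive – or, as noted above, sometimes intractable – to evaluate, which is what motivates the specialised estimation and pseudo-likelihood approaches described in this section.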

Investigating and Assessing the Philosophical Underpinnings of How We Use Cognitive Models

Ongoing developments in computational cognitive modelling mean that the field is becoming increasingly complex and diverse, with the number of possible research trajectories always growing as previous research topics branch out. While these developments certainly showcase the potential of cognitive modelling, they also create a difficult landscape for researchers to navigate when trying to make the best use of cognitive models. For example, as the number of cognitive modelling approaches increases, it is likely that previous “best practices” will eventually be superseded by new best practices. Likewise, as the number of potential research questions increases, it is difficult to know which current practices are best suited to answering these novel questions. More generally, in order to perform high-quality cognitive modelling research, we need a deep understanding of why we are using cognitive models, what insight we hope to gain from the modelling process, and which approaches will best help us achieve our goals. Therefore, I believe that it is important to continually investigate and assess the philosophical underpinnings of how we use cognitive models, rather than purely following the tradition set by previous cognitive modelling approaches.

Based on my experience and expertise with cognitive models, I have attempted to clarify my thoughts on why we use cognitive models, and how we should use them for different research goals, across several different research projects. For instance, one of my key arguments has been that “cognitive modelling” is too diverse to lump into a single category, and that while no categorisation system is perfect, it may be practically useful to split different types of cognitive modelling into different categories, as different types of cognitive modelling have different goals, and therefore, different best practices [17]. Specifically, I have argued in favour of four categories: model application, where a model is used as a measurement tool for parameter estimation; model comparison, where multiple models are quantitatively compared in their ability to account for empirical data; model evaluation, where one or more models are assessed on how well they account for specific aspects of the data; and model development, a more general category that covers the more creative aspects of developing a model. Based on these categories, I argued that open science practices such as preregistration – a controversial topic in cognitive modelling [60] – are most applicable to the model application category, and I supervised the development of a preregistration template for model application [36]. Moreover, I have argued that when we wish to know which model is the best model of a given cognitive process, the best approach is model comparison through model selection methods – a controversial opinion relative to traditional cognitive modelling approaches, which view the assessment of models against qualitative benchmarks as the gold standard for determining the best model [25]. However, I have also argued that when we wish to know why one model performs better or worse than another, or whether a model is able to capture a very specific pattern of data, then model evaluation is the best approach, and in many cases the question of why a model performs how it does may be even more important to progressing our theories of cognitive processes than the question of which model is best [17]. I have also investigated why model flexibility is important to consider within cognitive models, and what it means for a model to be flexible [2]. Finally, I have been critical of the use of many-analyst approaches in cognitive modelling [63], suggesting that they may sacrifice the quality of each analyst’s work in favour of obtaining a large quantity of analysts, based on my experience as a modeller in two previous many-analyst projects [14, 19].