Research Projects

Overview


My research spans basic and applied methods, with a core focus on using theoretical perspectives on attention and decision making to address applied questions. My early research focused on questions of spatial uncertainty, attention, and perception in applied environments, which gave me the foundation to begin exploring questions of human-automation interaction in these domains (Patton et al., 2021; 2022). My current research seeks to go beyond existing questions about the effects of automation (Do you trust it? Do you follow its recommendations? Do you comply with incorrect recommendations?) to understand how these phenomena unfold in dynamic decision making environments, where individuals move through cycles of decisions and actions rather than making single, discrete decisions. My recent work has also begun to explore a rather neglected area: when and why people choose to engage automation (discretionary automation).

Spatial Cognition



Much of my early research focused on applied problems of spatial reasoning in military contexts, an outgrowth of my work with the Army Research Laboratory. One such problem is navigation: how do we best support a soldier navigating a new environment? I investigated how different frames of reference, or viewpoints, can be used at the same time. Across two sets of experiments (Patton, in press; Patton, under review), I evaluated competing models of how well people can use both viewpoints simultaneously. My findings suggest that there is no performance benefit to using both viewpoints at once. This creates new challenges for designing navigational aids, as these frames of reference have offsetting benefits and drawbacks that would be best addressed through dual use.

Decision Making Under Spatial Uncertainty


Funded by the Office of Naval Research, we investigated how uncertainty affects how humans navigate to reach a pre-determined rendezvous point. We first created a simplified maritime scenario that asked people to control a ship by adjusting its heading and speed, with the goal of intercepting another ship on the screen. We found that people consistently made incorrect speed and heading choices that caused them to arrive late to the rendezvous point. This chronic underestimation of the time needed to achieve a goal suggests that the planning bias was present in this context (Patton et al., 2020). Such biases can harm real ship navigation, wasting resources and potentially undermining safety. To remedy this, we took an unorthodox approach: introducing uncertainty. Uncertainty usually worsens performance in a decision making context, but here we introduced temporal uncertainty in the form of ship movement delays. On a small proportion of trials, the user's ship would stop moving for a few seconds before continuing on its trajectory. This improved rendezvous accuracy on trials where there was no delay, but participants still arrived late when a delay occurred (Patton et al., 2021a). Further analyses suggested that people adjusted their estimates of the time needed to rendezvous based on the possibility of delays, but still underestimated that time when a delay did occur.

Automated Decision Support in a Dynamic Decision Making Environment


One safety-critical issue faced by the Navy is the difficulty of identifying hostile intent from ship movements (Patton et al., 2021b). This provided a context for answering questions about uncertainty with direct applications to real world problems. The first step in creating a useful decision aid was to understand why identifying hostile movements is challenging. Accordingly, I used a testing platform that required the user to visually track multiple ships on a screen – a multi-object tracking task – and remember their past movements. Such tracking likely places a significant load on working memory, so we implemented a simple visual aid that reduced the load of keeping track of ships on the screen. This led to small improvements, but they were not enough for this safety-critical domain (Patton et al., 2022), suggesting that a higher level of automation might be needed.


The initial finding that simple memory aids could not raise performance to an acceptable level in a real world tracking task indicated the need for a better aid. I therefore began investigating an automated decision aid that tracked ship movements and highlighted for the participant the ship with the highest number of hostile movements. This approach is unique because it involves a dynamic decision making task – one that requires the user to make a number of smaller, related decisions to accumulate evidence before reaching a final decision – whereas most prior work on automation has focused on single, static decisions. The aid produced large improvements in detection accuracy, although we also found a tendency to over-rely on it. In further work, I investigated whether providing an explanation for the automation’s recommendation (known as transparency) affected performance. In contrast to prior work, the explanation made no difference to performance (Patton et al., under review). This novel finding warrants further investigation but may suggest that the benefits of transparency found in static decision making environments do not carry over to dynamic ones.


While our first exploration of automation in this paradigm used an experimenter-driven algorithm, our second switched to a machine learning algorithm. Through an interdisciplinary collaboration with members of the Computer Science Department at Colorado State University, we built an automated aid that gathered evidence from the user to generate a dynamic prediction of the hostile ship. When we implemented the new aid in the original paradigm, the results largely replicated those obtained with the experimenter-driven aid (Calgar, Patton, et al., under review). These results suggest that, for simple laboratory experiments, the back end of an algorithm may matter less than the user-facing side of the experience, which can save experimenters interested in these questions time and money. We are continuing this collaboration to investigate how machine learning planning assistants affect performance and reliance.

Automation Use in Uncertain Circumstances


One area in which there is little research is discretionary automation: the choice to turn automation on or off. Although there is abundant research on how humans comply with automation when it is present, it is not clear whether individuals know when to ask the automation for a recommendation, rather than always having one provided to them on the screen.


I first investigated this topic during a collaboration with the Naval Research Laboratory. We adapted a multi-tasking testbed so that participants could turn a target-highlighting aid on and off, and we manipulated task difficulty. In line with the lab's broader interests, future plans focus on how giving people the choice to use automation affects workload (the amount of mental resources in use), as measured by eye tracking metrics.


Most recently, I have begun to examine how theories and models of attention interact with the decision to use automation. In a dual-task paradigm mimicking warehouse management, I am investigating how task difficulty, anchoring, task-switching ability, and risk affect the choice to use automation.