Research in the CHAT Lab encompasses both basic and applied methods, with a core focus on using theoretical perspectives on attention and decision making to address applied questions. Broadly speaking, we conduct research on attention, decision making, cognition, and automation use. We often collaborate with colleagues in computer science and engineering programs to broaden the scope of our work. Below is a snapshot of some of the recent work in the lab.
The most recent work in the lab has focused on how operators perceive automated systems in various contexts. We have examined how operator trust in automation translates into dependence on the system, which you can read more about here. We have also investigated how experience with a system shapes estimates of its reliability and trust in it, which you can read more about here. Currently, we are examining how operators perceive the difficulty of a situation and how those perceptions translate into automation use behaviors.
Other ongoing projects in the lab include experiments run by undergraduate and graduate students exploring how risk affects the likelihood of allowing a self-driving car to operate, and how transparency affects automation use in risky scenarios.
Ongoing work in the lab is investigating how human operators choose to use automation when given a simple on/off decision. Consider real-world examples such as deciding when to use your GPS or when to use ChatGPT. This work has focused on how to improve these decisions, as operators often fail to use automation when they should and rely on it when they should not. You can read more about this work here and here (more coming soon!).
Funded by the Office of Naval Research, this work examined how human operators detect movement patterns of objects on a screen when aided by automation. Various forms of automation were tested, including visual aids (history trails), simple algorithmic aids presented as text, and more complex machine learning aids that highlighted target objects. We also investigated the effects of automation transparency. You can read more about this work here, here, and here.