Research in the CHAT Lab spans basic and applied methods, with a core focus on using theoretical perspectives on attention and decision making to address applied questions. Broadly speaking, we conduct research on attention, decision making, cognition, and automation use. We often collaborate with colleagues in computer science and engineering programs to broaden the scope of our work. Below is a snapshot of some recent work in the lab.
The most recent work in the lab has focused on how operators perceive automated systems in various contexts. We have examined how operator trust in automation translates into dependence on the system, which you can read more about here. We have also investigated how experience with a system shapes estimates of its reliability and trust in it, which you can read more about here. Currently, we are examining how operators perceive the difficulty of a situation and how that perception translates into automation use behaviors. Some of our early findings are available here.
We also collaborate with colleagues in the Edward P. Fitts Department of Industrial and Systems Engineering here at NCSU. This work has focused on how users perceive machine learning decision aids compared to human decision supports. Our early results are available here.
Ongoing work in the lab investigates how human operators choose to use automation when given a simple on/off decision. Consider real-world examples such as choosing when to use your GPS or when to use ChatGPT. This work has focused on how to improve these decisions, as operators often fail to use automation when they should and rely on it when they should not. You can read more about this work here, here, and here.
Funded by the Office of Naval Research, this work examined how human operators detect movement patterns of on-screen objects when aided by automation. We tested various forms of automation, including visual aids (history trails), simple algorithmic aids presented as text, and more complex machine learning aids that highlighted target objects. We also investigated automation transparency. You can read more about this work here, here, here, here, and here.