This project had two main branches, both intended to examine how trust develops during interaction with an automated vehicle. The first branch examined baseline interaction with an automated driving system and measured trust over time across multiple drives. The second branch focused on outside factors that could influence trust, first through initial information about the system and later through information provided after an error.
I envisioned the project as a testbed for understanding driver behavior and trust over time. I designed the project using STISIM Drive simulation software and created seven unique but content-controlled drives that participants could experience in either manual or automated driving modes; one of the seven drives contained an error to test drivers' reactions and the error's subsequent influence on trust. Within this testbed, I could vary factors for later studies while still controlling the scenarios and comparing results across the experimental series.
Trust Development Framework Branch
The first branch used the seven-drive series to establish a trend of trust over time and to understand how interaction with the system would influence drivers' trust calibration. I wanted to understand the patterns of learned trust, error cost, and non-active trust repair after an error. I found an increase in trust over the initial few drives, a similar decrease in trust following the error in both the takeover-request (system-limit failure) and complete-failure (system-malfunction failure) conditions, and then a natural recovery of trust over the subsequent drives. Additionally, the no-error condition showed that trust eventually plateaued and stabilized for the remaining drives. This branch also compared daytime and nighttime driving to see whether this difference in environmental lighting influenced trust in the automation system. However, there were no trust differences between lighting conditions, even though at lower luminance levels scotopic vision has a shorter viewing distance and lower visual acuity than photopic vision at higher light levels.
For more information on this branch, please see the published work in:
Mishler, S., & Chen, J. (2023). Effect of automation failure type on trust development in driving automation systems. Applied Ergonomics, 106, 103913.
Mishler, S. (2019). Whose drive is it anyway? Using multiple sequential drives to establish patterns of learned trust, error cost, and non-active trust repair while considering daytime and nighttime differences as a proxy for difficulty (Master's Thesis, Old Dominion University).
Framing Automation Trust Experiments (FATE) Branch
Building on the framework from the previous branch, the FATE branch sought to understand how trust calibration could be influenced by manipulating swift trust directly before interaction with a newly updated autonomous system, and how to influence trust after an error. Ensuring that drivers' trust is properly calibrated to the capability of the automation is important for avoiding misuse or disuse of the system. Using the framing effect, I presented positive (gain-frame) and negative (loss-frame) descriptions of an automated driving system update to the human driver to either promote or dampen trust. Results showed that framing was effective at increasing trust compared to the control condition, and that the dampening and recovery of trust after an error were influenced by the trust promotion. In other words, promoting trust through positive framing not only influences short-term trust but also makes individuals quicker to repair trust after an automation error. In a later experiment, I added an active trust-dampening or trust-repair message after the drive with the error, but the results were mixed, indicating that active calibration of trust is complex. However, the experiment did demonstrate that the timing of the repair/dampening message is an important consideration: messages delivered while the error is happening, or within a short interval afterward, could be more effective than a delayed message.
For more information on this branch, please see the published work in:
Mishler, S. A. (2023). Framing Automation Trust: How Initial Information About Automated Driving Systems Influences Swift Trust in Automation and Trust Repair for Human Automation Collaboration (Doctoral dissertation, Old Dominion University).
Mishler, S., & Chen, J. (2023). Framing Updates: How Framing Influences Trust for Automated Driving Systems. In Proceedings of the Human Factors and Ergonomics Society 67th International Annual Meeting. Washington DC: HFES.
Vigilance
Because of the rise of semi-automated driving that still requires the human driver to attend to the roadway, there is a need to investigate the issues surrounding vigilance in partially automated driving. The ability of humans to attend to a task wanes over time, so this study implemented interventions to improve driver vigilance and reduce crash potential while also considering different theories about the cause of the vigilance decrement. A driving-related task that asked the driver about real objects in the road at various points in the study was compared to a non-driving-related task that asked general-knowledge questions and to a control condition with no intervention. Results showed convergent evidence for both resource depletion and disengagement as sources of the vigilance decrement, while also demonstrating that the general-knowledge questions of the non-driving-related task, given at various points during the 45-minute drive, helped mitigate some aspects of the vigilance decrement.
For more information please see:
Mishler, S., & Chen, J. (2024). Boring but demanding: Using secondary tasks to counter the driver vigilance decrement for partially automated driving. Human Factors, 66(6), 1798-1811.
Cybersecurity - AI Detection Aids
Artificial intelligence can assist human beings with the detection of various malicious threats. Phishing emails attempt to steal personal information from users, and AI can help spot these potential threats and notify the user. However, AI detection is not always perfect and can misclassify emails. Similarly, when AI computer vision is used to detect traffic signs, various digital attacks or sign alterations can make detection unreliable. These two studies investigated the detection reliability of AI aids/assistants and how to best inform the human user.
The phishing study found that a highly reliable aid garnered more trust and led to the best performance. Feedback information helped improve users' judgment accuracy, and users who did not trust the aid and later rejected its help performed worse, showing the importance of making an aid reliable and informative. Transparency information improved performance with human aids, but the effect was nearly reversed for AI aids, demonstrating potential differences between human-human and human-AI trust.
The computer vision study demonstrated a further difference between human-human and human-AI beliefs and expectations, showing that users were more confident in their own ability to detect signs than in an AI's, yet still believed an AI was more capable in certain detection situations than it actually was. The finding highlights an automation bias: humans hold higher expectations of automation, so when the automation does make an error, that error is more impactful.
For more information, please see:
Mishler, S., Parker, C., & Chen, J. (2024). Using anthropomorphism, transparency, and feedback to improve phishing detection aids. Theoretical Issues in Ergonomics Science, 1–21.
https://doi.org/10.1080/1463922X.2024.2373457
Mishler, S., Jeffcoat, C., & Chen, J. (2019). Effects of anthropomorphic phishing detection aids, transparency information, and feedback on user trust, performance, and aid retention. In Proceedings of the Human Factors and Ergonomics Society 63rd International Annual Meeting. Washington DC: HFES.
Mishler, S., Garcia, K., Fuller-Jakaitis, E., Wang, C., Hu, B., Still, J., & Chen, J. (2021). Predicting a malicious stop sign: Knowledge, exposure, trust in AI. In Proceedings of the Human Factors and Ergonomics Society 65th International Annual Meeting. Washington DC: HFES.
Additional follow-up work by collaborators is also available.
Clash - Should I Stay or Should I Go
As automated vehicles improve, the design of the human input device will need to adapt to meet new demands. This project investigated different driver response methods to automated warnings to improve performance and safety. Direct and indirect response methods were tested by having participants react to warnings using the steering wheel (direct) or a button press (indirect). Results indicated that participants responded faster with the indirect method but were more accurate with the direct method. Responses to the indirect method were faster because the physical action required was smaller; however, cognitive response time was faster for the direct method because it was a more natural, trained response with no stimulus-response compatibility issues. For the indirect response, the automation's warning and the human's response decision could not be processed in parallel, requiring an extra processing step before the decision. The study demonstrates that the cognitive decision-making process is an important consideration when designing human input devices for partially automated vehicles.
For more information, please see:
Mishler, S., & Chen, J. (2018). Effect of response method on driver responses to auditory warnings in simulated semi-autonomous driving. In Proceedings of the Human Factors and Ergonomics Society 62nd International Annual Meeting. Washington DC: HFES.