This page describes the various projects and research I have been involved with.
August 2020 - Ongoing
This project focused on how humans perceive the capabilities of Artificial Intelligence (AI) in autonomous vehicles, specifically its computer vision techniques. As new technologies emerge, lay users need to understand the capabilities of their technological systems and devices so that they can allocate an appropriate amount of trust and reliance to them and maximize their effectiveness. However, current AI is susceptible to cyber-attacks that can cause images to be misclassified. These attacks can lead to dangerous situations, especially in the context of autonomous vehicles.
We found that people overestimate the AI's ability to correctly classify attacked images, such as those altered by projected gradient descent (PGD) and physical attacks. We also found that people believe the AI identifies higher-safety-critical road signs, such as stop signs, better than lower-safety-critical signs, such as roundabout-ahead signs. Users need to be aware of external factors that can affect the system's integrity, such as hackers and the environment. Overall, people need enough knowledge of the system's abilities to be ready to take over when the system fails. The findings indicate that explainable AI (XAI) may be needed to help users better calibrate their understanding of AI's capabilities.
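For readers unfamiliar with PGD, the sketch below illustrates the basic idea on a toy linear classifier (the model, parameters, and budget here are illustrative assumptions, not taken from the studies above; real attacks target deep vision networks): the attack repeatedly nudges the input in the direction that increases the classifier's loss, then projects the result back into a small perturbation budget so the change remains hard to notice.

```python
import numpy as np

# Minimal PGD (projected gradient descent) sketch on a toy 2-class
# linear model -- illustrative only, not the setup used in the studies.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))        # hypothetical linear classifier weights
x = rng.normal(size=8)             # "clean" input
y = 0                              # true label

def loss_grad(W, x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()                   # softmax probabilities
    p[y] -= 1.0                    # dL/dlogits for cross-entropy
    return W.T @ p                 # chain rule back to the input

eps, alpha, steps = 0.5, 0.1, 10   # perturbation budget, step size, iterations
x_adv = x.copy()
for _ in range(steps):
    x_adv = x_adv + alpha * np.sign(loss_grad(W, x_adv, y))  # ascent step
    x_adv = x + np.clip(x_adv - x, -eps, eps)                # project into eps-ball

# The perturbation never exceeds the L-infinity budget:
assert np.max(np.abs(x_adv - x)) <= eps + 1e-9
```

The projection step is what keeps the attack subtle: the adversarial image stays within a fixed distance of the original, which is why participants in the studies above found attacked signs so hard to judge.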
For more information, please see the following publications:
Garcia, K., Mishler, S., Xiao, Y., Wang, C., Hu, B., Still, J. D., & Chen, J. (2022). Drivers’ understanding of Artificial Intelligence in autonomous driving systems: A study of a malicious stop sign. Journal of Cognitive Engineering and Decision Making, 16(4), 237-251. doi:10.1177/15553434221117001
Garcia, K., Xiao, Y., Mishler, S., Wang, C., Hu, B., & Chen, J. (2022). Identifying perturbed roadway signs: Perception of AI capabilities. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 66, No. 1, pp. 125-125). Sage CA: Los Angeles, CA: SAGE Publications. doi:10.1177/1071181322661225
Garcia, K. R., Xiao, Y., Mishler, S., Wang, C., Hu, B., & Chen, J. (2021). Human perception of AI capabilities in identifying malicious roadway signs. In Proceedings of the APA Conference on Technology, Mind & Society. doi:10.1037/tms0000077
Mishler, S., Garcia, K., Fuller-Jakaitis, E., Wang, C., Hu, B., Still, J., & Chen, J. (2021). Predicting a malicious stop sign: Knowledge, exposure, trust in AI. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 65, No. 1, pp. 347-348). Sage CA: Los Angeles, CA: SAGE Publications. doi:10.1177/1071181321651239
August 2020 - Ongoing
This project investigated how people understand and react to different types of flood warnings and draws on the decision-making and risk-perception literature on driving in dangerous conditions. Risk can be communicated through the information-processing approach as well as the mental-model approach. The flood warnings tested were designed to incorporate the mental-model approach by leveraging preexisting mental models and understandings of floods to help communicate the risk. They also incorporated the information-processing approach by keeping the human in the loop of the risk-communication process. The flood warnings manipulated in this project demonstrate a multi-granularity approach to risk communication: some warnings conveyed more abstract information, while others were more detailed.
This project is important for geographical areas that experience frequent flooding, hurricanes, or sea-level rise. Both Norfolk, Virginia, where Old Dominion University is located, and Houston, Texas, where Rice University is located, are prone to flash flooding and hurricanes, so the majority of residents have faced a flooded roadway before. Driving through flooded roadways can cause vehicle damage and drowning, both of which can be avoided if drivers avoid the flood. The flood warnings were designed to test how the framing of information affects drivers' willingness to avoid a flooded roadway, and the type of flood warning did influence drivers' decisions. Other factors, such as time pressure, urgency, and driver education, can also influence how drivers decide what to do, which is why these factors were tested in the experiments. The results from these studies can help designers create warnings that support more risk-avoidant decisions when drivers face a flooded roadway.
For more information, please see the following publications:
Garcia, K. R., & Chen, J. (2023). Driver decisions based on flood warning information. In Proceedings of the Human Factors and Ergonomics Society 67th International Annual Meeting (Vol. 67, No. 1, pp. 739-740). Sage CA: Los Angeles, CA: SAGE Publications. doi:10.1177/21695067231192568
Garcia, K. R. (2022). The effect of flood warning information on driver decisions in a driving simulator scenario [Master’s thesis, Old Dominion University]. ODU Digital Commons. doi:10.25777/7854-qf81
May 2022 - Ongoing
The purpose of this project was to investigate how users set their privacy settings on social networking sites (SNSs), such as Instagram. People use social media daily, especially younger individuals. Users can post photos on SNSs containing friends, animals, food, events, and scenery, and sharing can be a great way to connect with others online. But what happens when a group picture is posted and one person in it does not want it to be? This project investigates how users perform a personal privacy calculus to decide what they do or do not post given the privacy settings in place. Privacy calculus, heuristics, and trust are all factors in the privacy paradox, which describes how SNS users worry about their privacy yet do not take steps to secure their privacy and information. Additionally, this study investigates the default effect, in which users simply accept the privacy settings the application suggests by default.
Overall, most participants knew of the majority of the privacy settings, especially those that had been available on Instagram longer. We also found few reported conflicts over sharing a photo with multiple individuals in it. This research can inform how SNS developers design their privacy settings: making these settings visible and known to users is important so that users can appropriately protect their photo privacy. Given the findings indicating a default effect, SNS developers should also design default settings to protect user privacy rather than to favor information sharing.
For more information, please see the following publication:
Garcia, K. R., Quesnel, A., Li, N., & Chen, J. (2023). Investigating user photo privacy settings on Instagram. In Proceedings of the Human Factors and Ergonomics Society 67th International Annual Meeting (Vol. 67, No. 1, pp. 2291-2292). Sage CA: Los Angeles, CA: SAGE Publications. doi:10.1177/21695067231192286
August 2021 - Ongoing
This project focused on phishing attacks on social media. Phishing has been around for a long time and was traditionally conducted through email; with the advancement of technology, however, it now commonly occurs through social media as well. A number of social media sites have launched shopping features that allow users to buy and sell goods from companies and other users. However, some sellers are not legitimate and scam unwitting buyers by phishing their personal information.
The purpose of this study was to test different trainings to help keep users from falling victim to phishing attacks on social media, specifically on the Instagram Shop. These training methods have proven effective against email phishing attacks; however, this study showed that they were not as effective on Instagram as they were for email. The trainings tested in this study could be implemented in Instagram's interface to educate users on what to be aware of when shopping. A number of resources outside the application educate users about scams and phishing, but none are part of the application itself.
For more information, please see the following publication:
Garcia, K. R., Ammons, J., Xiangrui, X., & Chen, J. (2023). Phishing in social media: Investigating training techniques on Instagram Shop. In Proceedings of the Human Factors and Ergonomics Society 67th International Annual Meeting (Vol. 67, No. 1, pp. 1850-1855). Sage CA: Los Angeles, CA: SAGE Publications. doi:10.1177/21695067231192588
coming soon...
August 2022 - December 2022
coming soon...
May 2021 - August 2021
coming soon...
August 2019 - May 2020
coming soon...
April 2019 - August 2019
coming soon...
January 2019 - May 2019
coming soon...
May 2018 - May 2019
coming soon...
January 2018 - May 2018
coming soon...