[C8] Do Integral Emotions Affect Trust? The Mediating Effect of Emotions on Trust in the Context of Human-Agent Interaction. M. A. A. Fahim, M. M. H. Khan, T. Jensen, Y. Albayram, and E. Coman. In Proceedings of the International Conference on Designing Interactive Systems (DIS 2021), pp. 1492–1503. Virtual Event, USA. (paper)
Abstract: Prior efforts have noted the effect of reliability, risk, and degree of anthropomorphism on trust in the context of human-agent interaction. However, the effects of these factors on the emotions that arise while interacting with autonomous agents, and the influence of those emotions on trust, are not clear. Towards that, we designed a 2 (partner: automation/human) × 2 (risk: low/high) × 2 (reliability: low/high) between-group study to identify relevant discrete emotions and their influence on users’ trustworthiness perceptions (ability, integrity, and benevolence). The results identified four emotion factors (positive emotions, hostility, anxiety, and loneliness) related to human-agent interaction. Although the reliability condition affected all four emotion factors, the factors differed in how they mediated the relationship between reliability and trustworthiness perceptions. The implications of our findings for trust calibration in the context of designing interactive systems are discussed in the paper.
[C7] Trust and Anthropomorphism in Tandem: The Interrelated Nature of Automated Agent Appearance and Reliability in Trustworthiness Perceptions. T. Jensen, M. M. H. Khan, M. A. A. Fahim, and Y. Albayram. In Proceedings of the International Conference on Designing Interactive Systems (DIS 2021), pp. 1470–1480. Virtual Event, USA. (paper)
Abstract: Anthropomorphism in the design of interface agents is implicitly linked to increasing user trust and acceptance. However, the role of perceived anthropomorphism and perceived trustworthiness in trust appropriateness given a system’s capabilities and limitations is unclear. We designed a 2 (reliability: low, high) x 3 (agent appearance: computer, avatar, human) between-subject study to observe how agent appearance influenced user perceptions of and reliance on an automated teammate in a collaborative image classification task. Trust appropriateness was characterized as the degree to which reliance matched an optimal level given the system’s reliability. Although agent appearance did not significantly influence trust appropriateness, it did affect perceptions of trustworthiness, particularly for low reliability agents. Our results suggest that trust and anthropomorphism involve highly related, dynamic perceptions aimed at anticipating system behavior. Based on our findings, recommendations for future research on trust and anthropomorphism are discussed along with some design implications.
[C6] Investigating the Effects of (Empty) Promises on Human-Automation Interaction and Trust Repair. Y. Albayram, T. Jensen, M. M. H. Khan, M. A. A. Fahim, R. Buck, and E. Coman. In Proceedings of the International Conference on Human-Agent Interaction (HAI 2020), pp. 6–14. Virtual Event, Australia. (paper)
Abstract: Setting expectations for future behavior with promises is one way to manage human-human trusting relationships. To investigate the effect of promises made by an automated system, we conducted a 2 (reliability: low, high) x 3 (promise type: no-promise, optimistic, realistic) between-subject study where participants collaborated with an Automated Target Detection (ATD) system to classify images in multiple rounds of gameplay. We found that an optimistic promise (i.e., "I promise to do better") initially led to significantly more reliance on automation than a realistic promise (i.e., "I cannot do better than this"), but not in the long term. High reliability participants relied more on the automation and reported greater perceived trustworthiness compared to low reliability participants. In addition, participants in the no-promise group reported a greater degree of frustration compared to the other groups. We discuss the implications of our findings for trust repair in automated systems.
[C5] Smartphone Based Human Activity Recognition with Feature Selection and Dense Neural Network. S. K. Bashar, M. A. A. Fahim, and K. H. Chon. In 42nd Annual International Conferences of the IEEE Engineering in Medicine and Biology Society (EMBC 2020), Montreal, Canada. (paper)
Abstract: For the past few years, smartphone-based human activity recognition (HAR) has gained much popularity due to smartphones' embedded sensors, which have found various applications in healthcare, surveillance, human-device interaction, pattern recognition, etc. In this paper, we propose a neural network model to classify human activities using activity-driven hand-crafted features. First, feature selection derived from neighborhood component analysis is used to choose a subset of important features from the available time- and frequency-domain parameters. Next, a dense neural network consisting of four hidden layers is modeled to classify the input features into different categories. The model is evaluated on the publicly available UCI HAR data set consisting of six daily activities; our approach achieved 95.79% classification accuracy. When compared with existing state-of-the-art methods, our proposed model outperformed most other methods while using fewer features, thus showing the importance of proper feature selection.
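As a rough illustration of the pipeline described in this abstract, the sketch below pairs a feature-selection step with a four-hidden-layer dense network in scikit-learn. It is a sketch of the general approach rather than the published implementation: the placeholder data, layer sizes, and number of retained features are assumptions, and SelectKBest with mutual information stands in for the paper's neighborhood-component-analysis-derived feature weighting.

```python
# Minimal sketch: feature selection + four-hidden-layer dense network.
# Assumptions: SelectKBest replaces the paper's NCA-based feature weighting;
# layer sizes, k, and the synthetic data are illustrative only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for the UCI HAR feature matrix (561 hand-crafted features,
# 6 activity classes); substitute the real data set here.
X, y = make_classification(n_samples=2000, n_features=561, n_informative=60,
                           n_classes=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=100),  # stand-in for NCA selection
    MLPClassifier(hidden_layer_sizes=(256, 128, 64, 32),  # four hidden layers
                  max_iter=300, random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```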
[C4] Who Would Bob Blame? Factors in Blame Attribution in Cyber Attacks Among the Non-adopting Population in the Context of 2FA. S. Peck, M. M. H. Khan, M. A. A. Fahim, E. Coman, T. Jensen, and Y. Albayram. In 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain. (paper)
Abstract: This study focuses on identifying the factors contributing to a sense of personal responsibility that could improve understanding of insecure cybersecurity behavior and guide research toward more effective messaging targeting non-adopting populations. Towards that, we ran a 2 (account type) x 2 (usage scenario) x 2 (message type) between-group study with 237 United States adult participants on Amazon MTurk, and investigated how the non-adopting population allocates blame, and under what circumstances it blames the end user, among the parties who hold responsibility: the software companies holding the data, the attackers exposing the data, and others. We find that users primarily hold service providers accountable for breaches, but feel that the same companies should not enforce stronger security policies on users. Results indicate that people do hold end users accountable for their behavior in the event of a breach, especially when the users’ behavior affects others. Implications of our findings for risk communication are discussed in the paper.
[C3] Effect of Feedback on Users’ Immediate Emotions: Analysis of Facial Expressions during a Simulated Target Detection Task. M. A. A. Fahim, M. M. H. Khan, T. Jensen, Y. Albayram, E. Coman, and R. Buck. In Proceedings of the International Conference on Multimodal Interaction (ICMI 2019), pp. 49–58. Suzhou, China. (paper)
Abstract: Safety-critical systems (e.g., UAV systems) often incorporate warning modules that alert users regarding imminent hazards (e.g., system failures). However, these warning systems are often not perfect and trigger false alarms, which can lead to negative emotions and affect subsequent system usage. Although various feedback mechanisms have been studied in the past to counter the possible negative effects of system errors, the effect of such feedback mechanisms and system errors on users’ immediate emotions and task performance is not clear. To investigate the influence of affective feedback on participants’ immediate emotions, we designed a 2 (warning reliability: high/low) × 2 (feedback: present/absent) between-group study where participants interacted with a simulated UAV system to identify and neutralize enemy vehicles under time constraints. Task performance and participants’ facial expressions were analyzed. Results indicated that giving feedback decreased fear emotions during the task, whereas warnings increased frustration for high reliability groups compared to low reliability groups. Finally, feedback was found not to affect task performance.
[C2] The Apple Does Fall Far from the Tree: User Separation of a System from its Developers in Human-Automation Trust Repair. T. Jensen, Y. Albayram, M. M. H. Khan, M. A. A. Fahim, R. Buck, and E. Coman. In Proceedings of the International Conference on Designing Interactive Systems (DIS 2019), pp. 1071–1082. San Diego, CA, USA. (paper)
Abstract: To promote safe and effective human-computer interactions, researchers have begun studying mechanisms for "trust repair" in response to automated system errors. The extent to which users distinguish between a system and the system's developers may be an important factor in the efficacy of trust repair messages. To investigate this, we conducted a 2 (reliability) x 3 (blame) between-group, factorial study. Participants interacted with a high or low reliability automated system that attributed blame for errors internally ("I was not able..."), pseudo-externally ("The developers were not able..."), or externally ("A third-party algorithm that I used was not able..."). We found that pseudo-external blame and internal blame influenced subjective trust differently, suggesting that the system and its developers represent distinct trustees. We discuss the implications of our findings for the design and study of human-automation trust repair.
[C1] Initial Trustworthiness Perceptions of a Drone System based on Performance and Process Information. T. Jensen, Y. Albayram, M. M. H. Khan, R. Buck, E. Coman, and M. A. A. Fahim. In Proceedings of the International Conference on Human-Agent Interaction (HAI 2018), pp. 229–237. Southampton, UK. (paper)
Abstract: Prior work notes dispositional, learned, and situational aspects of trust in automation. However, no work has investigated the relative role of these factors in initial trust of an automated system. Moreover, trust-in-automation researchers often consider trust unidimensionally, whereas ability, integrity, and benevolence perceptions (i.e., trusting beliefs) may provide a more thorough understanding of trust dynamics. To investigate this, we recruited 163 participants on Amazon's Mechanical Turk (MTurk) and randomly assigned each to one of 4 videos describing a hypothetical drone system: one control video and three with additional information about system performance, process, or both. Participants reported on trusting beliefs in the system, propensity to trust other people, risk-taking tendencies, and trust in the government law enforcement agency behind the system. We found that financial risk-taking tendencies influenced trusting beliefs. Also, those who received process information were likely to have higher integrity and ability beliefs than those not receiving process information, while those who received performance information were likely to have higher ability beliefs. Lastly, perceptions of structural assurance positively influenced all three trusting beliefs. Our findings suggest that a) users' risk-taking tendencies influence trustworthiness perceptions of systems, b) different types of information about a system have varied effects on the trustworthiness dimensions, and c) institutions play an important role in users' calibration of trust. Insights gained from this study can help design training materials and interfaces that improve user trust calibration in automated systems.
[J2] The Mediating Effect of Emotions on Trust in the Context of Automated System Usage. M. A. A. Fahim, M. M. H. Khan, T. Jensen, Y. Albayram, E. Coman, and R. Buck. In IEEE Transactions on Affective Computing. (paper)
Abstract: Safety-critical systems are often equipped with warning mechanisms to alert users regarding imminent system failures. However, these mechanisms can suffer from false alarms, which negatively affect users' emotions and trust in the system. While providing feedback could be an effective way to calibrate trust under such scenarios, the effects of feedback and warning reliability on users' emotions, trust, and compliance behavior are not clear. This paper investigates this by designing a 2 (feedback: present/absent) × 2 (warning reliability: high/low) × 4 (sessions) mixed-design study where participants interacted with a simulated unmanned aerial vehicle (UAV) system to identify and neutralize enemy targets. Results indicated that feedback containing both correctness and affective components decreased users' positive emotions and trust in the system, and increased loneliness and hostility (negative) emotions. Emotions were found to mediate the relationship between feedback and trust. Implications of our findings for designing feedback and calibrating trust are discussed in the paper.
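For readers unfamiliar with mediation testing, the sketch below shows a generic single-mediator bootstrap analysis of the kind of feedback → emotion → trust relationship reported in this abstract. The column names, simulated data, and single-mediator setup are illustrative assumptions; the paper's actual analysis involves multiple emotion factors and repeated sessions, so this is only a minimal sketch of the statistical idea.

```python
# Minimal sketch of a simple mediation test (emotion mediating the
# feedback -> trust relationship). All data and variable names are
# hypothetical placeholders, not the paper's measures or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Placeholder data: feedback condition (0/1), an emotion factor score,
# and a trust rating per participant.
n = 200
feedback = rng.integers(0, 2, n)
emotion = 0.5 * feedback + rng.normal(size=n)
trust = 0.6 * emotion + 0.1 * feedback + rng.normal(size=n)
df = pd.DataFrame({"feedback": feedback, "emotion": emotion, "trust": trust})

def indirect_effect(data):
    # a-path: condition -> mediator; b-path: mediator -> outcome (controlling
    # for condition); the product a*b estimates the mediated effect.
    a = smf.ols("emotion ~ feedback", data).fit().params["feedback"]
    b = smf.ols("trust ~ feedback + emotion", data).fit().params["emotion"]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```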
[J1] Anticipated Emotions in Initial Trust Evaluations of a Drone System Based on Performance and Process Information. T. Jensen, M. M. H. Khan, Y. Albayram, M. A. A. Fahim, R. Buck, and E. Coman. In International Journal of Human–Computer Interaction 36.4 (2020): 316–325. (paper)
Abstract: Trust in automation has been largely studied through a cognitive lens, though theories suggest that emotions play an important role. Understanding the affective aspects of human-automation trust can inform the design of systems that garner appropriate trust calibration. Toward this, we designed 4 videos describing a hypothetical drone system: one control, and three with additional performance or process information, or both. Participants reported the intensity of 19 emotions they would anticipate as system operator, perceptions of the system’s trustworthiness, individual differences, and perceptions of the institution behind the system. Emotions factored into hostility, positive, anxiety, and loneliness components that were regressed on system information, individual differences, and institutional trust. We found that financial risk-taking, recreational risk-taking, and propensity to trust influenced the intensity of different emotion factors. Moreover, greater perceptions of the institution’s ability led to more intense hostility emotions, greater perceptions of the institution’s benevolence led to less intense hostility, and integrity perceptions decreased anxiety and increased positive emotions. Lastly, structural assurance led to less intense hostility and anxiety and more intense positive emotions. These results offer support for the relationship between human-automation trust and emotions, warranting future research on how operator emotions can be addressed to improve trust calibration.
[W1] Role of Social Media in Disaster Response in the Context of Savar Tragedy. J. T. B. Jafar, H. Dev, M. E. Ali, and M. A. A. Fahim. In Workshop on Advances in Data Management: Applications and Algorithms (WADM), Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, June 2013. (paper)
Abstract: In recent years, social media has regularly been used to ask for help or report injuries during disasters. This role of social media as a medium of disaster response is crucial in developing countries, where government organizations are often not well equipped for quick rescue operations and relief efforts. Hence, help from the general public becomes a significant source of disaster response. The general public collects information regarding the disaster, the victims, and their needs from social media posts by volunteers working in the disaster-affected area, and responds by participating in rescue operations, sending relief, etc. Such participation was observed during the Savar tragedy in Bangladesh, where social media was used to propagate information regarding the crisis, the victims, and their needs. In this paper, we explore the role of social media in disaster response in the context of the Savar tragedy.