Computer-Supported Collaboration

Dr. Kane's work in the area of computer-supported collaboration takes a human-centered approach. A dominant perspective is that teams tackling complex problems benefit from using computer-mediated technologies to share information among analysts separated by distance and time (Bruns, 2012). However, National Science Foundation-supported research conducted by Dr. Kane in collaboration with Carnegie Mellon University Professor Sara Kiesler and then-graduate student Ruogu Kang suggests that such optimism should be tempered (Kane, Kiesler, & Kang, 2018; Kang, Kane, & Kiesler, 2014). Experiments reveal that analysts often experience “teammate inaccuracy blindness,” treating inaccurate information as helpful and performing worse than counterparts who receive only raw data from remote collaborators (Kang, Kane, & Kiesler, 2014). Although adding accurate information from an additional collaborator can help overcome the tendency to rely on misinformation (Kang, Kane, & Kiesler, 2014), the resulting controversy or inconsistency must first be noticed to be useful. A series of experiments employing an evaluation prompt designed to increase attention to collaborators' information shows that inaccuracy blindness is difficult to counteract and poses a timely challenge for researchers and practitioners alike (Kane, Kiesler, & Kang, 2018).


Dr. Kane is a co-PI on a project examining how intelligence analysts can work together with artificial intelligence to overcome inaccuracy blindness and other collaboration issues. The project is funded by a grant from the Army Research Office to an interdisciplinary team that includes PI Dr. Susannah Paletz of the University of Maryland College of Information Studies and co-PI Adam Porter of the Department of Computer Science and the Fraunhofer USA Center for Experimental Software Engineering. The team interviewed intelligence analysts to generate an inductive model of sensemaking in intelligence analysis shift handovers (Kane et al., 2023) and used those insights to develop a novel experimental paradigm for examining the effects of AI on shift handover sensemaking and team cognition (Paletz et al., 2022, 2023).

References

Kane, A. A., Kiesler, S., & Kang, R. (2018). Inaccuracy blindness in collaboration persists, even with an evaluation prompt. CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-9. https://doi.org/10.1145/3173574.3174068

Kane, A. A., Paletz, S. B. F., Vahlkamp, S. H., Nelson, T., Porter, A., Diep, M., & Carraway, M. (2023). Intelligence analysis shift work: Sensemaking processes, tensions and takeaways. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 67(1), 1-6. https://doi.org/10.1177/21695067231192569

Kang, R., Kane, A. A., & Kiesler, S. (2014). Teammate inaccuracy blindness: When information sharing tools hinder collaborative analysis. Proceedings of the ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '14). New York, NY: ACM Press. https://doi.org/10.1145/2531602.2531681

Paletz, S. B. F., Kane, A. A., Vahlkamp, S., Diep, M., Porter, A., & Nelson, T. (2023, July). Developing a platform and experiment for AI supports in intelligence analyst asynchronous teamwork. Paper presented at the 18th Annual Interdisciplinary Network for Group Research (INGRoup) Conference, Seattle, WA. 

Paletz, S. B. F., Kane, A. A., Porter, A., Diep, M., Nelson, T., Vahlkamp, S. H., Rasevic, A., Cox, J., Roy, A., Francois, N., Cooper, A., & Carraway, M. (2022). The invasion of Vorgaria: A task and a platform for studying AI supports of team cognition in intelligence shift handovers. Lightning talk presented in the Creativity and Human-Centered AI Cluster at the 39th Annual Human-Computer Interaction Lab (HCIL) Symposium, University of Maryland. https://hcil.umd.edu/wp-content/uploads/2022/05/HCIL-Symposium-2022-Full-Program-online.pdf



Updated March 2024.