The Mapping, Imaging, and Sensing for Computing Lab (Misc.) investigates how we capture, visualize, and interpret data, drawing on applied computer vision, immersive media, remote sensing, and computing education. Our research combines data visualization, novel imaging techniques, and interactive technologies to deepen human understanding of complex information. A central thread of our work is visual sensemaking: how visual representations shape perception, decision-making, and trust in data. We develop visualization techniques that improve the clarity and accessibility of information across disciplines, and we use immersive media and eye tracking to study how people engage with visual content, uncovering the cognitive and affective patterns that influence interpretation. In computing education, we leverage visualization and interactive technologies to enhance learning experiences.
Towards AI-facilitated Collaborative Visual Sensemaking
🏅 Awarded: COEIT Interdisciplinary Proposal Award '25-'26
This project will design and study an AI facilitator to support collaborative sensemaking of complex data visualizations. Current data literacy approaches focus on individuals, neglecting structured teamwork skills like perspective-taking, empathy, and negotiation. Using design-based research, the team will run Wizard-of-Oz studies to capture dyad interactions and identify effective facilitator behaviors, then develop and test an AI agent with discourse analysis and intervention strategies to promote equitable, reflective dialogue. This project will advance visualization literacy, collaboration, and critical thinking skills, preparing learners for multidisciplinary workplaces where joint interpretation of complex data is essential.
Increasing Visual Literacy With Collaborative Foraging, Annotation, Curation, and Critique
🏅 Awarded: Hrabowski Innovation Fund '24-'26, DoIT Learning Analytics Mini-grant '23 and '24
📃 Check out our paper and poster
Students today face both information overload and information contamination from dubious sources: AI-generated content, opinions of armchair experts and influencers masquerading as expertise, context-less listicles, and consumer manipulation. Much of this information arrives accompanied by graphs and charts meant to bolster the argument. Because this firehose presents itself as technical visual communication, the overload is both cognitive and perceptual, potentially causing more insidious misperceptions than text alone. This project aims to fortify students against perceptual misinformation by increasing their Visual Information Literacy (VIL) and critical evaluation skills, surfacing and dispelling common misconceptions about visual technical information. Prior research on misconceptions in information visualization pedagogy suggests that students benefit from repeated opportunities to forage, curate, and critique examples, discussing and debating them with peers and instructors.
To harvest these benefits, we are developing an open-source visual curation and annotation platform that lets students collaboratively search for and curate found examples of misleading charts and graphs, annotate and critique those examples into concept maps, and discover new examples and patterns. The web app will be piloted in a CSEE Data Visualization course. Data from this tool will enable assessment of VIL by examining patterns in student annotations and groupings.
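We have not settled on a specific analysis method here; purely as a hypothetical sketch of how annotation data might be mined, the Python example below builds a chart-by-tag count matrix from student annotations and clusters charts that attract similar misconception tags. The record format and the use of k-means are assumptions for illustration, not the platform's actual pipeline.

```python
# Hypothetical sketch: mining student annotation patterns (not the platform's actual pipeline).
# Assumes annotations export as (student, chart_id, tag) records.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

annotations = [
    ("s1", "chart_a", "truncated-axis"),
    ("s2", "chart_a", "truncated-axis"),
    ("s1", "chart_b", "cherry-picked-range"),
    ("s3", "chart_b", "missing-baseline"),
    ("s2", "chart_c", "truncated-axis"),
]

charts = sorted({c for _, c, _ in annotations})
tags = sorted({t for _, _, t in annotations})

# Build a chart-by-tag count matrix: how often each misconception tag
# was applied to each chart across all students.
counts = defaultdict(int)
for _, chart, tag in annotations:
    counts[(chart, tag)] += 1
matrix = np.array([[counts[(c, t)] for t in tags] for c in charts], dtype=float)

# Group charts that students annotated with similar tags.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(matrix)
for chart, label in zip(charts, labels):
    print(chart, "-> cluster", label)
```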
Hands-on Ethical Inquiry: A Participatory and Exploratory Lab Model for AI/ML Ethics Education
🏅 Awarded: Hrabowski Innovation Fund '25-'27
As artificial intelligence and machine learning (AI/ML) systems become increasingly embedded in all aspects of society, computing professionals must be prepared to navigate complex ethical challenges using both affective and cognitive knowledge. However, ethics instruction in computing education is often delivered as lecture- or discussion-based content focused on codes and case studies, detached from technical practice and lacking the experiential engagement needed to foster deep moral reasoning. This project proposes a participatory, lab-based model for AI/ML ethics education that blends ethical inquiry with hands-on technical exploration. Using a customized, no-code web app, students will experiment with real AI/ML models trained on anonymized, student-relevant data. Through guided experimentation and reflection, they will investigate fairness metrics, explore trade-offs, and consider how their technical decisions impact diverse stakeholders, including themselves.
To evaluate the impact of this intervention, the project will address four research questions: (1) Does the lab improve students’ understanding of fairness metrics and AI/ML decision systems? (2) Does it shift students’ attitudes toward the ethical responsibilities of computing professionals? (3) How does it affect the quality and depth of students’ ethical inquiry? (4) Do students demonstrate increased communication skills or moral courage? A quasi-experimental design will compare outcomes from a traditional (Fall 2025) and an intervention (Fall 2026) offering of a computing ethics course, using pre/post AI Literacy & Ethics assessments, thematic analysis of student reflections and discussions, and think-aloud protocols. The project will produce open-source, adaptable instructional materials that promote affective and cognitive growth in ethical reasoning.
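The description above does not enumerate specific metrics; as one hypothetical illustration of the kind of computation the no-code app might expose, the Python sketch below evaluates two common group fairness measures (demographic parity difference and a true-positive-rate gap) for a toy classifier. The example data and metric choices are assumptions for illustration.

```python
# Hypothetical illustration of fairness metrics students might explore in the lab;
# the project's actual metrics, models, and data are not specified here.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups (labeled 0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

def true_positive_rate_gap(y_true, y_pred, group):
    """Gap in recall (TPR) between groups, one component of equalized odds."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy predictions from a hypothetical admissions-style model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("TPR gap:", true_positive_rate_gap(y_true, y_pred, group))
```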
Chart Smarts: Cultivating Critical Visual Literacy in Diverse Student Populations with Student-Curated Examples
🏅 Awarded: USM Elkins Scholarship of Teaching and Learning Fellowship '25-'26
🧾 Students funded through the Digital Storytelling Internship Program
This project focuses on interventions and analyses that encourage students in Data Visualization courses to discover, critique, and discuss misleading graphs and charts. By performing a deductive thematic analysis of student-submitted examples, we explore how demographic factors influence which visualizations students consider most misleading and why. This work aims to improve visual literacy interventions for diverse student populations, ultimately enhancing equity in data visualization pedagogy and fostering ethically minded computing professionals.
(2024 NSF REU) Glacial Guardians: Development of a Data-Driven Educational Video Game Exploring Antarctic Iceberg Lifecycles
📃 Check out our paper and poster (Best Poster Award at UMBC COEIT Research Day!)
🎮 Demo on Steam coming soon!
Understanding the iceberg life cycle, from calving to drifting, fracturing, and melting, is useful to climate scientists: melting icebergs release freshwater and nutrients into the ocean and influence sea ice and ocean currents, all of which feed into models of sea level rise. Visualizing these processes helps scientists develop and understand more accurate climate change models, and visual narration of compelling stories about specific calving events can help them engage the public and highlight the significance of these processes in the broader context of climate change. Prior studies have found that video games, especially those that incorporate real-world data, can encourage people to think about and advocate for climate change mitigation.
With this in mind, we have created a data-driven educational video game, Glacial Guardians, that incorporates real Antarctic data and interactive elements. The game follows an iceberg named A68, which broke off from the Larsen C ice shelf in July 2017, quickly fractured into A68A and the smaller A68B, and continued to fragment until its eventual demise in April 2021. An interactive, data-driven experience of the iceberg life cycle invites the public to explore and learn in an engaging virtual world while grounding that experience in real data. Because players can become emotionally attached to a game world, our goal is a game that not only informs but also motivates action on climate challenges, drawing players into the tension and outcome of a visual story that will inevitably change life on Earth.
This project uses eye-tracking technology to study how people look at and interpret misleading graphs and charts. By recording gaze patterns and pupil responses, we can analyze how attention shifts across visual elements and what features may cause confusion or misinterpretation. The goal is to understand how people process visual data in real time and to use these insights to design better strategies for building data and visualization literacy.
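As a hypothetical illustration of this kind of gaze analysis (the lab’s actual eye-tracking hardware and pipeline are not described here), the Python sketch below aggregates raw gaze samples into dwell time per area of interest (AOI) on a chart. The sample format, sampling rate, and AOI boxes are invented for illustration.

```python
# Hypothetical sketch of area-of-interest (AOI) dwell-time analysis for eye-tracking data;
# not the lab's actual pipeline. Assumes gaze samples of (timestamp_ms, x, y) in pixels.
from collections import defaultdict

# AOIs defined as named bounding boxes over a chart image: (x_min, y_min, x_max, y_max).
aois = {
    "title": (0, 0, 800, 60),
    "y_axis": (0, 60, 80, 540),
    "plot_area": (80, 60, 800, 540),
}

gaze_samples = [
    (0, 400, 30), (8, 410, 32), (16, 50, 300),
    (24, 300, 400), (32, 320, 410), (40, 60, 200),
]
SAMPLE_INTERVAL_MS = 8  # assumed eye-tracker sampling period (125 Hz)

def aoi_for(x, y):
    """Return the first AOI containing the gaze point, or None."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

# Accumulate how long gaze stayed inside each AOI.
dwell_ms = defaultdict(float)
for _, x, y in gaze_samples:
    name = aoi_for(x, y)
    if name is not None:
        dwell_ms[name] += SAMPLE_INTERVAL_MS

for name, ms in sorted(dwell_ms.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ms:.0f} ms")
```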
(2025 NSF REU) User-Centered Design of AI-Enabled Tools for Polar Environmental Change
This undergraduate research project, conducted as part of a summer iHARP REU program, explores user interface design strategies to support polar scientists working with automated methods for analyzing environmental change. As research in polar regions increasingly involves large-scale satellite imagery, simulation data, and machine learning techniques, there is a growing need for tools that help scientists effectively visualize and interpret the outputs of these automated systems. This project focuses on developing interface features that align with scientific reasoning practices, supporting exploration, comparison, and iterative hypothesis development.
As a case study, the project investigates glacier calving events using automated image segmentation of satellite data. The student will prototype interface elements that allow users to examine spatial and temporal patterns, adjust model parameters, and annotate areas of interest within segmentation results. By designing and testing interfaces that facilitate understanding of calving dynamics and their contributing factors, the project aims to enhance the usability of AI-assisted workflows in polar science. Through interviews with polar researchers, iterative interface design, and usability testing, it will also contribute to broader efforts to create transparent and collaborative AI systems in environmental science and to develop visual, interactive tools that strengthen quantitative evaluation methods.
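As a hypothetical sketch of the kind of quantitative evaluation such an interface might expose (the actual segmentation model and data formats are not specified above), the Python example below compares two binary ice/ocean masks of the same co-registered scene and reports the area calved between acquisition dates. The toy masks and pixel size are assumptions for illustration.

```python
# Hypothetical sketch: quantifying calving-front change from two binary segmentation masks
# (1 = ice, 0 = ocean) of the same scene at different dates. Not the project's actual tooling.
import numpy as np

def ice_area_change(mask_before: np.ndarray, mask_after: np.ndarray,
                    pixel_area_km2: float) -> dict:
    """Summarize ice lost/gained between two co-registered binary masks."""
    before = mask_before.astype(bool)
    after = mask_after.astype(bool)
    lost = before & ~after      # pixels that were ice and are now ocean (calved)
    gained = ~before & after    # pixels newly classified as ice
    return {
        "ice_km2_before": before.sum() * pixel_area_km2,
        "ice_km2_after": after.sum() * pixel_area_km2,
        "calved_km2": lost.sum() * pixel_area_km2,
        "gained_km2": gained.sum() * pixel_area_km2,
    }

# Toy 4x4 masks standing in for segmentation output on satellite imagery.
before = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [1, 0, 0, 0]])
after = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0]])
print(ice_area_change(before, after, pixel_area_km2=0.01))  # e.g., 100 m x 100 m pixels
```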
O. Patterson and R. M. Williams, “Glacial Guardians: Development of a Data-Driven Educational Video Game Exploring Antarctic Iceberg Lifecycles,” submitted to IGARSS 2025 - 2025 IEEE International Geoscience and Remote Sensing Symposium.
paper | poster (Best Poster Award at UMBC COEIT Research Day!)
R. M. Williams, A. U. Syed, and K. V. Kurumaddali, “Increasing Visual Literacy With Collaborative Foraging, Annotation, Curation, and Critique,” in Proceedings of the 2024 on ACM Virtual Global Computing Education Conference V. 1, Virtual Event, NC, USA: ACM, Dec. 2024, pp. 249–255. doi: 10.1145/3649165.3690108.
N. Tack, D. Engel, and R. M. Williams, “WebXR, CAVEs, and the Balance of XR Platform Agnosticity Versus Performance in Immersive Scientific Visualization,” accepted to IEEE VR, 2024.
N. Tack, B. A. Tama, A. Jebeli, V. P. Janeja, D. Engel, and R. M. Williams, “Metrics for the Quality and Consistency of Ice Layer Annotations,” in IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA: IEEE, Jul. 2023, pp. 4935–4938. doi: 10.1109/IGARSS52108.2023.10283420.
N. Tack, N. Holschuh, S. Sharma, R. M. Williams, and D. Engel, “Development and Initial Testing of XR-Based Fence Diagrams for Polar Science,” in IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA: IEEE, Jul. 2023, pp. 1541–1544. doi: 10.1109/IGARSS52108.2023.10281776.
N. Tack, R. M. Williams, N. Holschuh, S. Sharma, and D. Engel, “Visualizing the Greenland Ice Sheet in VR using Immersive Fence Diagrams,” in Practice and Experience in Advanced Research Computing, Portland, OR, USA: ACM, Jul. 2023, pp. 429–432. doi: 10.1145/3569951.3603635.
K. Ohiri, E. Nguyen, L. Osborn, R. M. Williams, L. Currano, et al., “Textile-based wireless EMG wearable system for prosthetic arm control,” in 11th International IEEE EMBS Conference on Neural Engineering, 2023.
R. M. Williams, L. E. Ray, J. H. Lever, and A. M. Burzynski, “Crevasse Detection in Ice Sheets Using Ground Penetrating Radar and Machine Learning,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing, vol. 7, no. 12, pp. 4836–4848, Dec. 2014. doi: 10.1109/JSTARS.2014.2332872.
R. M. Williams, L. E. Ray, and J. H. Lever, “Autonomous robotic ground penetrating radar surveys of ice sheets; Using machine learning to identify hidden crevasses,” in 2012 IEEE International Conference on Imaging Systems and Techniques Proceedings, Manchester, United Kingdom: IEEE, Jul. 2012, pp. 7–12. doi: 10.1109/IST.2012.6295593.
R. M. Williams, L. E. Ray, and J. Lever, “An autonomous robotic platform for ground penetrating radar surveys,” in 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany: IEEE, Jul. 2012, pp. 3174–3177. doi: 10.1109/IGARSS.2012.6350750.
Jervon Drakes
Olivia Patterson
Afrin Unisa Syed
Amanjot Singh
Sadia Tisha Narin
Samuel Nathanson
Changjia Yang
Renee McDonald
Olivia Patterson
Omkar Chougule
Amanjot Singh
Afrin Unisa Syed ('25)
Sadia Tisha Narin
Renee McDonald
Morgan Bailey ('23)
Krishna Vamsi Kurumaddali ('24)
Arya Honraopatil ('24)