Amna Batool
PhD Candidate, University of Michigan
Mixed Method UX Researcher
Privacy | Healthcare | Responsible AI
About Me
Hi, I’m Amna 👋 I’m a final-year Ph.D. candidate at the University of Michigan’s School of Information, advised by Kentaro Toyama. My research sits at the intersection of Human-Computer Interaction (HCI), privacy, healthcare, and Responsible AI. In simpler terms, I study how marginalized communities around the world interact with technology, and how we can design more inclusive, equitable, and ethical digital systems.
To explore these questions, I use mixed methods: everything from surveys and interviews to observations, content analysis, and system design. I bring in feminist, cultural, and intersectional lenses to understand people’s lived experiences, and I lean on user-centered and problem-solving approaches to guide design.
Collaboration is at the heart of my work. I believe big problems can only be tackled together, so I’ve partnered with product managers, designers, engineers, and cross-sector stakeholders (public, private, government, NGOs) to shape product strategy, design, and engagement.
My projects have informed product design decisions, policy frameworks, and on-the-ground solutions for public organizations. Sometimes impact looks like a published theory or framework; other times it’s as simple as helping a community gain digital literacy skills that improve their everyday lives.
Along the way, I’ve published 30+ papers in ACM venues, my work has been cited 900+ times, and I’ve received two Best Paper Awards and an Impact Award in privacy and security. But for me, the real impact is when the people I work with feel seen, supported, and empowered through technology.
Within privacy, my research explores women’s online privacy concerns through two interrelated themes:
1) Understanding the perceived and lived experiences of women facing online harms, with a particular emphasis on image-based abuse, including AI-generated content such as sexual deepfakes, and the role of local norms around gender, religion, and social values (e.g., honor, family reputation) in shaping the impact of these harms. [Papers: 1🔗, 2🔗, 3🔗, 4🔗, 5🔗]
2) Critically examining governance mechanisms—both online (e.g., platform content moderation policies and community standards) and offline (e.g., reporting processes involving law enforcement agencies and NGOs)—to investigate how global platform policies intersect with local legal and cultural frameworks, and how these dynamics influence victims' access to justice. [Paper 🔗 ]
My work in responsible and ethical AI focuses on understanding how AI tools operate in real-world sociotechnical contexts and how they can be designed or governed to make AI systems more accountable, transparent, and safe. Broadly, I work across three interrelated themes:
Examining the practical use and limitations of interpretability and fairness tools in industry to understand how practitioners actually integrate (or struggle to integrate) them into workflows. [Paper 🔗 ]
Studying the role of AI in high-stakes applications such as content moderation on large platforms like TikTok and Facebook, where questions of responsibility, governance, and user safety come to the forefront.
Exploring and improving detection techniques for emerging harms such as sexual deepfakes, where technical advances must be coupled with ethical, legal, and policy considerations.
In healthcare, my work focuses on two key areas:
Designing culturally tailored digital interventions (SMS systems, voice-based platforms, and mobile applications) that address the health information needs of women and frontline healthcare workers while improving public service delivery in low-resource settings. [Papers: 1🔗, 2🔗, 3🔗]
Exploring the societal and structural barriers that limit women’s access to quality healthcare and information in marginalized communities. [Papers: 1🔗, 2🔗, 3🔗, 4🔗]