Amna Batool, Ph.D.
Post-Doctoral Fellow | UX Researcher
Privacy | Healthcare | Responsible AI
About Me
Hi, I’m Amna 👋!
I’m a post-doctoral fellow at the Keough School of Global Affairs, University of Notre Dame. I earned my Ph.D. at the University of Michigan’s School of Information under the supervision of Kentaro Toyama. My research sits at the intersection of Human-Computer Interaction (HCI), privacy, healthcare, and Responsible AI. Broadly, I study how marginalized communities around the world interact with technology, and how we can design more inclusive, equitable, and ethical technological solutions.
To explore these questions, I use mixed methods, including interviews, ethnographic observations, surveys, content analysis, and system design. I bring feminist, cultural, and intersectional lenses to understand people’s lived experiences, and I draw on community-based participatory, user-centered, and problem-solving approaches to guide design.
Collaboration is at the heart of my work. I believe big problems can only be tackled together, so I partner with product managers, designers, engineers, and cross-sector stakeholders across public, private, government, and NGO spaces to shape product strategy, design, and engagement.
My projects have informed product design decisions, policy frameworks, and on-the-ground solutions for civil and government organizations. The impact ranges from developing theories and frameworks that explain existing and emerging phenomena to supporting communities in building digital literacy skills that improve everyday life. Ultimately, impact for me means that the people I work with feel seen, supported, and empowered through technology.
Along the way, I have received two Gold Medals for Distinguished Student, published over 30 papers in ACM venues, seen my work cited more than 1,000 times, and received two Best Paper Awards and two Impact Awards in privacy and security.
Within privacy, my research explores women’s online privacy concerns through two interrelated themes:
1) Understanding the perceived and lived experiences of women facing online harms, with a particular emphasis on image-based abuse, including AI-generated content such as sexual deepfakes, and the role of local norms around gender, religion, and social values (e.g., honor, family reputation) in shaping the impact of these harms. [Papers: 1🔗, 2🔗, 3🔗, 4🔗, 5🔗]
2) Critically examining governance mechanisms—both online (e.g., platform content moderation policies and community standards) and offline (e.g., reporting processes involving law enforcement agencies and NGOs)—to investigate how global platform policies intersect with local legal and cultural frameworks, and how these dynamics influence victims' access to justice. [Paper: 🔗]
My work in responsible and ethical AI focuses on understanding how AI tools operate in real-world sociotechnical contexts and how they can be designed or governed to make AI systems more accountable, transparent, and safe. Broadly, I work across three interrelated themes:
1) Examining the practical use and limitations of interpretability and fairness tools in industry to understand how practitioners actually integrate (or struggle to integrate) them into their workflows. [Paper: 🔗]
2) Studying the role of AI in high-stakes applications such as content moderation on large platforms like TikTok and Facebook, where questions of responsibility, governance, and user safety come to the forefront.
3) Exploring and improving detection techniques for emerging harms such as sexual deepfakes, where technical advances must be coupled with ethical, legal, and policy considerations.
In healthcare, my work focuses on two key areas:
1) Designing culturally tailored digital interventions, such as SMS systems, voice-based platforms, and mobile applications, that address the health information needs of women and frontline healthcare workers while improving public service delivery in low-resource settings. [Papers: 1🔗, 2🔗, 3🔗]
2) Exploring the societal and structural barriers that limit women’s access to quality healthcare and information in marginalized communities. [Papers: 1🔗, 2🔗, 3🔗, 4🔗]