This research investigates how Pakistan’s unique cultural, historical, and socio-political contexts shape the development and deployment of AI systems, with a particular focus on fairness and its implications for marginalized communities. As AI is increasingly used in critical domains such as healthcare, education, and criminal justice, the study highlights the limitations of Western-centric fairness models when applied in non-Western contexts. The research aims to uncover local perceptions of fairness, identify the socio-cultural factors that shape them, and propose actionable strategies for equitable AI governance. By collaborating with advocacy groups, the study aspires to empower marginalized communities, inform culturally resonant ethical guidelines, and contribute to global discussions on AI fairness in underrepresented regions.
Bridging Ethics and AI: Exploring Practitioner Experiences
This ongoing research project, an outcome of my research internship at the Max Planck Institute for Security and Privacy (MPI-SP) in Bochum, Germany, evaluates how AI ethics frameworks and toolkits are adopted and used by practitioners in real-world industry settings. Through qualitative interviews with AI developers and practitioners across North America, Europe, and the UK, the study aims to uncover the challenges and gaps in current responsible AI (RAI) tools. Currently in the data collection phase, the project addresses a critical gap between AI ethics theory and practice, offering evidence-based insights into how these tools influence ethical decision-making in AI development. The findings are expected to guide the creation of more effective and practical frameworks that reflect the complex realities of building AI systems.