Dr. Sara Blanco
Philosophy, Ethics and Responsible AI
Hi! I'm Sara and I work at Accenture as an analyst in Responsible AI (RAI Iberia).
🤖 As a RAI analyst, I work at the intersection of AI technology, ethics, and regulation. I help clients across the private and public sectors navigate the complex landscape of AI governance—particularly in aligning with the EU AI Act. My role involves assessing the compliance of AI systems, identifying regulatory gaps, and co-designing actionable plans to address them. I translate complex legal and ethical AI requirements into practical, client-specific documentation and guidance. By analyzing organizational structures and workflows, I identify key stakeholders and recommend governance frameworks like RAI Committees to support ethical AI implementation. I also contribute to transparency by developing system and model cards. My passion lies in making Responsible AI not just a legal checkbox, but a meaningful and integrated part of organizations' digital strategies.
🎓 I recently completed my PhD in Philosophy at the University of Tübingen, Germany. My research focuses on AI and its ethical implications. In my thesis, I investigated the concept of trust in AI and how it relates to other kinds of trust, such as interpersonal and institutional trust, as well as related notions like reliance. I argued that trust is a relational concept involving moral responsibility, a perspective that extends to cases where the trusted party is an AI system. By alternating between theoretical reflections on trust and its application to AI, my work offers a novel understanding of both trust in AI and moral responsibility. My thesis is part of the AITE project (Artificial Intelligence, Trustworthiness and Explainability), funded by the Baden-Württemberg Stiftung. You can download the book here.
Prior to my doctoral studies in Tübingen, I completed a Research Master in Analytic Philosophy at KU Leuven, Belgium. My undergraduate studies were in Philosophy at the University of Valladolid, Spain—the city where I was born and raised.
💡 I'm particularly interested in the ethical challenges of generative AI, such as authorship and originality. This connects to my broader goal of examining human-AI interaction from an ethical perspective and contributing to responsible AI development.
Blanco, S. (2025). Trusting as a Moral Act: Trustworthy AI and Responsibility. Doctoral thesis.
Blanco, S. (2025). Human Trust in AI: A Relationship beyond Reliance. AI and Ethics.
Blanco, S. (2024). Between the Muse and the Code: Exploring the Boundaries of Plagiarism and Inspiration in the Age of AI. Claridades. Revista de Filosofía, 16(2), 211–232.
Blanco, S. (2023). Explicabilidad y Fiabilidad en IA: Un Vínculo Cuestionable. Revista de la Sociedad de Lógica, Metodología y Filosofía de la Ciencia en España, Especial V Congreso de Postgrado, pp. 33-36.
Blanco, S. (2022). Trust and Explainable AI: Promises and Limitations. Ethicomp Conference Proceedings, pp. 245-256.
AI as Socio-Technical Tools: Reframing Trust and Responsibility.
PONS Forum, Tübingen, Germany (2024, February 9).
Philosophy of Science and Epistemology of ML, Delft, The Netherlands (2024, February 27) - invited talk.
Context Matters: Exploring the Limits of Explainability in Medical AI. CIVIS expert workshop "AI and opacity in healthcare contexts", Lausanne, Switzerland (2024, February 1) - invited talk.
Human Trust in AI: a Relationship Beyond Reliance. AITE Conference, Tübingen, Germany (2023, October 24) - invited talk.
The Normative Need for Trust in AI.
Rethinking Policy, Expertise and Trust, Dublin, Ireland (2023, March 24).
fPET 2023, Delft, The Netherlands (2023, April 19).
17th CLMPST, Buenos Aires, Argentina (2023, July 24).
Trusting as a Moral Act: Trust in AI and Moral Responsibility. Workshop: AI & Values, Hamburg, Germany (2023, January 24) - invited talk.
Trust versus Reliance in AI: A Moral Borderline. Technology and Politics, Leuven, Belgium (2022, September 20).
Trust and Explainable AI: Promises and Limitations.
IACAP 22, Santa Clara, United States (2022, July 22).
Ethicomp, Turku, Finland (2022, July 27).
GAP.11, Berlin, Germany (2022, September 15).
The Explainability-Trust Hypothesis: An Epistemic Analysis of its Limitations.
AITE Research Colloquium, Tübingen, Germany (2022, April 25) - virtual.
IZEW Research Colloquium, Tübingen, Germany (2022, May 3) - virtual.
Issues in XAI #4: Between Ethics and Epistemology, Delft, The Netherlands (2022, May 24).
The Ethics of Trust and Expertise, Yerevan, Armenia (2022, June 2).
V Congreso de Postgrado de la SLMFCE, Valladolid, Spain (2022, June 14).
Neurotechnology Meets Artificial Intelligence, München, Germany (2022, June 30).
Explainability in Machine Learning, Tübingen, Germany (2023, March 29) - invited talk.
Trust and Explanations in AI: A Dynamic Relationship. AITE Research Colloquium, Tübingen, Germany (2021, May 17) - virtual.
The Role of Trust in XAI. AITE Research Colloquium, Tübingen, Germany (2020, December 9).
As instructor:
Genuine trust in artificial intelligence
Course offered as part of the Data Literacy and Ethics in Practice programs at the University of Tübingen.
2023-24, Winter Semester.
2023, Summer Semester.