I am a research fellow at the "Ethics and Philosophy Lab" of the Cluster of Excellence "Machine Learning: New Perspectives for Science" at the University of Tübingen. My background is in philosophy, and my research focuses on issues in machine learning at the intersection of ethics and philosophy of science. In particular, I am interested in problems of interpretability, fairness, and reliability, with an emphasis on the medical domain. More recently, I have also been working on the use of predictive models in the social sciences and on machine learning in psychopathology. Regardless of the specific topic, much of my research boils down to two questions: "How do we know that a machine learning model fulfills its intended functions within a given socio-technical domain?" and "What guardrails should we implement to ensure that it does?"

Furthermore, I am interested in questions of methodology in AI ethics: although the field is less than a decade old, it is increasingly fragmenting into sub-communities that use different toolkits and sometimes even speak different languages. What is the right way to do ethics of AI? What are the strengths and weaknesses of different approaches? (How) Do we have to revise our normative apparatus to accommodate the technical nature of machine learning models? Is there a way to meaningfully integrate insights from different approaches in ethics?

I especially enjoy the social part of academia -- be it organizing events or collaborating on papers. So if you have any ideas for possible collaborations, please feel encouraged to get in touch.

Together with Konstantin Genin, Timo Freiesleben, and Sebastian Zezulka, I am a co-organizer of the "Philosophy of Science Meets Machine Learning" conference series.

Until recently, I was a co-supervisor, together with Eric Raidl, in the project "Artificial Intelligence, Trustworthiness, and Explainability" (AITE), funded by the Baden-Württemberg Stiftung.

I am also a co-supervisor in the "Certification and Foundations of Safe Machine Learning Systems in Healthcare" project, funded by the Carl-Zeiss-Stiftung.


News: PhilML'24 will take place in Tübingen from September 11 to 13. The CfP is now open; the deadline is May 24. For more information, see: https://philevents.org/event/show/122006

Talks in '24: 

November 14-17: Talk on Reliability in Foundation Models. PSA Symposium: Machine Learning -- From Prediction to Reflection. New Orleans.

October 22: Talk on AI and Discrimination. Stuttgart Science Week.

October 12: Medical AI between competition and collaboration. Congress of the German Society for Ophthalmology. Berlin.

June 28: A Paradigm Shift? On the Ethics of Medical Large Language Models. AEM AG "Digitalisierung und Gesundheit". Potsdam University.

May 15-17: A Minimalist Account of the Right to Explanation. Workshop: Ethics of Explainable AI. Uni Paderborn.

May 8: A Minimalist Account of the Right to Explanation. Hamburg Technical University (invited by Max Kiener).

May 1: A Minimalist Account of the Right to Explanation. Workshop: Public Reason and AI. Uni Southampton.

March 20: The Double-Standard Problem in Medical Machine Learning Solved: Why Explainability Matters.  Workshop: "The Ethics of (Un)Explainability in Healthcare". Uni Münster. 

February 23: Lecture: Algorithmic Fairness in Healthcare -- Taking Stock and Looking Ahead. Spring School "Data Justice in Healthcare". Uni Tübingen.

January 15: Guest Lecture on the Ethics of AI. TU Munich (invited by Enkelejda Kasneci) (online).


Policy Work: 

March 11: Federal Institute for Occupational Safety and Health. Berlin (invited expert for the ethics of medical AI).



Recent Publications

Grote, T. (forthcoming): Medical Artificial Intelligence. In The Oxford Handbook of Philosophy of Medicine (ed. by A. Broadbent). OUP

Grote, T. & Buchholz, O. (forthcoming): Machine Learning in Public Health and the Prediction-Intervention Gap. In Philosophy of Science for Machine Learning: Core Issues and New Perspectives (ed. by J. Durán and G. Pozzi), Springer (Synthese Library). 

Grote, T., Genin, K. & Sullivan, E. (2024): Reliability in Machine Learning. Philosophy Compass.

Grote, T. & Berens, P. (2024): A Paradigm Shift? -- On the Ethics of Medical Large Language Models. Bioethics.

Grote, T. (2024): Machine Learning in Healthcare and the Methodological Priority of Epistemology over Ethics. Inquiry (Special Issue on Philosophy of AI).

Buchholz, O. & Grote, T. (2023): Predicting and Explaining with Machine Learning Models: Social Science as a Touchstone. Studies in History and Philosophy of Science.

Freiesleben, T. & Grote, T. (2023): Beyond Generalization: A Theory of Robustness in Machine Learning. Synthese.

Grote, T. (2023): The Allure of Simplicity: On Interpretable Machine Learning Models in Healthcare. Philosophy of Medicine.

Grote, T. (2023): Fairness as Adequacy: A Sociotechnical View on Model Evaluation in Machine Learning. AI and Ethics, 1-14.

Grote, T. & Berens, P. (2023): Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice. The Journal of Medicine and Philosophy.

Grote, T. & Broadbent, A. (2022): Machine Learning and Public Health: Philosophical Issues. In The Routledge Handbook of Philosophy of Public Health (pp. 190-204). Routledge.

Grote, T. & Keeling, G. (2022): Enabling Fairness in Healthcare through Machine Learning. Ethics and Information Technology.

Broadbent, A. & Grote, T. (2022): Can Robots Do Epidemiology? Machine Learning, Causal Inference, and Predicting the Outcomes of Public Health Interventions. Philosophy & Technology.

Grote, T. & Keeling, G. (2022): On Algorithmic Fairness in Medical Practice. Cambridge Quarterly of Healthcare Ethics.

Grote, T. & Berens, P. (2021): How Competitors Become Collaborators -- Bridging the Gap(s) Between Machine Learning Algorithms and Clinicians. Bioethics.

Genin, K. & Grote, T. (2021): Randomized Controlled Trials in Medical AI -- A Methodological Critique. Philosophy of Medicine.

Grote, T. (2021): Randomised Controlled Trials in Medical AI: Ethical Considerations. Journal of Medical Ethics.

Grote, T. (2021): Trustworthy Medical AI Systems Need to Know When They Don't Know. Journal of Medical Ethics.

Grote, T. & Di Nucci, E. (2020): Algorithmic Decision-Making and the Problem of Control. In Technology, Anthropology, and Dimensions of Responsibility (ed. by M. Kühler and B. Beck), Metzler, 97-113.

Grote, T. & Berens, P. (2020): On the Ethics of Algorithmic Decision-Making in Healthcare. Journal of Medical Ethics 46(3), 205-11.



Contact: thomas.grote@uni-tuebingen.de