I am a research fellow at the "Ethics and Philosophy Lab" of the Cluster of Excellence "Machine Learning: New Perspectives for Science" at the University of Tübingen. My background is in philosophy, and my research focuses on issues in machine learning at the intersection of ethics and philosophy of science. In particular, I am interested in problems of interpretability, fairness, and reliability, with an emphasis on the medical domain. More recently, I have also been working on the use of predictive models in the social sciences and psychopathology. Regardless of the specific topic, much of my research boils down to two questions: "How do we know that a machine learning model fulfills its intended functions within a given socio-technical domain?" and "What guardrails should we implement to ensure that it does?"

Furthermore, I am interested in questions of methodology in AI ethics: although the field is less than a decade old, it seems to be fragmenting into a number of sub-communities that use different tool-kits and sometimes even speak different languages. What is the right way to do ethics of AI? What are the strengths and weaknesses of different approaches? (How) do we have to revise our normative apparatus to accommodate the technical nature of machine learning models? Is there a way to meaningfully integrate insights from different approaches in ethics? One issue I am currently interested in is understanding the exact relationship between ethical and epistemic problems in more detail.

I particularly enjoy the social side of academia -- be it organizing events, writing joint grant applications, or collaborating on papers. So if you have ideas for possible collaborations, please feel encouraged to get in touch.

Together with Konstantin Genin, Timo Freiesleben, and Sebastian Zezulka, I am an organizer of the "Philosophy of Science Meets Machine Learning" conference series.

I am also a co-supervisor in the "Certification and Foundations of Safe Machine Learning Systems in Healthcare" project, funded by the Carl-Zeiss-Stiftung.

Until recently, I was also a co-supervisor, together with Eric Raidl, in the project "Artificial Intelligence, Trustworthiness, and Explainability" (AITE), funded by the Baden-Württemberg Stiftung.