Philosophy Meets Machine Learning: What Counts As Trustworthy?
Workshop at ICML 2026, Seoul, South Korea
Philosophers have long thought deeply about concepts that are now used colloquially in the machine learning (ML) community, such as epistemology, counterfactuals, explainability, reliability, uncertainty, and causality. As ML systems become embedded in high-stakes decisions across science, industry, and public life, it is urgent that when ML researchers claim properties such as "explainability", "reliability", "intelligence", or "cognition", they do so with awareness of what practitioners, policymakers, and affected users mean by those terms. In particular, we argue that the ML community needs to step back and examine whether the mathematical objectives used in optimisation and evaluation procedures genuinely reflect how philosophers have analysed these notions: analyses that explicitly aim to connect ideas like explanation, evidence, and uncertainty to human understanding, justification, and use.
Philosophers of science and psychologists are more actively engaged than ever with such questions; however, their interaction with ML researchers remains sparse and fragmented. The goal of this workshop is to facilitate a lively dialogue between these two otherwise largely separate communities, and thereby to promote more principled and better-grounded advances in ML and artificial intelligence.
We invite short paper submissions (up to 4 pages, excluding references and appendix) from both philosophers and ML researchers on the following topics:
Epistemology of learning systems: knowledge, belief, evidence, justification, understanding, etc.
Uncertainty: interpretations of probability and credence; confidence, ignorance, ambiguity, etc.
Counterfactual reasoning: when counterfactual questions are well-posed, and what makes counterfactual answers meaningful.
Foundations of causal modelling: in particular, links between causal formalisms used in ML and philosophical accounts of causation.
Explainability and interpretability: explanation vs. prediction; understanding as a cognitive and social achievement; what counts as an explanation for whom, and why.
Reliability, robustness, and generalisation: principled notions of “reliability” beyond accuracy, statistical/philosophical perspectives on “reliable” scientific or societal use.
Submissions should be made by 11th May (anywhere on Earth) on OpenReview.
Format: All submissions must be in PDF format. Submissions are limited to four content pages. Unlimited additional pages are allowed for references and supplementary materials. Reviewers may choose to read the supplementary materials but will not be required to. Camera-ready versions may go up to five content pages.
Style file: You must format your submission using the ICML 2026 LaTeX style file. Please include the references and supplementary materials in the same PDF as the main paper.
Double-blind reviewing: The reviewing process will be double blind. As an author, you are responsible for anonymizing your submission. In particular, you should not include author names, author affiliations, or acknowledgements in your submission and you should avoid providing any other identifying information (even in the supplementary material).
LLM policy: The use of LLMs is permitted only as a writing-assistance tool.
Dual-submission policy: We welcome ongoing and unpublished work. We will also accept papers that are under review at the time of submission, or that have been recently accepted for publication at a non-ML venue (i.e., any venue that is not ICML, NeurIPS, ICLR, or a similar conference or journal). Submissions published in venues for related fields (in particular, philosophy) are welcome.
Non-archival: The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.
Visibility: Submissions and reviews will not be public. Only accepted papers will be made public.
Reciprocal reviewing: Authors of submitted works are encouraged to volunteer as reviewers for other submissions, to ensure a fair and high-quality review process.
For questions, please contact philml.icml26@gmail.com.
11th May: Paper submission deadline
31st May: Notification of decision
10th/11th July: Workshop
The workshop will take place from 8am to 5pm on Friday 10th / Saturday 11th July, 2026 at Coex, Seoul, South Korea.
8:00-8:15 Opening remarks
8:15-9:00 Invited talk 1
9:00-9:45 Invited talk 2
9:45-10:30 Oral presentations (6 x 7 minutes)
10:30-11:30 Coffee break & Poster session
11:30-12:15 Invited talk 3
12:15-13:00 Invited talk 4
13:00-14:00 Lunch
14:00-14:45 Invited talk 5
14:45-15:30 Invited talk 6
15:30-16:00 Coffee break & Poster session
16:00-17:00 Panel discussion & Closing
ETH Zürich
ETH Zürich
Stanford
TU Nuremberg, Helmholtz AI