Program
📍AI Research Building, Lecture Hall
Maria-von-Linden-Str. 6, 72076 Tübingen
Wednesday, 11.09.
08:45-09:00 Arrival & registration
09:00-09:15 Welcome
09:15-10:00 Daniel Herrmann (University of Groningen) Standards for belief representation in LLMs
10:15-11:00 Sara Pernille Jensen (University of Oslo) Dissecting link uncertainty
11:15-12:00 Brent Mittelstadt (University of Oxford) Do large language models have a duty to tell the truth?
12:00-13:00 Lunch
13:00-13:45 Ana-Andreea Stoica (MPI Tübingen) Causal Inference from competing treatments
14:00-14:45 Maximilian Noichl (Utrecht University) On Epistemic Virtues in Unsupervised Learning
14:45-15:30 Coffee break
15:30-16:15 Andreea Eșanu (New Europe College) Scrutinizing the foundations: could large language models be solipsistic?
16:30-17:15 Molly Crockett (Princeton University) Monocultures of knowing in science & society
19:00 Conference dinner at Freistil/Neckaw (covered)
Thursday, 12.09.
09:00-09:15 Morning coffee
09:15-10:00 Gabrielle Johnson (Claremont McKenna College) Precarious accurate predictions
10:15-11:00 Frauke Stoll & Annika Schuster (TU Dortmund) Understanding without understanding
11:15-12:00 Donal Khosrowi (Leibniz University Hannover) Conceptual disruptions and the proper roles for ML in scientific discovery
12:15-13:00 Lunch
13:00-13:45 Nico Formánek (University of Stuttgart) What is overfitting?
14:00-14:45 Hanseul Lee & Hyundeuk Cheon (Seoul National University) Transparency of what?
14:45-15:30 Coffee break
15:30-16:15 Hanna van Loo & Jan-Willem Romeijn (University of Groningen) Thick descriptions in data-driven psychiatry
16:30-17:15 Stefan Buijsman (TU Delft) Evaluating the quality of explanations beyond fidelity
19:00 Dinner at El Pecado (self-pay)
Friday, 13.09.
09:00-09:15 Morning coffee
09:15-10:00 Julia Haas (DeepMind) Measuring for moral performance in foundation models
10:15-11:00 Benedikt Höltgen (University of Tübingen) Causal modeling without counterfactuals or generative distributions
11:15-12:00 Alexander Tolbert (Emory University) Causal agnosticism about race
12:00-13:00 Lunch
13:00-13:45 Bertille Picard (CREST-ENSAI) Does personalized allocation make our experimental designs more fair?
14:00-14:45 Aydin Mohseni (Carnegie Mellon University) AI alignment as a principal-agent problem
14:45-15:30 Coffee break
15:30-16:15 Dominik Janzing (Amazon Research) All causal DAGs are wrong but some are useful
16:15-16:45 Closing remarks and drinks
Tom Sterkenburg (LMU Munich) Values in machine learning: What follows from underdetermination?
Please note that the program is subject to change.