When:
December 08 - 09, 2025
Where:
Institut für Medizingeschichte und Wissenschaftsforschung
Universität zu Lübeck
Königstrasse 42
23552 Lübeck
Germany
Organized by:
Britta Lübke, Christian Herzog & Deniz Sarikaya
This is the follow-up workshop of Fibonacci's Garden 1.
Registration is free but required. Write an email to Deniz or (easier for us) fill out the following Google form.
Image: Lone Thomasky & Bits&Bäume / https://betterimagesofai.org / CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
In 1960, physicist and Nobel Laureate Eugene Wigner wrote an article entitled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Wigner was surprised that the mathematical structure of a physical theory not only accurately describes the physical world but also leads to new predictions and further advances. This phenomenon puzzled him and led to the expression “unreasonable effectiveness”: Why is mathematics so effective in describing the natural world?
More recently, a Nobel Prize in Physics was awarded for work that many might consider part of computer science. The first image of a black hole, for example, relied heavily on sophisticated code. Formal tools have become integral to many complex measurements, cutting across various scientific practices. Now, with the 2024 Nobel Prizes in Chemistry and Physics awarded directly for modern AI-based tools in science, this trend has become even more pronounced. Yet, formal tools encompass far more than contemporary AI — they include modeling, statistical methods, and theory building. Many aspects of scientific inquiry are deeply connected to the formal sciences.
We are organizing a two-day exploratory workshop. The main theme will broadly examine the applicability of mathematics both within and beyond its traditional boundaries. The unifying theme is the epistemic dimension of using formal tools. What does their use mean for explainability? Is there a loss of theoretical understanding associated with increasingly statistical approaches — or is this concern misplaced? We also welcome contributions addressing educational and ethical dimensions of these issues.
Ozan Altan Altınok (University of Freiburg / Johannesburg)
Cornelius Borck (Universität zu Lübeck)
Jordi Fairhurst (Universitat de les Illes Balears)
Christian Herzog (Universität zu Lübeck)
Marit Kastaun (Universität Kassel)
Inke R. König (Universität zu Lübeck)
José Antonio Perez-Escobar (UNED, Madrid)
Britta Lübke (Universität Hamburg)
Deniz Sarikaya (Universität zu Lübeck & Vrije Universiteit Brussel)
Seunghyun Song (Tilburg University)
Two pending invitations.
TBA
Day 1, Monday
10:00 Opening and Greetings by Cornelius Borck (Universität zu Lübeck)
10:15 - 11:15 Talk 1: TBA
11:15 - 11:45 BREAK
11:45 - 12:45 Talk 2: Britta Lübke
12:45 - 13:45 Talk 3: Marit Kastaun
13:45 - 15:00 Lunch Break
15:00 - 16:00 Talk 4: TBA
16:00 - 17:00 Talk 5: Seunghyun Song & Jordi Fairhurst
short break
17:15 - 18:15 Talk 6: Deniz Sarikaya
19:00 Conference Dinner: TBA
Day 2, Tuesday
09:00 - 10:00 Talk 7: Inke R. König
10:00 - 11:00 Talk 8: Ozan A. Altınok
short break
11:15 - 12:15 Talk 9: José A. Pérez-Escobar
12:15 - 13:00 How to follow up!
Lunch (Optional) - Lübecker Kartoffelkeller
Title: Two Transparencies against Black Boxing Medical ML Systems: Distinguishing Public Transparency for Scientific Ideals from Certification for Group Privacy
By Ozan Altınok (University of Freiburg / Johannesburg)
Abstract: Machine Learning (ML) algorithms and foundation models rely on black-boxed processes built on Deep Neural Networks (DNNs). The modus operandi of DNNs creates connections that are not transparently built into the systems but develop opaquely on their own. When applied to medical decision-making, these processes often use medico-social categories such as race, and relate these categories across different layers. ML algorithms are susceptible to biases present in the data on which they are trained, which can lead to discriminatory outcomes. ML's reliance on historical data can also reinforce existing disparities in healthcare, leading to differential access to accurate diagnoses and treatments for certain social categories. As ML processes are multi-layered, which aspects of these processes need to be transparent, under which circumstances, and to what extent remain open questions for publicly governed design choices. Legal frameworks regulate individual rights, but governance on behalf of the public remains an open issue. How to decide which categories are of social interest to the people who are categorized, and which are of a discriminatory nature and should therefore be made transparent, remains an open question. This paper addresses this gap by distinguishing two logics of transparency serving two different social aims: public transparency for epistemic purposes and post-hoc transparency for group certification.
***
Title: Some boundaries on the use of categories from a biostatistical point of view
By Inke R. König (Universität zu Lübeck)
Abstract: TBA
***
Title: At the edges of explainability – educational perspectives on dealing with uncertainties in science and science education
By Britta Lübke (Universität Hamburg)
Abstract: Uncertainties are an inevitable part of scientific practices and of the knowledge that these practices produce. Some of these uncertainties can be converted into knowledge at a later point in time, while others we will never be able to resolve, or never with certainty. In the natural sciences in particular, models (and the uncertainties inherent in them) are a central medium for gaining knowledge. Furthermore, all interaction processes involve uncertainties, since the participants can never know exactly how their verbal and physical actions will be interpreted and how their counterparts will react. This means that practices of education – as Luhmann already stated with regard to the so-called technology deficit – are always contingent and permeated by uncertainties in at least two ways. At the same time, empirical studies show a strong orientation toward certainty in a wide variety of social, political, and in some cases even academic discourses. This paper examines the significance of, and ways of dealing with, different types of uncertainties in the context of the natural sciences and discusses how future-oriented science education could deal with them appropriately.
***
Title: Ordering chaos: Exploring the pertinence of mathematical explanations in biology
By José Antonio Perez-Escobar (UNED, Madrid)
Abstract: Mathematical models in biology purport to capture some structural aspect of biological phenomena in order to, for instance, yield predictions. Mathematical explanations of biological phenomena, on the other hand, appeal to a mathematical fact that contributes to explaining and understanding biological phenomena. Examples of mathematical explanations of biological phenomena are the hexagonal shape of honeycomb cells and the prime life cycles of cicadas. There are two major accounts of these explanations: 1) mathematical facts complement biological facts to produce fundamental mathematical explanations in biology, and 2) mathematics provides modal explanations (e.g. necessity or impossibility by mathematical constraint) that are beyond the scope of causal explanations. In this talk I will argue for another understanding of mathematical explanations in biology: mathematical facts from pure mathematics, rather than yielding epistemically “neutral” mathematical explanations, regulate which biological theory, among several possible candidates, does the heavy lifting in the explanation of the biological phenomenon in question. I will particularly explore cases where this regulation is harmful, for instance when biological optimization is replaced by (pure) mathematical optimization, or when outdated biological theories are made to do the explanatory heavy lifting only because they better fit mathematical facts from pure mathematics.
***
Title: On the back and forth between biological and artificial systems
By Deniz Sarikaya (Universität zu Lübeck)
Abstract: As is well known, “artificial intelligence” was introduced as a kind of marketing term. In this work-in-progress talk we will explore how this metaphor is at times overstretched in AI-safety research. We will distinguish between justified and unjustified biological import. If time permits, we will then look in the other direction: how artificial cognitive systems are shaping biological ones.
***
Title: Epistemic value of deep disagreements: the case of mathematics
By: Seunghyun Song (Tilburg University) and Jordi Fairhurst (Universitat de les Illes Balears)
Abstract: This paper articulates the value of sustaining deep disagreements in the sciences. We illustrate our position with a particular example from mathematics. Mathematics is considered to differ from other scientific disciplines due to its outstanding level of consensus. Mathematicians share clear agreements on, e.g., the right and wrong answers to fundamental questions in the field, valid and invalid proofs, or true and false theorems (see Wagner 2022 for a programmatic analysis). This high degree of consensus in mathematics, however, does not entail that it is completely free from disagreements. Recently there has been a growing body of literature noting the presence of deep disagreements within mathematics and their possible implications for mathematical practice (see Aberdein 2023; Kant 2023; Wagner 2023). We offer a working definition of deep disagreements in mathematics as instances in which mathematicians, despite sharing epistemic goals (e.g., attaining true beliefs in mathematics), are faced with persistent disagreements with no clear resolution because they rely on different fundamental beliefs and/or epistemic principles to attain these goals. These deep disagreements may pose a threat to mathematical progress, stagnating the production of knowledge about contentious mathematical topics.
This paper argues that one need not fear deep disagreements in mathematics, since these disputes may be of great interest to mathematicians’ epistemic goals. The paper proceeds as follows. First, we provide a brief explanation of what deep disagreements are. On this basis, we offer two concrete case studies of deep disagreements found in the field of mathematics. Second, we discuss the valuable contributions these deep disagreements may offer to mathematics. Building on the work of De Cruz and De Smedt (2013), we argue that deep disagreements can provide three valuable contributions to mathematics: (i) new evidence and/or proofs, (ii) a re-evaluation of existing evidence, assumptions and/or proofs, and (iii) an antidote to confirmation bias. By arguing thus, we challenge the view that deep disagreements pose a threat to mathematical progress. Third, we provide our normative take on the epistemic goals of mathematicians. We argue that mathematicians should uphold an epistemic principle of openness. This principle, if abided by in the context of deep disagreements, will transform such disagreements into fertile grounds for epistemic pursuits. We illustrate our epistemic principle of openness in the context of scientific exchange, thereby establishing a good practice of knowledge sharing and uptake, in which a scientist remains open to their peers’ arguments, claims, and criticisms in the context of deep disagreements.
The workshop is supported by Die Akademie der Wissenschaften in Hamburg, the Ethical Innovation Hub of the Universität zu Lübeck, and the Institut für Medizingeschichte und Wissenschaftsforschung der Universität zu Lübeck. The event is also endorsed by the CIPSH Chair: Diversity of Mathematical Research Cultures and Practices.