When:
December 8-9, 2025
Where:
Institut für Medizingeschichte und Wissenschaftsforschung
Universität zu Lübeck
Königstrasse 42
23552 Lübeck
Germany
Organized by:
Britta Lübke, Christian Herzog & Deniz Sarikaya
This is the follow-up workshop to Fibonacci's Garden 1.
Registration is free but required. Write an email to Deniz or (easier for us) fill out the following Google form.
Image: Lone Thomasky & Bits&Bäume / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
In 1960, physicist and Nobel Laureate Eugene Wigner wrote an article entitled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Wigner was surprised that the mathematical structure of a physical theory not only accurately describes the physical world but also leads to new predictions and further advances. This phenomenon puzzled him and led to the expression “unreasonable effectiveness”: Why is mathematics so effective in describing the natural world?
More recently, a Nobel Prize in Physics was awarded for work that many might consider part of computer science. The first image of a black hole, for example, relied heavily on sophisticated code. Formal tools have become integral to many complex measurements, cutting across various scientific practices. Now, with the 2024 Nobel Prizes in Chemistry and Physics awarded directly for modern AI-based tools in science, this trend has become even more pronounced. Yet, formal tools encompass far more than contemporary AI — they include modeling, statistical methods, and theory building. Many aspects of scientific inquiry are deeply connected to the formal sciences.
We are organizing a two-day exploratory workshop. The main theme will broadly examine the applicability of mathematics both within and beyond its traditional boundaries. The unifying theme is the epistemic dimension of using formal tools. What does their use mean for explainability? Is there a loss of theoretical understanding associated with increasingly statistical approaches — or is this concern misplaced? We also welcome contributions addressing educational and ethical dimensions of these issues.
Ozan Altan Altınok (University of Freiburg / Johannesburg)
Cornelius Borck (Universität zu Lübeck)
Sepehr Ehsani (UCL) [Online]
Jordi Fairhurst (Universitat de les Illes Balears)
Harald Gropp (Universität Heidelberg)
Christian Herzog (Universität zu Lübeck)
Inke R. König (Universität zu Lübeck)
Donal Khosrowi (Universität Hannover)
Britta Lübke (Universität Hamburg)
José Antonio Perez-Escobar (UNED, Madrid)
Andrei Rodin (Smolny Beyond Borders)
Deniz Sarikaya (Universität zu Lübeck & Vrije Universiteit Brussel)
Seunghyun Song (University of Tilburg)
Day 1, Monday
10:00 Opening and Greetings by Cornelius Borck (Universität zu Lübeck)
10:15 - 11:15 Talk 1: José A. Pérez-Escobar
11:15 - 11:45 BREAK
11:45 - 12:45 Talk 2: Britta Lübke
12:45 - 13:45 Talk 3: Sepehr Ehsani
13:45 - 14:30 Lunch Break
14:30 - 15:00 Comments by Harald Gropp
15:00 - 16:00 Talk 4: Donal Khosrowi
16:00 - 17:00 Talk 5: Seunghyun Song & Jordi Fairhurst
short break
17:15 - 18:15 Talk 6: Deniz Sarikaya
19:00 Conference Dinner: Paulaner Lübeck am Dom (https://paulanerluebeck.com/)
Day 2, Tuesday
09:00 - 10:00 Talk 7: Inke R. König (rescheduled)
10:00 - 11:00 Talk 8: Ozan A. Altinok
short break
11:15 - 12:15 Talk 9: Andrei Rodin
12:15 - 13:00 How to follow up!
Lunch (Optional) - Lübecker Kartoffelkeller
Title: Two Transparencies against Black-Boxing Medical ML Systems: Distinguishing Public Transparency for Scientific Ideals from Certification for Group Privacy
By Ozan Altınok (University of Freiburg / Johannesburg)
Abstract: Machine Learning (ML) algorithms and foundation models rely on black-box processes built on Deep Neural Networks (DNNs). The modus operandi of DNNs is to form connections that are not transparently built into the systems but develop opaquely on their own. When applied to medical decision making, these processes often use medico-social categories such as race, and the relationships among these categories at different layers. ML algorithms are susceptible to biases present in the data on which they are trained, leading to discriminatory outcomes. ML's reliance on historical data can also reinforce existing disparities in healthcare, leading to differential access to accurate diagnoses and treatments for certain social categories. As ML processes are multi-layered, which aspects of the processes need to be transparent, under which circumstances, and to what extent remain open questions in publicly governed design choices. Legal frameworks regulate individual rights, but governance for the public remains open. How to decide which categories are of social interest to the people who are categorized, and which are discriminatory in nature and should therefore be made transparent, remains an open question. This paper addresses this gap by distinguishing two logics of transparency serving two different social aims: public transparency for epistemic purposes and post-hoc transparency for group certification.
***
Title: Some boundaries on the use of categories from a biostatistical point of view
By Inke R. König (Universität zu Lübeck)
Abstract: Medical data comes in different flavors. From a statistical perspective, one of the first aspects we are interested in is the scale level of a variable, ranging from dichotomous to ratio scale. Whereas statistical analyses usually prefer higher levels due to their higher information content, categorical data is frequently used in practice because it is easier to interpret.
Although some data is naturally categorical, I argue that in many instances categorical variables are rather simplifications of reality, and different variations of this will be inspected. As a consequence, working with categorical data has a number of disadvantages, which will be illustrated. Examples will be shown from applied projects such as the CRC Sex Diversity.
***
Title: At the edges of explainability – educational perspectives on dealing with uncertainties in science and science education
By Britta Lübke (Universität Hamburg)
Abstract: Uncertainties are an inevitable part of scientific practices and the knowledge that these practices produce. Some of these uncertainties can be converted into knowledge at a later point in time, while others we will never be able to resolve, or never with certainty. In the natural sciences in particular, models (and the uncertainties inherent in them) are a central medium for gaining knowledge. Furthermore, all interaction processes involve uncertainties, since the participants can never know exactly how their verbal and physical actions will be interpreted and what reactions they will provoke in their counterparts. This means that practices of education (as Luhmann already noted with regard to the so-called technology deficit) are always contingent and permeated by uncertainties in at least two ways. At the same time, empirical studies show a strong orientation toward certainty in a wide variety of social, political, and in some cases even academic discourses. This paper examines the significance of different types of uncertainties in the context of the natural sciences, how to deal with them, and how a future-oriented science education could address them appropriately.
***
Title: Ordering chaos: Exploring the pertinence of mathematical explanations in biology.
By José Antonio Perez-Escobar (UNED, Madrid)
Abstract: Mathematical models in biology purport to capture some structural aspect of biological phenomena in order, for instance, to yield predictions. Mathematical explanations of biological phenomena, on the other hand, appeal to a mathematical fact that contributes to explaining and understanding biological phenomena. Examples of mathematical explanations of biological phenomena are the hexagonal shape of honeycomb cells and the prime life cycles of cicadas. There are two major accounts of these explanations: 1) mathematical facts complement biological facts to produce fundamental mathematical explanations in biology, and 2) mathematics provides modal explanations (e.g. necessity or impossibility by mathematical constraint) that are beyond the scope of causal explanations. In this talk I will argue for another understanding of mathematical explanations in biology: mathematical facts from pure mathematics, rather than yielding epistemically "neutral" mathematical explanations, regulate which biological theory, among several possible candidates, does the heavy lifting in the explanation of the biological phenomenon in question. I will particularly explore cases where this regulation is harmful, for instance, by substituting pure mathematical optimization for biological optimization, or by resorting to outdated biological theories for the explanatory heavy lifting only because they better fit mathematical facts from pure mathematics.
***
Title: Can Generative AI Produce Novel Evidence?
By: Donal Khosrowi (Universität Hannover)
Abstract: Researchers across the sciences increasingly explore the use of generative AI (GenAI) systems for various inferential and practical purposes, such as for drug and materials discovery or for reconstructing destroyed manuscripts and artifacts in the historical sciences. This paper explores a novel epistemological question: can GenAI systems generate evidence that provides genuinely new knowledge about the world or can they only produce hypotheses that we might seek evidence for? Exploring responses to this question, we argue that GenAI outputs can constitute de novo synthetic evidence: evidence about the world that didn’t exist before and that goes beyond the primary resources, e.g. background theory and other material evidence, that are used to construct and train GenAI systems. This is a major new insight: it shows that formal, algorithmic systems can be credited with significant epistemic achievements that are enabled by, but don’t reduce to, other achievements made by human engineers and investigators who construct and use these systems.
***
Title: The End of Theory and The Topological Data Analysis in Biomedical Research
By: Andrei Rodin (Smolny Beyond Borders)
Abstract: Topological Data Analysis (TDA) is a relatively new technique for analysing large datasets comprising high-dimensional, usually incomplete and often noisy data. Among other fields of data-driven research, TDA has proved particularly effective in the life sciences, including neuroscience, biomedicine, genomics and evolution studies. For a philosopher of science, TDA is interesting for several reasons. First, it provides an example of the effective application in science of a mathematical theory, to wit algebraic topology, which had earlier been thought of as highly abstract and fully detached from the world of human experience. Second, TDA supports a powerful visualisation technique that allows one to literally see on a computer screen various topological shapes of given datasets and thus to grasp their essential features. As a universal mathematical tool for science, TDA can be compared with more traditional tools such as Partial Differential Equations (PDEs), but the way in which TDA helps scientists to represent and understand relevant empirical data is clearly not the same. Analysing recent examples from the life sciences, we give some preliminary answers to the question of what sort of scientific understanding of empirical phenomena TDA may provide.
***
Title: On the back and forth between biological and artificial systems
By Deniz Sarikaya (Universität zu Lübeck)
Abstract: As is well known, "artificial intelligence" was introduced as a kind of marketing term. In this work-in-progress talk we will explore how this metaphor is at times overstretched in AI-safety research. We will distinguish between just and unjust biological import. If time permits, we will then look the other way: how are artificial cognitive systems shaping biological ones?
***
Title: The place of mathematics in explaining with laws in science
By Sepehr Ehsani (UCL)
Abstract: This talk is about the role of mathematical formalism versus natural language in scientific explanations, particularly explanations that incorporate some form of laws. Setting the stage, I will start from a simplified yet evidenced framework from cell biology. To explain a phenomenon in the cell, e.g. signal transduction from the outside of the cell membrane all the way to the cell nucleus, one can first create a mechanistic 'model' of this phenomenon based on empirical findings along with some guesswork. Such a model usually entails a number of interacting parts (i.e. various macromolecules, small molecules, etc.) having some sort of spatiotemporal organisation. One or more cell-biology-relevant 'laws' (lawlike generalisations, to be more accurate) could also figure in the model. Next, a 'narrative' can be provided based on this model, which weaves together the sequence of steps, the interactions, the resultant activities and the "why this way and not that way" contrastive questions that can be answered by the said laws. This narrative can be called a 'principled-mechanistic explanation', i.e. an explanation involving both mechanisms and laws. Now, even though each stage in this explanatory process is amenable to formalisation (e.g. dynamical models of a mechanism, formalised lawlike generalisations that can aid in prediction and quantification, etc.), it appears that a relatively complete and polished explanation should be in natural-language form to maximise our collective understanding of the target phenomenon. I will discuss some thoughts on why this may be so and how this picture extends to disciplines beyond cell biology. Overall, my proposal is that being aware of the implications of formal/mathematical stipulations for the eventual natural-language explanation that is to be posited is an essential, and yet non-trivial, objective to maintain.
***
Title: Epistemic value of deep disagreements: the case of mathematics
By: Seunghyun Song (Tilburg University) and Jordi Fairhurst (Palma de Mallorca)
Abstract: This paper articulates the value of sustaining deep disagreements in fields of science. We illustrate our position based on a particular example of mathematics. Mathematics is considered to differ from other scientific disciplines due to its outstanding level of consensus. Mathematicians share clear agreements on, e.g., the right and wrong answers to fundamental questions in the field, valid and invalid proofs, or true and false theorems (see Wagner 2022 for a programmatic analysis). This high degree of consensus in mathematics, however, does not entail that it is completely free from disagreements. Recently there has been a growing body of literature noting the presence of deep disagreements within mathematics and noting their possible implications for mathematical practice (see Aberdein 2023; Kant 2023; Wagner 2023). We offer a working definition of deep disagreements in mathematics as instances in which mathematicians, despite sharing epistemic goals (e.g., bringing true beliefs in mathematics), are faced with persistent disagreements with no clear resolution because they rely on different fundamental beliefs and/or epistemic principles to attain these goals. These deep disagreements may pose a threat to mathematical progress, stagnating the production of knowledge about contentious mathematical topics.
This paper argues that one need not fear mathematical deep disagreements, since these disputes may be of great interest to mathematicians' epistemic goals. The paper proceeds as follows. First, we provide a brief explanation of what deep disagreements are. On this basis, we offer two concrete case studies of deep disagreements found in the field of mathematics. Second, we discuss the valuable contributions these deep disagreements may offer to mathematics. Building on the work of De Cruz and De Smedt (2013), we argue that deep disagreements can provide three valuable contributions to mathematics: (i) new evidence and/or proofs, (ii) a re-evaluation of existing evidence, assumptions and/or proofs and (iii) an antidote to confirmation bias. By arguing thus, we critically refute the claim that deep disagreements pose a threat to mathematical progress. Third, we provide our normative take on the epistemic goals of mathematicians. We argue that mathematicians should uphold an epistemic principle of openness. This principle, if abided by in the context of deep disagreements, will transform deep disagreements into fertile grounds of epistemic pursuits. We illustrate our epistemic principle of openness in the context of scientific exchange, thereby establishing a good practice of knowledge impartment and uptake, where a scientist remains open to their peers' arguments, claims and criticisms in the context of deep disagreements.
Supported by: Die Akademie der Wissenschaften in Hamburg, the Ethical Innovation Hub of the Universität zu Lübeck, and the Institut für Medizingeschichte und Wissenschaftsforschung der Universität zu Lübeck. The event is also endorsed by the CIPSH Chair "Diversity of Mathematical Research Cultures and Practices".