From Heuristics to Algorithmics: Putting Math at the Foundations of AI
Modern AI is heuristic rather than algorithmic. An algorithm is a computer-code representation of a piece of rigorous mathematics. As such, algorithms come with strong expectations about their input-output behavior, and the falsifiability of those expectations is the basis for classical verification, validation, and uncertainty quantification of computer models. A heuristic is a computer-code representation of a collection of non-rigorous, semi-mathematical intuitions, and as such it creates no checkable a priori expectations for its behavior. Because of this heuristic nature, no AI model is ever "wrong," because no concept of model correctness is available for AI. I will argue that this fact is the origin of many of the intractable pathologies of modern AI. Both the notable successes and the puzzling failures of AI are traceable to the heuristic choices that pervade deep learning models. I will review difficulties such as the explainability gap and AI hallucinations in the context of this framework. I will discuss the problematic implications of heuristic AI for science models that adopt AI methods. I will then sketch a program for building a mathematical foundation for AI, so that AI models may gain robustness and verifiability, and begin the process of graduating from heuristics to algorithms.
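To make the contrast concrete, here is a minimal illustrative sketch (an editorial illustration, not material from the talk; the function names are hypothetical): a bisection root-finder ships with a falsifiable a priori guarantee that can be asserted after the fact, while a trained model offers no analogous contract.

import math

def bisect_root(f, a, b, tol=1e-10):
    # Algorithmic: if f is continuous and f(a)*f(b) < 0, the returned x
    # lies within tol of a true root. That expectation is falsifiable,
    # so the routine can be verified and validated in the classical sense.
    assert f(a) * f(b) < 0, "precondition: a sign change must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

x = bisect_root(math.cos, 1.0, 2.0)
assert abs(math.cos(x)) < 1e-8  # checkable expectation; failure means the code is wrong

def trained_model(inputs):
    # Heuristic stand-in for a deep network: there is no inequality its
    # output is guaranteed to satisfy, so "wrong" has no a priori meaning.
    return sum(inputs) / len(inputs)

prediction = trained_model([1.0, 2.0, 3.0])  # nothing to assert against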
A Century of Noether’s Theorem
In the summer of 1918, Emmy Noether published the theorem that now bears her name, establishing a profound two-way connection between symmetries and conservation laws. The influence of this insight is pervasive in physics; it underlies all of our theories of the fundamental interactions and gives meaning to conservation laws that elevates them beyond useful empirical rules. Noether’s papers, lectures, and personal interactions with students and colleagues drove the development of abstract algebra, establishing her in the pantheon of twentieth-century mathematicians. This talk traces her path from Erlangen through Göttingen to a brief but happy exile at Bryn Mawr College, illustrating the importance of “Noether’s Theorem” for the way we think today.
Moderated by:
Jacob H. Thomas, Illinois Institute of Technology
Panelists:
Dr. Zack Sullivan, Professor of Physics, Illinois Institute of Technology
Dr. Sullivan is a theoretical high-energy particle physicist, focusing on particle processes and the phenomenology of strong and weak interactions. His current research, funded by the US Department of Energy, is on experimental neutrino physics. He has been a visiting physicist and researcher at both Fermilab and Argonne National Laboratory, the latter of which involved work in Quantum Information Sciences. He previously served as Associate Dean for Research and Graduate Education of the Lewis College of Science and Letters and as Chair of the APS Prairie Section.
Dr. Robert Eisenberg, Professor of Physiology & Biophysics, Rush University Medical Center
Dr. Eisenberg, a professor at RUSH and adjunct research professor at Illinois Tech, is an experimental and theoretical biophysicist and mathematical molecular biologist. His work follows the device approach to biology, which views biological systems as sets of inputs and outputs governed by robust, well-defined functions. His recent research focuses on ionic channels and the modeling of ionic flow. He is chairman emeritus of the Department of Physiology and Biophysics at RUSH, a role he held from 1995 to 2015.
Dr. Carlo Graziani, Computational Scientist, Argonne National Laboratory
Carlo Graziani received a B.S. in applied physics from Columbia University in 1982 and a Ph.D. in physics from the University of Chicago in 1993. He was a postdoctoral research associate at the University of Chicago in the summer of 1993, and then an NRC/NASA research associate at the Goddard Space Flight Center from 1993 to 1996 and at the Enrico Fermi Institute from 1996 to 1999. He then worked as a science team member of the international High-Energy Transient Explorer (HETE) project for over a decade. In June 2007 he joined the University of Chicago as a research associate professor in the Department of Astronomy & Astrophysics, and in 2017 he moved to Argonne, where he is now a computational scientist.
The Maxwell Equations are Universal and Exact, (therefore) Scary
When the Maxwell equations are written without a dielectric constant, they are universal and exact, from inside atoms to between stars. Dielectric and polarization phenomena must then be described by phenomenological (constitutive) stress-strain relations for charge density that show how charge redistributes when the electric field is changed, analogous to compressibility relations in fluid mechanics.
Total current (including the ethereal displacement current) then never accumulates at all, independent of the properties of matter, in contrast to the electron (conduction) current, which accumulates as charge. These properties arise from the properties of spacetime according to the theory of special relativity, because of the Lorentz invariance of the elementary charge.
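A standard one-line check (not taken from the talk; the vacuum form of the Maxwell equations is assumed) makes the claim explicit. Taking the divergence of the Ampère-Maxwell law,
\[
\nabla \times \mathbf{B} \;=\; \mu_0\!\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)
\quad\Longrightarrow\quad
\nabla \cdot \left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)
= \frac{1}{\mu_0}\,\nabla \cdot \left(\nabla \times \mathbf{B}\right) = 0 ,
\]
so the total current $\mathbf{J}_{\mathrm{total}} = \mathbf{J} + \varepsilon_0\,\partial_t \mathbf{E}$ is divergence-free everywhere, whatever the material; by contrast, $\nabla \cdot \mathbf{J} = -\,\partial_t \rho$ shows that the conduction current alone does accumulate as charge.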
Accumulation of charge contradicts Kirchhoff’s law as stated in texts: “the currents into a node equal those leaving the node.” The universal existence of the displacement current forces a generalization of Kirchhoff’s current law. The results can be striking. Hopping phenomena (almost) disappear. In unbranched systems like circuit components or ion channels, the total current does not depend on location. Spatial Brownian motion disappears. The infinite spatial variation of a Brownian model of thermal noise becomes the zero spatial variation of total current. Maxwell’s Core Equations become a perfect (spatial) low-pass filter.
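For an unbranched, one-dimensional conductor or channel (a simplifying assumption introduced here for illustration, not a statement from the talk), the divergence-free condition reduces to
\[
\frac{\partial I_{\mathrm{total}}(x,t)}{\partial x} = 0
\quad\Longrightarrow\quad
I_{\mathrm{total}}(x,t) = I_{\mathrm{total}}(t),
\]
i.e. the total current is the same at every location along the device, which is the sense in which the spatial fluctuations of a Brownian noise model are filtered out of the total current.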
An Exact and Universal theory of Electrodynamics is a scary challenge to scientists like me, trained to be skeptical of all sweeping claims to perfection.