I will show how a number of algebraic problems have been solved using AI tools, resulting in publications in top journals. The focus is not on fully automatic tools but on those whose proving power goes beyond human capacity within a narrow scope, helping us find the missing link in complex proofs.
Holds a PhD in mathematics from the University of York. Full professor at the Department of Mathematics of the Universidade Nova de Lisboa. Leads the Laboratory for Augmented Intelligence Theorem Proving and the ProverX system (https://www.proverx.dm.fct.unl.pt).
Exploring mathematical structure in language may offer valuable insights into questions surrounding large language models. In this talk, we propose that enriched category theory can provide a natural framework for investigating the passage from texts, and probability distributions on them, to a more semantically meaningful space. We will define a category of expressions in language enriched over the unit interval and then pass to a certain class of functions on those expressions. As we will see, the latter setting has rich mathematical structure and comes with category-theoretic and geometric tools with which to explore that structure.
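As a rough illustration of the kind of enrichment involved (the details below are assumptions for exposition, not necessarily the construction of the talk), one can take texts as objects and let the hom-object from x to y be the probability that x continues to y, with the unit interval made monoidal under multiplication:

```latex
% A hedged sketch: texts x, y as objects, hom-objects valued in ([0,1], \cdot, 1).
% The continuation probability \pi(y \mid x) is an illustrative choice.
\[
  \mathcal{L}(x,y) \;=\;
  \begin{cases}
    \pi(y \mid x) & \text{if the text } y \text{ extends the text } x,\\
    0             & \text{otherwise.}
  \end{cases}
\]
% Enrichment over the unit interval then amounts to the composition and
% identity inequalities
\[
  \mathcal{L}(y,z)\,\mathcal{L}(x,y) \;\le\; \mathcal{L}(x,z),
  \qquad
  1 \;\le\; \mathcal{L}(x,x),
\]
% which hold because extending x to y and then y to z is one particular way
% of extending x to z.
```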
I am currently a research mathematician at Sandbox AQ and a visiting professor of mathematics at The Master's University, where I help run the Math3ma Institute. I finished my PhD in mathematics in spring 2020 at the CUNY Graduate Center under the supervision of John Terilla and spent some time as a postdoctoral researcher at X, the Moonshot Factory (Google X). My research interests lie at the intersection of quantum physics, machine intelligence, and category theory.
In this lecture, I will motivate spheres (or n-balls) as the atomic building block for neurosymbolic unification. The topological relations between spheres can explicitly and precisely represent fundamental symbolic structures, such as part-whole relations and subordinate relations. Sphere centres can host pre-trained vector embeddings from traditional neural networks. By introducing the method of reasoning through explicit model construction, we show that sphere neural networks can achieve the rigour of symbolic-level syllogistic reasoning. We list three methodological defects that prevent supervised deep learning from reaching rigorous syllogistic reasoning, independent of the amount of training data.
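As a toy illustration of the idea (the Sphere class, the containment test, and the syllogism example below are mine, not the speaker's implementation), containment and disjointness of spheres already capture the two relations that classical syllogisms need:

```python
# A toy sketch (not the speaker's implementation): a sphere is a centre vector
# plus a radius; containment models "all A are B", disjointness models "no A is B".
import numpy as np

class Sphere:
    def __init__(self, centre, radius):
        self.centre = np.asarray(centre, dtype=float)  # may host a pre-trained embedding
        self.radius = float(radius)

    def contains(self, other):
        # other lies entirely inside self: ||c1 - c2|| + r2 <= r1
        return np.linalg.norm(self.centre - other.centre) + other.radius <= self.radius

    def disjoint(self, other):
        # no overlap at all: ||c1 - c2|| >= r1 + r2
        return np.linalg.norm(self.centre - other.centre) >= self.radius + other.radius

# Barbara syllogism by explicit model construction:
# "all Greeks are humans" and "all humans are mortal" => "all Greeks are mortal".
mortal = Sphere([0.0, 0.0], 3.0)
human  = Sphere([0.5, 0.0], 1.5)
greek  = Sphere([0.8, 0.0], 0.5)

assert mortal.contains(human) and human.contains(greek)
assert mortal.contains(greek)   # containment is transitive, so the conclusion holds
```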
Dr Tiansi Dong is the team lead of neurosymbolic representation learning at Fraunhofer IAIS, a visiting fellow of the Computer Lab at the University of Cambridge, and co-organiser of the Computational Humour workshop at COLING'25 and of the Neural Reasoning and Mathematical Discovery workshop at AAAI'25.
In this talk we will describe the main building blocks contributing to the success of complex high-energy physics experiments. We will show where the adoption of quantum techniques has been tested and how typical particle decay processes are described within the formalism of quantum information.
CERN has started the second phase of its Quantum Technology Initiative, with a five-year plan aligned with CERN's research and collaboration objectives. We will discuss the integration of Quantum Machine Learning (QML) into the High Energy Physics (HEP) pipeline to address computational challenges in the analysis of vast and complex datasets. This talk will walk through the main research directions and results, from the theoretical foundations of quantum machine learning algorithms to applications in several areas of HEP, and will outline future directions for incorporating quantum technologies into the broader HEP research framework and beyond.
I received my industrial PhD in High Energy Physics from the University of Pavia, working on quantum machine learning models for boson polarisation discrimination. I worked for several years as a Quantum Technical Ambassador and Hybrid Cloud Solution Architect at IBM.
In my current role at CERN I coordinate and supervise a group of researchers focusing on applications of quantum algorithms. The main research directions are quantum machine learning, the investigation of distributed quantum computing, and the development of hybrid classical-quantum algorithm pipelines for theoretical and experimental physics and beyond.
Algebraic Machine Learning (AML) is a new ML paradigm that can learn from data, from a formal description of a problem, or from a combination of both. The method is purely algebraic and does not use search or optimization. AML can be effectively applied to data-driven problems, giving accuracies that rival those of Multi-Layer Perceptrons, and it is also capable of regression with real-valued data as input or output. At the same time, it can also learn without using data, for example, to solve and generate Sudokus or to find Hamiltonian cycles, using only the formal rules.
Fernando Martin-Maroto is a Theoretical Physicist and Mathematician currently conducting research at the Mathematics of Behavior and Intelligence Laboratory, Champalimaud Foundation. His academic background includes studies in Statistical Learning at Stanford University and Abstract Algebra at the University of California, Berkeley. With over 18 years of experience in the private sector, primarily in Silicon Valley, Martin-Maroto has developed various commercial products based on ML and probabilistic inference. In 2018, he introduced Algebraic Machine Learning in collaboration with Gonzalo García de Polavieja, which has been his primary focus since then.
Conjectures hold a special status in mathematics. Good conjectures epitomise milestones in mathematical discovery and have historically inspired new mathematics and shaped progress in theoretical physics. Hilbert's list of 23 problems and André Weil's conjectures steered major developments in mathematics for many decades. Crafting conjectures can often be understood as a problem in pattern recognition, for which Machine Learning (ML) is tailor-made. We propose a systematic approach to finding abstract patterns in mathematical data in order to generate conjectures about mathematical inequalities using machine intelligence. We focus on strict inequalities of the type f < g and associate them with a vector space. By geometrising this space, which we refer to as a conjecture space, we prove that it is isomorphic to a Banach manifold. We develop a structural understanding of conjecture space by studying linear automorphisms of this manifold and show that it admits several free group actions. Based on these insights, we propose an algorithmic pipeline to generate novel conjectures using geometric gradient descent, where the metric is informed by the invariances of the conjecture space.
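To make the association concrete, here is a hedged sketch of one natural way to map a strict inequality to a vector; the particular space and cone are assumptions for illustration, not necessarily those of the talk:

```latex
% A hedged sketch, assuming a Banach space (\mathcal{B}, \|\cdot\|) of
% real-valued functions on a common domain X.
\[
  (f < g) \;\longmapsto\; h := g - f \in \mathcal{B},
  \qquad
  h \in \mathcal{P}^{\circ}
  \;\Longrightarrow\;
  f(x) < g(x) \ \ \text{for all } x \in X,
\]
% where \mathcal{P}^{\circ} is the interior of the cone of nonnegative
% functions.  Validity of the inequality is preserved by actions such as
% positive rescaling h \mapsto \lambda h (\lambda > 0) or adding another
% element of \mathcal{P}; these are the kind of invariances a metric on
% conjecture space can be asked to respect.
```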
I am a theoretical physicist specializing in machine learning for mathematical discovery and quantum gravity. My research spans physics, complex geometry, and AI-driven approaches to mathematical conjectures. I completed my doctorate in Theoretical Physics as a Rhodes Scholar at Oxford University and held postdoctoral appointments at the Instituto de Ciencias Matemáticas (Madrid) and The Alan Turing Institute (London). Currently I am Director of Studies and Bye Fellow in Computer Science at Queens' College, and a Research Fellow and Affiliated Lecturer in the Department of Computer Science and Technology. I also enable entrepreneurial opportunities for students and academics through an entrepreneurship society housed at Queens' College.
I will report on joint work with L. Carvalho, J. L. Costa, and J. Mourão, in which we propose a new, simple architecture, Zeta Neural Networks (ZeNNs), to overcome several shortcomings of standard multi-layer perceptrons (MLPs). Namely, in the large-width limit, MLPs become non-parametric, do not have a well-defined pointwise limit, lose non-Gaussian attributes, and become unable to perform feature learning; moreover, standard MLPs perform poorly at learning high frequencies. ZeNNs break the (MLP permutation) symmetry at the level of the perceptrons and are inspired by three simple principles from harmonic analysis:
(1) Enumerate the perceptrons and introduce a non-learnable weight to enforce convergence;
(2) Introduce a scaling (or frequency) factor;
(3) Choose activation functions forming a near orthogonal system.
I plan to present two theorems and some experiments that show ZeNNs do not suffer from these pathologies.
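Reading the three principles literally, a single ZeNN-style unit might look like the sketch below; the n^(-s) non-learnable weights, the integer frequency factor n, the sine activation, and the function name zenn_layer are my assumptions for illustration, not necessarily the architecture of the paper:

```python
# A minimal sketch of how the three principles above might combine in one
# hidden layer; the specific choices (n^{-s} decay, frequency n, sin) are
# illustrative assumptions.
import numpy as np

def zenn_layer(x, W, b, a, s=1.5):
    """x: (d,) input; W: (N, d) learnable weights; b, a: (N,) learnable
    biases and output coefficients; s: non-learnable decay exponent."""
    N = W.shape[0]
    n = np.arange(1, N + 1)                 # (1) enumerate the perceptrons
    decay = n ** (-s)                       # (1) fixed weights enforcing convergence as N grows
    features = np.sin(n * (W @ x + b))      # (2) frequency factor n; (3) near-orthogonal sin system
    return np.sum(a * decay * features)     # the sum depends on the ordering, breaking permutation symmetry

# Toy usage: a width-64 ZeNN-style unit on a 3-dimensional input.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(size=(64, 3))
b = rng.normal(size=64)
a = rng.normal(size=64)
print(zenn_layer(x, W, b, a))
```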
Since February 2024, I have been an Associate Professor in the Department of Mathematics at Instituto Superior Técnico in Lisbon, Portugal. Previously, I held positions as an FCT Principal Investigator at Instituto Superior Técnico, a NOMIS Fellow at IST Austria, an Assistant Professor at the Universidade Federal Fluminense (UFF), a postdoc at IMPA, and an Elliot Assistant Research Professor at Duke University. I also held a research membership at MSRI and a visiting scientist position at the Max Planck Institute in Bonn (Germany). I completed my PhD in 2014 at Imperial College London under the supervision of Sir Simon Donaldson.
Deterministic feature decoupling refers to the process, in data analysis, of minimizing the algebraic entanglement of features, expressed as global functions of the samples in a given dataset. Such coupling is a source of redundancy that harms a range of applications, including machine learning and style transfer/synthesis. This makes feature decoupling a relevant tool for data processing.
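To picture what such coupling looks like (a generic toy example and a generic residualisation fix, not the decoupling method of the talk), consider two features where one is essentially a deterministic function of the other:

```python
# A toy illustration of deterministic coupling between two features and one
# generic way to remove it (residualisation); NOT the specific method of the talk.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
f1 = x                                       # first feature
f2 = x**2 + 0.1 * rng.normal(size=1000)      # almost a deterministic function of f1: coupled, redundant

# Remove the part of f2 that is algebraically predictable from f1 by fitting
# a low-degree polynomial and keeping only the residual.
coeffs = np.polyfit(f1, f2, deg=2)
f2_decoupled = f2 - np.polyval(coeffs, f1)

print(np.corrcoef(f1**2, f2)[0, 1])            # close to 1: strong coupling
print(np.corrcoef(f1**2, f2_decoupled)[0, 1])  # close to 0: coupling removed
```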
Javier Portilla (PhD 1999, Universidad Politécnica de Madrid) did a postdoc at the Center for Neural Science, New York University. He then joined the Department of Computer Science and Artificial Intelligence at the Universidad de Granada. In 2003 he obtained an excellence research grant and, in 2008, a tenured research position at the Instituto de Óptica, CSIC, in Madrid. In 2008 he received an IEEE SPS Best Paper Award and, in 2019, an Outstanding Editorial Board Award from IEEE Transactions on Computational Imaging. His research interests include computational imaging, visual-statistical modeling, and image processing.
Coming soon
Josef Urban is a Distinguished Researcher at the Czech Institute of Informatics, Robotics and Cybernetics (CIIRC), heading its AI department and formerly also the ERC project AI4REASON. His main interests are in combining inductive/learning and deductive/reasoning AI methods over large formal knowledge bases. His systems have won several theorem-proving competitions, and the methods today assist formal verification in proof assistants. He received his MSc in Mathematics and PhD in Computer Science from Charles University in Prague, worked as an assistant professor in Prague, and as a researcher at the University of Miami and Radboud University Nijmegen. He also co-founded the conference on Artificial Intelligence and Theorem Proving (AITP) and has co-organized it since 2016.
Recommended readings:
Who Can Understand the Proof? A Window on Formalized Mathematics [ https://writings.stephenwolfram.com/2025/01/who-can-understand-the-proof-a-window-on-formalized-mathematics/ ]
What’s Really Going On in Machine Learning? Some Minimal Models [ https://writings.stephenwolfram.com/2024/08/whats-really-going-on-in-machine-learning-some-minimal-models/ ]
Can AI Solve Science? [ https://writings.stephenwolfram.com/2024/03/can-ai-solve-science/ ]
What Is ChatGPT Doing … and Why Does It Work? [ https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ ]
ChatGPT Gets Its “Wolfram Superpowers”! [ https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/ ]
Generative AI Space and the Mental Imagery of Alien Minds [ https://writings.stephenwolfram.com/2023/07/generative-ai-space-and-the-mental-imagery-of-alien-minds/ ]
and Wolfram Language [ https://www.wolfram.com/language/ ]
Stephen Wolfram is a renowned computer scientist, mathematician, and theoretical physicist best known for his pioneering work in computational mathematics and cellular automata. He founded Wolfram Research in 1987, where he created the well-known software Mathematica and Wolfram Alpha, a computational knowledge engine. Wolfram is currently developing a new foundation for physics and exploring the potential of artificial intelligence in science. He emphasizes the importance of computational thinking in understanding complex systems and proposes integrating AI with specialized computational tools to overcome the limitations of current AI models in tackling complex scientific challenges.