This research line exploits the interface between Complexity Science methods and concepts (statistical physics, dynamical systems, time series analysis, ...) and AI (mainly neural network models for supervised and unsupervised learning). We use complexity techniques to unravel the inner workings of neural-network training and optimization, and we draw inspiration from the physics of complex systems to build decentralized AI solutions in which the whole is more than the sum of its parts.
Key papers on the theory of machine learning
ANN training through the lens of a dynamicist
Kaloyan Danovski, Miguel C. Soriano and Lucas Lacasa
Frontiers in Complex Systems 2 (2024)
More is different: interacting brains and collective learning
An effective theory of collective deep learning
Lluís Arola-Fernández and Lucas Lacasa
Physical Review Research 6, L042040 (2024)
Graph-theoretical image processing
Visibility graphs for image processing
Jacopo Iacovacci, Lucas Lacasa
IEEE Transactions on Pattern Analysis and Machine Intelligence 42, 4 (2020)
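To illustrate the visibility-graph idea underlying this line of work, here is a minimal sketch of the standard natural visibility criterion on a 1-D series (nodes are data points; two points are linked when the straight line between them clears every intermediate point). This is an illustrative sketch of the generic construction, not the image-specific algorithm of the paper above.

```python
def visibility_edges(series):
    """Natural visibility graph of a 1-D series: index i is linked to
    index j when every intermediate point k lies strictly below the
    straight line joining (i, series[i]) and (j, series[j])."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # line of sight: series[k] must stay under the i-j chord
            if all(
                series[k]
                < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            ):
                edges.append((i, j))
    return edges

print(visibility_edges([1, 3, 2, 4]))  # [(0, 1), (1, 2), (1, 3), (2, 3)]
```

Note that the peak at index 1 blocks the line of sight from index 0 to indices 2 and 3, which is why those two edges are absent.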
Applications of machine learning across the disciplines
Shopper intent prediction from clickstream e-commerce data with minimal browsing information
Borja Requena, Giovanni Cassani, Jacopo Tagliabue, Ciro Greco, Lucas Lacasa
Scientific Reports 10, 16983 (2020)
Aerodynamic and structural airfoil shape optimisation via Transfer Learning-enhanced Deep Reinforcement Learning
David Ramos, Lucas Lacasa, Eusebio Valero, Gonzalo Rubio
Submitted for publication
Lucas Lacasa, Abel Pardo, Pablo Arbelo, Miguel Sánchez, Pablo Yeste, Noelia Bascones, Alejandro Martínez-Cava, Gonzalo Rubio, Ignacio Gómez, Eusebio Valero, Javier de Vicente
Expert Systems With Applications 277 (2025)