Group on Applied Mathematical Modeling, Statistics, and Optimization (MATHMODE)
MATHMODE is recognized as a Research Group of Excellence (A) by the Basque Government.
Inmaculada Arostegui
Judit Muñoz
Shima Baharlouei
Alejandro Duque
Xalbador Otxandorena
Leire Garmendia
Ibai Laña
Gontzal Sagastabeitia
MATHMODE areas of knowledge:
1) Applied Numerical Methods for PDEs, to model physical systems and predict their evolution.
2) Applied Artificial Intelligence (AI), often used in combination with advanced numerical and statistical methods.
3) Applied Statistics and Optimization, to analyze data and model real-world problems.
4) Scientific Computing, to efficiently implement numerical, statistical, and AI algorithms.
MATHMODE goals:
1) Develop knowledge in Applied Mathematics, AI, Statistics, and Scientific Computing.
2) Transfer this mathematical knowledge to industry and institutions for the benefit of society.
3) Train new researchers and attract and retain talent in the area.
Joint Research Lab on Applied Artificial Intelligence (JRL-A2I, www.jrl-a2i.science)
Talent Attraction's Day in Applied Artificial Intelligence
Neural Network-based numerical methods
We explore various strategies to integrate traditional numerical techniques for (parametric) PDEs with deep learning (DL) algorithms. Our approach leverages strong, weak, and ultraweak variational formulations, including minimal-residual methods, Fourier residual techniques with adaptive versions, and Ritz-based formulations.
A Deep Neural Network method for solving PDEs
From a technical perspective, our focus includes:
Loss Functions: We develop loss functions that: (i) accurately reflect approximation errors, and (ii) minimize integration errors. For (i), we select appropriate test functions, based on theoretical insights. To address (ii), we employ diverse quadrature techniques, including Monte Carlo for high-dimensional problems, piecewise-polynomial approximations of neural network outputs, adaptive integration, and tailored regularization methods. Additionally, we design optimized quadrature rules using machine learning.
Optimizers: We propose innovative optimization strategies that ensure robust local minimization, addressing challenges posed by false or inaccurate minimizers. Our work includes second-order optimizers and hybrid approaches that combine a Least Squares solver for the last-layer parameters with a Gradient Descent-based optimizer for the remaining network parameters.
Architectures: We design machine learning-driven algorithms that optimize neural network architectures for improved efficiency and performance.
Additionally, we integrate DL with finite element methods (FEM) for solving PDEs.
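As a minimal sketch of the hybrid optimization idea above, the following toy example (our illustration, not the group's actual code) solves for the last-layer weights of a network in closed form by linear least squares, given fixed hidden-layer features; in a full hybrid scheme, those hidden-layer parameters would in turn be updated by gradient descent.

```python
import numpy as np

# Hybrid-training sketch: with the hidden layer frozen, the output layer of a
# network is linear in its weights, so it can be solved exactly by least
# squares instead of by gradient descent. All values here are illustrative.

rng = np.random.default_rng(0)

# Target function to approximate on [0, 1]
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2 * np.pi * x)

# Fixed hidden layer (in a real scheme, updated elsewhere by gradient descent)
W = rng.normal(size=(1, 30))
b = rng.normal(size=(30,))
features = np.tanh(x @ W + b)            # shape (200, 30)

# Last-layer parameters: one closed-form least-squares solve
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
residual = np.linalg.norm(features @ coef - y) / np.linalg.norm(y)
print(f"relative L2 residual after last-layer solve: {residual:.3e}")
```

Alternating this exact linear solve with gradient steps on the nonlinear parameters is what makes such hybrid schemes attractive: the easy (linear) part of the problem is never left to the slow optimizer.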
Advanced traditional numerical methods
We develop and mathematically analyze highly accurate and robust numerical methods for the simulation-based solution and inversion of challenging multiphysics applications. These methods are also crucial for generating sufficiently large and meaningful datasets for training artificial intelligence algorithms. In particular, we focus on:
Time Integration of Differential Equations: We analyze, design, and implement numerical integration methods for time-evolution problems governed by differential equations whose solutions cannot be obtained using conventional packages based on multistep methods or Runge-Kutta schemes.
Mesh-Adaptive Finite Elements: We exploit unconventional error representations and explicit time-domain methods to design goal-oriented adaptivity methods. We also employ hierarchical h- and p-basis functions, possibly with a large number of Dirichlet nodes, to support arbitrary hp-meshes.
Refined Isogeometric Analysis (rIGA): We develop refinement mechanisms that reduce the continuity of the solution over local areas of the domain while keeping an optimal distribution of computational resources.
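One family of schemes designed for problems where standard explicit integrators struggle is exponential integrators; the toy example below (our choice of illustration, not necessarily one of the group's specific methods) shows why on the stiff linear ODE y' = λy.

```python
import math

# Toy stiffness illustration: for y' = lam * y with lam << 0, explicit Euler
# diverges unless the step obeys |1 + lam*dt| < 1, while an exponential
# integrator reproduces the exact decay at any step size.

lam = -1000.0
dt = 0.01            # far outside explicit Euler's stability limit
steps = 100          # integrate up to t = 1.0

y_euler = 1.0
y_expo = 1.0
for _ in range(steps):
    y_euler = y_euler * (1.0 + lam * dt)    # explicit Euler update (unstable)
    y_expo = y_expo * math.exp(lam * dt)    # exponential update (exact here)

print(f"explicit Euler at t=1:    {y_euler:.3e}")   # blows up
print(f"exponential step at t=1:  {y_expo:.3e}")    # ~ exp(-1000), essentially 0
```

The exact solution is exp(-1000) at t = 1; explicit Euler instead multiplies by (1 - 10) = -9 at every step, which is the kind of failure that motivates integrators beyond conventional multistep and Runge-Kutta packages.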
Applied AI at MATHMODE addresses real-world deployment of AI models through a rigorous mathematical lens. We focus on optimizing performance, safety, and interpretability using tools from optimization theory, linear algebra, probability, and statistical modeling. Our research spans deep learning, AI safety, trustworthy AI, and biologically inspired models, with applications grounded in both theoretical challenges and practical requirements.
Deep Learning: We develop multimodal models that integrate structured and unstructured data using advanced architectures (e.g., graph neural networks, tensor decompositions, and pretrained backbones). Research includes multi-objective optimization of deep networks via pruning, quantization, and automated hyperparameter tuning. We also study foundation models as priors for tasks in reinforcement learning, few-shot learning, and transfer learning, where their impact on generalization is formally analyzed.
Trustworthy AI: We investigate explainability using counterfactuals, topological data analysis, and attribution methods, grounded in formal statistical frameworks. Human-in-the-loop models leverage feedback loops informed by preference modeling and active learning. Our sustainability line focuses on computational complexity reduction, using random projections, incremental learning, and low-rank approximations to minimize resource usage.
AI Safety: Research here addresses open-world generalization, using out-of-distribution detection, novelty detection, and continual learning algorithms that preserve previously acquired knowledge. We model uncertainty using Bayesian neural networks, Monte Carlo sampling, and conformal prediction, aiming for reliable calibrated inference. AI alignment efforts involve preference elicitation, reward modeling, and formal definitions of value consistency.
Bioinspired & Hybrid AI: We study evolutionary algorithms to optimize network architectures and learning dynamics in non-stationary environments. Event-based learning draws from spiking neural models and neuromorphic principles to build efficient, physics-inspired AI systems. Biologically plausible AI incorporates plasticity rules, neoHebbian learning, and unsupervised representation learning, all formulated with mathematical precision to enhance model adaptability and interpretability.
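Of the uncertainty tools named under AI Safety, split conformal prediction is simple enough to sketch end to end; the "model" and the data below are hypothetical placeholders, not the group's applications.

```python
import numpy as np

# Split conformal prediction sketch: calibrate a residual quantile on held-out
# data, then intervals model(x) +/- q cover new points at the target rate.

rng = np.random.default_rng(1)

def model(x):
    return 2.0 * x          # stand-in for a trained regressor

# Calibration data: the true relationship is y = 2x + Gaussian noise
x_cal = rng.uniform(0.0, 1.0, 500)
y_cal = 2.0 * x_cal + rng.normal(0.0, 0.1, 500)

# Nonconformity scores: absolute residuals on the calibration set
scores = np.abs(y_cal - model(x_cal))

# Quantile level with the standard finite-sample correction
alpha = 0.1
n = scores.size
level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
q = np.quantile(scores, level)

# Check coverage on fresh data: should land near 1 - alpha = 0.90
x_new = rng.uniform(0.0, 1.0, 2000)
y_new = 2.0 * x_new + rng.normal(0.0, 0.1, 2000)
coverage = np.mean(np.abs(y_new - model(x_new)) <= q)
print(f"empirical coverage: {coverage:.3f}")
```

The appeal for safety-critical settings is that this coverage guarantee is distribution-free: it needs exchangeable data, not a correct model.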
Our goal is to design valid, accurate, reliable, and user-friendly algorithms and statistical methods, which are increasingly a key need in many areas of research: medicine, biology, chemistry, ecology, toxicology, genetics, social sciences, and communication, among others. We focus on the following areas:
Statistics to validate and efficiently model real data. We promote the transfer of research in statistics to biomedical, experimental, and social fields through reliable and user-friendly algorithms and statistical methods. In particular, we work on the following topics:
Statistical Modelling: beyond the conditional expected value, modeling other distribution parameters (e.g., variance) is crucial.
Development of prediction models: from variable selection to model evaluation and validation in different sampling design approaches, including, but not limited to, observational and survey data.
Dynamic prediction: development of statistical methods that allow continuously updated estimation throughout the entire follow-up period.
Model performance measures: development of estimators and methods to efficiently evaluate the statistical models’ goodness of fit, prediction, and discrimination ability.
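Two of the performance measures mentioned above are easy to illustrate with made-up predictions: the Brier score for the overall accuracy of predicted probabilities, and the AUC for discrimination ability.

```python
# Illustrative (fabricated) binary outcomes and predicted probabilities
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
p_hat = [0.1, 0.3, 0.7, 0.2, 0.9, 0.6, 0.4, 0.8]

# Brier score: mean squared difference between prediction and outcome
brier = sum((p - y) ** 2 for p, y in zip(p_hat, y_true)) / len(y_true)

# AUC: probability that a random positive case is ranked above a random
# negative case (ties count one half)
pos = [p for p, y in zip(p_hat, y_true) if y == 1]
neg = [p for p, y in zip(p_hat, y_true) if y == 0]
pairs = [(p, q) for p in pos for q in neg]
auc = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p, q in pairs) / len(pairs)

print(f"Brier score: {brier:.3f}, AUC: {auc:.3f}")  # Brier 0.075, AUC 1.000
```

Here every positive case outranks every negative one, so discrimination is perfect (AUC = 1) even though the probabilities themselves are imperfect (Brier > 0), which is exactly why these two measures are evaluated separately.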
Optimization and control-theory techniques for modeling and performance evaluation of large distributed stochastic systems, such as telecommunications and transportation networks. The main challenge is coping with randomness in the arrivals and in the behavior of the end users of these complex networks. Our goal is to propose algorithms and mechanisms that improve the performance experienced by end users. Technology transfer to industry is strongly promoted.
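The kind of stochastic-network analysis described above can be illustrated with the simplest queueing model; the example below (our toy example, not one of the group's models) estimates the mean waiting time of an M/M/1 queue by simulation and compares it with the classical closed-form value ρ/(μ − λ).

```python
import random

# M/M/1 mean waiting time via the Lindley recursion:
#   W_{n+1} = max(0, W_n + S_n - A_{n+1}),
# with exponential service times S_n and interarrival times A_{n+1}.

random.seed(42)
lam, mu = 0.8, 1.0              # arrival and service rates (rho = 0.8)
n_customers = 200_000

w, total_wait = 0.0, 0.0
for _ in range(n_customers):
    total_wait += w
    service = random.expovariate(mu)
    interarrival = random.expovariate(lam)
    w = max(0.0, w + service - interarrival)   # Lindley recursion

mean_wait = total_wait / n_customers
theory = (lam / mu) / (mu - lam)               # rho / (mu - lam) = 4.0
print(f"simulated mean wait: {mean_wait:.2f} (theory: {theory:.2f})")
```

Even this one-server example shows the central difficulty the paragraph mentions: at ρ = 0.8 the average customer waits four full service times, purely because of randomness in arrivals and service.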
Our research group develops and applies advanced scientific computing and AI techniques to address complex real-world challenges across multiple domains. By integrating mathematical modeling, data-driven methods, and high-performance computing, we enhance predictive capabilities, optimize decision-making, and improve the efficiency of engineering and scientific applications. Our contributions span several key areas of scientific computing:
Machine Learning & AI for Scientific Applications: Development of deep neural networks, automated machine learning, and uncertainty estimation techniques for applications in industry, energy, healthcare, and mobility. We employ hybrid approaches, including physics-informed neural networks, to enhance model reliability and interpretability.
Numerical Simulation & Modeling: Implementation of finite element methods, computational fluid dynamics, and inverse problem-solving techniques to improve simulations in health, geophysics, energy systems, structural health monitoring, and biomechanics.
Data Assimilation & Uncertainty Quantification: Integration of sensor data into models to enhance predictive accuracy, reduce uncertainty, and improve decision-making in areas such as smart grids, offshore wind energy, and seismic imaging.
Optimization & Control Algorithms: Development of AI-driven optimization techniques for engineering designs, production scheduling, supply chain logistics, and energy management, ensuring efficient resource allocation and improved performance.
High-Performance & Parallel Computing: Acceleration of large-scale simulations using GPU and distributed computing to enable faster and more complex analyses in offshore structures, healthcare applications, and autonomous vehicle navigation.
Big Data Analytics & Statistical Modeling: Application of Bayesian inference, clustering techniques, and causal inference to extract insights from large datasets, improving traffic forecasting, anomaly detection, and risk prediction.
eXplainable AI (XAI) & Responsible AI: Advancing the deployment of AI with a focus on fairness, explainability, and accountability, ensuring large-scale implementation in real-world applications.
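As a minimal example of the Bayesian-inference toolbox mentioned above, the following conjugate Beta-Binomial update estimates an event rate (say, for risk prediction or anomaly monitoring) from hypothetical counts; the prior and the data are made up for illustration.

```python
# Conjugate Beta-Binomial update: with a Beta(alpha0, beta0) prior on an event
# rate and k events in n trials, the posterior is Beta(alpha0 + k, beta0 + n - k).

alpha0, beta0 = 1.0, 1.0        # uniform Beta(1, 1) prior on the event rate
events, trials = 12, 100        # hypothetical observed counts

alpha_post = alpha0 + events
beta_post = beta0 + trials - events
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior mean event rate: {posterior_mean:.3f}")  # 13/102 ~ 0.127
```

Conjugacy keeps the update a pair of additions, which is why such models scale to the streaming settings (traffic forecasting, anomaly detection) listed above.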
Health applications
Advances in real-world industry and healthcare: Our research applies advanced statistical modeling, artificial intelligence, and functional data analysis to address critical challenges in healthcare and industry. We develop methodologies for analyzing complex patient-reported outcomes, leveraging multidimensional beta-binomial regression to improve clinical decision-making. In respiratory health, we introduce novel functional data approaches to assess the impact of physical activity in Chronic Obstructive Pulmonary Disease (COPD) patients using telemonitoring data, overcoming challenges related to variable-domain data. Additionally, we employ machine learning and feature selection techniques to predict COVID-19 severity, identifying key clinical markers that align with independent medical findings. In medical imaging, our work advances airway assessment for anesthesia safety through deep learning models for automated orofacial landmark detection. These efforts contribute to more accurate predictions, optimized treatment strategies, and improved patient outcomes.
Validation and prediction models for diseases: Our goal is to ensure the transfer of statistics research to medical and experimental fields. In particular, we focus on the validation of prediction models for diseases such as chronic obstructive pulmonary disease (COPD), colon cancer and heart diseases, among others.
Recently, we have designed a computer application to predict adverse events (death and intensive care unit or intermediate respiratory care unit admission), based on five predictive variables: age, previous history of long-term home oxygen therapy, altered consciousness, use of accessory inspiratory muscles, and baseline dyspnea.
Screenshot of the application, running under the Android platform. Data for an imaginary subject with complete information displayed as an example.
With an electronic health database available to emergency physicians, our software could serve as an instrument for rapid and reliable decisions in emergency situations, ensuring the translation of clinical prediction rules into easy-to-use computer tools suitable for clinical practice.
Ultrasound imaging of the human body: We collaborate with GE Healthcare to enhance ultrasound imaging for women by integrating advanced AI algorithms. Specifically, we develop deep learning models enriched with numerical methods for partial differential equations to extract key parameters from synthetic ultrasound data.
Obstetric models: We investigate obstetric models to enhance risk prediction and clinical decision-making, with a key research focus on the relationship between endometriosis and placenta previa through large-scale retrospective data analysis. We use recorded data from the Cruces University Hospital to refine obstetric predictions and enhance personalized care for high-risk pregnancies.
Disease transmission models: Members of the MATHMODE group participated in modeling the evolution of COVID-19 using a SEIR (Susceptible, Exposed, Infectious, and Recovered) epidemiological representation. Their results were validated against data provided by Osakidetza (Basque Health Service) and served to anticipate the number of infected persons who would need basic medical care or admission to intensive care units in the Basque Country.
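The SEIR model above can be sketched in a few lines; the rate parameters below are generic illustrative values, not the calibrated parameters of the Basque Country study.

```python
# SEIR compartmental model integrated with a simple forward-Euler scheme.
beta = 0.3           # transmission rate (contacts per day times infection prob.)
sigma = 1 / 5.2      # incubation rate = 1 / mean latent period (days)
gamma = 1 / 10       # recovery rate = 1 / mean infectious period (days)

N = 1_000_000        # total population
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0   # start with 10 infectious cases
dt = 0.1             # time step in days

for _ in range(int(200 / dt)):          # simulate 200 days
    new_exposed = beta * S * I / N      # S -> E flow
    dS = -new_exposed
    dE = new_exposed - sigma * E        # E -> I flow at rate sigma
    dI = sigma * E - gamma * I          # I -> R flow at rate gamma
    dR = gamma * I
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR

print(f"recovered after 200 days: {R:,.0f} of {N:,}")
```

Tracking the E compartment separately from I is what lets such models anticipate hospital and ICU load a latent period ahead of the observed case counts.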
Energy applications using Geophysics
We provide efficient and reliable mathematical tools and computational algorithms, including deep learning algorithms, to delineate a map of the Earth's subsurface, which is essential for a variety of applications, such as earthquake prediction and seismic hazard estimation, mining, geothermal energy production, mine detection, and underground CO2 storage, among others. We focus on the following aspects:
Simulation and inversion of acoustic fields: We focus on efficient numerical methods for the Helmholtz equation with application in seismic problems. We are currently working on combining physics-informed neural networks with functions that capture the oscillatory and localized nature of wavefields more effectively.
Schematic representation of the neural network architecture that employs Gabor functions to produce the oscillatory behavior of the wave field (left). Evolution of prediction errors on validation points (relative to the finite-difference result) for standard PINNs and PINNs with oscillatory (Gabor) functions (right).
Enhanced seismic data reconstruction: We develop methods to reconstruct missing data and denoise noisy signals in seismic simulations, addressing recording gaps caused by equipment limitations, irregular acquisition geometries, and strong noise.
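The Gabor functions mentioned above combine a Gaussian envelope with an oscillatory carrier, which is what makes them well suited to localized wavefields; the one-dimensional sketch below uses arbitrary example parameters, not those of the actual network.

```python
import math

# 1-D Gabor function: a Gaussian envelope modulating a cosine carrier,
# oscillatory near its center and vanishing away from it.

def gabor(x, center=0.0, width=1.0, freq=5.0, phase=0.0):
    envelope = math.exp(-((x - center) ** 2) / (2.0 * width ** 2))
    return envelope * math.cos(freq * x + phase)

# At the center the envelope is 1, so gabor(0) = cos(0) = 1; far from the
# center the Gaussian envelope suppresses the oscillation almost entirely.
print(f"{gabor(0.0):.3f}  {gabor(0.5):.3f}  {gabor(5.0):.6f}")
```

In a Gabor-enhanced network, the centers, widths, and frequencies of such functions become trainable parameters, giving the model oscillatory behavior that a plain smooth activation must otherwise learn from scratch.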
We work on the structural health of bridges, viaducts, and offshore wind energy platforms. Our goal is to create mathematical models (supervised or unsupervised AI learning approaches) that are trained with synthetic and experimental data and generate accurate structural health diagnostics of civil and industrial engineering infrastructures.
SHM of bridges and viaducts: We design real-time data-based supervision tools, able to monitor the global behavior of critical components in the structures of interest. We integrate local and global variables to enhance damage detection in full-scale applications. Our research applies deep learning techniques, such as autoencoder-based neural networks, to reconstruct healthy structural conditions while accounting for environmental and operational variability.
SHM of offshore wind turbines: We design algorithms for the early detection of failures and the monitoring of difficult-to-access or expensive-to-instrument components and subsystems, such as fractures in towers and support structures, gearboxes, and blades, among others. We model the dynamic response of the publicly available DeepCWind OC4 semi-submersible platform and use key statistical metrics to describe the platform's displacements and rotations.
Given the scarcity of real data, we employ simulations performed in OpenFAST, recreating both healthy and damaged mooring systems. We use these datasets to train and validate Deep Learning algorithms.
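The reconstruction-based damage detection idea described above can be sketched with a simplified linear stand-in for the autoencoder: learn a low-dimensional reconstruction of "healthy" signals with PCA, then flag signals whose reconstruction error exceeds what healthy data ever produced. The signals below are synthetic toy vibrations, not OpenFAST outputs.

```python
import numpy as np

# PCA-based novelty detection as a linear analogue of an autoencoder trained
# only on healthy data: damage shows up as a large reconstruction error.

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 64)

# Healthy training signals: one dominant vibration mode plus sensor noise
healthy = np.array([np.sin(2 * np.pi * 3 * t + rng.uniform(0, 2 * np.pi))
                    + 0.05 * rng.normal(size=t.size) for _ in range(200)])

mean = healthy.mean(axis=0)
U, s, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = Vt[:2]                      # 2-D "healthy" subspace

def reconstruction_error(x):
    z = (x - mean) @ components.T        # encode
    x_hat = mean + z @ components        # decode
    return float(np.linalg.norm(x - x_hat))

# Alarm threshold: worst reconstruction error seen on healthy data
threshold = max(reconstruction_error(x) for x in healthy)

# A "damaged" signal: an extra high-frequency component appears
damaged = np.sin(2 * np.pi * 3 * t) + 0.8 * np.sin(2 * np.pi * 11 * t)
print(reconstruction_error(damaged) > threshold)   # flagged as anomalous
```

The appeal of this family of methods for SHM is that only healthy-condition data are needed for training, which matches the scarcity of real damage recordings noted above.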
NREL’s 5 MW Baseline FOWT mounted on the DeepCWind OC4 floater.