Upcoming talk
(2024/12/13) A hybrid, iterative, numerical, and transferable solver (HINTS) for parametric partial differential equations Youngkyu Lee (Brown University)
We present HINTS, a Hybrid, Iterative, Numerical, and Transferable Solver that combines Deep Operator Networks (DeepONets) with classical numerical methods to efficiently solve partial differential equations (PDEs). By leveraging the complementary strengths of the DeepONet's spectral bias, which favors low-frequency components, and the efficiency of relaxation or Krylov methods at resolving high-frequency modes, HINTS balances convergence rates across eigenmodes. HINTS is highly flexible: it supports large-scale, multidimensional systems with arbitrary discretizations, computational domains, and boundary conditions, and it can also serve as a preconditioner for Krylov methods. To demonstrate its effectiveness, we present numerical experiments on parametric PDEs in two and three dimensions.
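As a rough illustration of the hybrid iteration (not the speaker's implementation: the DeepONet is replaced here by an exact low-frequency corrector purely to show the alternation structure), consider a 1D Poisson problem where relaxation sweeps damp high-frequency error and the periodic "operator network" step removes the low-frequency modes:

```python
import numpy as np

def hybrid_solve(n=64, n_iters=60, hybrid_every=10):
    """Alternate weighted-Jacobi sweeps (good at high frequencies) with a
    low-frequency correction (stand-in for a DeepONet) on -u'' = f."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.sin(np.pi * x)                      # right-hand side
    u_exact = f / np.pi**2                     # exact continuous solution
    # Standard second-order finite-difference matrix for -u''
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)
    diag = 2.0 / h**2
    for it in range(n_iters):
        if (it + 1) % hybrid_every == 0:
            # "DeepONet" step: correct the 3 lowest eigenmodes of A exactly.
            r = f - A @ u
            k = np.arange(1, 4)
            modes = np.sqrt(2 * h) * np.sin(np.pi * np.outer(k, x))
            lam = (2.0 - 2.0 * np.cos(k * np.pi * h)) / h**2
            u = u + modes.T @ ((modes @ r) / lam)
        else:
            u = u + (2.0 / 3.0) * (f - A @ u) / diag   # weighted Jacobi sweep
    return np.max(np.abs(u - u_exact))
```

With this right-hand side, which excites only the lowest mode, the corrector removes the remaining error in one shot, leaving only the O(h²) discretization gap to the continuous solution.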
Previous talks
(2024/3/8) Generative Modeling through Optimal Transport Jaewoong Choi (KIAS)
The Optimal Transport (OT) problem seeks a transport map that bridges two distributions while minimizing a specified cost function. OT theory has been widely utilized in generative modeling. Initially, the OT-based Wasserstein metric served as a measure of the distance between the data and generated distributions. More recently, the OT map connecting the data and prior distributions has emerged as a new approach to generative modeling. In this talk, we will introduce generative models based on optimal transport. Specifically, we will present our work on a generative model utilizing Unbalanced Optimal Transport, and we will discuss our subsequent efforts to address the challenges associated with this approach.
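As a textbook special case (my illustration, not the speaker's method): in one dimension with quadratic cost, the optimal map is the monotone rearrangement, so with equal-weight samples it reduces to pairing sorted samples:

```python
import numpy as np

# 1D optimal transport with quadratic cost: pair the i-th smallest source
# sample with the i-th smallest target sample (monotone rearrangement).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)      # source ("prior") samples
y = rng.normal(3.0, 0.5, 5000)      # target ("data") samples

src, tgt = np.sort(x), np.sort(y)
w2_sq = np.mean((src - tgt) ** 2)   # empirical squared 2-Wasserstein distance
# For these Gaussians, w2_sq should approach (3-0)^2 + (1-0.5)^2 = 9.25.
```

In higher dimensions no such closed form exists, which is where learned transport maps come in.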
(2024/4/5) Designing a hardware solution for deep neural network training Dongsuk Jeon (Seoul National University)
The size and complexity of recent deep learning models continue to grow exponentially, incurring serious hardware overhead when training them. In contrast to inference-only hardware, neural network training is very sensitive to computation errors; hence, training processors must support high-precision computation to avoid large performance drops, which severely limits their processing efficiency. This talk will introduce a comprehensive design approach for arriving at an optimal training processor design. More specifically, the talk will discuss in depth how to make the key design decisions for training processors, including i) hardware-friendly training algorithms, ii) optimal data formats, and iii) processor architectures for high precision and utilization.
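A generic numeric illustration of that precision sensitivity (my example, not from the talk): accumulating many small weight updates in half precision can lose them entirely once the running value dwarfs each update.

```python
import numpy as np

def accumulate(dtype, n=10000, step=1e-4):
    """Apply n tiny additive updates to a weight stored in `dtype`."""
    w = np.asarray(1.0, dtype=dtype)
    for _ in range(n):
        w = (w + np.asarray(step, dtype=dtype)).astype(dtype)
    return float(w)

# float16 spacing near 1.0 is ~1e-3, so each 1e-4 update rounds away and the
# weight never moves, while float32 accumulates: 1.0 + 10000 * 1e-4 = 2.0.
```

This is one reason training hardware cannot simply reuse the aggressive low-precision formats that suffice for inference.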
(2024/4/12) Roles of machine learning for solving differential equations Junho Choi (KAIST)
In the past decade, machine learning methods (MLMs) for solving partial differential equations (PDEs) have gained significant attention as a novel numerical approach. Indeed, a tremendous number of research projects have surged that apply MLMs to various applications, ranging from geophysics to biophysics. This surge in interest stems from the ability of MLMs to rapidly predict solutions for complex physical systems, even those involving multi-physics phenomena, uncertainty, and real-world data assimilation. This trend has led many to view MLMs as a potential game-changer in PDE solving. However, despite this optimism, significant challenges remain. These include limitations relative to conventional numerical approaches, a lack of thorough analytical understanding of their accuracy, and the potentially long training times involved. In this talk, I will first assess the current state of MLMs for solving PDEs. Following this, we will explore what roles MLMs should play in order to become a conventional numerical scheme.
(2024/4/26) An Information-Theoretic Analysis of Nonstationary Bandit Learning Seungki Min (KAIST)
In nonstationary bandit learning problems, the decision-maker must continually gather information and adapt their action selection as the latent state of the environment evolves. In each time period, some latent optimal action maximizes expected reward under the environment state. We view the optimal action sequence as a stochastic process and take an information-theoretic approach to analyze attainable performance. We bound per-period regret in terms of the entropy rate of the optimal action process. The bound applies to a wide array of problems studied in the literature and reflects the problem's information structure through its information ratio.
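Schematically (notation mine and heavily hedged; see the paper for the precise statement), the entropy rate of the optimal-action process and the shape of the resulting per-period bound are:

```latex
% Entropy rate of the optimal action process (A^*_t), and a per-period
% regret bound in terms of it and an information ratio \Gamma:
\bar{H} \;=\; \lim_{T \to \infty} \frac{1}{T}\, H\!\left(A^*_1, \dots, A^*_T\right),
\qquad
\frac{\mathbb{E}\left[\mathrm{Regret}(T)\right]}{T} \;\lesssim\; \sqrt{\Gamma\, \bar{H}}.
```

Intuitively, a slowly changing environment has a low entropy rate, so per-period regret can be driven small; a rapidly changing one forces persistent regret.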
(2024/5/10) The qualitative theory of dynamical systems for deep learning Woochul Jung (Konyang Medical Center)
The qualitative theory of dynamical systems provides a mathematical framework for analyzing the long-time behavior of systems without necessarily solving the given ODEs. The theory can be related to deep learning problems from various perspectives, such as approximation, optimization, generalization, and explainability. In this talk, we first introduce the qualitative theory of dynamical systems. Then, we present numerical results applying it to deep learning problems.
(2024/5/24) Neural Quantum Embedding: Pushing the Limits of Quantum Supervised Learning Kyungdeock Park (Yonsei University)
Quantum embedding is a fundamental prerequisite for applying quantum machine learning techniques to classical data, and has substantial impacts on performance outcomes. In this study, we present Neural Quantum Embedding (NQE), a method that efficiently optimizes quantum embedding beyond the limitations of positive and trace-preserving maps by leveraging classical deep learning techniques. NQE enhances the lower bound of the empirical risk, leading to substantial improvements in classification performance. Moreover, NQE improves robustness against noise. To validate the effectiveness of NQE, we conduct experiments on IBM quantum devices for image data classification, resulting in a remarkable accuracy enhancement. In addition, numerical analyses highlight that NQE simultaneously improves the trainability and generalization performance of quantum neural networks, as well as of the quantum kernel method.
(2024/6/7) Towards Optimal Investment Strategy with Deep Learning Jeonggyu Huh (Sungkyunkwan University)
Deep learning has shown remarkable success in various fields, and efforts continue to develop deep-learning-based investment methodologies in the financial sector. Despite numerous successes, the revolutionary results seen in areas such as image processing and natural language processing have not materialized in finance. There are two reasons why deep learning has not yet led to disruptive change in finance. First, the scarcity of financial data leads to overfitting in deep learning models, so excellent backtesting results do not translate into actual outcomes. Second, there is a lack of methodological development for optimizing dynamic control models under general conditions. I aim to overcome the first problem by artificially augmenting market data through an integration of Generative Adversarial Networks (GANs) with the Fama-French factor model, and to address the second by enabling optimal control even under complex conditions using policy-based reinforcement learning. The methods of this study have been shown to significantly outperform traditional linear financial factor models such as the CAPM and value-based approaches such as the HJB equation.
(2024/9/6) What kind of validation do we need for PINNs? Changhoon Song (KAIST)
Physics-Informed Neural Networks (PINNs) have emerged as a promising method for solving partial differential equations (PDEs) by embedding physical laws directly into the learning process. However, a critical question remains: How do we validate that PINNs accurately solve these PDEs? This talk explores the types of mathematical validation required to ensure that PINNs can reliably approximate solutions to PDEs. We will discuss the conditions under which PINNs can converge to the correct solution, the relationship between minimizing residuals and achieving accurate results, and the role of optimization algorithms in this process. Our goal is to provide a clear understanding of the theoretical foundations needed to trust PINNs in practical applications while addressing the challenges in this emerging field.
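One concrete piece of the residual-versus-error question can be seen in a toy setting (my illustration; the easy stability constant is specific to this problem): for -u'' = f on (0,1) with zero boundary values, the maximum principle gives ||error||_inf <= (1/8) ||residual||_inf, so driving the residual to zero provably drives the error to zero.

```python
import numpy as np

def residual_and_error(k, eps=1e-2, m=1001):
    """Perturb the exact solution of -u'' = pi^2 sin(pi x), u(0)=u(1)=0,
    by eps*sin(k*pi*x) and return the resulting (error, PDE residual),
    both in the max norm on a fine grid."""
    x = np.linspace(0.0, 1.0, m)
    error = np.max(np.abs(eps * np.sin(k * np.pi * x)))
    # Residual of the perturbed function: -(u + e)'' - f = -e''
    residual = np.max(np.abs(eps * (k * np.pi) ** 2 * np.sin(k * np.pi * x)))
    return error, residual

# The stability bound ||error|| <= ||residual|| / 8 holds at every frequency;
# note how the residual overweights high-frequency error components.
for k in (1, 3, 7):
    e, r = residual_and_error(k)
    assert e <= r / 8.0
```

For general PDEs no such clean stability estimate may be available, which is exactly the kind of validation question the talk addresses.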
(2024/9/20) A Multivariate Spline Based Collocation Method for Numerical Solution of Partial Differential Equations Jinsil (KAIST)
We propose a collocation method based on multivariate polynomial splines over triangulations or tetrahedralizations for the numerical solution of partial differential equations. We start with a detailed explanation of the method for the Poisson equation and then extend the study to other PDEs. We show that the numerical solution can approximate the exact PDE solution very well, and we present a large number of numerical experiments demonstrating the performance of the method in two- and three-dimensional settings.
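A one-dimensional caricature of the collocation idea (polynomials on an interval standing in for multivariate splines on a triangulation; sizes are illustrative): choose a trial space satisfying the boundary conditions, enforce the PDE at collocation points in a least-squares sense, and compare against the exact solution.

```python
import numpy as np

def collocation_poisson(deg=10, n_col=60):
    """Least-squares collocation for u'' = -pi^2 sin(pi x), u(0)=u(1)=0,
    with trial functions b_k(x) = x^(k+1) - x^(k+2) (zero at both ends)."""
    x = np.linspace(0.0, 1.0, n_col + 2)[1:-1]      # interior collocation pts
    f = -np.pi**2 * np.sin(np.pi * x)               # right-hand side
    B = np.stack([x**(k + 1) - x**(k + 2) for k in range(deg)], axis=1)
    # b_k'' = k(k+1) x^(k-1) - (k+1)(k+2) x^k   (the k=0 term is just -2)
    B2 = np.stack([k * (k + 1) * x**max(k - 1, 0) - (k + 1) * (k + 2) * x**k
                   for k in range(deg)], axis=1)
    c, *_ = np.linalg.lstsq(B2, f, rcond=None)      # enforce PDE at points
    residual = np.max(np.abs(B2 @ c - f))           # PDE residual
    error = np.max(np.abs(B @ c - np.sin(np.pi * x)))  # true solution error
    return residual, error
```

The real method replaces this global polynomial basis with piecewise spline spaces over a mesh, which is what makes it scale to 2D and 3D domains.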
(2024/10/11) Physics-Informed Deep Inverse Operator Networks for solving PDE inverse problems Hwijae Son (Konkuk University)
This talk will introduce Physics-Informed Deep Inverse Operator Networks (PI-DIONs) for solving PDE-based inverse problems. Traditional operator learning methods typically require large amounts of labeled training data, which is often impractical in real-world applications. PI-DIONs, however, learn the solution operator without relying on labeled data. Experimental results show that PI-DIONs enable real-time inference across various inverse problems, achieving test errors comparable to supervised models. Additionally, we extend stability estimates from the inverse problem literature to the operator learning framework, providing a robust theoretical foundation for PI-DIONs.
(2024/10/18) Solving Parameterized PDEs with Low-rank Structured Physics-Informed Neural Networks and Fast Learning Algorithms (FastLNRN) Kookjin Lee (Arizona State University)
In various engineering and applied science applications, repetitive numerical simulations of partial differential equations (PDEs) for varying input parameters are often required (e.g., aircraft shape optimization over many design parameters), and the solvers must execute rapidly. In this study, we suggest a path that potentially opens up the possibility for physics-informed neural networks (PINNs), emerging deep-learning-based solvers, to be considered as one such solver. Although PINNs have pioneered a proper integration of deep learning and scientific computing, they require repetitive, time-consuming training of neural networks, which is not suitable for many-query scenarios. To address this issue, we propose lightweight low-rank PINNs containing only hundreds of model parameters, together with an associated hypernetwork-based meta-learning algorithm, which allow efficient solution approximation for varying PDE input parameters. Moreover, we show that the proposed method is effective in overcoming a challenging issue known as the "failure modes" of PINNs.
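The parameter-count arithmetic behind the low-rank idea (sizes hypothetical, structure only, not the speakers' exact architecture): factor a layer's d-by-d weight as U diag(s) V^T and let a hypernetwork predict only the small core s for each new PDE parameter.

```python
import numpy as np

d, r = 128, 8                       # layer width and rank (illustrative)
full_params = d * d                 # dense weight matrix
lowrank_params = 2 * d * r + r      # U (d x r), V (d x r), core s (r)
per_instance = r                    # only s is re-predicted per PDE parameter

# Forward pass through the factored layer: y = U @ (s * (V^T x)).
rng = np.random.default_rng(0)
U, V = rng.normal(size=(d, r)), rng.normal(size=(d, r))
s, x_in = rng.normal(size=r), rng.normal(size=d)
y = U @ (s * (V.T @ x_in))          # costs O(d*r) instead of O(d*d)
```

Freezing U and V after meta-training and adapting only s is what keeps the per-problem model down to a handful of parameters.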
(2024/11/1) Modeling and Simulations of Vortex-Body Interactions: Swimming & Falling Sung-Ik Sohn (Gangneung-Wonju National University)
The motion of a body moving through a stationary fluid involves unsteady fluid-solid interaction, and vortices separated from the body play an important role in its motion. Examples of such coupled motion are ubiquitous in nature, and thus understanding vortex-body interactions is of considerable practical and fundamental interest. In this talk, we present an inviscid model for vortex-body interactions based on vortex sheets. We apply the model to simulate swimming and falling plates. For swimming, we consider the undulatory motion of a fish-like body. The model successfully demonstrates the self-propulsion of the body and the formation of pairs of counter-rotating vortices shed from the body. For the falling plate, computational limitations of the model are discussed; the model is extended to overcome these limitations and is applied to a falling plate across various flow regimes. Three different falling patterns are identified from simulations of the model: fluttering, tumbling, and chaotic motion.
(2024/11/8) Virtual element method and mixed virtual volume method Gwanghyun Jo (Hanyang University ERICA)
The virtual element method (VEM) is a generalization of the finite element method to general polygonal (or polyhedral) meshes. The term "virtual" reflects the fact that no explicit form of the shape functions is required: the discrete space on each element is implicitly defined as the solution of a certain boundary value problem. As a result, the basis functions include non-polynomials whose explicit evaluations are not available, and in implementation these basis functions are projected onto polynomial spaces. In this talk, we briefly introduce the basic concepts of VEM. Next, we introduce the mixed virtual volume method (MVVM) for elliptic problems. MVVM is formulated by multiplying the mixed form of an elliptic equation by judiciously chosen test functions. We show that MVVM can be converted to a symmetric positive definite (SPD) system for the pressure variable. Once this primary variable is obtained, the Darcy velocity can be computed locally on each element.
(2024/11/29) Formation-Controlled Dimensionality Reduction Yoon Mo Jung (Sungkyunkwan University)
Dimensionality reduction is the process of generating a low-dimensional representation of high-dimensional data. In this talk, I explain what dimensionality reduction is and briefly mention formation control. After that, I will introduce a nonlinear dynamical system designed for dimensionality reduction, briefly discuss the mathematical properties of the model, and demonstrate numerical experiments on both synthetic and real datasets.
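A minimal dynamical-system sketch of dimensionality reduction (my own toy model, not the formation-control system from the talk): let the low-dimensional points flow along the negative gradient of the classical MDS stress, so embedded pairwise distances are driven toward the original high-dimensional ones.

```python
import numpy as np

def embed(X, dim=2, steps=2000, lr=0.001, seed=0):
    """Gradient flow on the MDS stress sum_{i<j} (|y_i-y_j| - |x_i-x_j|)^2."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # target distances

    def stress(Y):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        return np.sum((d - D) ** 2) / 2.0                  # each pair once

    rng = np.random.default_rng(seed)
    Y = rng.normal(size=(n, dim))                          # random start
    s0 = stress(Y)
    for _ in range(steps):
        diff = Y[:, None] - Y[None, :]                     # (n, n, dim)
        d = np.linalg.norm(diff, axis=-1) + np.eye(n)      # avoid divide-by-0
        grad = 2.0 * (((d - D) / d)[..., None] * diff).sum(axis=1)
        Y -= lr * grad                                     # explicit Euler step
    return s0, stress(Y)

# Example: flatten 20 points lying on a circle in 3D into the plane.
t = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
X = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
before, after = embed(X)
```

Viewing the embedding as the long-time behavior of such a flow is what lets tools from dynamical systems theory be brought to bear.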