Program

January 22 (Monday)     Applied Mathematics

12:50-13:00 Welcome remarks

Afternoon session Chair: 최준호

13:00-14:15 (박혁주) An Introduction to the Virtual Element Method

14:15-15:30 (이완호) Solving Industrial and Medical Problems through Mathematical Modeling

15:30-16:45 (윤성하) Numerical Investigation and Developing Methods for Several Reaction-Diffusion Models

16:45-18:00 (조준홍) Axial Green Function Method for Incompressible Viscous Flows

18:00-19:30 Dinner (meal provided at the KAIST Faculty Club)

January 23 (Tuesday)     Machine Learning

Morning session Chair: 이명수

09:30-10:45 (최재무) Optimal Transport-based Generative Modeling

10:45-12:00 (최재웅) Analyzing and Improving Optimal Transport-based Generative Models

12:00-13:00 Lunch (boxed lunch provided)

Afternoon session Chair: 이재용

13:00-14:15 (심윤수) Data-driven Design of Energy Materials Utilizing Deep Neural Network

14:15-15:30 (신효민) Physics-informed Variational Inference for Stochastic Partial Differential Equations

15:30-16:45 (김현주) Modern Numerical Methods with Machine Learning and Fractional Differential Equations

16:45-18:00 (조성웅) Deep Learning for Advanced PDE Solvers and Operator Learning

18:00-21:00 Banquet ('동천홍')

January 24 (Wednesday)     PDE and Theory

Morning session Chair: 설윤창

09:30-10:45 (박예찬) Tools of Algebra and Machine Learning Theory

10:45-12:00 (조남경) Neural Network Approximations of PDEs and Regularity Theory Beyond Linear Growth

12:00-13:15 (이재용) Learning Solution Operators of Partial Differential Equations and Their Application

13:15-13:20 Closing remarks

January 22 (Monday) Applied Mathematics

An Introduction to the Virtual Element Method

박혁주 (KAIST) 13:00--14:15

Recently, the virtual element method was introduced as an extension of the finite element method to arbitrary polygonal or polyhedral meshes, and it has been successfully applied to various problems, such as elasticity, fluid mechanics, and electromagnetics. In this talk, we discuss some basics of the virtual element method and compare it to the finite element method. Furthermore, we briefly present some of our recent results and some topics under consideration for our future investigation.
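For readers new to the method, the core construction can be summarized in one formula (standard textbook notation, not necessarily the notation used in the talk): on a polygonal element E, the local VEM bilinear form replaces the exact form by a computable polynomial projection plus a stabilization,

\[
a_h^E(u_h, v_h) \;=\; a^E\big(\Pi^{\nabla}_k u_h,\ \Pi^{\nabla}_k v_h\big) \;+\; S^E\big((I-\Pi^{\nabla}_k)u_h,\ (I-\Pi^{\nabla}_k)v_h\big),
\]

where \(\Pi^{\nabla}_k\) is the energy projection onto polynomials of degree at most \(k\), computable from the degrees of freedom alone, and \(S^E\) is any stabilizing form that scales like \(a^E\) on the kernel of the projection. This is what allows arbitrary polygonal or polyhedral elements without explicit shape functions.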


Solving Industrial and Medical Problems through Mathematical Modeling

이완호 (NIMS) 14:15--15:30

Mathematical modeling plays a key role in solving complex problems arising in industry and medicine. In this workshop, we explore how the Mathematical Modeling Research Team at the National Institute for Mathematical Sciences (NIMS) approaches disease prediction and diagnosis through mathematical modeling of biological phenomena. The team mathematically models a wide range of biological phenomena, from the cellular level up to the level of human organs, and uses these models to analyze the mechanisms and functions of systems within the human body. On this basis, we quantitatively measure changes in physiological signals caused by disease and evaluate the effectiveness of medical treatments. This session introduces key research topics such as real-time blood pressure waveform classification, modeling of the blood circulatory system, and the analysis of partial differential equations related to biological phenomena, and shares case studies of how these approaches are applied to solving industrial problems.


Numerical Investigation and Developing Methods for Several Reaction-Diffusion Models

윤성하 (EWHA WOMANS UNIVERSITY) 15:30--16:45

In this presentation, we discuss several issues related to reaction-diffusion models in terms of numerical investigation and the development of numerical solvers. First, we cover the widely known phase-field models, the Allen-Cahn and Cahn-Hilliard equations. Several numerical simulation results and previous research on convex splitting and operator splitting methods are presented. Next, we extend the solvers to some other reaction-diffusion models and provide both theoretical and numerical results. Lastly, we delve into some topics currently under investigation, focusing especially on rapid solidification in dilute alloys.
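As background for the convex splitting methods mentioned above, a standard first-order Eyre-type scheme for the Allen-Cahn equation \(u_t = \epsilon^2 \Delta u - F'(u)\) with \(F(u) = \tfrac{1}{4}(u^2-1)^2\) treats the convex part of the nonlinearity implicitly and the concave part explicitly (a textbook example, not necessarily the exact schemes discussed in the talk):

\[
\frac{u^{n+1}-u^n}{\Delta t} \;=\; \epsilon^2 \Delta u^{n+1} - \big(u^{n+1}\big)^3 + u^n,
\]

which is unconditionally energy stable, i.e., the discrete free energy is non-increasing for any time step size.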


Axial Green Function Method for Incompressible Viscous Flows

조준홍 (INHA UNIVERSITY) 16:45--18:00

This talk introduces numerical methods for solving incompressible viscous flows governed by the Stokes and Navier-Stokes equations, using the axial Green function method (AGM). AGM enables the numerical solution of multi-dimensional problems by employing one-dimensional Green functions for axially split differential operators.

For incompressible Stokes flow in an arbitrary unbounded domain, challenges arise due to its infinite nature. We address this by applying a far-field asymptotic condition and introducing an efficient numerical technique for solving steady Stokes flows in a truncated domain, along with informative boundary conditions. The proposed method, using a specific one-dimensional Green function over a half-infinite axis-parallel line, demonstrates versatility in handling various infinite domain cases.

In the case of incompressible Navier-Stokes flow, a projection scheme is employed. The projected solution acts as a predictor, and one-dimensional integral equations are derived using axial Green functions on line segments parallel to each axis in the flow domain. The approach is demonstrated through numerical examples that show convergence and flexibility in constructing axis-parallel lines.
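To give a flavor of the axial splitting idea in the simplest setting (a schematic illustration in my own notation, not the speaker's formulation): for a two-dimensional Poisson problem \(-\Delta u = f\), the Laplacian is split along the coordinate axes, and along an axis-parallel line \(y = y_0\) the one-dimensional operator \(-\partial_{xx}\) with Green function \(g\) gives the representation

\[
u(x, y_0) \;=\; \int g(x, s)\,\big[f(s, y_0) + \partial_{yy} u(s, y_0)\big]\, ds \;+\; (\text{boundary terms}),
\]

so the transverse derivative is carried to the right-hand side as an extra source. Coupling such representations over a family of axis-parallel lines yields the discrete system, which is what permits flexible placement of the lines without a mesh.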

January 23 (Tuesday) Machine Learning

Optimal Transport-based Generative Modeling

최재무 (SEOUL NATIONAL UNIVERSITY) 09:30--10:45

The Optimal Transport (OT) problem seeks a transport map that bridges two distributions while minimizing a given cost function. In this regard, OT between a tractable prior distribution and the data distribution has been utilized for generative modeling tasks. In this presentation, we first introduce various OT-based generative models. However, existing OT-based models are susceptible to outliers and face optimization challenges during training. To address these issues, we suggest a novel generative model based on the semi-dual formulation of Unbalanced Optimal Transport (UOT). Unlike standard OT, UOT relaxes the hard constraint on distribution matching. This approach provides better robustness against outliers, stability during training, and faster convergence. We validate these properties empirically through experiments. Moreover, we study a theoretical upper bound on the divergence between distributions in UOT. If time permits, we will also discuss our ongoing work, which involves applying the Gromov-Wasserstein transport problem to generative modeling.
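For reference, unbalanced optimal transport relaxes the hard marginal constraints of standard OT by divergence penalties; in a common formulation (generic notation, not necessarily that of the talk),

\[
\mathrm{UOT}(\mu,\nu) \;=\; \inf_{\pi \ge 0} \int c(x,y)\, d\pi(x,y) \;+\; D_{\Psi_1}\!\big(\pi_0 \,\|\, \mu\big) \;+\; D_{\Psi_2}\!\big(\pi_1 \,\|\, \nu\big),
\]

where \(\pi_0, \pi_1\) are the marginals of \(\pi\) and \(D_{\Psi_1}, D_{\Psi_2}\) are divergences such as the KL divergence. Standard OT is recovered when the penalties become hard constraints, and the soft penalties are what provide robustness to outliers.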


Analyzing and Improving Optimal Transport-based Generative Models

최재웅 (KIAS) 10:45--12:00

The Optimal Transport (OT) problem aims to find a transport plan that bridges two distributions while minimizing a given cost function. OT theory has been widely utilized in generative modeling. Initially, the OT distance was used as a measure of the discrepancy between the data and generated distributions. More recently, the OT map between the data and prior distributions has itself been utilized as a generative model. These OT-based generative models share a similar adversarial training objective. In this talk, we begin by unifying these OT-based adversarial methods within a single framework. Then, we elucidate the role of each component in the training dynamics through a comprehensive analysis of this unified framework. Moreover, we suggest a simple but novel method that improves on the previously best-performing OT-based model. Intuitively, our approach conducts a gradual refinement of the generated distribution, progressively aligning it with the data distribution. If time permits, we will also discuss our ongoing work on incorporating the JKO scheme into generative modeling. This talk is based on joint work with Jaemoo Choi and Myungjoo Kang.
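To make the shared adversarial structure of such OT-based models concrete, here is a minimal sketch of semi-dual OT training with a transport map T and a Kantorovich potential v, in which T plays the role of the generator. This is a generic illustration under my own assumptions (toy data sampler, quadratic cost, architectures, learning rates), not the specific unified framework or the improved method presented in the talk.

```python
import torch
import torch.nn as nn

dim = 128                                  # assumed (equal) prior/data dimension

T = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim))  # transport map / generator
v = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))    # Kantorovich potential

opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
opt_v = torch.optim.Adam(v.parameters(), lr=1e-4)

def sample_data(n):
    return torch.randn(n, dim) * 0.5 + 2.0          # placeholder "data" distribution

def cost(z, x):
    return 0.5 * ((z - x) ** 2).sum(dim=1)          # quadratic transport cost

for step in range(10_000):
    z = torch.randn(128, dim)                       # prior samples
    y = sample_data(128)                            # data samples

    # potential update: maximize E_data[v(y)] + E_prior[c(z, T(z)) - v(T(z))]
    x_fake = T(z).detach()
    loss_v = -(v(y).mean() + (cost(z, x_fake) - v(x_fake).squeeze(1)).mean())
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()

    # map update: minimize E_prior[c(z, T(z)) - v(T(z))]
    x_fake = T(z)
    loss_T = (cost(z, x_fake) - v(x_fake).squeeze(1)).mean()
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()
```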


Data-driven Design of Energy Materials Utilizing Deep Neural Network

심윤수 (KAIST) 13:00--14:15

Understanding the structure-property-process relationship is crucial to the design and optimization of materials across various materials fields. There has been a paradigm shift in materials science and engineering from empirical science to theoretical science, computational science, and now data-driven science. The data-driven approach utilizes big data and machine learning to extract structural and property features from research data, accelerating materials discovery. To facilitate data-driven materials design, the process involves the acquisition, management, analysis, and application of research data. Research data, such as imaging and modeling data, are used to extract structural and property features through machine learning and to establish correlations between them. This enables inverse design, where desired structures and properties are derived from optimal property points, accelerating materials development.

In the context of energy materials research, advanced battery technologies require a deep understanding of the structure-property relationship. Complicated material systems such as NCM and NVPF necessitate novel research methodologies. Integrating data-driven materials design into energy materials research enables the analysis of electrode materials in these complicated systems for lithium-ion and sodium-ion batteries. The integration workflow includes data acquisition, data analysis, and the establishment of structure-property relationships. By leveraging the data-driven approach with deep learning and inverse design, new materials with improved performance can be designed more efficiently. The data-driven approach has the potential to revolutionize materials development and open up new possibilities for designing advanced materials with tailored properties.
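As a deliberately simplified illustration of the forward-model-plus-inverse-design loop described above, the sketch below pairs a property-prediction network with gradient-based inverse design. The descriptor length, architecture, and constraints are hypothetical placeholders, not the actual workflow or materials systems of the talk.

```python
import torch
import torch.nn as nn

n_features = 32          # assumed descriptor length (composition/structure features)

# forward model: structure/composition descriptors -> predicted property
surrogate = nn.Sequential(
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# ... train `surrogate` on (descriptor, measured property) pairs ...

# inverse design: start from a random candidate descriptor and ascend the
# predicted property by gradient ascent, keeping descriptors in a valid range.
x = torch.rand(1, n_features, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = -surrogate(x).sum()        # maximize the predicted property
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)            # crude feasibility constraint on descriptors
```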


Physics-informed Variational Inference for Stochastic Partial Differential Equations

신효민 (POSTECH) 14:15--15:30

Physics-informed neural networks have gained significant attention due to their ability to solve nonlinear partial differential equations effectively in a meshless manner. The idea can be extended to solve stochastic partial differential equations (SPDEs), where the governing equation contains uncertainty. In this talk, we introduce a physics-informed variational inference model, which integrates a physics-informed learning scheme within the variational autoencoder framework to address SPDE problems in a data-driven way. The model employs an encoder to infer the latent random variable that represents the solution, and a decoder to reconstruct the solution from the encoded variable. The decoder consists of two neural networks, where one network learns the spatial behavior and the other learns the randomness of the solution. For training, an evidence lower bound that incorporates the given physical laws is derived. We first apply the model to approximate high-dimensional Gaussian processes in order to study its performance and demonstrate its efficiency. We then apply the model to solve forward and inverse SPDE problems related to elliptic equations.
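A compact sketch of how such a physics-augmented variational loss can be assembled (my own simplification: a one-dimensional toy equation -u'' = f, a separable decoder, and made-up architecture sizes; not the model of the talk) is:

```python
import torch
import torch.nn as nn

latent_dim, n_sensors = 16, 50

# encoder: observed solution values at sensors -> mean/log-variance of latent z
encoder = nn.Sequential(nn.Linear(n_sensors, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
# decoder split into two networks: spatial features phi(x) and random coefficients c(z)
spatial_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64))
random_net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, 64))

def forcing(x):
    return torch.ones_like(x)                      # placeholder right-hand side f

def decode(x, z):
    # u(x; z) = <phi(x), c(z)>: one network for spatial behavior, one for randomness
    return (spatial_net(x) * random_net(z)).sum(dim=1, keepdim=True)

def physics_informed_elbo(u_obs, x_obs, x_col):
    stats = encoder(u_obs.T)                       # infer latent from observed values
    mu, logvar = stats.chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterization

    recon = ((decode(x_obs, z) - u_obs) ** 2).mean()        # data fit at sensor locations

    x_col = x_col.clone().requires_grad_(True)              # physics residual of -u'' = f
    u = decode(x_col, z)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_col, create_graph=True)[0]
    residual = ((-d2u - forcing(x_col)) ** 2).mean()

    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()   # KL(q(z|u) || N(0, I))
    return recon + residual + kl                   # negative ELBO augmented with physics

x_obs = torch.linspace(0, 1, n_sensors).unsqueeze(1)       # sensor locations
u_obs = torch.sin(torch.pi * x_obs)                        # toy observed solution values
x_col = torch.rand(200, 1)                                 # collocation points
physics_informed_elbo(u_obs, x_obs, x_col).backward()
```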


Modern Numerical Methods with Machine Learning and Fractional Differential Equations

김현주 (KENTECH) 15:30--16:45

Classical numerical methods such as Galerkin methods (finite element methods, spectral methods, meshless methods, etc.), collocation methods, and explicit/implicit methods (finite difference methods, Adams methods, etc.) have dominated the area of numerical solvers for decades, being used to analyze and simulate physical models and natural phenomena. Now, machine learning is encroaching on all areas of science and engineering thanks to high-performance machines, and we see the potential for classical numerical methods to be transformed by leveraging machine learning algorithms. As another possibility, machine learning itself may take over the field of classical numerical methods. In this talk, the possibility of merging conventional numerical methods with machine learning techniques will be demonstrated. Specifically, we will introduce how Isogeometric Analysis can be combined with artificial neural networks.


Deep Learning for Advanced PDE Solvers and Operator Learning

조성웅 (KAIST) 16:45--18:00

Partial differential equations (PDEs) are fundamental in modeling complicated systems across various scientific and engineering disciplines. This presentation will introduce deep learning methods aimed at improving the approximation of PDE solutions. I will discuss two core deep learning strategies: 1) Physics-Informed Neural Networks (PINNs), which integrate physical laws into the learning algorithm, and 2) Deep Operator Networks (DeepONet), which learn mappings from PDE parameters to their solutions. The talk will present Augmented Lagrangian Physics-Informed Neural Networks (AL-PINNs), which adaptively refine the learning process to focus on more challenging regions of the domain. Furthermore, I will feature a graph neural network-based model grounded in DeepONet for simulating time-dependent PDEs on arbitrary grids. Experimental results indicate that the proposed model predicts system dynamics beyond the training time horizon with improved accuracy.
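As a point of reference for strategy 1), here is a minimal vanilla PINN loop for the 1D Poisson problem -u'' = f on (0, 1) with zero boundary values (a generic baseline under assumed settings; AL-PINNs and the graph-network operator model in the talk build additional structure on top of losses of this kind):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):
    return (torch.pi ** 2) * torch.sin(torch.pi * x)    # exact solution is sin(pi x)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)           # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    loss_pde = ((-d2u - f(x)) ** 2).mean()                # PDE residual

    xb = torch.tensor([[0.0], [1.0]])
    loss_bc = (net(xb) ** 2).mean()                       # boundary-condition penalty

    loss = loss_pde + loss_bc
    opt.zero_grad(); loss.backward(); opt.step()
```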

January 24 (Wednesday) PDE and Theory

Tools of Algebra and Machine Learning Theory

박예찬 (KIAS) 09:30--10:45

In this talk, we delve into the intersection of machine learning theory and algebraic tools, revealing unexpected connections. Although algebra may initially appear distant from theoretical machine learning, where analytical methods are prevalent, it emerges as a surprisingly valuable resource. We discuss the loss surface, training dynamics, and convergence analysis. For the loss surface, we investigate the existence of strict suboptimal local minima of 2-layer neural networks with general smooth activation functions for a positive measure of data. The key idea is to investigate the positive-definiteness of the Hessian matrix, employing tools from differential algebra to substantiate our findings. For training dynamics, we show that the trajectory of gradient flow is not integrable (admits no closed-form solution), much like the 3-body problem. To establish this, we utilize differential Galois theory, which plays a role analogous to that of classical Galois theory in solving polynomial equations. For convergence analysis, the concept of definable functions plays a key role in showing the convergence of gradient flow and gradient descent. Overall, we present several algebraic tools that foster a deeper understanding of machine learning theory.


Neural Network Approximations of PDEs and Regularity Theory Beyond Linear Growth

조남경 (POSTECH) 10:45--12:00

The approximation of solutions to high-dimensional partial differential equations (PDEs) is garnering increasing interest in the machine learning (ML) community. In particular, evading the 'curse of dimensionality' has become critical. In this talk, we will discuss the theoretical foundations for this research direction, explore recent studies, and share some of our own research findings.


Learning Solution Operators of Partial Differential Equations and Their Application

이재용 (KIAS) 12:00--13:15

Many physical phenomena in nature are modeled mathematically through differential equations and partial differential equations (PDEs). Recently, there has been growing interest in using neural networks for operator learning to approximate PDE solution operators, where a solution operator maps the parameters of a PDE to its solution. In this talk, I will introduce several recently proposed neural network architectures for approximating PDE solution operators and my research related to them.
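For context, the branch/trunk structure of a DeepONet, one of the most common operator-learning architectures, can be sketched as follows (generic textbook form with assumed sizes; the architectures discussed in the talk may differ):

```python
import torch
import torch.nn as nn

m, p = 100, 64     # number of input-function sensors, latent width

branch = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, p))   # encodes the input function sampled at m sensors
trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))    # encodes the query coordinate y

def deeponet(a_sensors, y):
    # G(a)(y) ~ <branch(a), trunk(y)>: the learned solution operator evaluated at y
    b = branch(a_sensors)          # (batch, p)
    t = trunk(y)                   # (n_query, p)
    return b @ t.T                 # (batch, n_query) predicted solution values

a = torch.randn(8, m)              # 8 sampled input functions (e.g., PDE coefficients or forcings)
y = torch.linspace(0, 1, 50).unsqueeze(1)
u_pred = deeponet(a, y)            # predicted solutions at 50 query points
```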