Potential of Mathematics for Industry, and the Dilemma in the Midstream
Tetsuji Taniguchi (Hiroshima Institute of Technology / Math. Research Institute Calc for Industry)
In an era where data utilization is essential across all industries, the importance of mathematics for industry has grown dramatically. However, we face the harsh reality that the value of our work is not properly recognized, and our services are often underpriced.
In particular, we mathematicians who operate in the "midstream"—translating business challenges into mathematical models—face a serious dilemma: our technical skills are appreciated, but they fail to translate into tangible business outcomes.
In this presentation, I will report on my company's real-world experience in confronting this "midstream dilemma." Based on this experience, I will present the fundamental challenges we have uncovered in the business model of applying mathematics to industry.
A Data-Driven Framework for Predicting Liver Failure Dynamics and Living Donor Transplant Prognosis
Raiki Yoshimura (Nagoya University)
Acute liver failure (ALF) is one of the most critical hepatic conditions, characterized by rapid progression and a high risk of multi-organ failure and death. Liver transplantation remains the only curative treatment, yet predicting which patients will require it is still a major clinical challenge due to the significant heterogeneity in disease progression. In our first study, we analyzed time-series clinical data from 320 patients with acute liver injury and applied machine learning techniques to identify key prognostic indicators. We found that prothrombin time (PT) serves as a central biomarker for tracking individual disease trajectories. By stratifying patients into six distinct PT dynamic patterns, we were able to quantify the severity of ALF and predict its progression from admission data. Furthermore, we demonstrated the feasibility of modeling future PT dynamics using mathematical approaches, offering a personalized framework for understanding and anticipating ALF progression.
While liver transplantation offers a potential cure for end-stage liver disease, outcome prediction remains a critical issue, particularly in living donor liver transplantation (LDLT), which has gained prominence due to shorter wait times and better graft quality. In our second study, we retrospectively analyzed data from 748 LDLT recipients and developed a machine learning model to predict early graft loss (within 180 days postoperatively) with higher accuracy than conventional models. We stratified patients into five groups based on risk and further identified a specific intermediate-risk group (G2) with characteristics similar to those who experienced early graft loss (G1), but with different survival outcomes. Using data available within the first 30 days post-transplant, we constructed a hierarchical model capable of distinguishing these populations, facilitating earlier clinical interventions such as consideration of retransplantation or alternative donor strategies.
Together, these studies address the continuum of liver disease—from acute liver injury to post-transplant outcomes—through the lens of time-resolved, individualized prediction. By leveraging machine learning and mathematical modeling, we present a framework that supports more precise and proactive clinical decision-making across the full trajectory of severe liver disease.
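The abstract above describes stratifying patients into distinct PT dynamic patterns from time-series data but does not specify the algorithm. The following is a minimal illustrative sketch, assuming (hypothetically) that PT curves have been resampled onto a common time grid and that a simple k-means clustering suffices; real clinical series are irregularly sampled and would need interpolation and a more careful method first.

```python
import numpy as np

def stratify_pt_trajectories(curves, k=6, n_iter=50, seed=0):
    """Cluster resampled prothrombin-time (PT) curves into k dynamic patterns.

    curves : (n_patients, n_timepoints) array of PT values on a common
             time grid (a simplifying assumption for this sketch).
    Returns integer cluster labels in [0, k).
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(curves, dtype=float)
    # Plain Lloyd-style k-means, written out to keep the sketch self-contained.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Squared Euclidean distance of every curve to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic illustration: rising, falling, and flat PT trajectories.
t = np.linspace(0, 1, 10)
rng = np.random.default_rng(1)
rising = 12 + 20 * t + rng.normal(0, 0.5, (30, 10))
falling = 30 - 15 * t + rng.normal(0, 0.5, (30, 10))
flat = 14 + rng.normal(0, 0.5, (30, 10))
curves = np.vstack([rising, falling, flat])
labels = stratify_pt_trajectories(curves, k=3)
```

With well-separated synthetic trajectories the clusters recover the three dynamic patterns; the study's six-pattern stratification of real admission data would additionally involve feature selection and validation against outcomes.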
Quantifying the Topological Structure of Graphs: The Total Persistence Difference
Eunwoo Heo (POSTECH)
Persistent homology (PH) has been widely applied to graph data to extract topological features. However, little attention has been paid to how different distance functions on a graph affect the resulting persistence diagrams and their interpretations. In this paper, we define a class of distances on graphs, called path-representable distances, and investigate structural relationships between their induced persistent homologies. In particular, we identify a nontrivial injection between the 1-dimensional barcodes induced by two commonly used graph distances: the unweighted and weighted shortest-path distances. We formally establish sufficient conditions under which such embeddings arise, focusing on a subclass we call cost-dominated distances. The injection property is shown to hold in dimensions 0 and 1, while we provide counterexamples for higher-dimensional cases. To make these relationships measurable, we introduce the total persistence difference (TPD), a new topological measure that quantifies changes between filtrations induced by cost-dominated distances on a fixed graph. We prove a stability result for TPD when the distance functions admit a partial order and apply the method to the SNAP EU Research Institution E-Mail dataset. TPD captures both periodic patterns and global trends in the data, and shows stronger alignment with classical graph statistics compared to previously proposed PH-based measures.
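To make the comparison of filtrations concrete, here is a small sketch in dimension 0 only, where the Vietoris–Rips persistence of a finite metric space reduces to its minimum spanning tree: every point is born at 0 and components merge along MST edges, so the total finite persistence equals the MST weight. The `tpd_dim0` function below is an illustrative proxy for the paper's TPD, not its actual definition, comparing the filtrations induced by the weighted and unweighted shortest-path distances of the same graph.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path, minimum_spanning_tree

def total_persistence_dim0(dist):
    """Sum of finite 0-dim bar lengths of the VR filtration on a finite
    metric space, which equals the minimum-spanning-tree weight."""
    return minimum_spanning_tree(dist).sum()

def tpd_dim0(adj_weighted):
    """Illustrative 0-dim 'total persistence difference' between the
    filtrations induced by the weighted and unweighted shortest-path
    distances on one graph (a proxy for the TPD defined in the talk)."""
    adj_unweighted = (adj_weighted > 0).astype(float)
    d_w = shortest_path(adj_weighted, directed=False)
    d_u = shortest_path(adj_unweighted, directed=False)
    return abs(total_persistence_dim0(d_w) - total_persistence_dim0(d_u))

# Path graph 0-1-2-3 with edge weights 2, 3, 4: weighted MST = 9,
# unweighted (hop-distance) MST = 3, so the difference is 6.
W = np.array([[0, 2, 0, 0],
              [2, 0, 3, 0],
              [0, 3, 0, 4],
              [0, 0, 4, 0]], dtype=float)
tpd = tpd_dim0(W)
```

The 1-dimensional barcodes discussed in the abstract would require a full persistent homology computation rather than this MST shortcut.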
Pathological State Inference System based on Mathematical Model and TDA for Personalized Treatment in Dermatology
Sungrim Seirin-Lee (Kyoto University)
Skin diseases typically appear as visible information: skin eruptions distributed across the body. However, the biological mechanisms underlying these manifestations are often inferred from fragmented, time-point-specific data such as skin biopsies. The challenge is further compounded for human-specific conditions like urticaria, where animal models are ineffective, leaving researchers to rely heavily on in vitro experiments and sparse clinical observations. In this presentation, I will introduce an innovative methodology that combines mathematical modeling with topological data analysis, allowing for the estimation of patient-specific parameters directly from morphological patterns of skin eruptions. This framework offers a new pathway for personalized analysis and mechanistic insight into complex skin disorders.
A Topological Analysis of the Space of Recipes
Emerson Escolar (Kobe University)
In recent years, the use of data-driven methods has provided insights into underlying patterns and principles behind culinary recipes. In this exploratory work, we introduce the use of topological data analysis, especially persistent homology, in order to study the space of culinary recipes. In particular, persistent homology analysis provides a set of recipes surrounding the multiscale "holes" in the space of existing recipes. We then propose a method to generate novel ingredient combinations using combinatorial optimization on this topological information. We made biscuits using the novel ingredient combinations, which a sensory evaluation study confirmed to be acceptable. Our findings indicate that topological data analysis has the potential to provide new tools and insights in the study of culinary recipes. This talk is based on https://doi.org/10.1016/j.ijgfs.2024.101088
Ellipse Cloud: Anisotropy-Aware Persistent Homology
Tomoki Uda (University of Toyama)
Persistent homology is a widely used tool in topological data analysis, yet standard filtration methods often fail to capture anisotropic structures inherent in real-world data. We propose Ellipse Cloud, a preprocessing-based approach that enhances anisotropic features in persistent homology. Our method constructs a Vietoris–Rips (VR) filtration using ellipse tangency times instead of pairwise Euclidean distances, extending the standard VR filtration to an anisotropic setting. This formulation allows anisotropy to be incorporated into persistent homology while remaining compatible with standard computational frameworks. A key computational challenge in this framework involves determining critical time points at which expanding ellipses first interact, which we address through an efficient numerical algorithm.
To evaluate the effectiveness of our approach, we apply it to a toy problem involving a highly noisy two-dimensional point cloud with multiple ring structures. While standard persistent homology struggles to capture the underlying rings due to excessive noise, our anisotropic filtration successfully identifies optimal 1-cycles that preserve the original structures to a greater extent. More generally, our proposed preprocessing technique tends to increase the lifetime of significant persistence pairs, lowering birth values and raising death values compared to standard VR filtrations. These results suggest that incorporating anisotropic filtrations can provide more informative topological summaries of geometrical structures in data. Potential applications include sensor coverage problems, where sensors often exhibit directional sensitivity rather than isotropic coverage.
In the talk, we will also introduce the Python library `EllPHi` for anisotropic persistent homology analysis. `EllPHi` provides a fast and accurate ellipse-tangency solver. The source code is available on GitHub (https://github.com/t-uda/ellphi).
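The key quantity above, the time at which two expanding ellipses first touch, can be estimated numerically with generic tools. The sketch below assumes one plausible formulation, in which ellipse i at time t is the Mahalanobis ball {x : (x - p_i)^T A_i^{-1} (x - p_i) <= t^2}, so the tangency time is the min over x of the larger of the two Mahalanobis distances; this is an assumption for illustration, not necessarily the formulation or the solver used by `EllPHi`.

```python
import numpy as np
from scipy.optimize import minimize

def tangency_time(p1, A1, p2, A2):
    """Numerically estimate when two growing ellipses first touch.

    Ellipse i at time t is {x : (x - p_i)^T A_i^{-1} (x - p_i) <= t^2},
    so the tangency time is min_x max_i of the Mahalanobis distances
    (an assumed formulation for this sketch).
    """
    A1inv, A2inv = np.linalg.inv(A1), np.linalg.inv(A2)

    def maha(x, p, Ainv):
        d = x - p
        return np.sqrt(d @ Ainv @ d)

    f = lambda x: max(maha(x, p1, A1inv), maha(x, p2, A2inv))
    # Nelder-Mead handles the nonsmooth max; start at the chord midpoint.
    res = minimize(f, (p1 + p2) / 2, method="Nelder-Mead")
    return res.fun

# Sanity checks. Unit circles 2 apart touch at t = 1; ellipses
# elongated along the x-axis (shape diag(4, 1)) touch earlier, at t = 0.5.
p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
t_circ = tangency_time(p1, np.eye(2), p2, np.eye(2))
A = np.diag([4.0, 1.0])
t_ell = tangency_time(p1, A, p2, A)
```

The anisotropic case touching earlier than the isotropic one is exactly the effect the Ellipse Cloud filtration exploits: births shift earlier along the preferred direction, which is how elongated structures gain persistence.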
Nonnegative Matrix Factorization with Topological Regularization
Keunsu Kim (Kyushu University)
In this study, we propose Top-NMF, a novel model that incorporates topological regularization into Nonnegative Matrix Factorization (NMF), a widely used dimensionality reduction technique. While conventional regularization methods focus on preserving relationships between data points to guide low-dimensional representations, Top-NMF explicitly controls the topological structure of the support of each basis vector. We interpret each data point as a real-valued function defined over a structured domain (such as a grid or a graph), and treat each basis vector in the same way. Our focus is on the support of each basis vector, and we introduce quantitative topological descriptors derived from persistent homology as regularization terms. These descriptors encourage the support to exhibit desirable properties such as connectedness and modularity. These regularization terms can be applied across diverse domains including time series, images, and graphs, and guide the model to learn basis vectors that reflect meaningful structures. We provide a theoretical formulation, describe the optimization scheme, and demonstrate through experiments that Top-NMF achieves structurally faithful and interpretable decompositions.
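As a baseline for how a regularizer enters the NMF optimization, here is a minimal sketch of multiplicative-update NMF with a simple L1 penalty on the basis matrix. The L1 term is only a stand-in: Top-NMF would instead penalize persistent-homology descriptors of each basis vector's support, which is not implemented here.

```python
import numpy as np

def nmf(X, r, l1=0.0, n_iter=200, seed=0):
    """Multiplicative-update NMF for ||X - W H||_F^2 + l1 * sum(W).

    The L1 penalty is a placeholder for a structural penalty on the
    basis vectors; Top-NMF's persistent-homology regularizer would
    replace the constant `l1` shift in the W update below.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12  # avoids division by zero; preserves nonnegativity
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + l1 + eps)
    return W, H

# Exact-rank synthetic data are reconstructed to small relative error.
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(X, r=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the updates are multiplicative, nonnegativity of W and H is preserved automatically; a topological penalty derived from persistence diagrams is generally nonsmooth in W, which is one reason the optimization scheme for Top-NMF requires a dedicated treatment.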