Abstracts

Masashi Sugiyama "Recent Advances in Machine Learning from Noisy Labels"

Supervised learning from noisy output is one of the classical problems in machine learning. While this task is relatively straightforward in regression since independent additive noise cancels with big data, classification from noisy labels is still a challenging research topic. Recently, it has been shown that when the noise transition matrix which specifies the label flipping probability is available, the bias caused by label noise can be canceled by appropriately correcting the loss function. However, when the noise transition matrix is unknown, which is often the case in practice, its estimation only from noisy labels is not straightforward due to its non-identifiability. In this talk, I will give an overview of recent advances in classification from noisy labels, including joint estimation of the noise transition matrix and a classifier, analysis of identifiability conditions, and extension to instance-dependent noise.
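
For orientation, the loss-correction idea mentioned above can be sketched as follows (a standard backward-correction formulation from the literature; the notation is mine, not necessarily the speaker's). Let $T \in [0,1]^{K\times K}$ with $T_{ij} = \Pr(\tilde{y}=j \mid y=i)$ be the noise transition matrix and let $\boldsymbol{\ell}(f(x)) = (\ell(f(x),1),\dots,\ell(f(x),K))^\top$ collect the losses of a classifier $f$ against each clean label. The backward-corrected loss
\[
  \boldsymbol{\ell}^{\leftarrow}(f(x)) \;=\; T^{-1}\,\boldsymbol{\ell}(f(x))
\]
is unbiased for the clean loss when evaluated on noisy labels, i.e. $\mathbb{E}_{\tilde{y}\mid y}\bigl[\ell^{\leftarrow}(f(x),\tilde{y})\bigr] = \ell(f(x),y)$, which is why knowing (or estimating) $T$ suffices to cancel the bias caused by label noise.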

Patrick Gelß "Low-rank tensor representations of quantum circuits"

Quantum computing is arguably one of the most revolutionary and disruptive technologies of this century. Due to the ever-increasing number of potential applications as well as the continuing rise in complexity, the development, simulation, optimization, and physical realization of quantum circuits is of utmost importance for designing novel algorithms. We show how matrix product states (MPSs) and matrix product operators (MPOs) can be used to express not only the state of the system but also quantum gates and entire quantum circuits as low-rank tensors. This allows us to analyze and simulate complex quantum circuits on classical computers and to gain insight into the underlying structure of the system. We present different examples to demonstrate the advantages of MPO formulations and provide a new perspective on the construction of quantum algorithms.
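
As a reminder of the format (standard MPS/MPO notation, not specific to the talk), an n-qubit state in MPS form reads
\[
  |\psi\rangle \;=\; \sum_{x_1,\dots,x_n \in \{0,1\}} A_1^{x_1} A_2^{x_2} \cdots A_n^{x_n}\, |x_1 x_2 \cdots x_n\rangle ,
\]
where each $A_k^{x_k}$ is an $r_{k-1}\times r_k$ matrix with $r_0 = r_n = 1$; the bond dimensions $r_k$ are the ranks. An MPO represents an operator analogously with four-index cores, so applying a gate or an entire circuit to an MPS reduces to contracting cores and, where necessary, truncating the ranks.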

Shintaro Momose "Aurora Vector Annealing to Solve Social Issues and Acceleration by NEC's Supercomputer, SX-Aurora TSUBASA"

This presentation consists of two parts: a discussion of the SX-Aurora TSUBASA vector supercomputer and an introduction to the digital annealer running on SX-Aurora TSUBASA, called the Aurora Vector Annealer. The first half of the presentation presents the vector architecture of SX-Aurora TSUBASA, in particular its latest vector processors offering the highest level of memory bandwidth. Sustained performance and power efficiency are also discussed, as well as NEC's future plans and roadmap. The second half presents NEC's quantum computing strategies and products for delivering higher sustained performance in the annealing/optimization field. NEC developed the Aurora Vector Annealer as a digital annealer and has a strong business relationship with D-Wave, which provides a quantum annealer. NEC aims to solve various social issues by using quantum/digital annealing technologies and by developing a hybrid platform combining supercomputers with quantum/digital annealers to provide much higher sustained performance.

Kazuma Tsuji "Pairwise Conditional Gradients without Swap Steps and Sparser Kernel Herding"

The Pairwise Conditional Gradients (PCG) algorithm is a powerful extension of the Frank-Wolfe algorithm leading to particularly sparse solutions, which makes PCG very appealing for problems such as sparse signal recovery, sparse regression, and kernel herding. Unfortunately, PCG exhibits so-called swap steps that might not provide sufficient primal progress. The number of these bad steps is bounded by a function of the dimension, so the known guarantees do not generalize to the infinite-dimensional case, which would be needed for kernel herding. We propose a new variant of PCG, the so-called Blended Pairwise Conditional Gradients (BPCG), which does not exhibit swap steps. The convergence rate of BPCG is essentially that of PCG in the absence of drop steps; it is therefore no worse than that of PCG and improves upon it, providing new rates in many cases. Moreover, we observe in numerical experiments that BPCG’s solutions are much sparser than those of PCG. We apply BPCG to the kernel herding setting, where we derive favorable quadrature rules and provide numerical results demonstrating the performance of our method.
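
To make the flavor of the method concrete, here is a minimal sketch of a BPCG-style iteration on the probability simplex for a simple quadratic objective; it follows my reading of the pairwise/local-step logic and is not the authors' implementation.

```python
import numpy as np

# Minimal BPCG-style sketch: minimize f(x) = 0.5 * ||x - b||^2 over the
# probability simplex, whose vertices are the unit vectors e_1, ..., e_n.

def bpcg_simplex(b, iters=200):
    n = len(b)
    x = np.zeros(n)
    x[0] = 1.0                                    # start at a vertex; active set = {0}
    for _ in range(iters):
        grad = x - b                              # gradient of 0.5 * ||x - b||^2
        support = np.flatnonzero(x > 1e-12)       # current active set
        a = support[np.argmax(grad[support])]     # away vertex (worst in active set)
        s = support[np.argmin(grad[support])]     # local FW vertex (best in active set)
        w = int(np.argmin(grad))                  # global FW vertex
        fw_gap = grad @ x - grad[w]
        if fw_gap < 1e-10:
            break                                 # (near-)optimal
        if grad[a] - grad[s] >= fw_gap:
            # local pairwise step: shift weight from a to s, no new vertex added
            d = np.zeros(n); d[a], d[s] = 1.0, -1.0
            gamma_max = x[a]
        else:
            # ordinary Frank-Wolfe step towards vertex w
            d = x.copy(); d[w] -= 1.0
            gamma_max = 1.0
        # exact line search for the quadratic, clipped to the feasible step size
        gamma = np.clip((grad @ d) / (d @ d), 0.0, gamma_max)
        x = x - gamma * d
    return x

print(bpcg_simplex(np.array([0.1, 0.7, 0.2, -0.3])))  # close to the projection of b onto the simplex
```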

Christoph Spiegel "Proofs in Extremal Combinatorics through Optimization"

We explore how techniques from Optimization can be used to obtain improved and even tight theoretical results in Extremal Combinatorics. In particular, we will study two related problems concerning the number of monochromatic cliques in two-colorings of the complete graph that go back to questions of Erdős, most notably improving the 25-year-old upper bounds of Thomason on the Ramsey multiplicity of K4. The extremal constructions for each problem turn out to be blow-ups of a finite graph and were found through Discrete Search Heuristics. They are complemented by lower bounds and sometimes even stability results established using Flag Algebras and Semidefinite Programming, resulting in a fully computer-assisted approach. Joint work with: Olaf Parczyk, Sebastian Pokutta, and Tibor Szabó.

Yuji Shinano "The UG framework version 1.0: An update"

The Ubiquity Generator Framework (UG) version 1.0 was released last year. It was designed to parallelize state-of-the-art branch-and-bound based solvers externally in order to exploit their powerful performance. We call the underlying solvers ``base solvers''. Originally, a base solver was a branch-and-bound based solver, but in the current release it is redefined as any solver parallelized by UG, since version 1.0 generalizes UG into a software framework for high-level task parallelization. In this talk, we present the concept of high-level task parallelization and its flexibility. We will also show a few recent success stories of parallel solvers instantiated with UG version 1.0.

Junko Hosoda "A parallel algorithm combining relaxation and heuristic for the integrated long-haul and local vehicle routing problem on an adaptive transportation network"

Consolidation of commodities and coordination of vehicle routes are fundamental features of supply chain management problems. Integrating the designation of consolidation locations with the coordination of long-haul and local vehicle routing yields a particularly complicated problem. We propose an algorithm combining a relaxation solver and a heuristic solver to find high-quality solutions. The relaxation solver bounds the solution space, taking the characteristics of the solution space into account, and the heuristic solver finds high-quality solutions within this bounded space that satisfy all constraints. The UG framework is used to execute the relaxation and heuristic solvers in parallel. The results demonstrate the impact of this parallel execution on the running time for the integrated long-haul and local vehicle routing problem on an adaptive transportation network.

Koichi Fujii "Solving Large Scale QAPs by Massively Parallel DNN-based Branch-and-bound Method"

We report our progress on the project for solving large-scale quadratic assignment problems (QAPs). Our main approach is a parallel DNN-based branch-and-bound method efficiently implemented on a powerful computer system using the Ubiquity Generator Framework (UG), which can utilize more than 100,000 cores. Although the Lagrangian doubly nonnegative (DNN) relaxation of QAPs leads to fewer nodes than other lower-bounding procedures such as the LP relaxation, it requires much more computational time per node. Since solving QAPs in this way differs from solving MIPs by LP-based branch-and-bound, some new features had to be added to UG. A checkpointing mechanism is implemented in UG to save branch-and-bound nodes when computing on supercomputers with limited available time. Usually, UG avoids collecting all open nodes of the branch-and-bound tree when generating a checkpoint file. The new Enhanced Checkpoint feature collects all open nodes at the end of an execution, which greatly helps to avoid recomputing redundant nodes across multiple executions. The new Huge Checkpoint File Split feature handles huge numbers of nodes in checkpoint files by splitting the nodes into two groups. In this talk, we describe the details of these new UG features for solving QAPs and present some preliminary numerical results, including the QAPLIB open instance tho40.

Elias Wirth "Approximate Vanishing Ideal Computations at Scale"

The approximate vanishing ideal of a set of points X={x1,…,xm}⊆[0,1]n is the set of polynomials that approximately evaluate to 0 over all points x∈X and admits an efficient representation by a finite set of polynomials called generators. Algorithms that construct this set of generators are extensively studied but ultimately find little practical application because their computational complexities are thought to be superlinear in the number of samples m. In this paper, we focus on scaling up the Oracle Approximate Vanishing Ideal algorithm (OAVI), one of the most powerful of these methods. We prove that the computational complexity of OAVI is not superlinear but linear in the number of samples m and polynomial in the number of features n, making OAVI an attractive preprocessing technique for large-scale machine learning. To further accelerate OAVI's training time, we propose two changes: First, as the name suggests, OAVI makes repeated oracle calls to convex solvers throughout its execution. By replacing the Pairwise Conditional Gradients algorithm, one of the standard solvers used in OAVI, with the faster Blended Pairwise Conditional Gradients algorithm, we illustrate how OAVI directly benefits from advancements in the study of convex solvers. Second, we propose Inverse Hessian Boosting (IHB): IHB exploits the fact that OAVI repeatedly solves quadratic convex optimization problems that differ only by very little and whose solutions can be written in closed form using inverse Hessian information. By efficiently updating the inverse of the Hessian matrix, the convex optimization problems can be solved almost instantly, accelerating OAVI's training time by up to multiple orders of magnitude. We complement our theoretical analysis with extensive numerical experiments on data sets whose sample numbers are in the millions.
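
The flavor of the inverse-Hessian bookkeeping behind IHB can be illustrated with a block-inverse update when a column is appended to a design matrix; this is my own linear-algebra sketch of the idea, not the OAVI/IHB implementation.

```python
import numpy as np

# Sketch: maintain inv(A^T A) while columns are appended to A, so least-squares
# solutions can be refreshed cheaply instead of refactorizing from scratch.

def append_column_update(A, H_inv, b):
    """Given H_inv = inv(A^T A), return inv([A, b]^T [A, b]) via the Schur complement."""
    u = A.T @ b                       # cross terms A^T b
    d = float(b @ b)                  # new diagonal entry b^T b
    Hu = H_inv @ u
    s = d - u @ Hu                    # Schur complement (positive for full column rank)
    top_left = H_inv + np.outer(Hu, Hu) / s
    top_right = -Hu[:, None] / s
    return np.block([[top_left, top_right],
                     [top_right.T, np.array([[1.0 / s]])]])

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
b = rng.standard_normal(50)
H_inv = np.linalg.inv(A.T @ A)
H_inv_new = append_column_update(A, H_inv, b)
A_new = np.column_stack([A, b])
print(np.allclose(H_inv_new, np.linalg.inv(A_new.T @ A_new)))   # True
```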

Katsuki Fujisawa "Mobility Optimization Engine and its Real-world Applications"

Various efforts have recently been made to realize a so-called super-smart society. Our project team builds services that create new industries and other services through corporate collaboration. Utilizing large-scale computing infrastructures, we have developed the Cyber-Physical System Mobility Optimization Engine (CPS-MOE), which provides various functions, including creating new industries, reducing costs and industrial waste, and calculating optimal control schedules for transportation agencies. The latest research results and industry-academia collaborative projects using CPS-MOE will be presented in this talk.

Hiroki Ishikura "Towards an optimal operation of automated storage and retrieval system with multiple machines"

We aim to improve the efficiency of a new type of automated storage and retrieval systems (AS/RSs) called multi-control automated storage and retrieval systems (MC-AS/RSs). MC-AS/RSs have multiple storage/retrieval (S/R) machines that operate independently according to storage and retrieval requests. Consequently, MC-AS/RSs can transport loads farther without using human labor, thereby requiring fewer human resources than conventional AS/RSs. However, the structure and control method of MC-AS/RSs are complex because multiple S/R machines must be controlled simultaneously. Therefore, when operating an MC-AS/RS, many factors must be considered, such as the sequence and transport timing. We propose an optimization method using a time-expanded network (TEN) to solve these problems and generate optimal operational methods. First, our method models an AS/RS with a TEN to calculate the optimal sequence and conveyance timing while considering the movements of multiple S/R machines. Second, we formulate the operational efficiency of the MC-AS/RS as a problem of minimizing the sum of execution times of requests on the TEN. Finally, we generate the request order necessary for practical use based on the results. The mechanisms implemented to achieve this include a generator, an optimizer, and a scheduler. Our experiments confirm that this method reduces the total execution time of requests compared with other rule-based methods. This method enables us to propose an efficient operation method for AS/RSs with a complex structure of multiple carriers.
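
As an illustration of the modeling device (locations, travel times, and horizon below are hypothetical, not the talk's instance), a time-expanded network copies each location once per time step and connects copies by wait and move arcs:

```python
import itertools

# Toy time-expanded network (TEN): nodes are (location, time) pairs, arcs are
# "wait" (stay one step) and "move" (travel along an edge, taking its travel
# time). Requests then become paths in this static digraph.

locations = ["in", "aisle", "rack", "out"]            # hypothetical layout
travel_time = {("in", "aisle"): 1, ("aisle", "rack"): 2,
               ("rack", "aisle"): 2, ("aisle", "out"): 1}
horizon = 8

nodes = [(loc, t) for loc, t in itertools.product(locations, range(horizon + 1))]
arcs = []
for loc in locations:                                  # wait arcs
    for t in range(horizon):
        arcs.append(((loc, t), (loc, t + 1)))
for (u, v), tau in travel_time.items():                # move arcs
    for t in range(horizon - tau + 1):
        arcs.append(((u, t), (v, t + tau)))

print(len(nodes), "nodes,", len(arcs), "arcs")
# A request "reach 'out' from 'in' by time 8" is a path from ("in", 0) to some
# ("out", T) with T <= 8; minimizing the sum of completion times over all
# requests, with capacity constraints keeping two S/R machines off the same
# arc, is the kind of optimization problem solved on the TEN.
```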

Nozomi Hata "Theoretical Analysis for Representation Learning Methods of Graph-Structured Data"

Graph-structured data is one of the representative discrete data structures, while continuous data is usually represented as vectors. Continuous representation of relational data refers to the assignment of vectors to nodes in the relational data so that the relationship between two nodes can be recovered using these vectors. By representing relational data in continuous space, we can apply various algorithms in continuous space to real-world applications such as link prediction, attribute prediction, information retrieval, and question answering while preserving combinatorial characteristics of graph-structured data. In this presentation, we focus on the theoretical representational power of representation methods. Some representation methods can represent any inputs accurately. Such a property is called full expressiveness. We theoretically proved that some representation methods which are not fully expressive are, in fact, almost fully expressive. This presentation introduces almost fully expressive models and shows numerical results for link prediction tasks.

Mark Turner "Adaptive Cut Selection in Mixed-Integer Linear Programming"

Cut selection is a subroutine used in all modern mixed-integer linear programming solvers with the goal of selecting a subset of generated cuts that induce optimal solver performance. These solvers have millions of parameter combinations and so are excellent candidates for parameter tuning. Cut selection scoring rules are usually weighted sums of different measurements, where the weights are parameters. We present a parametric family of mixed-integer linear programs together with infinitely many family-wide valid cuts. Some of these cuts can induce integer optimal solutions directly after being applied, while others fail to do so even if infinitely many are applied. We show, for a specific cut selection rule, that any finite grid search of the parameter space will miss all parameter values that select the integer-optimality-inducing cuts in infinitely many of our problems. We propose a variation on the design of existing graph convolutional neural networks, adapting them to learn cut selection rule parameters. We present a reinforcement learning framework for selecting cuts and train our design using this framework on MIPLIB 2017. Our framework and design show that adaptive cut selection substantially improves performance over a diverse set of instances, but that finding a single function describing such a rule is difficult.
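
As context, a typical parametric scoring rule of the kind being tuned looks like the following sketch (the measure names and weights are illustrative, not the exact rule from the paper):

```python
# Illustrative weighted-sum cut scoring rule. Each candidate cut gets a score
# that is a weighted combination of normalized quality measures; the solver
# keeps the highest-scoring cuts. The weights are exactly the parameters the
# talk proposes to learn instead of fixing by grid search.

def cut_score(cut, weights):
    measures = {
        "efficacy":              cut["efficacy"],          # distance cut off from the LP optimum
        "objective_parallelism": cut["obj_parallelism"],   # alignment with the objective
        "integer_support":       cut["int_support"],       # fraction of integer variables in the cut
        "sparsity":              1.0 - cut["density"],     # prefer sparse cuts
    }
    return sum(weights[name] * value for name, value in measures.items())

weights = {"efficacy": 0.6, "objective_parallelism": 0.1,
           "integer_support": 0.1, "sparsity": 0.2}
candidates = [
    {"efficacy": 0.8, "obj_parallelism": 0.3, "int_support": 0.9, "density": 0.1},
    {"efficacy": 0.5, "obj_parallelism": 0.9, "int_support": 0.4, "density": 0.6},
]
selected = max(candidates, key=lambda c: cut_score(c, weights))
print(selected)
```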

Ryohei Yokoyama "A Quadratic Programming Approach for Performance Analysis of Energy Systems"

The optimization of energy systems for their design and operation necessitates the analysis of their performance under many different conditions. To analyze static (steady) and dynamic (unsteady) performance, it is necessary to solve nonlinear algebraic equations and nonlinear differential algebraic equations, respectively. Nonlinear equations have conventionally been solved by the Newton-Raphson method, where the solution of linearized equations is repeated until convergence. On the one hand, however, the Jacobian matrices may not be regular because of the network structures and operating conditions of the systems. On the other hand, they may not be computable because of violated restrictions on the variables used in the equations. Taking care to avoid these situations when modeling systems is a burden for analysts, so an alternative approach is needed to reduce this burden. The singular value decomposition approach, which derives least squares and minimum norm solutions, can resolve the former situation but not the latter. In this work, a quadratic programming approach will be proposed to derive least squares and minimum norm solutions under restrictions on variables. Some examples will be presented to show the effectiveness of the proposed approach.
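
One way to write the resulting per-iteration subproblem (my reading of the approach; the talk's exact formulation may differ) replaces the Newton linear solve $J\,\Delta x = -r$ by a bound-constrained quadratic program
\[
  \min_{\Delta x}\;\; \tfrac{1}{2}\,\bigl\|J(x^{k})\,\Delta x + r(x^{k})\bigr\|_2^{2} \;+\; \tfrac{\varepsilon}{2}\,\|\Delta x\|_2^{2}
  \qquad \text{s.t.} \quad \underline{x} \le x^{k} + \Delta x \le \overline{x},
\]
where $r$ collects the residuals of the nonlinear equations. When $J$ is singular the first term still yields a least squares solution, the small $\varepsilon$-term selects a minimum norm one, and the bounds keep the iterates inside the region where the equations are defined.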

Shizuo Kaji "Geometric Learning of Ranking Distributions"

Given a finite set X of n items, a complete order (permutation) of the items is called a ranking of X. A ranking distribution over X is a collection of rankings of X. We will discuss a high-fidelity geometric model of a ranking distribution together with efficient learning and sampling algorithms.

Akiko Takeda "Bilevel Optimization for Machine Learning Problems"

Recently, bilevel optimization methods have been actively studied in machine learning (ML). Various ML models are described as bilevel optimization problems, and new approaches that take advantage of the characteristics of the models have been proposed. One of the representative ML applications of bilevel optimization is hyperparameter optimization. Most ML models are equipped with parameters that need to be fixed in advance, and such parameters are often called hyperparameters. In this talk, we review some bilevel formulations and approaches developed for optimizing an ML model together with its hyperparameter values. The talk will also explore new bilevel formulations of hyperparameter optimization for more complicated ML models, which lead to nonsmooth bilevel optimization problems, and present new solution methodologies.
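
A representative bilevel formulation of hyperparameter optimization (a standard textbook form, used here only to fix ideas) is
\[
  \min_{\lambda \ge 0}\;\; L_{\mathrm{val}}\!\bigl(w^{*}(\lambda)\bigr)
  \qquad \text{s.t.} \quad
  w^{*}(\lambda) \in \operatorname*{arg\,min}_{w}\; L_{\mathrm{train}}(w) + \lambda\, R(w),
\]
where the upper level tunes the hyperparameter $\lambda$ on validation data and the lower level trains the model parameters $w$ with regularizer $R$; nonsmooth regularizers such as the $\ell_1$-norm make the lower-level problem, and hence the whole bilevel problem, nonsmooth.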

Sebastian Pokutta "Convex integer optimization with Frank-Wolfe methods"

Mixed-integer nonlinear optimization is a broad class of problems that feature combinatorial structures and nonlinearities. Typical exact methods combine a branch-and-bound scheme with relaxation and separation subroutines. We investigate the properties and advantages of error-adaptive first-order methods based on the Frank-Wolfe algorithm for this setting, requiring only a gradient oracle for the objective function and linear optimization over the feasible set. In particular, we will study the algorithmic consequences of optimizing with a branch-and-bound approach where the subproblems are solved over the convex hull of the mixed-integer feasible set via Frank-Wolfe linear oracles, compared to solving the subproblems over the continuous relaxation of the same set. This novel approach computes feasible solutions while working on a single representation of the polyhedral constraints, leveraging the full extent of Mixed-Integer Programming (MIP) solvers without an outer approximation scheme. (joint work with Deborah Hendrych, Hannah Troppens, and Mathieu Besançon)
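
Concretely (in standard Frank-Wolfe notation, not a quote from the talk), at a branch-and-bound node one runs iterations of the form
\[
  v_t \in \operatorname*{arg\,min}_{v \,\in\, X \cap (\mathbb{Z}^{p}\times\mathbb{R}^{n-p})} \langle \nabla f(x_t), v\rangle,
  \qquad
  x_{t+1} = x_t + \gamma_t\,(v_t - x_t),
\]
so the linear minimization oracle is itself a MIP over the node's mixed-integer feasible set $X$; the iterates live in the convex hull of that set rather than in its continuous relaxation, and every oracle answer $v_t$ is a mixed-integer feasible point.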

Shota Takahashi "Bregman Proximal DC Algorithms and Their Application to Blind Deconvolution with Nonsmooth Regularization"

Blind deconvolution is a technique for recovering an original signal without knowing the convolving filter. It is naturally formulated as the minimization of a quartic objective function under certain assumptions. Because its differentiable part does not have a Lipschitz continuous gradient, existing first-order methods are not theoretically supported. In this presentation, we reformulate the objective function as a difference of convex (DC) functions and add nonsmooth regularization. Then, we apply the Bregman proximal DC algorithm (BPDCA) and BPDCA with extrapolation (BPDCAe), whose convergence is theoretically guaranteed under the L-smooth adaptable (L-smad) property. BPDCAe outperformed other existing algorithms in image deblurring applications.
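
Schematically (details and assumptions differ in the paper), a Bregman proximal DC step for an objective $F = g - h + r$, with $g$ L-smad with respect to a kernel $\phi$ and $r$ the nonsmooth regularizer, takes the form
\[
  x^{k+1} \in \operatorname*{arg\,min}_{x}\;
  \bigl\langle \nabla g(x^{k}) - \xi^{k},\, x \bigr\rangle + r(x) + L\, D_{\phi}(x, x^{k}),
  \qquad \xi^{k} \in \partial h(x^{k}),
\]
where $D_{\phi}$ is the Bregman distance generated by $\phi$; the Bregman term replaces the usual squared Euclidean proximal term, which is what allows convergence guarantees without a Lipschitz continuous gradient.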

Akira Tanaka "Port Set Clustering for Internet-Wide Scanner"

Indiscriminate IoT attacks have increased in recent years. As a preliminary step of an attack, adversaries check whether vulnerable destination ports are open; this procedure is called a port scan. The darknet, also known as a network telescope, is used for observing such port scan activities. It passively monitors network traffic on an unreachable dark IP address block; thus, it receives not regular network traffic but Internet-wide scans for attack or investigation. Our goal is to identify scan activities with attack purposes by focusing on the destination ports of packets collected from a darknet. We treat each source IP address as a multiset made from pairs of a destination port and the number of packets. We define a distance on these multisets and perform clustering using this distance. Multisets lead to finer clustering than clustering based on port sets or packet counts alone. We also propose a speedup technique for the clustering based on properties of the distance.
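
To illustrate the data representation (the concrete distance here is a generalized Jaccard distance chosen for illustration; the talk's distance may differ):

```python
from collections import Counter

# Each scanner source IP is represented as a multiset of destination ports,
# i.e. a Counter mapping port -> number of packets observed on the darknet.

def multiset_distance(a: Counter, b: Counter) -> float:
    """Generalized Jaccard distance between two port multisets (illustrative choice)."""
    ports = set(a) | set(b)
    inter = sum(min(a[p], b[p]) for p in ports)
    union = sum(max(a[p], b[p]) for p in ports)
    return 1.0 - inter / union if union else 0.0

src1 = Counter({23: 120, 2323: 80})            # Telnet-focused scanner
src2 = Counter({23: 100, 2323: 60, 80: 5})     # similar scanner, a few HTTP probes
src3 = Counter({445: 200})                     # SMB scanner
print(multiset_distance(src1, src2))           # small: likely the same campaign
print(multiset_distance(src1, src3))           # 1.0: disjoint port behaviour
```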

Atsushi Miyauchi "Finding densest k-connected subgraphs"

Dense subgraph discovery is an important graph-mining primitive with a variety of real-world applications. One of the most well-studied optimization problems for dense subgraph discovery is the densest subgraph problem, where given an edge-weighted undirected graph, we are asked to find a subgraph that maximizes the average degree. Although this problem can be solved exactly in polynomial time and approximated well in almost linear time, a densest subgraph has a structural drawback: the subgraph may be disconnected by removing only a few vertices/edges within it. In this talk, we propose an algorithmic framework to find a dense subgraph that is well-connected in terms of vertex/edge connectivity. This is joint work with Francesco Bonchi (CENTAI), David García-Soriano (ISI Foundation), and Charalampos E. Tsourakakis (Boston University).
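
For comparison, the classical greedy peeling baseline for the (unweighted) densest subgraph problem gives a 1/2-approximation of the maximum average degree but offers no connectivity guarantee; the sketch below is this textbook baseline, not the algorithm of the talk.

```python
import networkx as nx

def greedy_peeling_densest(G):
    """Charikar's greedy peeling: repeatedly delete a minimum-degree vertex and
    return the densest subgraph seen, with density = |E(S)| / |S| (half the
    average degree, so maximizing it maximizes the average degree)."""
    H = G.copy()
    best_density, best_nodes = 0.0, set(G.nodes)
    while H.number_of_nodes() > 0:
        density = H.number_of_edges() / H.number_of_nodes()
        if density > best_density:
            best_density, best_nodes = density, set(H.nodes)
        v = min(H.nodes, key=H.degree)        # O(n) selection; fine for a sketch
        H.remove_node(v)
    return best_nodes, best_density

G = nx.complete_graph(4)                      # K4 ...
G.add_edge(3, 4)                              # ... plus one pendant vertex
print(greedy_peeling_densest(G))              # ({0, 1, 2, 3}, 1.5): the K4
```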

Antoine Deza "Worst-case constructions for linear optimization"

Worst-case constructions have helped provide a deeper understanding of how the structural properties of the input affect the computational performance of linear optimization. Recent examples include the construction of Allamigeon et al. for which the interior point method performs an exponential number of iterations, and thus is not strongly polynomial. In a similar spirit, recent lower bounds on the number of simplex pivots required in the worst-case to perform linear optimization over a lattice polytope will be presented, as well as the first worst-case instances for geometric scaling methods that solve integer optimization problems by primal augmentation steps. This talk is based on joint works with Shmuel Onn, Sebastian Pokutta, and Lionel Pournin.

Xun Shen "Approximate Methods for Solving Chance Constrained Linear Programs in Probability Measure Space"

Many risk-aware decision-making problems can be formulated as a chance constrained linear program in probability measure space, which is NP-hard and cannot be solved directly. In this paper, we propose approximate methods to address this NP-hard problem. In the proposed methods, the original problem is approximated by two kinds of solvable optimization problems in finite-dimensional spaces. We prove the convergence of the approximations and present numerical experiments, including a stochastic control problem, for validation.
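
For orientation (my rendering, not the talk's exact setting), the finite-dimensional prototype of such a problem is the chance constrained linear program
\[
  \min_{x}\; c^{\top}x
  \quad \text{s.t.} \quad
  \Pr_{\xi}\bigl( a(\xi)^{\top}x \le b(\xi) \bigr) \ge 1-\varepsilon,
\]
whose feasible set is in general nonconvex; the measure-space version optimizes over a probability distribution of decisions rather than a single vector $x$, and the approximate methods of the talk replace it by solvable finite-dimensional surrogates.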

Jun-ya Gotoh "Knot Selection of B-Spline Regression via Trimmed Regularizer"

B-spline regression is a nonparametric nonlinear model estimation method that uses piecewise polynomial basis functions whose shape is determined by the given positions of knots. In this study, we propose a continuous optimization approach that accomplishes the selection of knots and the model fitting at the same time. More specifically, we introduce a cardinality constraint on the linearly transformed coefficient vector of the model, reduce it to a partially regularized least squares problem, and apply GIST (Gong et al., 2013). Numerical results show that the proposed method finds better models than existing methods.
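
A typical way to write such a partially regularized problem (my rendering of the trimmed-regularizer idea; the exact formulation in the talk may differ) is
\[
  \min_{\beta}\; \tfrac{1}{2}\,\|y - B\beta\|_2^{2} \;+\; \lambda \sum_{i > K} \bigl|(D\beta)\bigr|_{(i)},
\]
where $B$ is the B-spline design matrix, $D$ is the linear transform of the coefficients whose zero pattern encodes which knots are dropped, $|(D\beta)|_{(1)} \ge |(D\beta)|_{(2)} \ge \cdots$ are the sorted absolute values, and the trimmed sum penalizes only the entries beyond the $K$ largest, so at most $K$ transformed coefficients escape shrinkage, mimicking the cardinality constraint $\|D\beta\|_0 \le K$.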

Keisuke Yano "Minimum information dependence modeling and its application"

We propose a method of dependence modeling for a broad class of multivariate data. Our class is characterized by two orthogonal sets of parameters: the parameters of dependence and those of the marginal distributions. We present an existence and uniqueness theorem for our model. To estimate the dependence parameter, we establish conditional inference together with a sampling procedure and show that conditional inference is asymptotically indistinguishable from maximum likelihood inference. We also discuss the information-geometrical structure and the connection to the entropic optimal transport and Schrödinger bridge problems. Finally, we illustrate an application to earthquake data. This is joint work with Tomonari Sei (The University of Tokyo).

Naoki Marumo "A generalized Levenberg–Marquardt method for large-scale composite minimization"

We propose a new generalized Levenberg–Marquardt method for minimizing the sum of a smooth composite function and a convex function. The method enjoys three theoretical guarantees: iteration complexity bound, oracle complexity bound, and local convergence under an error bound condition. Numerical results show that the proposed method performs well for some large-scale problems.
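
If the smooth composite part is, for instance, a nonlinear least-squares term $\tfrac{1}{2}\|c(x)\|_2^2$ (an assumption made here only for concreteness), a generalized Levenberg-Marquardt step solves the convex subproblem
\[
  x^{k+1} \in \operatorname*{arg\,min}_{x}\;
  \tfrac{1}{2}\bigl\| c(x^{k}) + J(x^{k})(x - x^{k}) \bigr\|_2^{2}
  + h(x) + \tfrac{\mu_k}{2}\,\|x - x^{k}\|_2^{2},
\]
i.e. the nonlinear map is linearized, the convex term $h$ is kept intact, and the damping parameter $\mu_k$ plays the classical Levenberg-Marquardt role.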

Shunji Umetani "BIPSOL: A metaheuristic solver for large-scale binary integer programs"

Metaheuristics have proven to be a comprehensive approach to attaining good solutions for hard combinatorial optimization problems. However, they are usually based on specific characteristics of the problem to be solved, which makes it hard to develop efficient general-purpose solvers for problem classes such as mixed integer programs (MIPs) and constraint satisfaction problems (CSPs). In the design of metaheuristics for combinatorial optimization problems, the quality of solutions typically improves if larger and more sophisticated neighborhoods are used, while the computation time for searching the neighborhood also increases rapidly. BIPSOL is a metaheuristic solver for large-scale binary integer programs (BIPs) that generalizes the neighbor-list technique used for the traveling salesman problem (TSP) to generate smaller, structured neighborhoods automatically. We also incorporate an efficient incremental evaluation of solutions and a dynamic control mechanism for penalty weights into BIPSOL. In this talk, we report on the development progress of BIPSOL and discuss future directions.

Masahiro Nakao "Performance of the supercomputer Fugaku for Graph500 benchmark"

This presentation reports the performance of the supercomputer Fugaku on the breadth-first search (BFS) problem in the Graph500 benchmark, which is known as a ranking benchmark used to evaluate large-scale graph processing performance on supercomputer systems. Fugaku is a huge-scale Japanese exascale supercomputer that consists of 158,976 nodes. We evaluate the BFS performance for a large-scale graph consisting of about 2.2 trillion vertices and 35.2 trillion edges using the whole Fugaku system and achieve 102,955.5 giga-traversed edges per second, earning the first position in the Graph500 BFS ranking.

Thorsten Koch "Notes on Solving QUBOs and Quantum Computing"

It is regularly claimed that quantum computers will bring breakthrough progress in solving challenging combinatorial optimization problems relevant in practice. In particular, Quadratic Unconstrained Binary Optimization (QUBO) problems are said to be the model of choice for use in (adiabatic) quantum systems. Combinatorial optimization searches for an optimum object in a finite but usually vast collection of objects. This can be used for many practical purposes, like the efficient allocation of limited resources, network planning, and hundreds of other applications in almost all fields, e.g., finance, production, scheduling, and inventory control. However, many combinatorial optimization problems are known to be NP-hard, often translated simplistically as "intractable." The practical side looks a bit different, though: in many cases it is quite possible to solve such problems to proven global optimality. We explain some of the meaning and implications, review the state of affairs and the potential of quantum computing, and give new computational results regarding the solution of QUBOs.
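
For reference, a QUBO instance is simply
\[
  \min_{x \in \{0,1\}^{n}} \; x^{\top} Q\, x
\]
for a symmetric matrix $Q \in \mathbb{R}^{n\times n}$ (linear terms are absorbed into the diagonal since $x_i^2 = x_i$); many combinatorial problems can be encoded in this form by turning constraints into quadratic penalty terms, which is the usual route when mapping them to annealing hardware.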

João Doriguello "Quantum algorithm for stochastic optimal stopping problems with applications in finance"

The famous least squares Monte Carlo (LSM) algorithm combines linear least squares regression with Monte Carlo simulation to approximately solve problems in stochastic optimal stopping theory. In this work, we propose a quantum LSM based on quantum access to a stochastic process, on quantum circuits for computing the optimal stopping times, and on quantum techniques for Monte Carlo. For this algorithm, we elucidate the intricate interplay of function approximation and quantum algorithms for Monte Carlo. Our algorithm achieves a nearly quadratic speedup in the runtime compared to the LSM algorithm under some mild assumptions. Specifically, our quantum algorithm can be applied to American option pricing, and we analyze a case study for the common situation of Brownian motion and geometric Brownian motion processes.
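
For readers unfamiliar with the baseline, here is a compact, purely classical Longstaff-Schwartz LSM sketch for an American put under geometric Brownian motion (illustrative parameter values); the quantum algorithm of the talk replaces parts of this pipeline and is not reproduced here.

```python
import numpy as np

# Classical least squares Monte Carlo (Longstaff-Schwartz) for an American put
# under geometric Brownian motion.

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 20_000
dt = T / steps
disc = np.exp(-r * dt)

# simulate GBM paths: row t holds the asset prices of all paths at time t*dt
Z = rng.standard_normal((steps, paths))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=0))
S = np.vstack([np.full((1, paths), S0), S])             # prepend t = 0

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[-1])                                    # value if held to maturity

# backward induction: regress discounted continuation value on in-the-money paths
for t in range(steps - 1, 0, -1):
    cash *= disc
    itm = payoff(S[t]) > 0
    if itm.sum() > 0:
        X = S[t, itm]
        basis = np.column_stack([np.ones_like(X), X, X**2])   # polynomial basis
        coeffs, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
        continuation = basis @ coeffs
        exercise = payoff(X) > continuation
        cash[itm] = np.where(exercise, payoff(X), cash[itm])

price = disc * cash.mean()
print(f"LSM American put price ~ {price:.3f}")          # roughly 6.0 for these inputs
```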

Ralf Borndörfer "Multicriteria Shortest Path Algorithms"

The optimization of paths subject to different criteria such as length, duration, cost, etc. comes up in all kinds of route planning applications; these problems lead to the Multiobjective Shortest Path Problem (MOSP) of computing the Pareto front of efficient solutions. We propose a new “Multiobjective Dijkstra” label-setting algorithm that computes a minimum complete set of Pareto optimal paths; it is based on a lexicographic organization of the label exploration process. In this way, the main data structure, a priority queue, can be kept small, holding at most one label per node of the underlying graph, and all extracted labels are guaranteed to be efficient. The resulting algorithm improves the best known complexity bounds in this area. It gives rise to an FPTAS approximation variant, it can be generalized to a time-dependent setting (in the FIFO case), it is parallelizable, and it works in practical implementations for more than two objectives.
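
As a point of reference, a textbook bi-objective label-setting scheme with lexicographic extraction and dominance pruning looks as follows; the Multiobjective Dijkstra algorithm of the talk refines this by keeping at most one queued label per node, which this simple sketch does not do.

```python
import heapq

# Simple bi-objective shortest path enumeration (textbook label setting):
# each label is a cost vector (c1, c2) at a node; labels are popped in
# lexicographic order and kept only if not dominated at their node.

def pareto_paths(graph, source, target):
    """graph: {u: [(v, (c1, c2)), ...]} with nonnegative costs. Returns the
    Pareto front of cost vectors of source-target paths."""
    labels = {v: [] for v in graph}                  # nondominated labels per node
    queue = [((0, 0), source)]
    front = []
    while queue:
        cost, u = heapq.heappop(queue)
        if any(d[0] <= cost[0] and d[1] <= cost[1] for d in labels[u]):
            continue                                 # dominated: discard
        labels[u].append(cost)
        if u == target:
            front.append(cost)
            continue
        for v, (a, b) in graph[u]:
            heapq.heappush(queue, ((cost[0] + a, cost[1] + b), v))
    return front

graph = {
    "s": [("a", (1, 5)), ("b", (4, 1))],
    "a": [("t", (1, 5))],
    "b": [("t", (4, 1))],
    "t": [],
}
print(pareto_paths(graph, "s", "t"))    # [(2, 10), (8, 2)]: both routes are efficient
```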

Pierre-Louis Poirion "Randomized subspace regularized Newton method for unconstrained non-convex optimization"

In this talk we present a randomized subspace regularized Newton method for a non-convex function. We show that our method has global convergence under appropriate assumptions and that its convergence rate is the same as that of the full regularized Newton method. Furthermore, we can obtain a local linear convergence rate under some additional assumptions and prove that this rate is the best we can hope for when using random subspaces.
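
A minimal sketch of the idea follows (a generic randomized subspace regularized Newton step; the sampling distribution, regularizer, and safeguards of the paper are not reproduced here).

```python
import numpy as np

# Generic randomized subspace regularized Newton step: restrict the Newton model
# to a random s-dimensional subspace, solve a small s x s regularized system,
# and map the step back to the full space.

def subspace_newton_step(grad, hess, x, s, lam, rng):
    n = len(x)
    P = rng.standard_normal((s, n)) / np.sqrt(s)       # random subspace
    g = P @ grad(x)                                    # projected gradient, shape (s,)
    H = P @ hess(x) @ P.T                              # projected Hessian, shape (s, s)
    d = np.linalg.solve(H + lam * np.eye(s), -g)       # regularized Newton step in the subspace
    return x + P.T @ d

# strongly convex quadratic test problem f(x) = 0.5 x^T A x - b^T x
rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n)) / np.sqrt(n)
A = M.T @ M + np.eye(n)
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A

x = np.zeros(n)
for _ in range(500):
    x = subspace_newton_step(grad, hess, x, s=20, lam=1e-3, rng=rng)
print(f(x) - f(np.linalg.solve(A, b)))   # small: close to the unconstrained optimum
```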

Akifumi Okuno "Minimax Analysis for Inverse Risk in Nonparametric Invertible Regression"

In this talk, we study a minimax risk for estimating invertible functions. We first introduce an inverse L2-risk to evaluate an estimator that preserves invertibility. Then, we derive lower and upper rates for the minimax inverse risk by exploiting a representation of invertible functions using level sets. To obtain an upper bound, we develop an estimator that is asymptotically almost everywhere invertible, whose risk attains the derived minimax lower rate up to logarithmic factors. This is joint work with M. Imaizumi (U. Tokyo). arXiv:2112.00213.

Niels Lindner "On the geometry of periodic timetables in public transport"

What rhythm is to music, the timetable is to a public transportation system. Many public transportation networks are operated periodically, and therefore the computation and optimization of periodic timetables is a frequent and important task. The mathematical foundation of periodic timetabling is the Periodic Event Scheduling Problem, which is easy to formulate, has a rich theory, but is notoriously hard to solve. In order to obtain a better understanding of how to solve periodic timetabling problems, we analyze the geometry of periodic timetables and discover surprising connections to tropical and discrete geometry that are beyond the scope of the standard toolbox of combinatorial optimization. We outline how tropical neighborhood search, a new heuristic developed from these geometric insights, helped to compute new incumbent solutions for instances of the timetabling benchmarking library PESPlib.
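
For reference, the Periodic Event Scheduling Problem asks for event times $\pi_v \in [0,T)$, one per arrival or departure event $v$, such that
\[
  \ell_{a} \;\le\; \pi_{w} - \pi_{v} + T\, p_{a} \;\le\; u_{a},
  \qquad p_{a} \in \mathbb{Z}, \quad \text{for every activity } a = (v,w),
\]
where $T$ is the common period and $[\ell_a, u_a]$ bounds the duration of activity $a$ (driving, dwelling, transferring, headways); the integer offsets $p_a$ are what make the problem hard and give it the rich polyhedral and, as discussed in the talk, tropical structure.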

Inci Yüksel-Ergün "Improving data quality in the presence of superhuman complexity in data errors"

In our studies analyzing gas network systems, we build public research data sets from incomplete data scattered across various data sources. These data sources may not be consistent with each other or accurate. Thus, during these studies, we use our domain-specific mathematical modeling know-how to eliminate data errors by filling in missing data or fixing inconsistencies. However, when working with the resulting highly connected data, we encountered several cases where our analysis detected data errors that were too complex for humans to understand. Examples are irreducible infeasible subsystems (IIS) of large mixed-integer programs (MIPs) or bottlenecks in the nonlinear, pressure-coupled pipeline network. While detecting these errors is a significant achievement, removing them is extremely difficult. Hence, quantifying data quality is also a key enabler in this study, telling us whether the data is of sufficient quality for the intended analysis. We present our studies on data quality improvement in the presence of superhuman complexity in data errors and explain the challenges. We report our results on the German high-pressure gas transport network data set.

Jaap Pedersen "Optimal discrete pipe sizing for tree-shaped CO2 networks"

While energy-intensive industries like the steel industry plan to switch to renewable energy sources, other industries, such as the cement industry, have to rely on carbon capture, storage, and utilization technologies to reduce the inevitable carbon dioxide (CO2) emissions of their production processes. In this context, we investigate the problem of finding optimal pipeline diameters from a discrete set of diameters for a tree-shaped network transporting captured CO2 from multiple sources to a single sink. The general problem of optimizing arc capacities in potential-based fluid networks is a challenging mixed-integer nonlinear program. Additionally, the behaviour of CO2 is highly sensitive and nonlinear with respect to temperature and pressure changes. We propose an iterative algorithm splitting the problem into two parts: a) the pipe-sizing problem under a fixed supply scenario and temperature distribution, and b) the thermophysical modelling including mixing effects, the Joule-Thomson effect, and heat exchange with the surrounding environment. We show the effectiveness of our approach by applying our algorithm to a real-world network planning problem for a CO2 network in Western Germany.

Uwe Gotzes "Spotlights on success stories of public-private partnership"

The Zuse Institute Berlin and OGE, Germany's largest natural gas transmission system operator, have been cooperating for more than a decade in a variety of application-oriented research projects. In my talk I will briefly go through the history of the projects and the collaboration. A more recent project addressing telecommunication network design will be presented in detail.

Ying Chen "Deep Switching State Space Model (DS3M) for Nonlinear Time Series Forecasting with Regime Switching"

We propose a deep switching state space model (DS3M) for efficient inference and prediction of nonlinear time series with stochastic regime transitions. The dynamics are captured by latent discrete variables representing the regimes, latent continuous variables for stochastic signals, and latent hidden states in recurrent neural networks. The model is estimated by variational inference using a reparameterization trick. We show implementations on different types of simulated and real datasets from various disciplines with different data characteristics for short- to long-term forecasting. In all cases, DS3M achieves competitive performance compared to several state-of-the-art methods, with superior forecasting accuracy and convincing interpretability for irregular regime switching. This is a collaborative work with Xiuqin Xu. The paper is available at arXiv:2106.02329.

Osamu Saeki "Institute of Mathematics for Industry: its uniqueness, strength and prospects"

As director of the Institute of Mathematics for Industry (IMI), Kyushu University, which is a unique institute of industrial mathematics in Japan, the speaker will present various activities of IMI that characterize its strength: research in fundamental mathematics, mathematics applied to other disciplines, and joint projects with industry, together with educational activities, including the WISE program funded by the Japanese government. Some prospects of IMI for the future will also be presented.