Cones: Theory and Optimization
December 1st-3rd, 2025
within the 2025 RIMS research project Advances in Theoretical Research on Mathematical Optimization
Dates: December 1st-3rd, 2025
Location: Research Institute for Mathematical Sciences, Kyoto University (Room 420)
Access information: https://www.kurims.kyoto-u.ac.jp/kyoten/en/access.html
Registration: https://forms.gle/ZXH4KD4h7n1CjDv57
The goal of this workshop is to gather researchers to discuss the latest developments in conic optimization and related topics. We will also mark the retirement of Prof. Takashi Tsuchiya and celebrate his decades of research and service to the optimization community.
Godai Azuma
Andrew Calcan
Jordan Collard
Daniel Dadush
Chao Ding
Ellen H. Fukuda
Masaru Ito
Xudong Li
Ying Lin
Scott B. Lindstrom
Bruno F. Lourenço
Tom Luo
Renato Monteiro
Mitsuhiro Nishijima
Takayuki Okuno
Michael Orlitzky
Vera Roshchina
James Saunderson
Anthony Man-Cho So
Takashi Tsuchiya
Yinyu Ye
Akiko Yoshise
Godai Azuma (Aoyama Gakuin University)
Exactness of SDP Relaxation for Robustness Verification of Neural Networks
Neural networks (NNs) in safety-critical applications are required to be safe and robust against adversarial attacks and uncertainty. For NNs with rectified linear unit (ReLU) activations, the robustness verification can be formulated as a nonconvex quadratically constrained quadratic program (QCQP). However, QCQPs cannot be solved exactly in general. In this talk, we discuss sufficient conditions under which this QCQP admits exact semidefinite programming (SDP) relaxation, ensuring accurate robustness verification. We focus on the verification problem for single-layer NNs and derive conditions for two types of input sets by applying the collinearity-based exactness condition proposed by Zhang. Various observations, including geometric insights and experimental results, are also presented.
Andrew Calcan (Curtin Centre for Optimisation and Decision Science)
Beyond Conic Optimization: Operator Splitting for Sparse Models with Nonconvex Regularisation
Conic optimization is a powerful tool for convex problems. In practice, nonconvex problems are often tackled by applying efficient conic solvers to convex relaxations. However, recent evidence suggests that applying tailored algorithms directly to nonconvex regression problems can yield solutions with superior accuracy and structure. We propose an operator splitting approach for nonconvex problems that intends to achieve practical efficiency comparable to conic programming. The method is built on the classical Douglas–Rachford method and incorporates warm starting to potentially avoid sub-optimal convergence. Numerical results demonstrate the method’s competitiveness with conic relaxations and its ability to recover more accurate linear regression models.
Jordan W. Collard (Curtin University)
Quadratic Convergence of a Lyapunov-Based Projection Algorithm for Nonconvex Feasibility Problems
This talk examines a dynamical projection algorithm motivated by Lyapunov constructions and establishes its quadratic convergence for a nonconvex feasibility problem: finding a point in the intersection of a hyperplane and the graph of a smooth function. The setting generalises the classical circle-line problem in two dimensions, which has been studied as a prototypical model for inverse problems in signal processing and image recovery. We outline the key geometric ideas underlying the proof and conclude with numerical results demonstrating analogous behaviour in higher-dimensional semidefinite feasibility problems.
Daniel Dadush
TBA
TBA
Chao Ding (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Stratification for Nonlinear Semidefinite Programming
Nonlinear semidefinite programs (NLSDPs) are traditionally analyzed through nonsmooth KKT systems in the ambient space, which necessitates strong regularity assumptions and obscures the reasons why Newton-type methods fail near degeneracy. In this talk, we introduce a stratification framework that resolves this tension by moving the analysis onto the strata of the space of symmetric matrices. First, we prove that transversality for stratification is equivalent to W-SRCQ, establish its stability along the relevant strata, and show the genericity of W-SRCQ. Under this framework, the cone projection $\Pi_{\mathbb S_+^n}$ becomes a $C^\infty$ mapping between the appropriate strata and the ambient space; thus, the KKT system, nonsmooth globally but smooth on strata, can be solved by a strata Gauss–Newton method. We further prove that the restricted KKT mapping is a locally Lipschitz homeomorphism on the pertinent stratum if and only if W-SOC and W-SRCQ hold, yielding a local quadratic convergence rate for the strata Gauss–Newton method. Finally, we propose a globalized strategy tailored to the stratified geometry and establish global convergence, along with a quadratic local rate.
Ellen Hidemi Fukuda (Kyoto University)
Strong second-order approximate KKT conditions for nonlinear semidefinite optimization
Many sequential optimality conditions, which do not require constraint qualifications and can improve the convergence assumptions of various algorithms, have been studied over the past two decades. Among them, second-order sequential optimality conditions have been proposed for nonlinear optimization and nonlinear second-order cone optimization. In this talk, we consider the more general nonlinear semidefinite optimization problem (NSDP) and present the so-called strong approximate Karush-Kuhn-Tucker 2 (SAKKT2) conditions. We show that constructing such conditions in the NSDP setting is nontrivial. We also demonstrate the relation between SAKKT2 and the usual second-order condition, and show that certain algorithms can, in fact, generate points satisfying SAKKT2. This is a joint work with Huimin Li and Yuya Yamakawa.
Masaru Ito (Nihon University)
Optimization problems with eigenvalue constraints
We consider nonlinear optimization problems whose constraints are described in terms of generalized eigenvalues. Here, we employ the notion of the so-called Fan-Theobald-von Neumann system to introduce generalized eigenvalues, which covers several examples, such as eigenvalues in Euclidean Jordan algebras. Although the feasible set is nonconvex in general, we show that the projection onto the feasible set can be reformulated as a convex program. We then analyze the standard projected gradient method to solve our optimization problem. Furthermore, some applications of our optimization model and the performance of the projected gradient method are discussed.
Xudong Li (Fudan University)
A quadratically convergent semismooth Newton method for nonlinear semidefinite programming without the subdifferential regularity
In this talk, we introduce a quadratically convergent semismooth Newton method for nonlinear semidefinite programming that eliminates the need for the generalized Jacobian regularity, a common yet stringent requirement in existing approaches. Our strategy involves identifying a single nonsingular element within the Bouligand generalized Jacobian, thus avoiding the standard requirement for nonsingularity across the entire generalized Jacobian set, which is often too restrictive for practical applications. The theoretical framework is supported by introducing the weak second order condition (W-SOC) and the weak strict Robinson constraint qualification (W-SRCQ). These conditions not only guarantee the existence of a nonsingular element in the generalized Jacobian but also forge a primal-dual connection in linearly constrained convex quadratic programming. The theoretical advancements further lay the foundation for the algorithmic design of a novel semismooth Newton method, which integrates a correction step to address degenerate issues. Preliminary numerical experiments corroborate our theoretical findings and underscore the practical effectiveness of our method.
Ying Lin (The University of Hong Kong)
Convex Facial Reduction Algorithm and Strong Extended Duals
The facial reduction algorithm plays an important role in regularizing any conic convex programming problem and in establishing extended duals that enjoy strong duality without requiring any constraint qualifications. When the cone is nice, the extended dual has a relatively simpler formula. In this work, we generalize both the facial reduction algorithm and the concept of niceness from closed convex cones to convex sets. Our convex facial reduction algorithm can handle a convex problem with a nonlinear objective function and an intersection of two convex sets, and so is able to deal with problems that are not efficiently SDP representable. Based on the convex facial reduction algorithm, we establish a class of extended duals that covers the extended duals in the conic case. Similarly, when the two sets are both nice, the formula of the extended dual is relatively simpler.
Scott B. Lindstrom (Kyoto University RIMS & Curtin University)
On tight error bounds for cone problems
Conic-linear programming allows researchers to tackle many important problems. Popular software packages such as Mosek, Alfonso, Hypatia, and CVX solve problems with conic-linear methods. Such packages provide users with so-called "backward error bounds," which may not be useful accuracy guarantees. After Jos Sturm found "forward" error bounds (i.e. useful guarantees) for the second-order cone, 23 years passed before forward error bounds were found for any other cones on the Mosek conic modeling wheel. I will describe the framework that we built for obtaining such guarantees, a framework we used to find forward error bounds for several new classes of cones in short succession. This talk is based on joint work with Bruno Lourenço (ISM), Ying Lin, and Ting-Kei Pong (both The Hong Kong Polytechnic University).
Bruno F. Lourenço (The Institute of Statistical Mathematics)
Error bounds for some perspective cones and almost everywhere exponent 1/2
In previous works, through our framework based on one-step facial residual functions (1-FRFs), we computed error bounds for several cones. Curiously, in every single example "almost every" 1-FRF can be taken to be Hölderian of exponent at least 1/2. Inspired by this observation, in this talk we present our results showing that for 3D convex cones constructed from certain perspective functions, Hölderian 1-FRFs of exponent at least 1/2 are indeed valid almost everywhere with respect to the two-dimensional Hausdorff measure. This is joint work with Ting-Kei Pong (The Hong Kong Polytechnic University) and Xiazhou Wang (South China Normal University).
Tom Luo
TBA
TBA
Renato D.C. Monteiro (Georgia Institute of Technology)
An Accelerated Hybrid Low-Rank Augmented Lagrangian Method for Large-Scale Semidefinite Programming on GPU
This talk discusses an SDP solver, cuHALLaR, which is a GPU-accelerated implementation of the hybrid low-rank augmented Lagrangian method, HALLaR, proposed earlier by three of the authors. The proposed Julia-based implementation is designed to exploit massive parallelism by performing all core operator evaluations—linear maps, their adjoints, and gradients—entirely on the GPU. Extensive numerical experiments are conducted on three problem classes: matrix completion, maximum stable set, and phase retrieval. The results demonstrate substantial performance gains over both optimized CPU implementations and existing GPU-based solvers. On the largest instances, cuHALLaR achieves speedups of up to 165x, solving a matrix completion SDP with a matrix variable of size $(2 \times 10^6) \times (2 \times 10^6)$ and over $2.6 \times 10^8$ constraints to a relative precision of $10^{-5}$ in 53 seconds. Similarly, speedups of up to 135x are observed for maximum stable set problems on graphs with over one million vertices. These results establish cuHALLaR as a state-of-the-art solver for extremely large-scale SDPs.
Mitsuhiro Nishijima (RIKEN)
Facial Structure of Copositive and Completely Positive Cones over Symmetric Cones
A copositive cone over a symmetric cone is the cone of self-adjoint linear transformations whose associated quadratic forms are nonnegative over the symmetric cone. Many NP-hard problems admit a unified formulation as minimizing or maximizing a linear function over the intersection of an affine space with the copositive cone or its dual (i.e., the completely positive cone over the symmetric cone). In this talk, we present our recent findings on the facial structure of copositive and completely positive cones over symmetric cones. The topics of this talk include the length of a longest chain of faces of the copositive and completely positive cones and properties of their faces, such as exposedness and dimension.
Takayuki Okuno (Seikei University)
Analysis of the primal-dual central path for nonlinear semidefinite optimization without the nondegeneracy condition
We study properties of the central path underlying a nonlinear semidefinite optimization problem, called NSDP for short. The most recent seminal work on this topic was contributed by Yamashita and Yabe (2012): they proved that the Jacobian of a certain equation system derived from the Karush-Kuhn-Tucker (KKT) conditions of the NSDP is nonsingular at a KKT point under the second-order sufficient condition (SOSC), the strict complementarity condition (SC), and the nondegeneracy condition (NC). This yields uniqueness and existence of the central path through the implicit function theorem. In this talk, we consider the following three assumptions on a KKT point: the enhanced SOSC, the SC, and the Mangasarian-Fromovitz constraint qualification. In the absence of the NC, the Lagrange multiplier set is not necessarily a singleton and the nonsingularity of the above-mentioned Jacobian is no longer valid. Nonetheless, we establish that the central path exists uniquely, and moreover prove that the dual component of the path converges to the so-called analytic center of the Lagrange multiplier set. As another notable result, we clarify a region around the central path where Newton's equations relevant to primal-dual interior point methods are uniquely solvable.
Michael Joseph Orlitzky (University of Maryland Baltimore County)
Isometric automorphism identities for symmetric, homogeneous, and hyperbolicity cones
Motivated by an identity characterizing the norm-preserving automorphisms of a symmetric cone, we formulate kindred identities for homogeneous and hyperbolicity cones and then discuss the notion of "eigenvalue" in each of these settings.
Vera Roshchina (UNSW Sydney)
Constructing five-dimensional convex cones
A convex cone is said to be projectionally exposed if every face arises as a projection of the original cone. It is known that, in dimension at most four, projectional exposure is preserved under intersections. We show that this is not true in general by constructing a five-dimensional counterexample. This construction also leads to the first example of an amenable cone that is not projectionally exposed, showing that these properties, which coincide in dimension at most 4, are distinct in dimension 5.
In order to achieve these goals, we develop a new technique for constructing arbitrarily tight inner convex approximations of compact convex sets with desired facial structure. These inner approximations have the property that all proper faces are extreme points, with the exception of a specific exposed face of the original set.
James Saunderson (Monash University)
Operator convexity along lines and self-concordant barriers
Barrier methods play a central role in the theory and practice of convex optimization. One of the most general and successful analyses of barrier methods for convex optimization, due to Nesterov and Nemirovskii, relies on the notion of self-concordance. While a powerful property, proving self-concordance of a barrier is often challenging.
This talk focuses on self-concordant barriers for the epigraphs of convex functions. The main result is that if every univariate restriction of a convex function is operator convex, then a natural logarithmic barrier for the epigraph of the function is self-concordant. It turns out that this sufficient condition holds for various matrix relative entropies of significant interest in quantum information, allowing us to construct self-concordant barriers for their epigraphs and integrate them into the open-source solver QICS.
This is based on joint work with Hamza Fawzi (Cambridge) and Kerry He (Monash).
Anthony Man-Cho So
TBA
TBA
Takashi Tsuchiya (National Graduate Institute for Policy Studies)
Closing the duality gap of singular SDPs with nonzero duality gap and higher singularity degrees (tentative)
TBA
Yinyu Ye
TBA
TBA
Akiko Yoshise (University of Tsukuba)
Procrustean Obfuscation for Privacy-Preserving Collaborative Learning
Privacy-preserving collaborative learning enables multiple parties to jointly train machine learning models without directly sharing sensitive data. While approaches such as Federated Learning and Homomorphic Encryption offer robust privacy guarantees, they often incur significant computational and communication overhead as model complexity increases. In contrast, multiplicative perturbation techniques promise enhanced efficiency; however, they are typically hampered by increased privacy risks from collusion or by a reliance on extensive deep learning-based training to achieve satisfactory performance. In this work, we introduce a novel framework that bridges these extremes by integrating the analytic Gaussian mechanism of differential privacy with the Generalized Orthogonal Procrustes Problem. Our method delivers adjustable privacy–performance trade-offs through tunable differential privacy parameters, allowing practitioners to balance protection and efficiency according to specific application needs. We substantiate our approach with theoretical guarantees and numerical analyses that evaluate its performance across varying privacy levels, data dimensions, and numbers of collaborating parties.
Ellen H. Fukuda (Kyoto University)
Bruno F. Lourenço (Institute of Statistical Mathematics)
Kazuhisa Makino (Kyoto University)
Mitsuhiro Nishijima (RIKEN-AIP)
Akiyoshi Shioura (Institute of Science Tokyo)