Dealing with uncertainty in optimization parameters is an important and long-standing challenge. Typically, uncertain parameters are first predicted, and then a deterministic optimization problem is solved with the predictions in place of the true values. However, the decisions produced by this so-called predict-then-optimize procedure can be highly sensitive to errors in the predicted parameters. We contribute to recent efforts in producing decision-focused predictions, i.e., in building predictive models that are constructed with the goal of minimizing a regret measure on the decisions taken with them. Specifically, in this work, we study how to produce predictive models that induce risk-averse decisions. Formulations and valid inequalities are discussed.
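For concreteness, one standard way to formalize decision regret (a sketch in schematic notation, not necessarily the exact measure used in the talk): writing $w^*(c) \in \arg\min_{w \in \mathcal{W}} c^\top w$ for a decision taken under cost vector $c$, the regret of a prediction $\hat{c}$ against the true costs $c$ is
\[
\ell_{\mathrm{regret}}(\hat{c}, c) \;=\; c^\top w^*(\hat{c}) \;-\; c^\top w^*(c) \;\ge\; 0,
\]
and a risk-averse variant modifies how this loss is aggregated over the data, or how the induced decision itself accounts for risk.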
We propose a robust formulation for the topology optimization of continuous structures. The objective is to determine the optimal distribution of a linear elastic material within a reference domain subjected to both stochastic and deterministic external loads. A key feature of this formulation is the incorporation of a failure probability constraint defined in terms of compliance. The Bernstein approximation is used to derive an upper bound on the failure probability, yielding a more tractable formulation. By using the Solid Isotropic Material with Penalization (SIMP) method, where the material density is the main design variable, we reformulate the original stochastic optimization problem into a standard nonlinear optimization problem. We develop a numerical algorithm to solve this reformulation by iteratively solving a sequence of linear conic subproblems, which can be efficiently handled in polynomial time via interior-point methods. Numerical experiments demonstrate the effectiveness of the proposed approach.
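For context, the generic Bernstein approximation (in the spirit of Nemirovski and Shapiro; a sketch in schematic notation, not necessarily the exact bound used in the talk) replaces a chance constraint $\Pr[F(x,\xi) > 0] \le \varepsilon$ by the convex conservative condition
\[
\inf_{t > 0}\; \Big[\, t \,\log \mathbb{E}\,\exp\big(F(x,\xi)/t\big) \;+\; t \log(1/\varepsilon) \,\Big] \;\le\; 0,
\]
which follows from the Chernoff bound $\Pr[F > 0] \le \mathbb{E}\,\exp(F/t)$ for any $t > 0$. In the setting above, $F(x,\xi)$ would be the compliance under load $\xi$ minus the allowed threshold, so any $x$ feasible for the approximation satisfies the original failure-probability constraint.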
Traditionally, the problem of apportioning the seats of a legislative body has been viewed as a one-shot process with no dynamic considerations. While this approach is reasonable for some instances of the problem, dynamic aspects play an important role in many others. In this paper, we initiate the study of apportionment problems in an online setting. Specifically, we introduce an online algorithmic framework to handle proportional apportionment with no information about future events. In this model, time is discrete and there are $n$ parties that receive a certain share of the votes at each time step. An online algorithm needs to irrevocably assign a prescribed number of seats at each time, ensuring that each party receives its fractional share rounded up or down, and that the cumulative number of seats allocated to each party remains close to its cumulative share up to that time.
We consider deterministic and randomized online apportionment methods. For deterministic methods, we construct a family of adversarial instances that yield a lower bound, linear in $n$, on the worst-case deviation between the seats allocated to a party and its cumulative share. We show that this bound is best possible and is matched by a natural greedy method. As a consequence, a method guaranteeing that the cumulative number of seats assigned to each party up to any step equals its cumulative share rounded up or down (global quota) exists if and only if $n\leq 3$. Then, we turn to randomized allocations and show that, when $n\leq 3$, we can randomize over methods satisfying global quota with the additional guarantee that each party receives, in expectation, its proportional share in every step. Our proof is constructive: We show that any method satisfying these properties can be obtained from a flow on a recursively constructed network.
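To make the greedy idea concrete, here is a minimal Python sketch of one natural greedy rule consistent with the description above (the interface and tie-breaking are our own assumptions for illustration; the exact method analyzed in the talk may differ):

```python
import math

def greedy_apportionment(shares_per_step, house_sizes):
    """Online greedy apportionment (illustrative variant).

    shares_per_step[t][i]: fractional number of seats party i is entitled
    to at step t (the entries of step t sum to house_sizes[t]).
    At each step, every party first receives its share rounded down; the
    leftover seats go to the parties with the largest cumulative deficit
    (cumulative entitlement minus seats received so far).
    """
    n = len(shares_per_step[0])
    seats = [0] * n          # cumulative seats received by each party
    entitlement = [0.0] * n  # cumulative fractional entitlement
    history = []
    for shares, house in zip(shares_per_step, house_sizes):
        alloc = [math.floor(s) for s in shares]
        for i in range(n):
            entitlement[i] += shares[i]
        leftover = house - sum(alloc)
        # only parties with a fractional remainder may be rounded up,
        # so every party ends the step with floor(share) or ceil(share)
        candidates = [i for i in range(n) if shares[i] > alloc[i]]
        candidates.sort(key=lambda i: entitlement[i] - (seats[i] + alloc[i]),
                        reverse=True)
        for i in candidates[:leftover]:
            alloc[i] += 1
        for i in range(n):
            seats[i] += alloc[i]
        history.append(alloc)
    return history
```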
This is joint work with Jose Correa, Svenja M. Griesbach, and Victor Verdugo.
Packing problems are classical Discrete Optimization problems that involve selecting or allocating items while satisfying capacity constraints. Many of them have been generalized to the two-dimensional geometric setting (e.g., Knapsack, Bin Packing, Makespan Minimization), where items are rectangles to be packed into a region of the plane, with interesting recent advances. On the other hand, classical Covering problems (e.g., Knapsack Cover, Bin Covering, Machine Covering) do not allow for good approximation algorithms in multidimensional geometric contexts.
Recently, motivated by Energy Management scenarios, two-dimensional packing problems with ``partial'' geometric constraints have been considered. In 2D Demand Packing, instead of requiring the rectangles to be embedded in the plane so that they do not intersect, we place them along a horizontal line and account for the accumulated load at each point. In this work, we initiate the study of Covering problems in the 2D Demand setting, considering a generalization of Machine Covering called 2D Demand Strip Covering, where we seek to allocate all the items so as to maximize the minimum load along the horizontal line. As opposed to the direct 2D geometric generalization of Machine Covering, which cannot be approximated within any factor, we prove that there exists a $(2+\varepsilon)$-approximation for 2D Demand Strip Covering, and we also provide an almost matching lower bound. The result is achieved through computing sheathing-based allocations, which might be useful in other contexts.
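As an illustration of the load model (a sketch with an assumed interface; the talk's algorithmic machinery is of course much richer), the objective value of a fixed allocation can be computed by a simple sweep over interval endpoints:

```python
def min_load(placements):
    """Minimum accumulated load over the covered horizontal span.

    placements: list of (start, width, height) triples; item i occupies
    the interval [start, start + width) and contributes `height` to the
    load at every point it covers. Assumes the strip coincides with the
    span of the placed items.
    """
    events = []
    for start, width, height in placements:
        events.append((start, height))            # load increases here
        events.append((start + width, -height))   # load decreases here
    events.sort()
    load, best = 0, float("inf")
    for i, (x, delta) in enumerate(events):
        load += delta
        # the load is piecewise constant; evaluate each nonempty segment
        # only after all events at coordinate x have been processed
        if i < len(events) - 1 and events[i + 1][0] > x:
            best = min(best, load)
    return best
```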
We propose a practical sequential test for auditing differential privacy guarantees of black-box mechanisms. The test processes streams of mechanism outputs, providing anytime-valid inference while controlling the Type I error, thereby overcoming the fixed-sample-size limitation of previous batch auditing methods. Experiments across diverse realistic mechanisms show that the test detects violations with sample sizes that are orders of magnitude smaller than those of existing methods, reducing the number of required examples from 50K to a few hundred. Notably, it identifies DP-SGD privacy violations in under one training run, unlike prior methods that need full model training.
Anchoring-based acceleration, first introduced by Halpern, has been actively studied in the fixed-point iteration and minimax optimization literature. More recently, anchoring acceleration has shown promise in Reinforcement Learning (RL) setups, specifically when applied to Value Iteration. In this talk, we will introduce Anchored Value Iteration and discuss how it improves convergence and sample complexity across various average-reward RL settings, including the tabular setup, the generative model setup, and offline RL with function approximation.
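As a point of reference, here is a minimal sketch of Halpern-type anchoring applied to tabular value iteration (shown for the discounted case with the classical $1/(k+1)$ anchoring weights; the average-reward settings in the talk involve different operators and schedules):

```python
import numpy as np

def anchored_value_iteration(P, r, gamma, num_iters, V0=None):
    """Anchored (Halpern-type) value iteration for a tabular MDP.

    P: array of shape (A, S, S), one transition matrix per action.
    r: array of shape (A, S), reward for each action-state pair.
    gamma: discount factor in (0, 1].
    Each step averages the Bellman update with the fixed anchor V0.
    """
    A, S, _ = P.shape
    V = np.zeros(S) if V0 is None else V0.copy()
    anchor = V.copy()
    for k in range(1, num_iters + 1):
        TV = np.max(r + gamma * (P @ V), axis=0)  # Bellman optimality operator
        beta = 1.0 / (k + 1)                      # anchoring weight
        V = beta * anchor + (1.0 - beta) * TV
    return V
```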
In recent years, prophet inequalities have become one of the most popular tools to analyze the performance of online algorithms. While there are at least 6 known proofs of the prophet inequality, we believe this number is not high enough.
In this talk, we describe a new approach for prophet inequalities via differential equations. We describe a differential equation (ODE) that pins down the expected reward of a specific threshold in the worst possible instance, parametrized by the distribution of the maximum. This ODE can be solved explicitly, providing the first approach that encompasses all known single-threshold algorithms for prophet inequalities. In particular, it equips us with an optimality condition for the optimal threshold, and can be applied to recover and unify known results as well as provide new ones.
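For context, the classical single-threshold guarantee that such an analysis must recover is Samuel-Cahn's: for independent nonnegative rewards $X_1,\dots,X_n$, choose the threshold $\tau$ as a median of the maximum, i.e., $\Pr[\max_i X_i \ge \tau] = 1/2$, and accept the first reward above it. This yields
\[
\mathbb{E}[\text{reward}] \;\ge\; \tfrac{1}{2}\,\mathbb{E}\big[\max_i X_i\big],
\]
and it is precisely this dependence on the distribution of the maximum that the ODE parametrizes.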
In this work, we introduce a new sparse classification model that combines the nonconvex $\ell_p$ quasi-norm ($0 < p < 1$) with second-order cone constraints encoding the first- and second-order moments of the data. This integration promotes strong sparsity while improving the robustness of the classifier under moderate data perturbations.
We derive explicit lower bounds on the nonzero components of any local minimizer, providing the first analytical sparsity guarantee for $\ell_p$-SOCP models. We also propose an efficient iteratively reweighted $\ell_1$ algorithm to solve the resulting nonconvex problem and establish its convergence to first-order stationary points.
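Schematically, a generic reweighting step of this type (a sketch; the exact weights and safeguards used in our algorithm may differ) replaces the concave term $\sum_i |x_i|^p$ by a weighted $\ell_1$ surrogate at the current iterate $x^k$:
\[
x^{k+1} \in \arg\min_{x}\; \sum_i w_i^k\, |x_i| \;\;\text{subject to the second-order cone constraints},
\qquad
w_i^k = \big(|x_i^k| + \epsilon\big)^{p-1},
\]
where $\epsilon > 0$ is a smoothing parameter. By concavity of $s \mapsto (s+\epsilon)^p$ on $[0,\infty)$, each subproblem minimizes a convex upper linearization of the smoothed quasi-norm, which is what drives the convergence to stationary points.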
Numerical experiments on several datasets demonstrate the effectiveness and robustness of the proposed approach.
In inverse problems, the goal is to modify the costs of an underlying optimization problem in such a way that a given solution becomes optimal, while the difference between the new and the original cost functions is minimized with respect to some objective function. We generalize this problem by replacing the fixed solution with a subset $S'$ of feasible solutions and imposing different optimality requirements on $S'$. We study six variants under the infinity norm when the underlying optimization problem is finding a base of maximum weight.
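Schematically (a sketch, with the precise optimality requirement varying across the six variants), each variant takes the form
\[
\min_{c'} \;\|c' - c\|_\infty
\quad\text{s.t.}\quad S' \text{ meets the prescribed optimality requirement under } c',
\]
e.g., requiring that some, or every, element of $S'$ be a base of maximum weight under the modified costs $c'$.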
We study the problem of listing all faces of prominent combinatorial polytopes, including hypercubes, permutahedra, associahedra, and generalizations thereof. To this end, we analyze the face lattice, which encodes the inclusion order of all faces. Our objective is to construct a Hamiltonian cycle in the cover graph of this lattice, enabling us to find a structured listing of the faces of the polytope. Note that consecutive faces in the listing differ by exactly one in dimension, and one is contained in the other.
Perhaps surprisingly, we seem to be the first to ask the following natural question: Is the face lattice of every polytope Hamiltonian? While we possess neither a general proof nor a counterexample, we provide some evidence for a positive answer. We explicitly construct Hamiltonian cycles for many of the families under consideration. Our constructions yield time- and space-efficient algorithms for computing the cycles, enabling the efficient listing of associated combinatorial objects, including ordered set partitions and dissections of a convex polygon.
Joint work with Nastaran Behrooznia (Warwick), Sofia Brenner, Torsten Mütze, Christian Rieck, and Francesco Verciani (Kassel).
We study cutting planes for nonconvex quadratically constrained quadratic optimization problems (QCQPs) derived from semidefinite relaxations. Starting from classical eigenvector inequalities, we apply a Chvátal–Gomory rounding to obtain a family of cuts we call Eigen-CG inequalities. We show that this construction is closely related to a highly expressive family: the Boros–Hammer inequalities.
We also show that, when added to standard SDP-based relaxations, many dense Eigen-CG cuts are weak. This provides a complementary take on a common empirical observation: sparse cuts are more effective. Guided by this insight, we design a practical separation strategy based on sparse Eigen-CG cuts. Computational results confirm the importance of sparsity and show that well-chosen cuts beyond triangle inequalities can significantly strengthen dual bounds.
In this talk, non-monotone line-search methods are presented and analyzed for minimizing an objective function on a smooth manifold. In particular, we study the number of iterations this class of schemes needs to obtain $\epsilon$-approximate stationary points. We prove that, under a Lipschitz-type regularity condition on the pullbacks of the cost function to the tangent spaces of the manifold and other mild assumptions, Riemannian non-monotone line-search methods generate points with Riemannian gradient norm smaller than $\epsilon$ in $O(\epsilon^{-2})$ iterations. Our worst-case complexity analysis covers a wide variety of non-monotone strategies from the literature. Additionally, we establish global convergence for this family of methods. The bounds obtained in our analysis agree with those known for line-search methods in unconstrained nonlinear optimization and hence generalize previous work.
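One classical non-monotone rule covered by such analyses (a sketch of the Grippo-Lampariello-Lucidi condition transplanted to the Riemannian setting via a retraction $R$; the talk covers a wider class of strategies) accepts a step size $t_k$ along a descent direction $d_k$ whenever
\[
f\big(R_{x_k}(t_k d_k)\big) \;\le\; \max_{0 \le j \le \min(k,\,M)} f(x_{k-j}) \;+\; c\, t_k \,\big\langle \operatorname{grad} f(x_k),\, d_k \big\rangle_{x_k},
\]
with memory length $M \ge 0$ and $c \in (0,1)$; taking $M = 0$ recovers the monotone Armijo rule.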
We study the pooling of multiple orders into a single trip, a strategy widely adopted by online delivery platforms. When an order has to be dispatched, the platform must determine which (if any) of the other available orders to pool it with, weighing the immediate efficiency gains against the uncertain, differential benefits of holding each order for future pooling opportunities. In this paper, we demonstrate the effectiveness of using the length of each job as its opportunity cost, via a potential-based greedy algorithm (PB). The algorithm is very simple, pooling each departing job with the available job that maximizes the savings in travel distance minus a half of its distance (i.e., the potential). On the theoretical front, we show that PB significantly improves upon a naive greedy algorithm in terms of worst-case performance: as the density of the market increases, the regret per job vanishes under PB but remains constant under naive greedy. In addition, we show that the potential approximates the marginal cost of dispatching each job in a stochastic setting with sufficient density. Finally, we conduct extensive numerical experiments and show that, despite its simplicity, our notion of potential consistently outperforms forecast-aware heuristics that estimate the marginal costs of dispatching different jobs using historical data. Across the greedy dispatch policies we study, PB achieves the strongest performance. Moreover, compared with batching-based heuristics widely used in practice, PB achieves comparable performance with substantially lower computation, and these policies can also benefit from incorporating potential to better capture opportunity costs.
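The PB selection rule is simple enough to state in a few lines of Python (a minimal sketch with an assumed job interface; the full dispatch policy around it is described above):

```python
def pick_pool_partner(departing, available, savings):
    """Potential-based greedy (PB) partner selection.

    departing: the job that must be dispatched now.
    available: iterable of jobs that could be pooled with it.
    savings(a, b): travel distance saved by serving a and b in one trip.
    Each job is assumed to carry a `length` attribute (its trip length),
    half of which acts as its potential, i.e., its opportunity cost.
    """
    best, best_value = None, 0.0
    for job in available:
        value = savings(departing, job) - 0.5 * job.length
        if value > best_value:
            best, best_value = job, value
    return best  # None means dispatching the departing job alone is best
```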
It is a common opinion that generalized Nash equilibrium problems are harder than Nash equilibrium problems. In this work, we show that, by adding a new player, it is possible to reduce many generalized problems to standard equilibrium problems. The reduction holds for linear problems and for smooth convex problems satisfying a Slater-type condition. We also derive a similar reduction from quasi-variational inequalities to variational inequalities under similar assumptions. The reduction is also obtained for purely integer linear problems. Interestingly, we show that, in general, our technique does not work for mixed-integer linear problems. The present work builds upon recent developments in exact penalization for generalized games.
This presentation addresses the integrated planning of vehicle routes and driver schedules with flexible departure times. We consider long-distance transportation systems in which drivers and vehicles are assigned to perform given transportation tasks. Each transportation task corresponds to a trip from an origin to a destination with a predefined schedule, operated by a vehicle and a driver. A key feature is that task departure times have some flexibility. While drivers are subject to strict limitations on working and driving hours, vehicles can operate continuously. Specifically, one vehicle can be operated by different drivers over time, and drivers may change vehicles or travel as passengers when necessary. The objective is to minimize the total number of vehicles and drivers required and to reduce idle time.
To solve large-scale instances in a reasonable amount of time, we decompose the problem into two subproblems and propose a two-phase heuristic algorithm. First, we solve the driver scheduling subproblem using a column generation approach. Then, given the resulting driver schedules, we solve the vehicle routing subproblem using a compact formulation. Computational results demonstrate the efficiency of this approach, confirming higher vehicle utilization, as it allows feasible driver-vehicle pairings. Moreover, the flexibility in departure times can lead to solutions with fewer drivers and/or vehicles, or with less idle time.
I will present the model of multiplayer stopping games, both in discrete and continuous time, and discuss the existence of equilibria in these models, including illuminating examples as time permits.
We determine the optimal constant for the Erdős–Pyber theorem on hypergraphs. Namely, we prove that every $n$-vertex $d$-uniform hypergraph $H$ can be written as the union of a family $F$ of complete $d$-partite hypergraphs such that every vertex of $H$ belongs to at most $\binom{n}{d}/(n \lg n)$ graphs in $F$. This improves on results of Csirmaz, Ligeti, and Tardos (2014), gives the best upper bound for some secret-sharing questions, and answers several 40-year-old questions of Chung, Erdős, and Spencer (1983). The heart of our proof lies in the use of combinatorics on words to equitably assign hypergraph edges. I will also show one of the many algorithmic applications of this line of work: a subquadratic approximation algorithm for the classic densest subgraph problem. This talk is based on joint work with Andrew Krapivin, Benjamin Przybocki, and Nicolás Sanhueza-Matamala.
We consider bilevel problems perturbed on the right-hand side of the constraints of both the leader and the follower, and we study the calmness of the optimal solution as a function of the perturbation. We provide some positive results for the families of linear bilevel problems and of pricing problems. However, when coupling constraints are also present, the results are far less favorable, and we illustrate this with examples.
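Recall that calmness of the solution map $S(\cdot)$ at a reference pair $(\bar p, \bar x)$ with $\bar x \in S(\bar p)$ asks for constants $\kappa, \varepsilon > 0$ and a neighborhood $U$ of $\bar x$ such that
\[
S(p) \cap U \;\subseteq\; S(\bar p) + \kappa\,\|p - \bar p\|\,\mathbb{B}
\qquad \text{whenever } \|p - \bar p\| \le \varepsilon,
\]
where $\mathbb{B}$ denotes the closed unit ball; this is the stability notion studied here as the right-hand sides are perturbed.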
Mathematical programs are typically solved with specialized software, such as mixed-integer linear programming (MILP) solvers. While often powerful, these software suites are complex and may contain bugs due to implementation errors or numerical issues. To formally certify that a claimed solution is indeed optimal, we propose a framework that supplements solver results with a proof of correctness that can be independently and automatically verified by a user. This work extends seminal work by Cheung et al. (2017) and Bogaerts et al. (2023), as well as our unified framework for symmetry handling (van Doornmalen and Hojny, 2023).
For a mathematical program, we introduce the notion of a configuration, and we say that a configuration is valid for the mathematical program if certain conditions are met. Starting from an initially trivial configuration, validity-preserving rules update this configuration to a state that certifies the claimed optimum. We present an abstract framework that provides rules for general mathematical programs, and a realization geared toward certifying cutting-plane proofs and common symmetry-handling reductions for mixed-integer linear programs.
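To convey the flavor of such certificate checking, here is a toy verifier for a single Chvátal-Gomory cut derivation over a system $Ax \le b$ with $x$ integer (a hypothetical interface for illustration only; actual certificate frameworks, including the one cited below, use exact rational arithmetic and a much richer set of rules):

```python
import math

def verify_cg_cut(A, b, multipliers, cut_coeffs, cut_rhs, tol=1e-9):
    """Check that cut_coeffs @ x <= cut_rhs is a valid Chvatal-Gomory cut
    for the integer points of {x : A x <= b}, derived from the given
    nonnegative multipliers u: the aggregation u^T A x <= u^T b is valid,
    and if u^T A is integral, its right-hand side can be rounded down.
    """
    if any(u < -tol for u in multipliers):
        return False  # multipliers must be nonnegative
    n = len(cut_coeffs)
    agg = [sum(u * row[j] for u, row in zip(multipliers, A)) for j in range(n)]
    agg_rhs = sum(u * bi for u, bi in zip(multipliers, b))
    # aggregated coefficients must match the claimed cut and be integral
    if any(abs(agg[j] - cut_coeffs[j]) > tol for j in range(n)):
        return False
    if any(abs(c - round(c)) > tol for c in cut_coeffs):
        return False
    # rounding down the aggregated rhs must dominate the claimed rhs
    return cut_rhs >= math.floor(agg_rhs + tol)
```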
Jasper van Doornmalen, Leon Eifler, Ambros Gleixner, and Christopher Hojny. A proof system for certifying symmetry and optimality reasoning in integer programming. Preprint, 2023. DOI: 10.48550/arXiv.2311.03877.
TBA
The presence of symmetries is one of the central structural features that make some integer programs challenging for state-of-the-art solvers. In this work, we study the efficacy of Linear Programming (LP) hierarchies in the presence of symmetries.
Our main theorem unveils a connection between the algebraic structure of these relaxations and the geometry of the initial integer-empty polytope: we show that under $(k+1)$-transitive symmetries (a measure of the underlying symmetry in the problem), the corresponding relaxation at level $k$ of the hierarchy is non-empty if and only if the initial polytope intersects all $(n-k)$-dimensional faces of the hypercube.
In particular, the hierarchies of Sherali-Adams, Lovász-Schrijver, and the Lift-and-Project closure are equally effective at detecting integer emptiness.
Our result provides a unifying, group-theoretic characterization of the poor performance of LP-based hierarchies, and offers a simple procedure for proving lower bounds on the integrality gaps of symmetric polytopes under these hierarchies.