Organizers:
Gonzalo Muñoz (DII-UChile)
José Verschae (IMC-UC)
Title: Column Generation and the Feature Selection Problem
When: October 22, 3:00 pm - 4:00 pm.
Speaker: Fernando Ordóñez, U Chile.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Column generation is a well-known decomposition method to solve linear and mixed integer problems with a large number of variables. A similar column generation decomposition method can be constructed for conic optimization problems. In this talk we present work that explores whether this can be a competitive solution method for the continuous relaxation of the feature selection problem.
Joint work with Nicolas Acevedo and Renaud Chicoisne.
Title: Prophet Inequalities with Moment Knowledge
When: October 15, 3:00 pm - 4:00 pm.
Speaker: Vasilis Livianos, U Chile.
Where: Sala BP-316, Beauchef 851.
Abstract:
In this talk, we study a variant of the prophet inequality with limited information, where the decision maker only has access to the first k moments of each random variable, rather than their full distributions. Our main result is that, for any k, even one dependent on n, the best possible competitive ratio is Θ(1/log(n)), which we show can already be achieved with knowledge of the first moment only.
Our result implies that the moments are not very informative in this setting, so extra information is needed if one aims for better guarantees. To showcase the impact of extra information, we investigate the case of monotone-hazard-rate (MHR) distributions, showing that a guarantee of Ω(1/log(log(n))) is possible already with a single moment. In addition, we show that under a condition on the growth rate of the moments, constant-competitive guarantees are possible with k = Ω(log(n)) moments.
This work provides an answer to a prior open question about the prophet inequality with limited information, and also presents a framework for online selection problems under constraints of moment knowledge. This talk is based on joint work with José Correa (U. de Chile), Andrés Cristi (EPFL), Victor Verdugo (PUC Chile) and Jiechen Zhang (EPFL).
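As background for the abstract above, the classic full-information prophet inequality can be illustrated with a short simulation (this sketch is not from the talk; the instance, names, and threshold choice are illustrative). The single-threshold rule that accepts the first value exceeding half the expected maximum is known to guarantee at least half of the prophet's expected value:

```python
import random

def threshold_pick(xs, tau):
    """Accept the first value exceeding the threshold tau (0 if none)."""
    return next((x for x in xs if x > tau), 0.0)

random.seed(0)
n, trials = 5, 20000
# i.i.d. Uniform(0,1) values; the classic rule uses tau = E[max]/2,
# which for n uniforms equals n / (2 * (n + 1)).
tau = n / (2 * (n + 1))
gain = prophet = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    gain += threshold_pick(xs, tau)
    prophet += max(xs)
# empirical ratio of the threshold rule to the prophet's expectation;
# the classic guarantee says this is at least 1/2 in expectation
print(round(gain / prophet, 2))
```

On this toy instance the empirical ratio comfortably exceeds the worst-case bound of 1/2; the talk's point is that without distributional knowledge (only moments), no rule can beat Θ(1/log(n)) in the worst case.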
Title: Fingerprinting Techniques for Lower Bounds in Differential Privacy and a New Fingerprinting Lemma
When: October 01, 3:00 pm - 4:00 pm.
Speaker: Victor Sanches Portella, IME-USP.
Where: Sala 1, Edificio Felipe Villanueva, Campus San Joaquín, UC.
Abstract:
Analyzing sensitive data presents a fundamental dilemma: how can we extract population-level insights while protecting individual privacy? Differential Privacy (DP) provides a rigorous mathematical framework to address this challenge, offering formal guarantees against sensitive data exposure. Beyond its widespread adoption in practice, DP has revealed surprising connections to various fields like online learning, machine learning generalization, and robust statistics.
In this talk, I'll provide a brief introduction to the core ideas of differential privacy. I will then give an overview of fingerprinting techniques, a powerful tool for proving lower bounds for DP problems. Finally, I'll present a recent work of mine with Professor Nick Harvey from the University of British Columbia, where we expand upon these techniques to establish tight lower bounds for privately estimating Gaussian covariance matrices. A key technical challenge is handling the dependencies between the parameters being estimated. I will show how we leverage the Stein-Haff identity to overcome this issue, and how this work suggests a way to use Stokes' theorem as a more general tool to circumvent this kind of obstacle.
Title: Strategyproof mechanisms without money for independence systems
When: September 24, 3:00 pm - 4:00 pm.
Speaker: Javier Cembrano, MPII.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We consider mechanisms without money for combinatorial independence systems. We are given an independence system where the ground set of items is partitioned into sets owned by strategic agents. Each item has a weight that is the private information of the agent owning it. A mechanism takes an independence system, the weights, and the partitioning as input and returns an independent set; it is strategyproof if no agent can increase the total weight of their items in the solution by reporting lower weights of their items to the mechanism or withholding a subset of their items from the mechanism, and it is \alpha-approximate if the total weight of items selected is an \alpha-fraction of the total weight of an optimal solution.
In this talk, we will first address the case where the independence system is defined by a knapsack constraint. In this setting, we give the first strategyproof deterministic mechanism achieving a constant approximation (equal to the square of the golden ratio), and we improve the best known approximation guarantee for randomized mechanisms from 576 to 2. We will then construct a 6.018-approximate strategyproof mechanism for independence systems satisfying a combinatorial condition that is fulfilled, e.g., by b-matchings, intersection of strongly base-orderable matroids, and unit interval scheduling. This generalizes and improves over a logarithmic approximation for matchings and yields an O(\log k)-approximate strategyproof mechanism for generalized assignment with k jobs, providing the first non-trivial approximation for this setting. Finally, we will briefly discuss mechanisms with improved constant approximations for special cases such as unweighted matchings and knapsack with unit-density items.
This is based on joint work with Max Klimm, Martin Knaack, and Arturo Merino.
Title: Open Problem Session.
When: August 27, 3:00 pm - 4:30 pm.
Speakers: Alexander Kozachinskiy, Gonzalo Muñoz, José Verschae
Where: Sala 1, Edificio Felipe Villanueva, Campus San Joaquín, UC.
Speaker: Cristián Palma, Cornell U.
Title: Selfish Learners in Queueing Systems with Small Buffers
When: August 20, 3:00 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Algorithmic Game Theory has provided a good understanding of the impact of strategic behavior on outcomes in many one-shot games, including traffic routing and online auctions. This has been extended to repeated games under the assumption that players use some form of (no-regret) learning. However, in many real-life scenarios, rounds are not mere repetitions and may involve carryover effects from one round to the next. This is the case for budgeted auctions and queueing systems.
In this talk, we analyze a queueing system modeling packet routing in data centers. Routers compete to get their packets processed by servers. Rejected packets must be resent in subsequent rounds, creating dependencies across time. We study the resulting process and show bounds on the excess server capacity needed to guarantee that all packets get served despite the selfish and myopic behavior of the routers.
Speaker: Felix Hommelsheim, Bremen U.
Title: Improved Approximation Algorithms for Path and Forest Augmentation via a Novel Relaxation
When: August 13, 3:00 pm - 4:00 pm.
Where: Campus San Joaquín, UC, Room 5, Facultad de Matemáticas.
Abstract:
The Forest Augmentation Problem (FAP) asks for a minimum set of additional edges (links) that make a given forest 2-edge-connected while spanning all vertices. A key special case is the Path Augmentation Problem (PAP), where the input forest consists of vertex-disjoint paths. Grandoni, Jabal Ameli, and Traub [STOC'22] recently broke the long-standing 2-approximation barrier for FAP, achieving a 1.9973-approximation. A crucial component of this result was their 1.9913-approximation for PAP; the first better-than-2 approximation for PAP.
In this work, we improve these results and provide a 1.9412-approximation for PAP, which implies a 1.9955-approximation for FAP. One of our key innovations is a $(\frac{7}{4} + \varepsilon)$-approximation preserving reduction to so-called structured instances, which simplifies the problem and enables our improved approximation. Additionally, we introduce a new relaxation inspired by 2-edge covers and analyze it via a corresponding packing problem, where the relationship between the two problems is similar to the relationship between 2-edge covers and 2-matchings. Using a factor-revealing LP, we bound the cost of our solution to the packing problem w.r.t. the relaxation and derive a strong initial solution. We then transform this solution into a feasible PAP solution, combining techniques from FAP and related connectivity augmentation problems, along with new insights. A key aspect of our approach is leveraging the properties of structured PAP instances to achieve our final approximation guarantee. Our reduction framework and relaxation may be of independent interest in future work on connectivity augmentation problems.
Speaker: Mario Bravo, UAI.
Title: Almost sure convergence in evolutionary models with vanishing mutations
When: July 30, 3:00 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We analyze the long-term behavior of evolutionary models where mutations gradually vanish over time. Our focus is on the almost sure convergence of the empirical occupation measure—that is, the time-averaged distribution of states—as the mutation parameter decays in a controlled manner. Under suitable conditions, we prove that this measure converges almost surely to a specific invariant distribution of a limiting Markov chain, while also establishing explicit convergence rates. Our analysis is carried out within a class of time-inhomogeneous Markov chains on a finite state space, where the so-called energy barrier of the limiting dynamics determines the required decay rate of the mutation parameter.
This is joint work with Michel Benaïm and Mathieu Faure.
Speaker: Svenja Griesbach, U Chile.
Title: Deterministic Impartial Selection with Weights
When: July 2, 3:00 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In the impartial selection problem, a subset of agents up to a fixed size k among a group of n is to be chosen based on votes cast by the agents themselves. A selection mechanism is impartial if no agent can influence its own chance of being selected by changing its vote. It is \alpha-optimal if, for every instance, the number of votes received by the selected subset is at least an \alpha-fraction of the votes received by the subset of size k with the highest number of votes.
We study deterministic impartial mechanisms in a more general setting with arbitrarily weighted votes and provide the first approximation guarantee, roughly k/2n. When the number of agents to select is large enough compared to the total number of agents, this yields an improvement on the previously best-known approximation ratio of 1/k for the unweighted setting. We further show that our mechanism can be adapted to the impartial assignment problem, in which multiple sets of up to k agents are to be selected, with a loss in the approximation ratio of 1/2.
This is joint work with Javier Cembrano and Maximilian J. Stahlberg.
Speaker: Miguel Romero, UC.
Title: On the complexity of the CSP
When: June 11, 3:00 pm - 4:00 pm.
Where: Campus San Joaquín UC (room tba).
Abstract:
The Constraint Satisfaction Problem (CSP) is defined as follows: we are given a set of variables, a set of values, and a set of constraints, where each constraint restricts the combination of values that a certain tuple of variables can take. The question is whether there exists an assignment of values to the variables that satisfies all the constraints. The CSP is a well-known NP-complete problem, and hence much research has been done to identify restrictions of this problem that can be solved in polynomial time.
In this talk, we focus on the so-called structural restrictions, which limit the way the variables interact with each other through the constraints. We start by briefly surveying some landmark results characterizing polynomial-time solvable structural restrictions of the CSP. Then we consider the optimization version of the CSP, where the goal is to maximize the number of satisfied constraints (or the sum of the weights of the satisfied constraints in the weighted version). We present some results characterizing the structural restrictions that can be optimized in polynomial time, which extend previously known results for the decision version of the CSP. We conclude by discussing some results regarding approximability and some open problems. This talk is based on joint work with Standa Zivny, Clement Carbonnel and Marcin Wrochna.
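The CSP definition above can be made concrete with a tiny brute-force sketch (illustrative only, not from the talk; the function name and the 3-coloring encoding are ours). Each constraint is a tuple of variables together with the set of value tuples it allows:

```python
from itertools import product

def solve_csp(variables, domain, constraints):
    """Brute-force CSP solver: try every assignment of values to the
    variables and return the first one satisfying all constraints.
    Each constraint is (scope, allowed), where scope is a tuple of
    variables and allowed is the set of permitted value tuples."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in scope) in allowed
               for scope, allowed in constraints):
            return assignment
    return None

# A tiny instance: 3-coloring of a triangle plus a pendant vertex,
# encoded with binary "not equal" constraints.
variables = ["a", "b", "c", "d"]
domain = [0, 1, 2]
neq = {(x, y) for x in domain for y in domain if x != y}
constraints = [(("a", "b"), neq), (("b", "c"), neq),
               (("a", "c"), neq), (("c", "d"), neq)]
print(solve_csp(variables, domain, constraints))
# → {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```

Exhaustive search is exponential in general; the talk's structural restrictions identify instance classes where polynomial-time algorithms exist instead.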
Speaker: Andrés Abeliuk, Universidad de Chile.
Title: Price of Anarchy in Algorithmic Matching of Romantic Partners.
When: May 28, 3:00 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Algorithmic matching is a pervasive mechanism in our social lives and is becoming a major medium through which people find romantic partners and potential spouses. However, romantic matching markets pose a principal-agent problem with the potential for moral hazard. The agent’s (or system’s) interest is to maximize the use of the matching website, while the principal’s (or user’s) interest is to find the best possible match. This creates a conflict of interest: the optimal matching of users may not be aligned with the platform’s goal of maximizing engagement, as it could lead to long-term relationships and fewer users using the site over time. Here, we borrow the notion of price of anarchy from game theory to quantify the decrease in social efficiency of online algorithmic matching sites where engagement is in tension with user utility. We derive theoretical bounds on the price of anarchy and show that it can be bounded by a constant that does not depend on the number of users in the system. This suggests that as online matching sites grow, their potential benefits scale up without sacrificing social efficiency. Further, we conducted experiments with human subjects in a matching market and compared the social welfare achieved by an optimal matching service against a self-interested matching algorithm. We show that introducing competition among matching sites aligns the self-interested behavior of platform designers with their users and increases social efficiency.
Speaker: David Salas, Universidad de O'Higgins.
Title: Mean-Field Opinion Dynamics in Random Graphs.
When: May 14, 3:00 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We consider a set of agents in a network having different opinions over a binary subject. The network is encoded as an (undirected or directed) graph, and each opinion is represented as a value between 0 and 1. At each discrete stage, each agent updates her opinion as a convex combination between the average opinion of her neighbors and her intrinsic opinion, which coincides with her initial opinion. It is well known that this dynamic converges to a stable opinion, which can be computed by inverting a matrix associated with the adjacency matrix of the network. When the network itself is a random graph, the stable opinion becomes a random variable, and when the number of agents is large, computing the expected value of the stable opinion by sampling methods can become prohibitively expensive: indeed, for each sample, it is necessary to invert a large matrix. In this talk, we study the pertinence of considering a mean-field model to approximate the expected value of the stable opinion. That is, by considering an “average network”, we study the gap between the expected value of stable opinions and the stable opinion over the average network. We show, under mild hypotheses, that for undirected Erdős-Rényi random graphs under the connectivity regime, the gap measured with the $\ell_{\infty}$-norm vanishes as the size of the network grows to infinity. Moreover, we show that for directed Erdős-Rényi random graphs, the same result holds for the gap measured with any $\ell_{\rho}$-norm, for $\rho \in (1,\infty]$, by slightly increasing the connectivity regime. This talk is based on joint work with Javiera Gutiérrez-Martínez (Universidad de Chile) and Víctor Verdugo (Pontificia Universidad Católica de Chile).
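The opinion update described in the abstract can be sketched directly on a toy deterministic network (this is illustrative, not the paper's random-graph model; the function name, the 4-cycle instance, and the weight 1/2 are our choices):

```python
def stable_opinion(neighbors, s, alpha=0.5, iters=200):
    """Iterate the update x_i <- alpha * (average opinion of i's
    neighbors) + (1 - alpha) * s_i, where s_i is agent i's intrinsic
    (= initial) opinion. For alpha < 1 this is a contraction, so it
    converges to the stable opinion."""
    x = list(s)
    for _ in range(iters):
        x = [alpha * sum(x[j] for j in neighbors[i]) / len(neighbors[i])
             + (1 - alpha) * s[i]
             for i in range(len(s))]
    return x

# A 4-cycle 0-1-2-3-0 where two adjacent agents lean to 0, two to 1.
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
s = [0.0, 0.0, 1.0, 1.0]
print([round(v, 3) for v in stable_opinion(neighbors, s)])
# → [0.25, 0.25, 0.75, 0.75]
```

Equivalently, the fixed point solves a linear system involving the (row-normalized) adjacency matrix, which is the matrix inversion mentioned in the abstract; the mean-field question is how this fixed point behaves when the graph itself is random.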
Speaker: Gustavo Angulo, UC.
Title: Robust Admission via Two-Stage Stable Matching under Ranking Uncertainty
When: April 30, 3:00 pm - 4:00 pm.
Where: Sala de Estudio Raúl Devés, 1er piso edificio Raúl Devés, Campus San Joaquín, UC, Avda. Vicuña Mackenna 4860.
Abstract:
The Bachillerato Inicia UC program offers a pathway for students from Chilean technical high schools to articulate into undergraduate programs at Pontificia Universidad Católica de Chile. Upon applying, candidates rank up to three preferred programs. However, articulation is determined only after one year, based on their academic ranking within the cohort - a value unknown at the time of admission.
This setting gives rise to a two-stage admission problem with downstream matching constraints and exogenous uncertainty. The challenge is to select a feasible subset of students to admit, respecting the order of applicant scores, while guaranteeing that, under any realization of final rankings, each admitted student can be matched to one of their declared preferences, respecting program capacities.
We formalize this as a two-stage stable matching problem under ranking uncertainty and design an algorithm that characterizes the set of robustly admissible candidates. The model ensures articulation guarantees that are consistent with declared preferences and capacity constraints. We present implementation results from the 2024 and 2025 admission cycles. Joint work with Matías Giddings and Pablo Marshall.
Speaker: Tjark Vredeveld, Maastricht U.
Title: Analyzing the (k-)swap neighborhood for makespan scheduling.
When: April 23, 3:00 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Analyzing the behavior of local search methods has received considerable attention over the last two decades. One interesting question is how the simplest form of local search, i.e., iterative improvement, behaves w.r.t. a certain neighborhood, both in the quality of the solution as well as the number of iterations needed to obtain a locally optimal solution.
In this talk, we consider the basic scheduling problem in which n jobs need to be scheduled on m identical machines so as to minimize the makespan, i.e., the completion time of the last job. Finn and Horowitz (1979) showed that the folklore jump neighborhood, which moves one job from its machine to another machine, yields a (2-2/(m+1))-approximation, and this was shown to be tight by Schuurman and Vredeveld (2001/2007). Brucker, Hurink and Werner (1996/1997) showed that the iterative improvement procedure needs O(n^2) iterations to obtain a jump optimal solution, which was shown to be tight by Hurkens and Vredeveld (2003).
Schuurman and Vredeveld (2007) left as an open question to bound the number of iterations to obtain a swap-optimal solution.
In this talk, we consider the k-swap neighborhood, which is a generalization of the jump and the swap neighborhood. In the k-swap neighborhood, a neighboring solution is obtained by selecting at most k jobs scheduled on at most two machines and then interchanging the machine allocation of these jobs. For k=1 this is equivalent to the jump neighborhood and for k=2 it is equivalent to the swap neighborhood.
We analyze the number of iterations needed to obtain a k-swap optimal solution by iterative improvement; thereby answering the question posed in Schuurman and Vredeveld.
This is joint work with Lars Rohwedder and Ashkan Safari.
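The jump neighborhood (the k = 1 case above) admits a very short iterative-improvement sketch (illustrative only, not the speakers' code; the instance and the choice to always move a job off a most loaded machine are ours):

```python
def jump_local_search(jobs, m):
    """Iterative improvement with the jump neighborhood: repeatedly
    move one job from a most loaded machine to another machine,
    whenever the target machine's new load stays strictly below the
    source machine's old load. Terminates at a jump-optimal schedule."""
    machines = [[] for _ in range(m)]
    for j, _ in enumerate(jobs):          # arbitrary initial schedule
        machines[j % m].append(j)
    load = [sum(jobs[j] for j in mach) for mach in machines]
    improved = True
    while improved:
        improved = False
        src = max(range(m), key=lambda i: load[i])   # a critical machine
        for j in machines[src]:
            for dst in range(m):
                if load[dst] + jobs[j] < load[src]:  # improving jump
                    machines[src].remove(j)
                    machines[dst].append(j)
                    load[src] -= jobs[j]
                    load[dst] += jobs[j]
                    improved = True
                    break
            if improved:
                break
    return max(load)

print(jump_local_search([3, 3, 2, 2, 2], 2))  # → 7
```

On this instance the procedure stops at a jump-optimal makespan of 7, while the optimum is 6, illustrating that local optima can be strictly worse than global ones; the talk's question is how many such improvement steps are needed for general k.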
Speaker: Paul Golz, Cornell U.
Title: Generative Social Choice
When: April 2, 3:00 pm - 3:30 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
The mathematical study of voting, social choice theory, has traditionally only been applicable to choices among a few predetermined alternatives, but not to open-ended decisions such as collectively selecting a textual statement. We introduce generative social choice, a design methodology for open-ended democratic processes that combines the rigor of social choice theory with the capability of large language models to generate text and extrapolate preferences. Our framework divides the design of AI-augmented democratic processes into two components: first, proving that the process satisfies representation guarantees when given access to oracle queries; second, empirically validating that these queries can be approximately implemented using a large language model. We apply this framework to the problem of summarizing free-form opinions into a proportionally representative slate of opinion statements; specifically, we develop a democratic process with representation guarantees and use this process to portray the opinions of participants in a survey about abortion policy. In a trial with 100 representative US residents, we find that 84 out of 100 participants feel "excellently" or "exceptionally" represented by the slate of five statements we extracted.
Speaker: Ulrike Schmidt-Kraepelin, TU Eindhoven.
Title: Truthful Budget Aggregation: Beyond Moving-Phantom Mechanisms
When: April 2, 3:30 pm - 4:00 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We study a budget-aggregation setting in which a number of voters report their ideal distribution of a budget over a set of alternatives, and a mechanism aggregates these reports into an allocation. Ideally, such mechanisms are truthful, i.e., voters should not be incentivized to misreport their preferences. For the case of two alternatives, the set of mechanisms that are truthful and additionally meet a range of basic desiderata (anonymity, neutrality, and continuity) exactly coincides with the so-called moving-phantom mechanisms, but whether this space is richer for more alternatives was repeatedly stated as an open question. We answer this question in the affirmative by presenting a class of truthful mechanisms that are not moving-phantoms but satisfy the three properties. Since moving-phantom mechanisms can only provide limited fairness guarantees (measured as the worst-case distance to a fair share solution), one motivation for broadening the class of truthful mechanisms is the hope for improved fairness guarantees. We dispel this hope by showing that lower bounds holding for the class of moving-phantom mechanisms extend to all truthful, anonymous, neutral, and continuous mechanisms.
This is joint work with Mark de Berg, Rupert Freeman, and Markus Utke.
Speaker: Jamie Tucker-Foltz, Harvard U.
Title: Sampling tree-weighted balanced graph partitions in polynomial time.
When: April 2, 4:00 pm - 4:30 pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
When judging the fairness of a political redistricting map, it is useful to be able to generate a large ensemble of "random" alternative maps. Formally, this is a graph partitioning problem. The objective is to output random partitions of the vertex set into k equal-sized pieces inducing connected subgraphs. Numerous algorithms have been developed to sample either exactly or approximately from the so-called "spanning tree distribution," where each partition is weighted by the product of the numbers of spanning trees in each part. However, none of these algorithms had been shown to run in polynomial time, even on very tame families of graphs. In a recent paper, Charikar, Liu, Liu, and Vuong proposed a simple algorithm, and conjectured that it runs in polynomial time on grid graphs for constant k. We prove this conjecture, and extend to a wide class of "grid-like" planar graphs encapsulating the kinds of graphs typically encountered in real geographical data. Based on joint work with Sarah Cannon and Wesley Pegden: https://arxiv.org/abs/2310.15152
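The weights in the spanning tree distribution above can be computed exactly on toy graphs via the Matrix-Tree theorem (a sketch for illustration, not the sampling algorithm from the paper; the function names are ours):

```python
from fractions import Fraction

def count_spanning_trees(vertices, edges):
    """Matrix-Tree theorem: the number of spanning trees equals any
    cofactor of the graph Laplacian, computed here as a determinant
    via exact Gaussian elimination over the rationals."""
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        i, j = idx[u], idx[v]
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    # determinant of the Laplacian with the last row and column removed
    M = [row[:n - 1] for row in L[:n - 1]]
    det = Fraction(1)
    for c in range(n - 1):
        piv = next((r for r in range(c, n - 1) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            f = M[r][c] / M[c][c]
            for k in range(c, n - 1):
                M[r][k] -= f * M[c][k]
    return int(det)

def partition_weight(edges, parts):
    """Weight of a partition in the spanning tree distribution: the
    product over parts of the number of spanning trees of the
    subgraph induced by that part."""
    w = 1
    for part in parts:
        sub = [(u, v) for u, v in edges if u in part and v in part]
        w *= count_spanning_trees(sorted(part), sub)
    return w

# A 2x2 grid graph (a 4-cycle), split into its two horizontal edges.
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
print(count_spanning_trees([0, 1, 2, 3], edges))   # → 4
print(partition_weight(edges, [{0, 1}, {2, 3}]))   # → 1
```

The sampling problem studied in the talk is to draw balanced connected partitions with probability proportional to this weight, which is far harder than evaluating the weight of a single partition.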
Speaker: Felix Hommelsheim, Bremen U.
Title: Two-Edge Connectivity via Pac-Man Gluing
When: March 19, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We study the 2-edge-connected spanning subgraph (2-ECSS) problem: Given a graph G, compute a connected subgraph H of G with the minimum number of edges such that H is spanning, i.e., V(H) = V(G), and H is 2-edge-connected, i.e., H remains connected upon the deletion of any single edge, if such an H exists. The 2-ECSS problem is known to be NP-hard. In this work, we provide a polynomial-time (5/4 + epsilon)-approximation for the problem for an arbitrarily small epsilon>0, improving the previous best approximation ratio of 13/10 + epsilon.
Our improvement is based on two main innovations: First, we reduce solving the problem on general graphs to solving it on structured graphs with high vertex connectivity. This high vertex connectivity ensures the existence of a 4-matching across any bipartition of the vertex set with at least 10 vertices in each part. Second, we exploit this property in a later gluing step, where isolated 2-edge connected components need to be merged without adding too many edges. Using the 4-matching property, we can repeatedly glue a huge component (containing at least 10 vertices) to other components. This step is reminiscent of the Pac-Man game, where a Pac-Man (a huge component) consumes all the dots (other components) as it moves through a maze. These two innovations lead to a significantly simpler algorithm and analysis for the gluing step compared to the previous best approximation algorithm, which required a long and tedious case analysis.
This is joint work with Mohit Garg and Alexander Lindermayr. An independent work by Miguel Bosch-Calvo, Fabrizio Grandoni and Afrouz Jabal Ameli obtained essentially the same result. A merge of these two papers will appear at STOC 2025.
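The feasibility condition in the 2-ECSS definition above can be checked by brute force on small graphs (an illustrative sketch, not the approximation algorithm from the talk; the function name and instances are ours):

```python
def is_2_edge_connected(n, edges):
    """Check the 2-ECSS feasibility condition: the graph on vertices
    0..n-1 is spanning and connected, and stays connected after the
    deletion of any single edge (i.e., it has no bridge)."""
    def connected(edge_list):
        adj = [[] for _ in range(n)]
        for u, v in edge_list:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]       # DFS from vertex 0
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    return connected(edges) and all(
        connected(edges[:i] + edges[i + 1:]) for i in range(len(edges)))

square = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
print(is_2_edge_connected(4, square))        # → True
print(is_2_edge_connected(4, square[:3]))    # path: has bridges → False
```

The optimization problem is to find such a subgraph with the fewest edges, which is NP-hard; the talk improves the best known polynomial-time approximation ratio for it.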
Speaker: Martin Schmidt, Trier University.
Title: The Burial of Coupling Constraints in Linear Bilevel Optimization
When: March 12, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
It has been common sense in (linear) bilevel optimization that problems with coupling constraints are more difficult to tackle than those without such constraints. While the modeling capabilities in terms of the feasible sets are indeed richer, because coupling constraints make it possible to model disconnected feasible sets, complexity theory did not see any difference between problems with or without coupling constraints. In this talk, we show that there is no difference at all when one considers optimal solutions instead of just feasible points. For the case of optimistic bilevel problems, we proved this in 2024. The same result for pessimistic bilevel problems is from last week. In total, our results show that - on the level of optimal solutions - there is no difference between optimistic and pessimistic bilevel optimization with or without coupling constraints. As a "corollary" and rather surprisingly, all theoretical results and algorithms for pessimistic bilevel optimization without coupling constraints also apply to pessimistic bilevel optimization with coupling constraints.
Speaker: Neil Olver, LSE.
Title: A strongly polynomial algorithm for the minimum-cost generalized flow problem
When: December 18, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We give a strongly polynomial algorithm for minimum cost generalized flow, and hence for optimizing any linear program with at most two non-zero entries per row, or at most two non-zero entries per column. Our result can be viewed as progress towards understanding whether all linear programs can be solved in strongly polynomial time, also referred to as Smale's 9th problem.
Our approach is based on the recent primal-dual interior point method (IPM) due to Allamigeon, Dadush, Loho, Natura and Végh (FOCS '22). They show that the number of iterations needed by the IPM can be bounded in terms of the straight line complexity of the central path; roughly speaking, this is the minimum number of pieces of any piecewise linear curve that multiplicatively approximates the central path. As our main contribution, we show that the straight line complexity of any minimum cost generalized flow instance is polynomial in the number of arcs and vertices.
Joint work with Daniel Dadush, Zhuan Khye Koh, Bento Natura and László Végh.
Speaker: Svenja Griesbach, CMM.
Title: Optimizing Throughput and Makespan of Queuing Systems by Information Design.
When: November 13, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We study the optimal provision of information for two natural performance measures of queuing systems: throughput and makespan. A set of parallel links (queues) is equipped with deterministic capacities and stochastic travel times where the latter depend on a realized scenario, and the number of scenarios is assumed to be constant. A continuum of flow particles (users) arrives at the system at a constant rate. A system operator knows the realization of the scenario and may (partially) reveal this information via a public signaling scheme to the flow particles. Upon arrival, the flow particles observe the signal issued by the system operator, form an updated belief about the realized scenario, and decide on a link to use. Inflow into a link exceeding the link's capacity builds up in a queue that increases the travel time on the link. Dynamic inflow rates are in a Bayesian dynamic equilibrium when the expected travel time along all links with positive inflow is equal at every point in time. For a given time horizon T, the throughput induced by a signaling scheme is the total volume of flow that leaves the links in the interval [0,T]. We show that the public signaling scheme that maximizes the throughput may involve irrational numbers and provide an additive polynomial time approximation scheme (PTAS) that approximates the optimal throughput up to an arbitrary additive constant \epsilon > 0. The algorithm solves a Lagrangian dual of the signaling problem with the Ellipsoid method whose separation oracle is implemented by a cell decomposition technique. We also provide a multiplicative fully polynomial time approximation scheme (FPTAS) that does not rely on strong duality and, thus, also allows computing the optimal signals. It uses a different cell decomposition technique together with a piecewise convex under-estimator of the optimal value function.
Finally, we consider the makespan of a Bayesian dynamic equilibrium which is defined as the last point in time when a total given value of flow leaves the system. Using a variational inequality argument, we show that full information revelation is a public signaling scheme that minimizes the makespan.
This is joint work with Max Klimm, Philipp Warode, and Theresa Ziemke.
Speaker: Mirabel Mendoza, CMM.
Title: Inverse problems in combinatorial optimization.
When: October 30, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In combinatorial optimization problems, we are given a family of feasible sets and a cost function, and the goal is to find a minimum-cost feasible set among all. On the other hand, an inverse problem aims to find new costs that make a given solution optimal while minimizing the distance between the original and the new costs. In this talk, I'll present Newton-type algorithms for inverse problems where we measure the distance between the cost functions by the infinity norm and the span function. Additionally, I'll present our min-max results. This is joint work with Kristóf Bérczi and Kitti Varga.
Speaker: Jérôme De Boeck, University of Liège.
Title: Bidding problems in European deregulated electricity markets.
When: October 23, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In this talk, we consider a bidding problem in European deregulated electricity markets. The goal of this problem is to maximize the profit of a Generation Company (GC) by choosing the best possible bids to propose to a Market Operator (MO), whose task is to minimize the daily price of electricity for the retailers. This problem has many challenging features, such as unit commitment for the GC, a market clearing system for the MO, and demand and production capacities subject to increasing uncertainty due to renewable energies. Most studies in the literature solve these problems separately in order to build the best possible bidding strategy. The focus of our studies of bidding problems is to build a model integrating all features together, so as to be able to prove the optimality of the solutions found.
Two variants of a general bidding problem are considered, one focusing on integrating uncertainty, the other on integrating a complete unit commitment model. We present general observations on the structure of a bidding problem as well as specific observations for both problems. These lead us to two different solution methodologies, one based on dynamic programming, the other on a classical bilevel-problem reformulation through KKT conditions. The observations made for both problems allow us to significantly improve previous formulations presented in the literature in terms of solution quality and solving times.
Speaker: Arturo Merino, UOH.
Title: Set Selection with Uncertain Weights: Non-Adaptive Queries and Thresholds.
When: October 16, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We study set selection problems where the weights are uncertain. For each element, instead of its exact weight, only an uncertainty interval containing the true weight is available.
In some cases, some solutions are universally optimal; i.e., they are optimal for every weight that lies within the uncertainty intervals.
However, it may be that no universally optimal solution exists, unless we are revealed additional information on the precise values of some elements.
In the minimum cost admissible query problem, we are tasked to (non-adaptively) find a minimum-cost subset of elements that, no matter how they are revealed, guarantee the existence of a universally optimal solution.
This belongs to the setting of explorable uncertainty; while there is a significant body of work in the adaptive setting, non-adaptive versions, such as the one considered here, are far less understood.
We provide efficient algorithms for finding minimum-cost admissible queries in the settings of minimum spanning trees, matroids, and matchings in trees; and NP-hardness results in the settings of s-t shortest paths and bipartite matching.
We obtain our results by introducing thresholds under uncertainty. Roughly speaking, for every element e, there is a threshold for its weight, below which e is included in all optimal solutions, and a second threshold above which e is excluded from all optimal solutions. We show that computing thresholds and finding minimum-cost admissible queries are essentially equivalent problems, thus reducing the analysis of the minimum admissible query problem to the problem of computing thresholds.
(Joint work with Christoph Dürr, José Soto, and José Verschae)
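To make the threshold idea concrete, here is a toy sketch for the simplest selection problem, choosing the k cheapest elements, which is not one of the settings from the talk; the classification rule and tie-breaking below are my own simplification:

```python
def classify(intervals, k):
    """For the problem of selecting the k minimum-weight elements,
    classify each element given only its uncertainty interval.
    Returns a dict mapping element -> 'in' / 'out' / 'uncertain'."""
    result = {}
    for e, (lo_e, hi_e) in intervals.items():
        others = [iv for f, iv in intervals.items() if f != e]
        # Worst case for e: its weight is hi_e, everyone else at their lo.
        beats_e = sum(1 for lo_f, _ in others if lo_f < hi_e)
        # Best case for e: its weight is lo_e, everyone else at their hi.
        beaten_by = sum(1 for _, hi_f in others if hi_f <= lo_e)
        if beats_e < k:
            result[e] = 'in'         # below its threshold: in every optimum
        elif beaten_by >= k:
            result[e] = 'out'        # above its threshold: in no optimum
        else:
            result[e] = 'uncertain'  # a query may be needed here
    return result

print(classify({'a': (1, 2), 'b': (3, 4), 'c': (5, 6)}, k=2))
# -> {'a': 'in', 'b': 'in', 'c': 'out'}
```

In this instance a universally optimal solution already exists without further queries: take the two elements classified as 'in'.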
Speaker: Alexander Kozachinskiy, CENIA.
Title: Constructions of circuits for the majority function.
When: September 25, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We will consider the task of computing the majority function by Boolean circuits. This function has logarithmic-depth circuits. Moreover, this remains true when circuits consist of just binary ANDs and ORs, with no negations. However, in this regime, no simultaneously explicit and "simple" construction is known (with "simple" being an informal property, referring to the subjective expositional complexity of a construction). In the talk, I will present a small piece of progress towards obtaining such a construction, and I will probably explain some classical constructions along the way.
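The smallest case of such a negation-free construction is the majority of three bits; the sketch below also hints at why explicit constructions for larger n are delicate, since the obvious recursion fails:

```python
from itertools import product

def maj3(x, y, z):
    # Monotone formula: binary ANDs (&) and ORs (|) only, no negations.
    return (x & y) | (y & z) | (x & z)

# Check against the definition of majority on all 8 inputs.
for bits in product([0, 1], repeat=3):
    assert maj3(*bits) == int(sum(bits) >= 2)

# The naive recursion is NOT majority on 9 inputs: with 4 ones placed
# as below, recursive majority-of-3 answers 1 although the majority is 0.
groups = [(1, 1, 0), (1, 1, 0), (0, 0, 0)]
assert maj3(*(maj3(*g) for g in groups)) == 1
assert sum(sum(g) for g in groups) < 5
```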
Speaker: Jongmin Lee, Seoul National University.
Title: Convergence Analysis of Davis-Yin Splitting via Scaled Relative Graphs
When: August 28, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Davis-Yin splitting (DYS) has found a wide range of applications in optimization, but its linear rates of convergence have not been studied extensively. The scaled relative graph (SRG) simplifies the convergence analysis of operator splitting methods by mapping the action of the operator onto the complex plane, but the prior SRG theory did not fully apply to the DYS operator. In this work, we formalize an SRG theory for the DYS operator and use it to obtain tighter contraction factors.
Speaker: Corinna Mathwieser, RWTH Aachen.
Title: Precedence Constraint Matching.
When: August 21, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In the precedence-constrained perfect matching problem, one needs to incrementally build a matching, whereby the order in which edges join the matching is subject to precedence constraints. Given a graph G = (V, E), a precedence constraint is a pair (X, e) with e being an edge and X a set of vertices, meaning that e may only be added to the matching after covering at least one vertex in X. In this talk, I will introduce C-canonical precedence constraints, where an edge may join a matching if both end-vertices have (shortest path) distance at most C to the current matching. I will talk about the complexity of the problem for different values of C and for different graph classes and present some open questions.
Speaker: Andreas Göbel, U Potsdam.
Title: Analysis of the survival time of the SIRS process via expansion.
When: June 19, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
We study the SIRS process, a continuous-time Markov chain modelling the spread of infections on graphs. In this model, vertices are either susceptible, infected, or recovered. Each infected vertex becomes recovered at rate 1 and infects each of its susceptible neighbors independently at rate $\lambda$, and each recovered vertex becomes susceptible at a rate $\rho$, which we assume to be independent of the graph size. A central quantity of the SIRS process is the time until no vertex is infected, known as the survival time. Surprisingly, to the best of our knowledge, all known rigorous theoretical results immediately carry over from the related SIS model and do not completely explain the behaviour of the SIRS process.
We address this imbalance by conducting theoretical analyses of the SIRS process via the expansion properties of the underlying graph.
Our first result shows that the expected survival time of the SIRS process on stars is at most polynomial in the graph size for any value of $\lambda$. This behaviour is fundamentally different from the SIS process, where the expected survival time is exponential already for small infection rates. This raises the question of which graph properties result in an exponential survival time. Our main result is an exponential lower bound on the expected survival time of the SIRS process on expander graphs. Specifically, we show that on expander graphs $G$ with $n$ vertices, degree close to $d$, and sufficiently small spectral expansion, the SIRS process has expected survival time at least exponential in $n$ when $\lambda \geq c/d$ for a constant $c > 1$. Previous results on the SIS process show that this bound is almost tight. Additionally, our result holds even if $G$ is a subgraph. Notably, our result implies an almost-tight threshold for Erdős–Rényi graphs and a regime of exponential survival time for complex network models. The proof of our main result draws inspiration from Lyapunov functions used in mean-field theory to devise a two-dimensional potential function, and applies a negative-drift theorem to show that the expected survival time is exponential.
This is joint work with Tobias Friedrich, Nicolas Klodt, Martin Krejca and Marcus Pappik.
Speaker: Debmalya Panigrahi, Duke U.
Title: Learning-augmented Assignment: Santa Claus does Load Balancing.
When: June 12, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Assignment problems are among the most well-studied in online algorithms. In these problems, a sequence of items arriving online must be assigned among a set of agents so as to optimize a given objective. This encompasses scheduling problems for minimizing makespan, p-norms, and other objectives, as well as fair division problems such as the Santa Claus problem and Nash welfare maximization. One common feature is that many of these problems are characterized by strong worst-case lower bounds in the online setting. To circumvent these impossibility results, recent research has focused on using additional (learned) information about the problem instance and this has led to dramatic improvements in the competitive ratio over the worst case. In this talk, I will first survey some of this literature (Lattanzi et al., SODA 20; Li and Xian, ICML 21; Banerjee et al., SODA 22; Barman et al., AAAI 22) that addresses specific problems in this domain. I will then proceed to describe recent work with Ilan Cohen that brings these problems under one umbrella: we give a single algorithmic framework for learning-augmented online assignment for a large class of maximization and minimization objectives.
Speaker: Claudio Telha, U de los Andes.
Title: A sequential Stackelberg game for dynamic inspection problems.
When: May 29, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
This talk considers a game where an inspection authority must verify that a set of operators adhere to a certain rule. The inspector has time to inspect only a few operators, and this must be done sequentially. The operators disclose the sequence of inspections as they occur.
We will discuss variants of this sequential game. The talk focuses on the mathematical structure of the set of equilibria of this inspection game, where the inspector is a Stackelberg leader and is capable of performing exactly two inspections. Static and dynamic versions of the game are analyzed. In the static game, the inspection paths are the solutions to a transportation problem. This equivalence is then used to determine an explicit solution. We discuss how the static and dynamic games relate to each other. In particular, we show they lead to the same utility value for the inspector.
Most of the content of this talk is based on joint work with Cristóbal Guzmán, Javiera Riffo, and Mathieu Van Vyve, and was published in EJOR.
Speaker: Javier Marinkovic, U Chile.
Title: Attention is Turing Complete.
When: May 22, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Alternatives to recurrent neural networks, in particular architectures based on self-attention, are gaining momentum for processing input sequences. In spite of their relevance, the computational properties of such networks have not yet been fully explored. We study the computational power of the Transformer, one of the most paradigmatic architectures exemplifying self-attention. We show that the Transformer with hard attention is Turing complete, based exclusively on its capacity to compute and access internal dense representations of the data. Our study also reveals some minimal sets of elements needed to obtain this completeness result.
Speaker: Andrés Cristi, CMM, Uchile.
Title: Prophet Inequalities Require Only a Constant Number of Samples
When: May 15, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In a prophet inequality problem, n independent random variables are presented to a gambler one by one. The gambler decides when to stop the sequence and obtains the most recent value as a reward. We evaluate a stopping rule by the worst-case ratio between its expected reward and the expectation of the maximum variable. Because of its connections with resource allocation and posted-price mechanisms, this problem has been intensively studied, and several variants have been introduced. While most of the literature assumes that distributions are known to the gambler, recent work has considered the question of what is achievable when the gambler has access only to a few samples per distribution. We provide a unified proof that for all three major variants of the single-selection problem (i.i.d., random-order, and free-order), a constant number of samples (independent of n) for each distribution is good enough to approximate the optimal ratios. Prior to our work, this was known to be the case only in the i.i.d. variant.
Previous works relied on explicitly constructing sample-based algorithms that match the best possible ratio. Remarkably, the optimal ratios for the random-order and the free-order variants with full information are still unknown. Consequently, our result requires a significantly different approach than for the classic problem and the i.i.d. variant, where the optimal ratios and the algorithms that achieve them are known. A key ingredient in our proof is an existential result based on a minimax argument, which states that there must exist an algorithm that attains the optimal ratio and does not rely on the knowledge of the upper tail of the distributions.
Based on joint work with Bruno Ziliotto that will appear in STOC'24.
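For a flavor of what a sample-based rule looks like (a folklore single-sample baseline studied in prior work, not the algorithm from this talk): draw one sample from each distribution, set the threshold to the largest sample, and accept the first fresh value that exceeds it. A quick Monte Carlo sketch on a made-up instance of uniform distributions:

```python
import random

def one_sample_threshold(dists, rng):
    """One round: draw one sample per distribution, set the threshold to
    the largest sample, then accept the first fresh value above it."""
    threshold = max(d(rng) for d in dists)
    values = [d(rng) for d in dists]
    reward = next((v for v in values if v > threshold), 0.0)
    return reward, max(values)

rng = random.Random(0)
# Hypothetical instance: uniform distributions with different scales.
dists = [lambda r, s=s: r.uniform(0, s) for s in (1.0, 1.5, 2.0, 2.5, 3.0)]
total_reward = total_max = 0.0
for _ in range(20000):
    reward, best = one_sample_threshold(dists, rng)
    total_reward += reward
    total_max += best
ratio = total_reward / total_max
print(ratio)  # empirical ratio against the prophet's expected maximum
```

On this instance the empirical ratio comfortably clears one half; the point of the talk is that a constant number of samples suffices to approach the optimal ratios of all three single-selection variants.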
Speaker: Alexander Kozachinskiy, UC.
Title: Aggregating opinions -- do we need (Kolmogorov) complexity?
When: May 8, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
Imagine a group of people that need each day to make some collective decision (for the entire group), choosing one of two options A and B. Some people prefer A, some prefer B, and others are OK with any of the two options. (The next day the preferences may change arbitrarily, and a new group choice is made.) The group management needs to choose A or B, in both cases making some people unhappy. This cannot be avoided completely (if someone wants A and someone else wants B, one of them will be unhappy), so the goal is more modest: each person should not be unhappy too many times.
Even this modest goal may be hard to achieve: if one person always wants A, and another person always wants B, then one of them will be unhappy at least T/2 times during a T-day period. To ensure that this kind of obstacle does not appear, let us make an additional assumption. We say that two people are in a conflict on some day d if one of them wants A, and the other one wants B. (The people that are OK with both options do not conflict with anyone.) The additional assumption: for every pair of people there are O(1) days where they are in a conflict.
We show that there exists a strategy which makes every person unhappy at most sqrt{T} * polylog(N) times, where N is the size of the group. This bound is optimal, but our proof is very strange and somehow follows from inequalities about Kolmogorov complexity. In particular, it does not give an efficient explicit strategy. Kolmogorov complexity is often used as a tool to prove combinatorial results but, in most cases, one can reformulate the argument in terms of counting, or probabilities, or Shannon entropies, or at least find an alternative purely combinatorial argument. We do not know how to do this in this case. So far, using a combinatorial argument, we only obtained a worse bound T^{2/3} * polylog(N) by analyzing the multiplicative weight algorithm.
Joint work with Alexander Shen and Tomasz Steifer.
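The multiplicative weights algorithm mentioned at the end can be sketched in its generic experts form (the Hedge update); this is the standard textbook algorithm on synthetic losses, not the paper's specific strategy:

```python
import math
import random

def hedge(loss_rows, eta):
    """Multiplicative weights (Hedge): keep a weight per expert, incur
    the weighted-average loss each round, then multiply each weight by
    exp(-eta * that expert's loss)."""
    n = len(loss_rows[0])
    weights = [1.0] * n
    total_loss = 0.0
    for losses in loss_rows:
        s = sum(weights)
        total_loss += sum(w * l for w, l in zip(weights, losses)) / s
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss

rng = random.Random(1)
T, n = 2000, 8
rows = [[rng.random() for _ in range(n)] for _ in range(T)]  # losses in [0,1]
eta = math.sqrt(math.log(n) / T)
alg_loss = hedge(rows, eta)
best_expert = min(sum(r[i] for r in rows) for i in range(n))
regret = alg_loss - best_expert
print(regret)  # Hedge guarantees regret O(sqrt(T log n))
```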
Speaker: Philipp Pabst, RWTH Aachen, Germany.
Title: Tariff Zone Assignment: Complexity Results and Solution Approaches
When: Apr 24, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In the Tariff Zone Assignment problem, an operator is tasked with partitioning a transport network (i.e., a graph) into tariff zones in order to maximize revenue under a linear pricing model. In this model, customers pay a price proportional to the number of zones traveled through, up to some maximum accepted price. If this price is exceeded, customers drop out of the model and thus generate no revenue at all.
In this talk, first we will look at the complexity of this problem and show APX-hardness on star graphs and strong NP-hardness on paths. These complexity results motivate the introduction of a variant of the problem, which we will show to be polynomial time solvable when restricted to paths. Finally, we will discuss an integer programming approach to the original formulation of the problem.
This is joint, ongoing work with Lennart Kauther, Sven Müller and Britta Peis.
Speaker: Nishant Mehta, University of Victoria, Canada.
Title: Structured bandits meets classification: the thresholded linear bandits problem
When: Apr 24, 2:00pm.
Where: Sala de Seminario, 6th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In this talk, I’ll introduce the thresholded linear bandit problem, a novel sequential decision making problem at the interface of structured stochastic multi-armed bandits and learning halfspaces. In this problem, the set of arms is [0, 1]^d, the arms follow an unknown linear utility model, and the Bernoulli revenue of an arm has a success probability based on an unknown thresholding of the utility. Arms are also associated with a known linear cost. The goal of a learning algorithm is to sequentially play arms so as to maximize cumulative reward. This problem is motivated by several practical applications. For instance, imagine tuning the continuous features of an offer to a consumer; higher values incur higher cost to the vendor but result in a more attractive offer. At some threshold, the offer is attractive enough for a random consumer to accept at the higher probability level. Another motivation is selling perishable goods of tunable quality to consumers, some of whom have an outside option. For the one-dimensional case, I’ll present Leftist, which enjoys $\log^2 T$ problem-dependent regret in favorable cases and has $\log(T) \sqrt{T}$ worst-case regret; I'll also give a lower bound suggesting this is unimprovable. I’ll then sketch MD-Leftist, our algorithm for the multi-dimensional case. This algorithm obtains similar regret bounds but with $d^2 \log d$ and $d^{1.5} \log d$ dependence on dimension for the two types of bounds respectively. This work was presented at AISTATS 2023 and is joint with Junpei Komiyama, Vamsi Potluru, Andrea Nguyen, and Mica Grant-Hagen.
Speaker: Vasilis Livanos, U Chile.
Title: Oracle-Augmented Prophet Inequalities
When: Apr 17, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In the top-1-of-k prophet inequality setting, a gambler is given a sequence of n random variables X_1, ..., X_n, taken from known distributions, observes their values in an adversarial order, and selects up to k of them, immediately after they are observed, so that the top value selected is as high as possible, comparing against a prophet who always selects the maximum realization. For k = 1, one recovers the classical prophet inequality, and thus this model serves as a generalization.
In this talk, we look at a new approach to solve the top-1-of-k model. We generalize the standard prophet inequality for k = 1, allowing the gambler some additional information about the future that is otherwise privy only to the prophet. Specifically, at any point in the process, the gambler is allowed to query an oracle O. The oracle responds with a single bit answer: YES, if the current realization is the largest of the remaining realizations, and NO otherwise. We show that the oracle model with k oracle calls is equivalent to the top-1-of-(k+1) model when the objective is maximizing the probability of selecting the maximum. This equivalence fails to hold when the objective is maximizing the competitive ratio, but we still show that any algorithm for the oracle model implies an equivalent competitive ratio for the top-1-of-(k+1) model.
We resolve the oracle model for any k, giving almost tight lower and upper bounds on the best possible competitive ratio compared to an almighty adversary. As a consequence, we provide new results as well as improvements on known results for the top-1-of-(k+1) model.
Speaker: Víctor Verdugo, UC.
Title: Linear programming hierarchies: Some limitations and strengths
When: Apr 10, 3:00pm.
Where: Sala de Seminario von Neuman, 7th floor, CMM, Av. Beauchef 851, torre norte.
Abstract:
In this talk, I will give an introduction to linear programming hierarchies, with a particular focus on the hierarchy by Sherali & Adams. I will discuss some limitations of the tool, but also ways to get useful and stronger relaxations.
Speaker: Ricardo Baeza-Yates, Institute for Experiential AI @ Northeastern University & DCC, Univ. de Chile.
Title: The Limitations of Machine Learning & Us
When: Mar 25, 3:30pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Machine learning (ML), particularly deep learning, is being used everywhere. However, it is not always used well, ethically, and scientifically. In this talk we first take a deep dive into the limitations of supervised ML and of data, its key component. We cover small data, datification, bias, predictive optimization issues, evaluating success instead of harm, and pseudoscience, among other problems. The second part is about our own limitations in using ML, including different types of human incompetence: cognitive biases, unethical applications, lack of administrative competence, copyright violations, misinformation, and the impact on mental health. In the final part we discuss regulation of the use of AI and responsible AI principles that can mitigate the problems outlined above.
Short Bio:
Ricardo Baeza-Yates is Director of Research at the Institute for Experiential AI of Northeastern University, as well as part-time professor at the Dept. of Computer Science of the University of Chile. Before that, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-seller Modern Information Retrieval, the textbook published by Addison-Wesley in 1999 and 2011 (2nd ed.) that won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and between 2012 and 2016 he was elected to the ACM Council. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, and his areas of expertise are responsible AI, web search and data mining, plus data science and algorithms in general.
Speaker: Tomasz Steifer, PUC.
Title: Online learning with consistency oracle
When: Nov 19, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
A binary classification game is being played between a learner and an adversary. At each round, the adversary selects a natural number x and the learner has to predict its label h(x). Is there a bound on the number of mistakes made by the learner, assuming that h comes from some class H? Littlestone gave a learning algorithm which achieves an optimal mistake bound d, provided H has a certain combinatorial property. However, Littlestone's algorithm is believed to be computationally demanding. Can we have a feasible strategy for the learner and if so, how many mistakes would it make? I will introduce such a strategy, which is fast (for finite classes) and achieves an exponential (in d) mistake bound. The algorithm was devised together with Sasha Kozachinskiy.
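For intuition on mistake bounds when the finite class can be enumerated, here is the classical halving algorithm (the talk's point is precisely that one only has a consistency oracle, so the explicit enumeration below is not available there); the threshold class is a made-up example:

```python
def halving_learner(H, stream):
    """Classical halving algorithm: keep the version space of hypotheses
    consistent with the labels seen so far and predict by majority vote.
    Each mistake at least halves the version space, so the number of
    mistakes is at most log2(|H|)."""
    version_space = list(H)
    mistakes = 0
    for x, true_label in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != true_label:
            mistakes += 1
        version_space = [h for h in version_space if h(x) == true_label]
    return mistakes

# Made-up finite class: threshold functions on {0, ..., 16}.
H = [lambda x, t=t: 1 if x >= t else 0 for t in range(17)]
target = H[9]
stream = [(x, target(x)) for x in [0, 15, 7, 11, 9, 8, 10, 3, 12]]
print(halving_learner(H, stream))  # at most log2(17), i.e. at most 4
```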
Speaker: Felipe Subiabre, DII, U Chile.
Title: Optimal Disease Screening Policies Under Budget Constraints
When: Nov 15, 3:00pm.
Where: Sala Multimedia, 6to piso, CMM, Beauchef 851, torre norte.
Abstract:
We study the problem of selecting screening policies for a given disease in a large population that is divided into risk groups, and where we have a measure of benefit and cost for each possible policy. We want to find a selection of policies, one for each risk group, so that the total benefit is maximized and a budget constraint per time period is satisfied.
To that end we start from an individual base model that accounts for the probabilistic appearance and evolution of the disease along stages. Our main result is an ergodic-like theorem which allows us to calculate the expected costs and benefits for each policy. This result is applicable to a rich class of individual models (discrete and continuous-time semi-Markov processes) currently used in the literature.
The presentation is grounded in the application to cancer screening, but can be extended to other non-contagious diseases. We will show examples of some individual models and how our methodology allows for the efficient evaluation and optimization of large families of policies. If time permits we will also discuss the relation between our methodology and some other approaches (Markov decision processes, flowgraph models), and some future work.
Speaker: Maximilian Fichtl, DII, U Chile
Title: Computing Bayes-Nash Equilibria in Auctions via Online Learning
When: Nov 08, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Auction games are a central application of game theory in economics. However, while explicit formulas for Bayes-Nash equilibria are known for particular auction settings, we generally have to rely on numerical methods to predict the outcomes of auctions. We present a simple class of learning algorithms to approximate Bayes-Nash equilibria in these games. Surprisingly, while recent theoretical results suggest that such algorithms perform poorly in general matrix games, we provide experimental evidence that they perform very well in many different auction scenarios.
Despite the positive experimental results, proving the convergence of these algorithms remains an open problem, even in very basic settings. We provide some insights and preliminary results towards answering this question.
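The flavor of such learning dynamics can be illustrated with a toy, self-contained sketch (my own simplification, not the speaker's method): two symmetric bidders with uniform discrete values in a first-price auction, each running a multiplicative-weights update over a grid of bid-shading factors; in the continuous analogue the Bayes-Nash equilibrium is to bid half one's value.

```python
import math

# Toy first-price auction: each bidder bids alpha * value.
ALPHAS = [i / 20 for i in range(1, 21)]   # candidate shading factors
VALUES = [i / 10 for i in range(1, 11)]   # private values, uniform

def expected_utility(a1, a2):
    """Bidder 1's expected utility bidding a1*v1 against a2*v2."""
    total = 0.0
    for v1 in VALUES:
        for v2 in VALUES:
            b1, b2 = a1 * v1, a2 * v2
            win = 1.0 if b1 > b2 else 0.5 if b1 == b2 else 0.0
            total += (v1 - b1) * win
    return total / len(VALUES) ** 2

# Precompute the utility matrix once.
U = [[expected_utility(a1, a2) for a2 in ALPHAS] for a1 in ALPHAS]

def self_play(rounds=300, eta=1.0):
    """Symmetric self-play with multiplicative-weights updates over the
    discretized strategies; returns the most-weighted shading factor."""
    w = [1.0] * len(ALPHAS)
    for _ in range(rounds):
        s = sum(w)
        mix = [wi / s for wi in w]
        u = [sum(p * U[i][j] for j, p in enumerate(mix))
             for i in range(len(ALPHAS))]
        w = [wi * math.exp(eta * ui) for wi, ui in zip(w, u)]
    return ALPHAS[max(range(len(w)), key=w.__getitem__)]

alpha_star = self_play()
print(alpha_star)  # most-weighted shading factor after self-play
```

Under these assumptions one expects the weights to concentrate on a shading factor well below 1 (bidding truthfully yields zero utility in a first-price auction) and roughly near the equilibrium of the continuous analogue.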
Speaker: Alba Olivares Nadal, UNSW Business School
Title: Optimal Patient Selection into Care Management Programs
When: October 25, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Care Management Programs (CMPs) coordinate the care for patients with complex care needs and older frail adults, who usually represent the top of healthcare spending. Although CMPs have appeared as credible avenues for reducing healthcare utilization, empirical evidence has shown mixed results. Using patient-level data, we evaluate the causal impact of the CMP of a major academic medical center, and we find no impact on five healthcare utilization measures. In light of these negative results, one wonders how CMPs can be improved. To address this question, we use Markov Decision Processes (MDPs) and dynamic programming to model the task of optimally allocating treatment amongst patients while fulfilling some capacity constraints. The complexity of such a problem may be very high because healthcare populations may be large enough that gathering information on the current status of each patient and tracking the evolution of their covariates is untenable. To address this challenge we develop the so-called measurized theory, which allows modeling MDPs that optimize the distribution of treated and untreated patients instead of dealing with identified patients. This abstraction transforms a complicated problem into an intuitive formulation and sets the stage for delivering clinically implementable solutions in the future.
Speaker: Gonzalo Muñoz, Universidad de O'Higgins
Title: Constructing separating hyperplanes for non-convex quadratic problems
When: October 11, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
In 1971, Balas introduced the intersection cut framework as a method for generating separating hyperplanes (or "cuts") in integer optimization. These cuts are derived from convex S-free sets, and inclusion-wise maximal S-free sets yield the strongest intersection cuts. When S is a lattice, maximal S-free sets are well-studied from theoretical and computational standpoints. In this talk, we focus on the case when S is defined by a non-convex quadratic inequality and show how to construct basic maximal quadratic-free sets. Additionally, we explore how to generalize the basic procedure to construct a plethora of new maximal quadratic-free sets for homogeneous quadratics. Joint work with Antonia Chmiela, Joseph Paat, and Felipe Serrano.
Speaker: Jasper van Doornmalen, Eindhoven University.
Title: A Unified Framework for Symmetry Handling
When: October 4, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Discrete optimization problems are often solved using constraint programming or mixed-integer programming techniques, based on enumeration or branch-and-bound. It is well known that if the problem formulation is very symmetric, there may exist many symmetrically equivalent solutions to the problem. Without handling the symmetries, traditional solving methods have to check many symmetric parts of the solution space, which comes at a high computational cost.
Handling symmetries in optimization problems is thus essential for devising efficient solution methods. In this presentation, we present a general framework that captures many of the existing symmetry handling methods. While these methods are mostly discussed independently from each other, our framework allows different symmetry handling methods to be applied simultaneously and thus to outperform their individual effects. Moreover, most existing symmetry handling methods only apply to binary variables. Our framework allows these methods to be easily generalized to general variable types. Numerical experiments confirm that our novel framework is superior to the state-of-the-art methods implemented in the solver SCIP.
Speaker: Cristian Palma, DIM, U Chile
Title: Load Balancing Algorithms on Scheduling Problems with a Concave Function of the Load
When: September 27, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
In load balancing scheduling problems, n jobs are to be assigned to m machines. Each job has its own processing time, and we define the load of a machine as the sum of the processing times of all jobs assigned to it.
In this work, the machines have a concave non-decreasing function (accessed through a value oracle) applied to their loads and our goal is to maximize the sum of these valuations.
This setting models a productivity maximization of machines where we aim to allocate indivisible resources, such as fuel tanks.
We prove the existence of a PTAS for the offline version, which is the best possible as the underlying decision problem is strongly NP-hard.
We also explore the online version, where jobs arrive one by one revealing their processing time and must be allocated immediately. We first show that the List Scheduling greedy algorithm is 3/4-competitive in adversarial order. Additionally, we provide an upper bound of ≈ 0.809 (ϕ/2 where ϕ is the golden ratio) for the competitive ratio of any algorithm in this context. For the special case of only two machines, we present an algorithm that matches the given upper bound.
This is joint work with José Soto.
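As a minimal illustration of the List Scheduling greedy discussed above (function names are ours, and we assume a single concave non-decreasing valuation f shared by all machines; for such an f, a least-loaded machine also maximizes the marginal gain f(load + p) − f(load)):

```python
import heapq

def list_scheduling(jobs, m, f):
    """Assign each arriving job to a currently least-loaded machine and
    return the total valuation sum(f(load_i)).  For a common concave
    non-decreasing f, the least-loaded machine also maximizes the
    marginal gain f(load + p) - f(load)."""
    loads = [0.0] * m
    heap = [(0.0, i) for i in range(m)]  # (load, machine) pairs
    for p in jobs:
        load, i = heapq.heappop(heap)
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return sum(f(l) for l in loads)
```

With the identity valuation the objective is just the total processing time, independent of the assignment; a strictly concave f such as the square root is what makes balancing the loads matter.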
Speaker: Boris Epstein, GSB, Columbia University
Title: Selection and Ordering Policies for Hiring Pipelines via Linear Programming
When: August 16, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Motivated by hiring pipelines, we study three selection and ordering problems in which applicants for a finite set of positions must be interviewed or made offers to. There is a finite time budget for interviewing or making offers, and a stochastic realization after each decision, leading to computationally-challenging problems. In the first problem, we study sequential interviewing and show that a computationally-tractable, non-adaptive policy that must make offers immediately after interviewing is near-optimal, assuming offerees always accept their offers. In the second problem, we assume that applicants have already been interviewed but only accept offers with some probability; we develop a computationally-tractable policy that makes offers for the different positions in parallel, which can be used even if positions are heterogeneous and is approximately optimal relative to a policy that can make the same number of offers sequentially. In the third problem, we introduce a model where the hiring firm is tightly time constrained and must send all offers simultaneously in a single time step, with the possibility of hiring over capacity at a cost; we provide nearly-tight bounds for the performance of practically motivated value-ordered policies. All in all, our paper takes a unified approach to three different hiring problems, based on linear programming. Our results in the first two problems generalize and improve the existing guarantees in the literature that were between 1/8 and 1/2 to new guarantees that are at least 1-1/e ≈ 63.2%.
Speaker: David Lurie, Paris-Dauphine University (CIFRE)
Title: Weak Ergodicity Approach to POMDPs.
When: August 09, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
This seminar introduces Partially Observable Markov Decision Processes (POMDPs), emphasizing their relationship with decision problems and decidability. We then explore a novel theoretical class of POMDPs based on a weak ergodicity property. The seminar concludes by identifying necessary conditions for POMDPs to exhibit this characteristic, bridging the gap between this theoretical result and its potential practical implementation.
Speaker: Ivan Rapaport, DIM, U Chile
Title: Energy-Efficient Distributed Algorithms for Synchronous Networks
When: June 28, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
We study the design of energy-efficient algorithms for well-known distributed models of computation: the LOCAL and CONGEST models. Specifically, as a measure of complexity, we consider the maximum, taken over all the edges, or over all the nodes, of the number of rounds at which an edge, or a node, is active in the algorithm. We first show that every Turing-computable problem has a CONGEST algorithm with constant node-activation complexity, and therefore constant edge-activation complexity as well. That is, every node (resp., edge) is active in sending (resp., transmitting) messages for only O(1) rounds during the whole execution of the algorithm. In other words, every Turing-computable problem can be solved by an algorithm consuming the least possible energy. In the LOCAL model, the same holds obviously, but with the additional feature that the algorithm runs in O(poly(n)) rounds in n-node networks. However, we show that insisting on algorithms running in O(poly(n)) rounds in the CONGEST model comes with a severe cost in terms of energy. Namely, there are problems requiring Ω(poly(n)) edge-activations (and thus Ω(poly(n)) node-activations as well) in the CONGEST model whenever solved by algorithms bounded to run in O(poly(n)) rounds. Finally, we demonstrate the existence of a sharp separation between the edge-activation complexity and the node-activation complexity in the CONGEST model, for algorithms bounded to run in O(poly(n)) rounds. Specifically, under this constraint, there is a problem with O(1) edge-activation complexity but Ω(n^{1/4}) node-activation complexity.
(Joint work with Pierre Fraigniaud, Pedro Montealegre and Ioan Todinca)
Speaker: Paul Gölz
Title: In this apportionment lottery, the House always wins
When: June 22, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Apportionment is the problem of distributing h indivisible seats across states in proportion to the states' populations. In the context of the US House of Representatives, this problem has a rich history and is a prime example of interactions between mathematical analysis and political practice. Grimmett suggested to apportion seats in a randomized way such that each state receives exactly their proportional share qᵢ of seats in expectation (ex ante proportionality) and receives either ⌊qᵢ⌋ or ⌈qᵢ⌉ many seats ex post (quota). However, there is a vast space of randomized apportionment methods satisfying these two axioms, and so we additionally consider prominent axioms from the apportionment literature. Our main result is a randomized method satisfying quota, ex ante proportionality and house monotonicity — a property that prevents paradoxes when the number of seats changes and which we require to hold ex post. This result is based on a generalization of dependent rounding on bipartite graphs, which we call cumulative rounding and which might be of independent interest, as we demonstrate via applications beyond apportionment.
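Grimmett's randomized scheme mentioned above can be implemented as systematic rounding with a single uniform draw. The sketch below is illustrative (naming is ours) and assumes the quotas sum to the integer house size h; it is Grimmett's baseline method, not the cumulative-rounding generalization of the talk:

```python
import math
import random

def grimmett_apportionment(quotas, rng=random):
    """Systematic rounding: one uniform draw u rounds every state's quota
    q_i to floor(q_i) or ceil(q_i) (quota), with expectation exactly q_i
    (ex ante proportionality).  Assumes sum(quotas) is the integer house
    size h, so the seats always total h."""
    u = rng.random()
    c = 0.0                       # running cumulative quota
    prev = math.floor(c + u)
    seats = []
    for q in quotas:
        c += q
        nxt = math.floor(c + u)   # how many "grid points" c + u has passed
        seats.append(nxt - prev)
        prev = nxt
    return seats
```

Each state's seat count depends only on where the shifted cumulative sums fall, which is what yields both quota and ex ante proportionality simultaneously.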
Speaker: Andrés Cristi, CMM, U Chile
Title: A Constant Factor Prophet Inequality for Online Combinatorial Auctions
When: May 24, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
In online combinatorial auctions m indivisible items are to be allocated to n agents who arrive online. Agents have random valuations for the different subsets of items and the goal is to allocate the items on the fly so as to maximize the total value of the assignment. A prophet inequality in this setting refers to the existence of an online algorithm guaranteed to obtain, in expectation, a certain fraction of the expected value obtained by an optimal solution in hindsight. The study of prophet inequalities for online combinatorial auctions has been an intensive area of research in recent years, and constant factor prophet inequalities are known when the agents' valuation functions are submodular or fractionally subadditive. Despite many efforts, for the more general case of subadditive valuations, the best-known prophet inequality has an approximation guarantee of O(log log m).
We prove the existence of a constant factor prophet inequality for the subadditive case, resolving a central open problem in the area.
Our prophet inequality is achieved by a novel, but elementary, sampling idea which we call the Mirror Lemma. This lemma is essentially concerned with understanding algorithms for which the set of items that are allocated and those that are not, distribute equally. The other main ingredient is a nonstandard application of Kakutani's fixed point theorem.
Speaker: Felipe Valdevenito, DIM, U Chile
Title: Single-choice secretary problems with ordinal advice
When: May 17, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
We examine variations of the classical secretary problem in a new setting, where the decision-maker (DM) can request one (or more) bits of ordinal information from agents before they arrive. Specifically, each agent has private information (their weight) that is publicly revealed upon arrival. As usual, the DM must irrevocably decide whether to select the agent or not after this step. Additionally, at any given time, the DM can privately ask yes-or-no questions to agents who have not yet arrived, and the agents can only answer ordinal questions based on comparing their private information with the information publicly available up to that point. We refer to this additional information as ordinal advice. For example, the DM may ask whether an agent’s hidden value is higher than that of all agents who have arrived so far, but not whether that value is the largest in the entire list, as the agent has no information about the agents that have not yet arrived.
The goal is to select the agent with the highest weight from the list. We consider different ways in which the agents can arrive, such as randomly, in an adversarial order, or with a random sample of agents that are revealed beforehand. In the latter case, the DM is the one who chooses the number of applicants included in the sample, and the rest of the input is presented in an order chosen by an adversary either before or after the realization of the sample set. We explore the impact of varying the amount of ordinal advice allowed by considering the cases where the DM can ask for zero, one, or multiple rounds of ordinal advice. We present algorithms that achieve optimal competitive ratios for almost all the settings considered. Notably, we observe that ordinal advice improves the performance in every scenario, with the exception of the adversarial-order setting.
This talk is based on joint work with José Soto.
Speaker: Tjark Vredeveld, Maastricht University.
Title: Learning in scheduling: simple policies for Bayesian scheduling
When: May 10, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
We consider the problem of scheduling jobs on a single machine, non-preemptively. The goal is to minimize the total completion time of the jobs. It is well known that this problem is solved by scheduling the jobs in order of shortest processing times (SPT) when the jobs’ processing times are known, or in order of shortest expected processing times (SEPT) when we have full knowledge of the true expected values.
In this talk, we consider the stochastic scheduling problem in which we have additional uncertainty about the parameters of the distribution of the jobs’ processing times. We assume that jobs appear in $m$ classes and the processing times of jobs in one class are independently and identically distributed. By processing a job from one class one can learn about the true parameters of the distribution. The initial beliefs on these parameters are modelled as a prior distribution and after processing a job the posterior distribution models the updated beliefs on the parameter.
When the processing times are exponentially distributed with an unknown parameter that is gamma distributed, optimal policies based on Gittins-indices exist (see, e.g., Hamada and Glazebrook, 1993). However, computing these indices may be computationally challenging.
In this talk, we consider two simple policies, based on SEPT. The first policy processes the jobs based on the initial beliefs of the expected processing time. This implies that all jobs of one class are processed consecutively. The second policy, always processes a job from a class with shortest expected processing time according to the updated beliefs.
We can show that both policies have a (worst-case) approximation ratio of $m$, the number of classes, and that this ratio is tight.
This talk is based on (updated beliefs of) joint work with Sebastian Marban and Cyriel Rutten.
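To make the conjugate updates behind the second policy concrete: for exponential processing times with rate λ ~ Gamma(α, β) (rate parameterization, α > 1), observing a processing time x updates the prior to Gamma(α + 1, β + x), and the posterior expected processing time is β/(α − 1). The toy sketch below (naming is ours) replaces realized times by their predictive means, so it only illustrates the bookkeeping, not the stochastic analysis of the talk:

```python
def sept_adaptive(classes):
    """classes: list of [alpha, beta, num_jobs], one entry per job class
    with Exp(lambda) processing times and lambda ~ Gamma(alpha, beta)
    (rate parameterization, alpha > 1).  Repeatedly process a job from
    the class with the smallest posterior expected processing time
    beta / (alpha - 1), updating beliefs conjugately.  The 'observed'
    time is simulated by its predictive mean for illustration."""
    order = []
    while any(c[2] > 0 for c in classes):
        i = min((j for j, c in enumerate(classes) if c[2] > 0),
                key=lambda j: classes[j][1] / (classes[j][0] - 1))
        a, b, n = classes[i]
        x = b / (a - 1)                      # stand-in for a realized time
        classes[i] = [a + 1, b + x, n - 1]   # conjugate posterior update
        order.append(i)
    return order
```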
Speaker: Ravi Sundaram, Northeastern U.
Title: Learning to cope with adversaries (in theory) and noise (in practice)
When: May 3, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Learning theory has burgeoned since 1971 (VC-dimension by Vapnik and Chervonenkis) when the framework was established and the fundamental theorem proved. We extend the framework to model a strategic setting that allows for adversarial behavior during the test phase. As an example consider a situation where students may manipulate their application materials knowing universities' admissions criteria. We define a new parameter, SVC (Strategic VC dimension) and use it to characterize the statistical and computational learnability of the strategic linear classification problem. The practice of learning exploded in 2012 (AlexNet by Krizhevsky, Sutskever and Hinton) leading to a profusion of deep neural network (DNN) architectures and training algorithms. We consider the problem of creating tags from a noisy scanning technology (e.g. optochemical inks, magnetic microwires etc.).
Formalizing it as a coding-theoretic problem, we develop a novel neural network architecture (DANNI) and a training algorithm to solve it, with provable performance guarantees.
The theory part is joint work with Anil Vullikanti, Haifeng Xu and Fan Yao, and the practice part is joint with Akshar Varma.
Speaker: Evangelia Gergatsouli, University of Wisconsin - Madison
Title: Opening Pandora's Box: the Correlated Case
When: April 19, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Pandora's Box models optimization problems where the input consists of stochastic random variables, but the uncertainty can be removed (i.e. reveal the exact value of a variable) by paying an extra price. Our goal is to maximize (or minimize) the price of the variable chosen minus (resp. plus) the cost we paid to learn the exact instantiations of the variables. All previous work on this class of problems assumes that different random variables in the input are distributed independently. Until recently nothing was known for the case of correlated input.
In this talk I will introduce Pandora's Box problem and describe the first constant approximation algorithm for Pandora's Box with correlated distributions. I will also briefly mention a newer, simpler algorithm that directly generalizes the independent case algorithm while also improving on the approximation factor.
This is based on joint work with Shuchi Chawla, Yifeng Teng, Christos Tzamos and Ruimin Zhang.
Speaker: Ulrike Schmidt-Kraepelin, CMM.
Title: Lifting Apportionment Methods to Weighted Fair Division
When: April 12, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
We study the problem of fairly allocating indivisible items to agents with different entitlements, which captures, for example, the distribution of ministries among political parties in a coalition government. This setting (known as weighted fair division) constitutes a generalization of the well-studied apportionment problem, and we present two approaches for lifting apportionment methods to weighted fair division.
In the first part of the talk, we focus on additive valuations and show that picking sequences derived from divisor methods provide natural envy-freeness, proportionality, and monotonicity properties. However, picking sequences quickly lose their fairness guarantees when moving beyond additive valuations. In the second part of the talk, we introduce welfare measures based on harmonic numbers and show that variants of maximum weighted harmonic welfare offer strong fairness guarantees under matroid-rank valuations. Surprisingly, these guarantees are even stronger than those that are satisfied by the well-known Nash welfare rule.
This talk is based on joint work with Mithun Chakraborty, Luisa Montanari, Nicholas Teh, and Warut Suksompong.
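The picking sequences derived from divisor methods in the first part of the talk can be sketched as follows (a minimal illustration with our own naming, not the paper's exact rules): given a divisor function d, the agent maximizing weight_i / d(picks_i) takes the next item.

```python
def divisor_picking_sequence(weights, num_items, d=lambda t: t + 1):
    """Picking sequence induced by a divisor method: at each turn, the
    agent maximizing weight_i / d(picks_i) gets the next pick.
    d(t) = t + 1 gives the Jefferson/D'Hondt sequence; d(t) = t + 0.5
    gives Webster.  Ties go to the lowest-index agent."""
    picks = [0] * len(weights)
    seq = []
    for _ in range(num_items):
        i = max(range(len(weights)), key=lambda j: weights[j] / d(picks[j]))
        picks[i] += 1
        seq.append(i)
    return seq
```

For example, with entitlements 3 and 1 and four items, the Jefferson sequence lets the heavier agent pick three times before the lighter agent picks once.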
Speaker: Bruno Ziliotto, CNRS.
Title: Percolation games
When: March 22, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract:
Inspired by first-passage percolation models, we consider zero-sum games on Z^d and study their limit behavior when the game duration tends to infinity. After reviewing several fundamental results in this literature, we present a generalization and discuss connections with long-term behavior of Hamilton-Jacobi equations.
Speaker: Jannik Matuschke, KU Leuven
Title: Decomposition of Probability Marginals for Security Games in Abstract Networks and Ideal Clutters
When: March 15, 3:00pm.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Consider a set system on a finite ground set E, where each set P in the system is equipped with a required hitting probability r(P) and each element e of E has a probability marginal p(e). We study the question of whether the marginals can be decomposed into a distribution over all subsets of E such that the resulting random set intersects each set P from the system with probability at least r(P). A simple necessary condition is that for every set P in the system, the sum of the marginals of the elements in P is at least r(P).
Extending a result by Dahan, Amin, and Jaillet (Mathematics of Operations Research 47:457-484, 2022) motivated by a security game in a directed acyclic graph, we show that the aforementioned necessary condition is also sufficient in the following two settings: (i) the set system fulfills a max-flow/min-cut property and the requirements are affine; (ii) the set system is an abstract network and the requirements fulfill a weak conservation law. We provide algorithms for the computation of the corresponding decompositions and, in some cases, simple explicit descriptions of these decompositions. As a subroutine, we also devise a combinatorial algorithm for computing shortest paths in abstract networks, partially answering an open question by McCormick (SODA 1996). A consequence of our results is that equilibria for the security game studied by Dahan et al. (2022) and similar games involving randomized surveillance strategies can be computed efficiently, even when the underlying digraph contains cycles.
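The simple necessary condition above is easy to state in code; this is just an illustrative checker (naming is ours), not part of the decomposition algorithms of the talk:

```python
def necessary_condition_holds(system, p):
    """Check the simple necessary condition for decomposability:
    for every set P with requirement r(P), the sum of the marginals
    p(e) over e in P is at least r(P).
    system: list of (set_of_elements, r) pairs; p: dict element -> marginal."""
    return all(sum(p[e] for e in P) >= r - 1e-9 for P, r in system)
```

The results of the talk identify settings, such as abstract networks with a weak conservation law, where this condition is also sufficient.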
Speaker: Paul Duetting, Google Research
Title: Combinatorial Contracts
When: January 25, 3:00pm hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: An emerging frontier in Algorithmic Game Theory is Algorithmic Contract Theory, which studies the classic hidden-action principal-agent problem of contract theory through the computational lens.
In this talk, I will present three basic ways in which the problem can be combinatorial and survey both hardness and poly-time (approximation) results.
The analysis will uncover some surprising connections (but also fundamental differences) to combinatorial auctions.
Speaker: Gabriel Weintraub, Stanford Graduate School of Business
Title: Experimentation in Two-Sided Marketplaces: The Impact of Interference
When: December 15, 14:30 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Marketplace platforms use experiments (also known as “A/B tests”) as a method for making data-driven decisions about which changes to make on the platform. When platforms consider introducing a new feature, they often first run an experiment to test the feature on a subset of users and then use this data to decide whether to launch the feature platform-wide. However, it is well documented that estimates of the treatment effect arising from these experiments may be biased due to the presence of interference driven by the substitution effects on the demand and supply sides of the market.
In this work, we develop an analytical framework to study experimental design in two-sided marketplaces. We develop a stochastic market model and an associated mean-field limit to capture dynamics in such experiments. Notably, we use our model to show how the bias of commonly used experimental designs and associated estimators depends on market balance. We also propose novel experimental designs that reduce bias for a wide range of market balance regimes, and we discuss a simpler model to study the bias-variance trade-off among different experimental choices. Finally, we present results on calibrated models and live implementations on two real-world platforms. Overall, our work shows the potential of structural modeling to yield insight on experimental design for practitioners.
Based on joint work with Ramesh Johari, Hannah Li, Inessa Liskovich, and Geng Zhao.
Speaker: Nick Arnosti, U Minnesota.
Title: Explainable Affirmative Action.
When: December 14, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: When organizations select from a set of applicants, they often use a priority ranking in tandem with policies designed to ensure diversity or boost the chances of applicants from certain groups. These policies add significant complexity to the selection process, and examples from around the world show that they often do not have the intended effect, or are accompanied by unintended consequences.
Motivated by a desire to make these rules more transparent, we provide three axioms intended to capture explainability: monotonicity, lower invariance, and non-bossiness. We show that these axioms characterize a family of "outcome-based" selection rules. This family of selection rules is rich enough to incorporate many (but not all) existing reserve and quota policies. We illustrate our findings using examples from the allocation of affordable housing in New York City.
Speaker: Dana Pizarro, UOH
Title: Competition and Recall in Selection Problems.
When: December 7, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In this talk, I will present an extension of the prophet inequality problem to a competitive setting. At every period, a new sample from a known distribution arrives, which is publicly observed. Then, two players simultaneously decide whether to pick an available value or to pass and wait until the next period (ties are broken uniformly at random). As soon as a player gets one sample, he leaves the market, and his payoff is the value of the sample. In the first variant, the no-recall case, the agents can only bid in each period for the current value. In the second variant, the full-recall case, the agents can also bid in each period for any of the previous samples that have not already been selected. For each variant, we study the two-player game induced by the optimal stopping problem, focusing on subgame-perfect Nash equilibria. In particular, I will describe the set of subgame-perfect Nash equilibrium payoffs, compare the two model variants in terms of the payoffs of the players, and provide tight bounds for the Price of Anarchy and Price of Stability of the former setting when the number of arrivals is two.
Joint work with Fabien Gensbittel and Jérôme Renault (TSE).
Speaker: Gustavo Angulo, PUC
Title: Generalized formulations for the traveling salesman problem
When: November 30, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: The traveling salesman problem is a widely studied classical combinatorial problem for which there are several integer linear formulations. In this work, we consider the Miller-Tucker-Zemlin (MTZ) formulation. First, we argue that the choice of some parameters of this formulation is arbitrary and, therefore, there is a family of formulations of which MTZ is a particular case. We analyze this family, noting that in general the formulations involved are not comparable to each other and there is no one that dominates the rest. Then, we study the intersection of this family, which we show to be equivalent to the well-known Circuit Inequalities formulation. Finally, we extend this approach to the Desrochers-Laporte and Single-Commodity Flow formulations, obtaining similar results. Based on joint work with Diego Morán (UAI).
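For reference, the Miller-Tucker-Zemlin formulation discussed above attaches an order variable u_i to each city and rules out subtours with "big-M"-style constraints; one common variant (with city 1 as the starting city) reads:

```latex
\begin{align*}
\min \sum_{i \neq j} c_{ij}\, x_{ij} \quad \text{s.t.} \quad
  & \sum_{j \neq i} x_{ij} = 1, \quad \sum_{j \neq i} x_{ji} = 1 && \forall i, \\
  & u_i - u_j + n\, x_{ij} \le n - 1 && \forall i, j \in \{2, \dots, n\},\ i \neq j, \\
  & x_{ij} \in \{0, 1\}, \quad u_i \in \mathbb{R}.
\end{align*}
```

The coefficients n and n − 1 in the subtour constraints are exactly the kind of parameter choices the talk argues are arbitrary, giving rise to the family of formulations it analyzes.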
Speaker: Javier Cembrano, TU Berlin
Title: Impartial Selection with Additive Guarantees via Iterated Deletion.
When: November 23, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Impartial selection is the selection of an individual from a group based on nominations by other members of the group, in such a way that individuals cannot influence their own chance of selection. We give a deterministic mechanism with an additive performance guarantee of $O(n^{(1+\gamma)/2})$ in a setting with $n$ individuals where each individual casts $O(n^{\gamma})$ nominations, where $\gamma \in [0,1]$. For $\gamma=0$, i.e., when each individual casts at most a constant number of nominations, this bound is $O(\sqrt{n})$. This matches the best-known guarantee for randomized mechanisms and a single nomination. For $\gamma=1$ the bound is $O(n)$. This is trivial, as even a mechanism that never selects provides an additive guarantee of $n-1$. We show, however, that it is also best possible: for every deterministic impartial mechanism there exists a situation in which some individual is nominated by every other individual and the mechanism either does not select or selects an individual not nominated by anyone.
This is joint work with Felix Fischer, David Hannon, and Max Klimm.
Speaker: Arturo Merino, TU Berlin
Title: From Combinatorial Optimization to Combinatorial Generation.
When: November 16, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Combinatorial generation is the task of computing all solutions to a combinatorial problem instead of only one of them. For example, we may be interested in all spanning trees of a given graph or all optimal weight perfect matchings of a given weighted graph. In recent years, there has been much interest in unifying techniques or general approaches that simultaneously apply to various combinatorial generation problems.
We propose a new simple general approach for combinatorial generation. Given some binary strings X⊆{0,1}ⁿ, our approach consists of greedily computing a Hamilton path in a polytope with X as vertices, namely conv(X). Our main result shows that this approach works for all X⊆{0,1}ⁿ and is efficient whenever optimization over X can be efficiently solved, showing an unexplored relation between combinatorial optimization and generation. More specifically, if we can solve certain linear optimization problems for X in time O(T), we can compute a Hamilton path in conv(X) in roughly O(T⋅log(n)) time per vertex.
As a consequence, we obtain new efficient algorithms for combinatorial generation problems that can be binary encoded, like spanning tree generation, 0/1 vertex enumeration, and perfect matching generation. Furthermore, consecutive objects visited by our generation algorithms differ only in a local operation, like an edge exchange in the case of spanning trees or an alternating cycle exchange in the case of perfect matchings.
This is joint work with Torsten Mütze.
Speaker: Alexander Kozachinskyi, IMC, UC
Title: Constant-depth sorting networks.
When: November 9, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Assume that you want to sort n integers, and you have many copies of a device that can sort k integers. At one step, you can partition n integers into sets of size at most k by putting them into your devices. How many such steps do you need to sort your initial array?
When k = 2, this problem is just the problem of constructing a sorting network. It can be solved in O(log n) steps, though the construction is complicated and impractical. We consider a reversed setting: we fix the number of steps and seek the minimal k such that the problem is solvable within this number of steps. We determine the minimal k for 2, 3, and 4 steps.
Joint work with Natalia Dobrokhotova-Maikova and Vladimir Podolski.
Speaker: Ana Laura Trujillo Negrete, CMM
Title: Reconstruction of token graphs
When: October 24, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Let $G$ be a graph of order $n\ge 2$, and let $k$ be an integer with $k\in \{1,2,\dots,n-1\}$. The $k$-token graph $F_k(G)$ of $G$ is the graph whose vertices are all the $k$-subsets of vertices of $G$, and two of such $k$-subsets are adjacent whenever their symmetric difference is an edge of $G$. We denote by $C_4$ the $4$-cycle and by $D_4$ the diamond graph (a $4$-cycle with a chord). We say that $G$ is a $(C_4,D_4)$-free graph if $G$ does not contain $C_4$ nor $D_4$ as a subgraph.
In 2012 Fabila-Monroy et al. conjectured the following: If $G$ and $H$ are two graphs such that $F_k(G)$ and $F_k(H)$ are isomorphic for some $k$, then $G$ and $H$ are isomorphic. In this talk we will show this conjecture for the family of $(C_4,D_4)$-free graphs. Moreover, we will see how this reconstruction problem is related to the automorphism group of token graphs. This is a joint work with Ruy Fabila Monroy.
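A minimal sketch of the token-graph construction from the definition above (naming is ours): vertices are the k-subsets of the vertices of G, with adjacency whenever the symmetric difference of two subsets is an edge of G.

```python
from itertools import combinations

def token_graph(n, edges, k):
    """Build F_k(G) for G on vertex set {0, ..., n-1}.  Vertices are all
    k-subsets; two subsets are adjacent iff their symmetric difference
    is an edge of G."""
    E = {frozenset(e) for e in edges}
    V = [frozenset(c) for c in combinations(range(n), k)]
    adj = [(A, B) for A, B in combinations(V, 2) if (A ^ B) in E]
    return V, adj
```

For instance, for the path on three vertices, F_2 is again a path on the three 2-subsets.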
Speaker: Juan Pereyra, U. República
Title: Rawlsian Assignments
When: October 19, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: We study the assignment of indivisible objects to individuals when transfers are not allowed. Previous literature has mainly focused on efficiency (from ex-ante and ex-post perspectives), and individually fair assignments. Consequently, egalitarian concerns have been overlooked. We are inspired by the assignment of apartments in housing cooperatives where families regard the egalitarianism of the assignments as a first-order requirement. In particular, they want to avoid assignments where some families get their most preferred apartment, while others get options ranked very low in their preferences. Based on Rawls' idea of fairness, we introduce the notion of Rawlsian assignments. We prove that there always exists a unique Rawlsian assignment, which is sd-efficient, and satisfies equal treatment of equals. We illustrate our analysis with preference data from housing cooperatives. Our results show that the Rawlsian assignment substantially improves, from an egalitarian perspective, both the probabilistic serial mechanism, and the mechanism currently in use.
Joint with Tom Demeulemeester
Speaker: Joel Joris Van de Klundert, UAI
Title: Toward Elimination of Infectious Diseases with Mobile Screening Teams: HAT in the DRC
When: October 05, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In pursuit of Sustainable Development Goal 3 “Ensure healthy lives and promote well-being for all at all ages,” considerable global effort is directed toward elimination of infectious diseases in general and Neglected Tropical Diseases in particular. For various such diseases, the deployment of mobile screening teams forms an important instrument to reduce prevalence toward elimination targets. There is considerable variety in planning methods for the deployment of these mobile teams in practice, but little understanding of their effectiveness. Moreover, there appears to be little understanding of the relationship between the number of mobile teams and progress toward the goals. This research considers capacity planning and deployment of mobile screening teams for one such neglected tropical disease: Human African trypanosomiasis (HAT, or sleeping sickness). We prove that the deployment problem is strongly NP-Hard and propose three approaches to find (near) optimal screening plans. For the purpose of practical implementation in remote rural areas, we also develop four simple policies. The performance of these methods and their robustness is benchmarked for a HAT region in the Democratic Republic of Congo (DRC). Two of the four simple practical policies yield near optimal solutions, one of which also appears robust against parameter impreciseness. We also present a simple approximation of prevalence as a function of screening capacity, which appears rather accurate for the case study. While the results may serve to more effectively allocate funding and deploy mobile screening capacity, they also indicate that mobile screening may not suffice to achieve HAT elimination.
Speaker: Denae Ventura, UNAM
Title: Recursive local amoeba construction.
When: September 28, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Amoeba graphs were born as examples of balanceable graphs, which are graphs that appear in any 2-coloring of the edges of a large enough $K_n$ with a sufficient number of red and blue edges. As they were studied further, interesting aspects were found.
An edge replacement $e\to e'$ in a labeled graph $G$ means taking an edge $e \in E(G)$ and replacing it with $e' \in E(\overline{G})\cup \{e\}$. If $G-e+e'$ is isomorphic to $G$, then we say $e\to e'$ is a \emph{feasible edge replacement}. Every edge replacement yields a set of permutations of the labels in $G$. The set of all permutations associated with all feasible edge replacements in $G$ generates the group $\Gamma_G$. A labeled graph $G$ of order $n$ is a \emph{local amoeba} if $\Gamma_G \cong S_n$. One might think local amoebas are hard to find. However, in this talk we will go over a recursive construction of infinite families of local amoebas.
Speaker: Alfonso Montes
Title: Bayesian Persuasion With Costly Information Acquisition.
When: September 21, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: We consider a Bayesian persuasion model in which the receiver can gather independent information about the state at a uniformly posterior-separable cost. We show that the sender provides information that prevents the receiver from gathering independent information in equilibrium. When the receiver faces a lower cost of information, her "threat" of gathering independent information increases, thus decreasing the sender's power to persuade. A lower cost of information can also hurt the receiver because the sender may provide strictly less information in equilibrium. Finally, we propose a solution method that can be used to solve our model in specific applications.
Joint work with Ludmila Matyskova.
Speaker: Felipe Subiabre, U Chile
Title: The kidney exchange problem: length-constrained cycles and chains optimization on compatibility graphs
When: September 7, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: The kidney exchange problem is a combinatorial optimization problem that arises naturally when implementing centralized kidney exchange programs. Given a directed weighted graph (called the compatibility graph), we aim to find a collection of simple and vertex-disjoint cycles maximizing the total weight of their participating arcs.
Because of logistical considerations, a bound k is placed on the length of each possible cycle. We will briefly explain how the problem is polynomially solvable in the cases k = 2 and unbounded k, and why it becomes NP-complete for k >= 3.
MIP formulations are used in practice because of their efficiency and flexibility to accommodate extra constraints of the exchange programs. We will introduce the cycles MIP formulation of Roth et al. and explain why its linear relaxation is polynomially solvable. Inspired by this result, we present a conjecture on the integrality gap of this formulation, which is part of our ongoing work.
Time permitting, we will also discuss the addition of exchange chains starting from a distinguished subset of vertices (representing deceased or altruistic donors) and explain how their introduction makes even the associated linear relaxation problem NP-hard.
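To make the cycles formulation concrete, here is a toy sketch (ours, not code from the talk; all names are illustrative) that enumerates the directed cycles of length at most k, i.e., the variables of the cycles formulation, and brute-forces a maximum-weight vertex-disjoint selection on a tiny instance:

```python
from itertools import combinations, permutations

def cycles_up_to_k(arcs, n, k):
    """Enumerate simple directed cycles with at most k vertices,
    one canonical rotation per cycle (smallest vertex first)."""
    arcset = set(arcs)
    found = set()
    for length in range(2, k + 1):
        for perm in permutations(range(n), length):
            if min(perm) != perm[0]:
                continue
            if all((perm[i], perm[(i + 1) % length]) in arcset
                   for i in range(length)):
                found.add(perm)
    return [list(c) for c in found]

def best_disjoint_cycles(weights, n, k):
    """Brute force over the cycle variables: pick vertex-disjoint
    cycles of length <= k maximizing total arc weight."""
    cycles = cycles_up_to_k(list(weights), n, k)

    def cycle_weight(c):
        return sum(weights[(c[i], c[(i + 1) % len(c)])] for i in range(len(c)))

    best, best_w = [], 0.0
    for r in range(1, len(cycles) + 1):
        for combo in combinations(cycles, r):
            used = [v for c in combo for v in c]
            if len(used) == len(set(used)):  # vertex-disjoint
                w = sum(cycle_weight(c) for c in combo)
                if w > best_w:
                    best, best_w = list(combo), w
    return best, best_w

# Four patient-donor pairs forming two disjoint 2-cycles.
weights = {(0, 1): 1.0, (1, 0): 1.0, (2, 3): 1.0, (3, 2): 1.0}
print(best_disjoint_cycles(weights, n=4, k=2))
```

The cycles formulation in practice has one binary variable per such cycle and one packing constraint per vertex; this exhaustive enumeration is only meant to illustrate why the number of variables grows quickly with k.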
Speaker: Laura Vargas Koch, U Chile
Title: Solidarity Cover Problem
When: August 24, 14:30 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: This work started with a real-world problem in which the task was to partition a set of locations into disjoint subsets such that each subset is spread out enough to cover the whole set within a certain radius.
This led us to formalize the following problem, which we call the Solidarity Cover Problem. Given a finite set S, a metric d, a radius r, and a number m, we define a subset C of S to be an r-cover if B(C,r)=S. The Solidarity Cover Problem asks whether there exist m disjoint r-covers. We consider two optimization versions: maximizing the number of r-covers, which is essentially the domatic number problem, and minimizing the radius.
This is joint work with Britta Peis and Eran Rosenbluth.
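A minimal sketch of these definitions (ours, not from the talk; brute force, only sensible for a handful of points):

```python
from itertools import product

def is_r_cover(C, S, dist, r):
    """C is an r-cover of S if every point of S lies within distance r
    of some point of C, i.e. B(C, r) = S."""
    return all(any(dist(c, s) <= r for c in C) for s in S)

def max_disjoint_r_covers(S, dist, r):
    """Largest m such that S can be partitioned into m disjoint r-covers
    (the domatic-number-style objective from the abstract)."""
    n = len(S)
    for m in range(n, 0, -1):
        for assign in product(range(m), repeat=n):
            parts = [[S[i] for i in range(n) if assign[i] == p]
                     for p in range(m)]
            if all(part and is_r_cover(part, S, dist, r) for part in parts):
                return m
    return 0

# Four points on a line with radius 2: {1}, {2}, and {0, 3} are all 2-covers.
print(max_disjoint_r_covers([0, 1, 2, 3], lambda a, b: abs(a - b), 2))
```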
Speaker: Alexandros Tsigonias-Dimitriadis, TUM
Title: Prophet Inequalities via the Expected Competitive Ratio
When: August 17, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: We consider prophet inequalities under general downward-closed constraints. In a prophet inequality problem, a decision-maker sees a series of online elements and needs to decide immediately and irrevocably whether or not to select each element upon its arrival, subject to an underlying feasibility constraint. Traditionally, the decision-maker’s expected performance has been compared to the expected performance of the prophet, i.e., the expected offline optimum. We refer to this measure as the Ratio of Expectations (or, in short, RoE). However, a major limitation of the RoE measure is that it only gives a guarantee against what the optimum would be on average, while, in theory, algorithms still might perform poorly compared to the realized ex-post optimal value.
Hence, we study alternative performance measures. In particular, we suggest the Expected Ratio (or, in short, EoR), which is the expectation of the ratio between the value of the algorithm and the value of the prophet. This measure yields desirable guarantees, e.g., a constant EoR implies achieving a constant fraction of the ex-post offline optimum with constant probability. Moreover, in the single-choice setting, we show that the EoR is equivalent (in the worst case) to the probability of selecting the maximum, a well-studied measure in the literature. This is no longer the case for combinatorial constraints (beyond single-choice), which is the main focus of this paper.
Our main goal is to understand the relation between RoE and EoR in combinatorial settings. Specifically, we establish a two-way black-box reduction: for every feasibility constraint, the RoE and the EoR are at most a constant factor apart. This implies a wealth of EoR results in multiple settings where RoE results are known.
This is joint work with Tomer Ezra, Stefano Leonardi, Rebecca Reiffenhäuser, and Matteo Russo.
Speaker: Waldo Gálvez, UOH
Title: Machine Covering in the Random-Order Model
When: July 27, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In the Online Machine Covering problem, jobs, defined by their sizes, arrive one by one and have to be assigned to m parallel and identical machines, with the goal of maximizing the load of the least-loaded machine. Unfortunately, the classical model allows only fairly pessimistic performance guarantees: The best possible deterministic ratio of m is achieved by the Greedy-strategy, and the best known randomized algorithm has competitive ratio Õ(m^(1/2)), which cannot be improved by more than a logarithmic factor.
In this talk, we will discuss results for the Machine Covering problem in the random-order model. Here, no extra resources are present but, instead, the adversary is weakened in that it can only decide upon the input set while jobs are revealed uniformly at random. It is particularly relevant to Machine Covering where lower bounds usually come from highly structured input sequences.
We first analyze Graham's Greedy-strategy in this context and show that its competitive ratio decreases slightly to O(m/log(m)), which is asymptotically tight. Then, we will present an improved Õ(m^(1/4))-competitive algorithm for the problem. This result is achieved by exploiting the extra information coming from the random order of the jobs, using sampling techniques to devise an improved mechanism to distinguish jobs that are relatively large from small ones. Time permitting, we will also present a lower bound, showing that no algorithm can have a competitive ratio of O(log(m)/loglog(m)) in the random-order model. This lower bound is achieved by studying a variant of the Secretary problem.
Joint work with Susanne Albers and Maximilian Janke.
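Graham's Greedy-strategy from the abstract is simply "assign each arriving job to a currently least-loaded machine"; a minimal sketch (ours, illustrative only):

```python
def greedy_machine_covering(jobs, m):
    """Online greedy for Machine Covering: each job goes to a machine
    with the currently smallest load; the objective is the load of the
    least-loaded machine at the end."""
    loads = [0.0] * m
    for size in jobs:
        i = min(range(m), key=loads.__getitem__)
        loads[i] += size
    return min(loads)

# Greedy reaches 5 here, while the offline optimum is 6 ({3,3} vs {2,2,2}).
print(greedy_machine_covering([3, 3, 2, 2, 2], 2))
```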
Speaker: Benjamín Rubio, UC
Title: Searching infected nodes in uncertain graphs
When: June 1, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In the context of the COVID-19 pandemic, the development of methods to trace the spread of the virus is of vital importance. One such method relies on PCR testing of wastewater samples to locate the sudden appearance of infections. Given a representation of the wastewater network as a directed graph, we aim for a strategy that finds a new infected node using the worst-case minimum number of tests. This problem proves to be challenging on networks with uncertainty, as is the case with real-world data. We will explore the connection with other known graph problems and show upper bounds for the number of tests in an optimal strategy on graphs of bounded treewidth and planar graphs. Then we analyze experimental results obtained using a greedy algorithm to select nodes to sample, and finally conclude by stating several relevant open questions.
Joint work with I. Beaudry, J. Baboun, M. Castro, A. Jara, B. Rubio, J. Verschae.
Speaker: Bernardo Subercaseaux, Carnegie Mellon U.
Title: Augmenting Online Algorithms with ε-Accurate Predictions.
When: July 20, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: The growing body of work in learning-augmented online algorithms studies how online algorithms can be improved when given access to ML predictions about the future. Motivated by ML models that give a confidence parameter for their predictions, we study online algorithms with predictions that are $\epsilon$-accurate: namely, each prediction is correct with probability (at least) $\epsilon$, but can be arbitrarily inaccurate with the remaining probability. We show that even with predictions that are accurate with a small probability and arbitrarily inaccurate otherwise, we can dramatically outperform worst-case bounds for a range of classical online problems including caching, online set cover, and online facility location. Our main results are an $O(\log(1/\epsilon))$-competitive algorithm for caching, and a simple $O(1/\epsilon)$-competitive algorithm for a large family of covering problems, including set cover and facility location, with $\epsilon$-accurate predictions.
Speaker: Tomás González, IMC, UC.
Title: Differentially Private Stationary Points in Stochastic Nonconvex Optimization
When: July 13, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Differentially private (DP) stochastic nonconvex optimization (SNCO) is a fundamental problem, where the goal is to approximate stationary points (i.e., points with small norm of the gradient) of the population loss, given a dataset of i.i.d. samples from a distribution, while satisfying differential privacy with respect to the dataset. Most of the existing works in the literature have addressed either the convex version of this problem (DP-SCO) or the closely related problem of Private Non-Convex Empirical Risk Minimization, where one seeks to approximate stationary points of the empirical loss.
In the first part of this talk I will provide an overview of differential privacy and of (non-private) stochastic nonconvex optimization. Then, I will show how to privatize the well-known SPIDER algorithm for SNCO, which relies on variance reduction techniques, and how to prove privacy and accuracy guarantees for its private version. The private version of SPIDER converges to stationary points of the population loss at a rate of O(d^{1/4} / (n\epsilon)^{1/2}), which was previously the best known rate for the empirical case.
The private version of SPIDER appeared in section 4 of arXiv:2206.00846.
Speaker: Niklas Rieken, RWTH Aachen.
Title: Computing buyer-optimal Walrasian prices in multi-unit matching markets via a sequence of max flow computations
When: July 06, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Consider a market where discrete indivisible items of different types are sold to a set of buyers. There is a given supply of each type and each buyer has a given (maximum) demand. In our setting, each buyer values the items linearly. We aim for competitive prices, i.e., prices such that an allocation exists where every buyer gets one of his preferred bundles. The prices should be as small as possible and as much as possible should be sold.
We show how to compute these buyer-optimal Walrasian prices. We present an ascending auction which iteratively raises the prices on the goods in the left-most min cut of an associated auxiliary flow network. Given these prices, we can compute an allocation where as much as possible is sold. The structural insights obtained from our flow-based approach furthermore lead to several interesting results regarding the sensitivity analysis of our ascending auction.
Joint work with Katharina Eickhoff, Tom McCormick, Britta Peis, and Laura Vargas Koch.
Speaker: Pablo Barcelo, Instituto de Ingeniería Matemática y Computacional (IMC), UC
Title: The AGM Bound
When: June 29, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In several computer science applications one encounters the following problem: Given two edge-labeled graphs G and H, how many homomorphic images of H can be found in G? Atserias, Grohe, and Marx developed a tight bound for this number, denoted #Hom(H,G), which is now known as the AGM bound. The bound relates #Hom(H,G) with the fractional edge covers of H in a very elegant and direct way. We will present a self-contained and simple proof of this result using Shearer's inequality.
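In symbols, and hedging on the exact setting (R_e is our notation for the set of edges of G that edge e of H may be mapped to, e.g. those with the same label), the bound reads:

```latex
% x = (x_e) is any fractional edge cover of H:
%   \sum_{e \ni v} x_e \ge 1 for every v \in V(H), with x_e \ge 0.
\[
  \#\mathrm{Hom}(H, G) \;\le\; \prod_{e \in E(H)} |R_e|^{x_e}.
\]
```

Minimizing the right-hand side over all fractional edge covers gives the tight form of the bound.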
Speaker: Waldo Gálvez, UOH
Title: Machine Covering in the Random-Order Model
When: May 25, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In the Online Machine Covering problem, jobs, defined by their sizes, arrive one by one and have to be assigned to m parallel and identical machines, with the goal of maximizing the load of the least-loaded machine. Unfortunately, the classical model allows only fairly pessimistic performance guarantees: The best possible deterministic ratio of m is achieved by the Greedy-strategy, and the best known randomized algorithm has competitive ratio Õ(m^(1/2)), which cannot be improved by more than a logarithmic factor.
In this talk, we will discuss results for the Machine Covering problem in the random-order model. Here, no extra resources are present but, instead, the adversary is weakened in that it can only decide upon the input set while jobs are revealed uniformly at random. It is particularly relevant to Machine Covering where lower bounds usually come from highly structured input sequences.
We first analyze Graham's Greedy-strategy in this context and show that its competitive ratio decreases slightly to O(m/log(m)), which is asymptotically tight. Then, we will present an improved Õ(m^(1/4))-competitive algorithm for the problem. This result is achieved by exploiting the extra information coming from the random order of the jobs, using sampling techniques to devise an improved mechanism to distinguish jobs that are relatively large from small ones. Time permitting, we will also present a lower bound, showing that no algorithm can have a competitive ratio of O(log(m)/loglog(m)) in the random-order model. This lower bound is achieved by studying a variant of the Secretary problem.
Joint work with Susanne Albers and Maximilian Janke.
Speaker: Pawel Pralat, Ryerson University.
Title: Semi-random process
When: May 18, 15:00 hrs.
Where: https://zoom.us/j/98780407155?pwd=MnR3dGlab2dkcU0xVFprTFRKcnptZz09
Abstract: The semi-random graph process is a single-player game that begins with an empty graph on n vertices. In each round, a vertex u is presented to the player independently and uniformly at random. The player then adaptively selects a vertex v and adds the edge uv to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible.
During the talk, we will focus on the following problems: a) perfect matchings, b) Hamilton cycles, c) constructing a subgraph isomorphic to an arbitrary fixed graph G. We will also consider a natural generalization of the process to s-uniform hypergraphs.
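The process is easy to simulate; as an illustration (ours, with a deliberately naive player strategy for objective (a)), the following sketch counts the rounds a player needs to build a perfect matching:

```python
import random

def rounds_to_perfect_matching(n, seed=0):
    """Semi-random process: each round a uniformly random vertex u
    arrives and the player picks the second endpoint v of edge uv.
    Naive strategy: if u is unmatched, pair it with some other
    unmatched vertex; otherwise the round is wasted. Returns the
    number of rounds until a perfect matching exists (n even)."""
    rng = random.Random(seed)
    matched = set()
    rounds = 0
    while len(matched) < n:
        u = rng.randrange(n)
        rounds += 1
        if u not in matched:
            v = next(w for w in range(n) if w != u and w not in matched)
            matched.update((u, v))
    return rounds

print(rounds_to_perfect_matching(20))
```

A smarter player would use wasted rounds to hedge future arrivals; quantifying the best possible number of rounds is exactly the kind of question the talk addresses.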
Speaker: Federico Echeñique, Caltech.
Title: Efficiency and fairness in random resource allocation and social choice
When: May 11, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: We study efficiency in general collective choice problems when agents have ordinal preferences, and randomization is allowed. We establish the equivalence between welfare maximization and ex-ante efficiency for general domains, and relate ex-ante efficiency with ex-post efficiency, characterizing when the two notions coincide. We also propose a new general notion of fairness that is applicable in any social choice environment, not only in resource allocation. Our results have implications for well-studied mechanisms including random serial dictatorship and a number of specific environments, including the dichotomous, single-peaked, and social choice domains.
Joint work with Joe Root and Fedor Sandomirskiy.
Speaker: Tobias Mömke, U. Augsburg
Title: A PTAS for Unsplittable Flow on a Path
When: April 20, 14:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In the Unsplittable Flow on a Path problem (UFP) we are given a path with edge capacities, and a set of tasks where each task is characterized by a subpath, a demand, and a weight. The goal is to select a subset of tasks of maximum total weight such that the total demand of the selected tasks using each edge e is at most the capacity of e. The problem admits a QPTAS [Bansal, Chakrabarti, Epstein, Schieber, STOC'06; Batra, Garg, Kumar, Mömke, Wiese, SODA'15]. After a long sequence of improvements [Bansal, Friggstad, Khandekar, Salavatipour, SODA'09; Bonsma, Schulz, Wiese, FOCS'11; Anagnostopoulos, Grandoni, Leonardi, Wiese, SODA'14; Grandoni, Mömke, Wiese, Zhou, STOC'18], the best known polynomial time approximation algorithm for UFP has an approximation ratio of 1+1/(e+1) + epsilon < 1.269 [Grandoni, Mömke, Wiese, SODA'22]. It has been an open question whether this problem admits a PTAS. In this paper, we solve this open question and present a polynomial time (1 + epsilon)-approximation algorithm for UFP.
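To make the feasibility constraint concrete, here is a tiny brute-force sketch (ours, obviously not the PTAS from the talk):

```python
from itertools import combinations

def ufp_brute_force(capacities, tasks):
    """Brute-force optimum for Unsplittable Flow on a Path on a tiny
    instance. capacities[e] is the capacity of edge e; each task is
    (first_edge, last_edge, demand, weight) and uses every edge of its
    subpath first_edge..last_edge inclusive."""
    best = 0
    for r in range(len(tasks) + 1):
        for sel in combinations(tasks, r):
            load = [0] * len(capacities)
            for (a, b, d, _) in sel:
                for e in range(a, b + 1):
                    load[e] += d
            if all(load[e] <= capacities[e] for e in range(len(capacities))):
                best = max(best, sum(w for (_, _, _, w) in sel))
    return best

# Path with two edges of capacity 2; the two single-edge tasks together
# beat any selection containing the long task.
print(ufp_brute_force([2, 2], [(0, 0, 2, 3), (0, 1, 1, 2), (1, 1, 2, 3)]))
```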
Speaker: Dieter Mitsche, PUC
Title: Percolation on dense random graphs with given degrees.
When: April 20, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: We study the order of the largest connected component of a random graph having two sources of randomness: first, the graph is chosen randomly from all graphs with a given degree sequence, and then bond percolation is applied. Far from being able to classify all such degree sequences, we exhibit several new threshold phenomena for the order of the largest component in terms of both sources of randomness. We also provide an example of a degree sequence, for which the order of the largest component undergoes an unbounded number of jumps in terms of the percolation parameter, giving rise to a behavior that cannot be observed without percolation.
Joint work with Lyuben Lichev and Guillem Perarnau.
Speaker: Andrés Fielbaum, TU Delft
Title: How to split the costs and charge the travellers sharing a ride? Aligning system’s optimum with users’ equilibrium
When: April 13, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Emerging on-demand sharing alternatives, in which one resource is utilised simultaneously by a circumstantial group of users, entail several challenges regarding how to coordinate such users. A very relevant case refers to how to form groups in a mobility system that offers shared rides, and how to split the costs among the travellers of a group. These are non-trivial tasks, as two objectives conflict: 1) minimising the total costs of the system, and 2) finding an equilibrium where each user is content with her assignment. Aligning both objectives is challenging, as users are not aware of the externalities they impose on the rest.
In this paper, we propose protocols to share the costs within a ride so that optimal solutions can also constitute equilibria. To do this, we model the situation as a non-cooperative game, which can be seen as a game version of the well-known set cover problem. We show that the traditional notions of equilibrium in game theory (Nash and Strong) are not useful here, and prove that determining whether a Strong Equilibrium exists is an NP-complete problem, by a reduction from the k-Exact-Cover problem. Hence, we propose three alternative equilibrium notions (stronger than Nash and weaker than Strong), depending on how users can coordinate. For each of these equilibrium notions, we propose a (possibly overcharging) cost-sharing protocol that makes the optimal solutions equilibria.
Simulations for Amsterdam reveal that our protocols can achieve stable solutions that are close to the optimum, and that having a central coordinator can have a large impact.
Speaker: Frederik Mallmann-Trenn, King's College
Title: Finding the best papers with noisy reviews
When: April 6, 15:00 hrs.
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: Say you are tasked to find the best 150 papers among more than 250 for conference proceedings or as part of a national exercise to maximise workload.
You can ask people to review a given paper by either asking
1) Is paper A better than paper B?
or
2) What’s the score of paper A?
The problem is that each review returns an incorrect response with a small probability, say 1/3.
How can you assign reviews so that you will likely find the best 150 papers while the total number of queries and rounds is small?
The talk is based on the paper
Instance-Optimality in the Noisy Value- and Comparison-Model: Accept, Accept, Strong Accept: Which Papers get in? [SODA 2020]
https://epubs.siam.org/doi/10.1137/1.9781611975994.131
https://arxiv.org/pdf/1806.08182.pdf
Speaker: Laura Vargas Koch, CMM
Title: Nash Flows over Time: Uniqueness, Continuity and Long-term behavior
When: March 30, 15:00 hrs (Chilean time).
Where: Sala de Seminario von Neuman, CMM, Beauchef 851, torre norte.
Abstract: In the talk, we consider a dynamic model of traffic that has received a lot of attention in the past few years, Nash Flows over time.
Users control infinitesimal flow particles aiming to travel from a source to a destination as quickly as possible.
Flow patterns vary over time, and congestion effects are modeled via queues, which form whenever the inflow into a link exceeds its capacity.
We will see that, assuming constant inflow into the network at the source, equilibria always settle down into a "steady state" in which the behavior extends forever in a linear fashion. This extends a result of Cominetti, Correa and Olver, who show a steady-state result in the regime where the input flow rate is smaller than the network capacity.
Surprisingly, the steady state result turns out to be helpful to prove two nice properties:
- Uniqueness of journey times in equilibria
- Continuity of equilibria: small perturbations to the instance or to the traffic situation at some moment cannot lead to wildly different equilibrium evolutions.
This is joint work with Neil Olver and Leon Sering
Speaker: Victor Verdugo, Universidad de O'Higgins
Title: A 2-Approximation for the Bounded Treewidth Sparsest Cut Problem in FPT Time
When: March 23, 15:00 hrs (Chilean time).
Abstract: In the non-uniform sparsest cut problem, we are given a supply graph G and a demand graph D, both with the same set of nodes V. The goal is to find a cut of V that minimizes the ratio of the total capacity on the edges of G crossing the cut over the total demand of the crossing edges of D. In this work, we study the non-uniform sparsest cut problem for supply graphs with bounded treewidth k. For this case, Gupta, Talwar and Witmer [STOC 2013] obtained a 2-approximation with polynomial running time for fixed k, and the question of whether there exists a c-approximation algorithm for a constant c independent of k, which runs in FPT time, remained open. We answer this question in the affirmative. We design a 2-approximation algorithm for the non-uniform sparsest cut with bounded treewidth supply graphs that runs in FPT time when parameterized by the treewidth. Our algorithm is based on rounding the optimal solution of a linear programming relaxation inspired by the Sherali-Adams hierarchy. In contrast to the classic Sherali-Adams approach, we construct a relaxation driven by a tree decomposition of the supply graph by including a carefully chosen set of lifting variables and constraints to encode information of subsets of nodes with super-constant size, and at the same time we have a sufficiently small linear program that can be solved in FPT time.
This is joint work with Vincent Cohen-Addad (Google Research Zurich) and Tobias Mömke (University of Augsburg). This work has been accepted at IPCO 2022.
Speaker: Christoph Dürr, Sorbonne Université
Title: Orienting (hyper)graphs under explorable stochastic uncertainty
When: August 11, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/99541270241?pwd=Tm51dVdlRWFNNmVQRHdQdWh6dEJmQT09
Abstract: Given a hypergraph with uncertain node weights following known probability distributions, we study the problem of querying as few nodes as possible until the identity of a node with minimum weight can be determined for each hyperedge. Querying a node has a cost and reveals the precise weight of the node, drawn from the given probability distribution. Using competitive analysis, we compare the expected query cost of an algorithm with the expected cost of an optimal query set for the given instance. For the general case, we give a polynomial-time f(α)-competitive algorithm, where f(α)∈[1.618+ϵ,2] depends on the approximation ratio α for an underlying vertex cover problem. We also show that no algorithm using a similar approach can be better than 1.5-competitive. Furthermore, we give polynomial-time 4/3-competitive algorithms for bipartite graphs with arbitrary query costs and for hypergraphs with a single hyperedge and uniform query costs, with matching lower bounds.
Joint work with Evripidis Bampis, Thomas Erlebach, Murilo S. de Lima, Nicole Megow, Jens Schlöter.
Speaker: Jannik Matuschke, KU Leuven.
Title: Generalized Malleable Scheduling via Discrete Concavity
When: June 30, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/93921286620?pwd=MTRCdzJuY3ZRUVhRVnlkNEtuc2k2UT09
Abstract: In malleable scheduling, jobs can be executed simultaneously on multiple machines with the processing time depending on the number of allocated machines. Each job is required to be executed non-preemptively and in unison, i.e., it has to occupy the same time interval on all its allocated machines.
In this talk, we discuss a generalization of malleable scheduling, in which a function f(S, j) determines the processing time of job j on machine subset S. Under different discrete concavity assumptions on 1/f(S,j), we show that the scheduling problem can be approximated by a considerably simpler assignment problem and derive LP-based approximation algorithms.
We also highlight a connection to the problem of fairly allocating resources (the Santa Claus problem and its generalizations) and show how our results imply resource-augmented approximation algorithms for this setting.
This is joint work with Dimitris Fotakis and Orestis Papadigenopoulos.
Speaker: Stijn Cambie, Radboud University Nijmegen.
Title: On the general problem of Erdős and Nešetřil and its offsprings
When: June 30, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/96139597495?pwd=YzFtak9oRElYN3Mra2NCUmRMMlkyUT09
Abstract: In 1988, Erdős wrote about a problem he proposed with Nešetřil:
"One could perhaps try to determine the smallest integer h_t(D) for which every graph, with size h_t(D) edges and maximum degree at most D, contains two edges so that the shortest path joining these edges has length at least t. This problem seems to be interesting only if there is a nice expression for h_t(D)."
This problem can be considered as the edge version of the famous (and hard) degree-diameter problem.
It was the inspiration for a series of research papers on variants of this problem, where one is mainly dealing with the case t=2. We will discuss part of the history that resulted in the strong edge colouring conjecture, which is still widely open.
In the second part of the talk, we will look again to the initial inspirational question of all of this and say more about the cases where t is at least 3.
Speaker: Andrés Perlroth, Google Research.
Title: Auctions with Intermediaries
When: June 23, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/98113903152?pwd=NGdlUVo4TUx0bVVSYzVUUUFjR1RFUT09
Abstract: In many markets, buyers do not bid directly in the auction, but instead, they are represented by intermediaries. This includes ad auctions, where some of the buyers might be represented by intermediaries like Ad agencies, Aggregators (e.g. Home Advisor), or Ad networks participating in an Ad Exchange. The presence of intermediaries creates the potential for collusion across buyers. Also, intermediaries have their own goals that depend on the contract between buyers and the intermediary, and will behave according to the incentives induced by their goals. We will talk about the problem of designing mechanisms for such settings. In particular, we consider the problem of selling k items to unit-demand buyers with private valuations for the items. A buyer either participates directly in the auction or is represented by an intermediary, who can represent a subset of buyers. Our goal is to design robust mechanisms that are independent of the demand structure (i.e. how the buyers are partitioned across intermediaries), and perform well under a wide variety of possible contracts between intermediaries and buyers. We first consider the case of k identical items where each buyer draws its private valuation for an item i.i.d. from a known distribution that is lambda-regular (a weaker condition than monotone hazard rate). Next, we generalize this to the setting of related items.
Speaker: Yuri Faenza, Columbia U.
Title: Some discrete optimization problems in matching markets
When: June 16, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/93599820111?pwd=djhyZGRGTzhiY3ZVdDNSN05PekdQQT09
Abstract: In the classical stable marriage problem, we are given two sets of agents - students and schools - with each student and school having a total order of agents from the opposite set. The goal is to form disjoint pairs of students and schools so that the resulting matching satisfies a fairness property known as stability. In their fundamental work, Gale and Shapley showed that a stable matching always exists, and gave a fast algorithm to find one. These strong structural and algorithmic results propelled the application of the stable marriage model in several contexts, most notably for assigning students to public high schools in many cities in the United States. Although very successful, the marriage model however fails to capture features that have become of crucial importance both inside and outside academia, such as diversity in school cohorts, complex preference patterns, and the need to obtain larger, possibly mildly unstable, matchings. In this talk, I will first present some extensions of the concept of stability and of the marriage model, and investigate them from an optimizer's viewpoint. I will then focus in detail on recent work with Xuan Zhang (Columbia) on algorithms for stable matching models when the preference patterns of agents are described by choice functions. Our approach will allow us to deduce general sufficient conditions for efficiently optimizing linear functions over the elements of a distributive lattice.
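The deferred-acceptance algorithm of Gale and Shapley mentioned above is easy to sketch. The preference lists below are purely illustrative, not from the talk:

```python
# Minimal sketch of Gale-Shapley deferred acceptance (student-proposing).
# The toy preference lists are hypothetical, for illustration only.

def deferred_acceptance(student_prefs, school_prefs):
    """Return a stable matching as {student: school}."""
    # rank[s][x] = position of student x in school s's preference order
    rank = {s: {x: i for i, x in enumerate(order)}
            for s, order in school_prefs.items()}
    next_choice = {x: 0 for x in student_prefs}  # next school index to try
    match = {}                                   # school -> tentative student
    free = list(student_prefs)
    while free:
        student = free.pop()
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        current = match.get(school)
        if current is None:
            match[school] = student
        elif rank[school][student] < rank[school][current]:
            match[school] = student              # school trades up
            free.append(current)                 # displaced student is free again
        else:
            free.append(student)                 # rejected, tries next school
    return {student: school for school, student in match.items()}

students = {"a": ["X", "Y"], "b": ["Y", "X"]}
schools  = {"X": ["a", "b"], "Y": ["a", "b"]}
print(deferred_acceptance(students, schools))  # a is matched to X, b to Y
```

The choice-function models in the talk generalize these simple rank-based preferences, so this sketch covers only the classical marriage setting.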
Speaker: Raimundo Saona, IST Austria.
Title: Stability in Matrix Games
When: June 09, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/95009019759?pwd=K1VQSUhjSGhMOFpPQWJNY0NxcnFIZz09
Abstract: We consider the simplest game, Matrix Games, and basic stability questions on the value and strategies upon perturbations on the payoff matrix. Considering polynomial perturbations, we design polynomial-time algorithms for the following tasks: (a) ensuring that, for every sufficiently small error, there is a strategy to guarantee that the value is at least the value of the error-free case; (b) ensuring that there is a fixed strategy to guarantee that, for every sufficiently small error, the value is at least the value of the error-free case; and (c) computing the analytical form of the value of the game upon perturbations. We also make the connection with Linear Programming, just as Mills did in 1956 for a related problem.
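As a toy illustration of the objects in this abstract (not of the perturbation algorithms themselves), the value of a 2x2 zero-sum matrix game has a classical closed form, and perturbing an entry by t turns the value into a rational function of t:

```python
def value_2x2(A):
    """Value of the 2x2 zero-sum game with payoff matrix A (row maximizer).
    Classical textbook formula; a sketch, not the talk's algorithms."""
    (a, b), (c, d) = A
    # If maximin == minimax there is a saddle point (pure optimal strategies).
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return float(maximin)
    # Otherwise both players mix; standard closed form for the game value.
    return (a * d - b * c) / (a + d - b - c)

# Matching pennies has value 0.
print(value_2x2([[1, -1], [-1, 1]]))  # 0.0
# Perturbing the (0,0) payoff by t gives the rational function t / (4 + t),
# a tiny instance of the "analytical form of the value upon perturbations".
for t in (0.01, 0.1):
    print(value_2x2([[1 + t, -1], [-1, 1]]), t / (4 + t))
```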
Speaker: David Conlon, Caltech.
Title: Random multilinear maps and the Erdős box problem
When: June 08, 16:30 hrs (Chilean time).
Link: https://us02web.zoom.us/j/87593953555?pwd=UWl4eTVsanpEUHJDWFo3SWpNNWtxdz09
Abstract: By using random multilinear maps, we provide new lower bounds for the Erdős box problem, the problem of estimating the extremal number of the complete d-partite d-uniform hypergraph with two vertices in each part, thereby improving on work of Gunderson, Rödl and Sidorenko. Joint work with Cosmin Pohoata and Dmitriy Zakharov.
Speaker: Ulrike Schmidt-Kraepelin, TU Berlin
Title: Popular Branchings and Their Dual Certificates
When: June 02, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/93628970125?pwd=L1l6cTVXOVY4Rmh6MllnR2FHanRWUT09
Abstract: Let G be a digraph where every node has preferences over its incoming edges. The preferences of a node extend naturally to preferences over branchings, i.e., directed forests; a branching B is popular if B does not lose a head-to-head election (where nodes cast votes) against any branching. Such popular branchings have a natural application in liquid democracy. The popular branching problem is to decide if G admits a popular branching or not. We give a characterization of popular branchings in terms of dual certificates and use this characterization to design an efficient combinatorial algorithm for the popular branching problem. When preferences are weak rankings, we use our characterization to formulate the popular branching polytope in the original space and also show that our algorithm can be modified to compute a branching with least unpopularity margin. When preferences are strict rankings, we show that “approximately popular” branchings always exist.
This is joint work with Telikepalli Kavitha, Tamás Király, Jannik Matuschke, and Ildikó Schlotter.
Speaker: Waldo Galvez, TU Munich
Title: Improved Approximation Algorithms for 2-Dimensional Knapsack: Packing into Multiple L-Shapes, Spirals, and More
When: May 12, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/95062692613?pwd=YWF2QSsvNVZ5WERYeG1GOXNvMmtMQT09
Abstract: In the 2-Dimensional Knapsack problem (2DK) we are given a square knapsack and a collection of n rectangular items with integer sizes and profits. Our goal is to find the most profitable subset of items that can be packed non-overlappingly into the knapsack. The currently best known polynomial-time approximation factor for 2DK is 17/9+eps<1.89, and there is a (3/2+eps)-approximation algorithm if we are allowed to rotate items by 90 degrees. In this talk, I will present (4/3+eps)-approximation algorithms running in polynomial time for both cases, assuming that all input data are integers polynomially bounded in n.
The previous best algorithm for 2DK partitions the knapsack into a constant number of rectangular regions plus one L-shaped region and packs items into those in a structured way. We generalize this approach by allowing up to a constant number of more general regions that can have the shape of an L, a U, a Z, a spiral, and more, and therefore obtain an improved approximation ratio. In particular, our results are achieved via an algorithm that computes the essentially optimal structured packing into these regions.
Joint work with Fabrizio Grandoni, Arindam Khan, Diego Ramírez-Romero and Andreas Wiese.
Speaker: Cristóbal Guzmán, U Twente
Title: Non-Euclidean Differentially Private Stochastic Convex Optimization
When: May 5, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/98855281600?pwd=Z3A4bGMvSGFwQnl5OXhOWE41UVNIQT09
Abstract: Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the goal is to approximately minimize the population risk with respect to a convex loss function, given a dataset of i.i.d. samples from a distribution, while satisfying differential privacy with respect to the dataset. Most of the existing works in the literature of private convex optimization focus on the Euclidean (i.e., $\ell_2$) setting, where the loss is assumed to be Lipschitz (and possibly smooth) w.r.t.~the $\ell_2$ norm over a constraint set with bounded $\ell_2$ diameter. Algorithms based on noisy stochastic gradient descent (SGD) are known to attain the optimal excess risk in this setting.
In this talk I will provide a brief overview of differential privacy and its applications in stochastic convex optimization, together with new upper and lower bounds on the population risk for DP-SCO in non-Euclidean setups, in particular where the reference norm is the $\ell_p$-norm, $1\leq p\leq \infty$. Our algorithms include DP versions of variance-reduced stochastic Frank-Wolfe and mirror-descent methods, together with a novel privacy mechanism, which generalizes the Gaussian mechanism. Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
Based on joint work with Raef Bassily and Anupama Nandi, and available at arXiv:2103.01278
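For background, the standard Gaussian mechanism that the abstract's new mechanism generalizes can be sketched as follows. This is the classic (eps, delta)-DP noise calibration for eps <= 1 (Dwork-Roth style), shown here for context; it is not the paper's mechanism:

```python
import math
import random

def gaussian_mechanism(vec, l2_sensitivity, eps, delta, rng=random):
    """Classic Gaussian mechanism sketch: add N(0, sigma^2) noise to each
    coordinate of a query answer, with sigma calibrated to the query's L2
    sensitivity. Yields (eps, delta)-differential privacy for eps <= 1."""
    sigma = l2_sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return [x + rng.gauss(0.0, sigma) for x in vec]

# Example: privatize an averaged gradient (values are hypothetical).
grad = [0.2, -0.5, 0.1]
print(gaussian_mechanism(grad, l2_sensitivity=1.0, eps=0.5, delta=1e-5,
                         rng=random.Random(42)))
```

In DP-SGD-type algorithms, a step like this is applied to each clipped, averaged gradient before the descent update.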
Speaker: Nicolas Rivera, Universidad de Valparaiso
Title: The Dispersion Time of Random Walks on Finite Graphs.
When: April 28, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/91342046761?pwd=bGczR3d6c2Qxdk9sSk1STFVyY2tkZz09
Abstract: Consider two random processes on an n-vertex graph related to Internal Diffusion-Limited Aggregation (IDLA). In each process, n particles perform independent random walks from a fixed vertex until they reach an unvisited vertex, at which point they settle. In the first process, only one particle moves until it settles and then the next starts; in the second process, all particles are released together.
We study the dispersion time which is the time taken for the longest walk to settle. We present a new coupling which allows us to compare dispersion time across the processes. We additionally prove bounds on the dispersion time(s) in terms of more well-studied parameters of random walks such as hitting times and the mixing time.
Speaker: Felipe Garrido, LAMSADE, Université Paris Dauphine - PSL
Title: Stable matching games
When: April 21, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/97375770036?pwd=TmxQVjRmRGRTYWFqd2FrcFNFNWdEdz09
Abstract: Gale and Shapley (1962) introduced a matching problem between two sets of agents M and W, who need to be paired by taking into account that each agent on one side of the market has an exogenous preference order over the agents of the other side. They defined a matching as stable if no unmatched pair can both improve their payoffs by breaking their couples and forming a new one. They proved the existence of a stable matching using a deferred-acceptance algorithm. Shapley and Shubik (1971) extended the model by allowing monetary transfers. Our article offers a further extension by assuming that matched couples obtain their payoff endogenously as the outcome of a strategic-form game they have to play. A matching, together with a strategy profile, is externally stable if no unmatched pair can break their couples, form a new one and play a strategy profile in their game such that both of them improve their payoffs. It is internally stable if no agent, by individually changing its strategy inside its couple, can increase its payoff without breaking the external stability of its couple. We prove the existence of an externally and internally stable matching in a large class of problems including zero-sum games, strictly competitive games, potential games and infinitely repeated games. We also prove that our main model encompasses and refines matching with monetary transfers as well as matching with contracts.
Speaker: Lars Rohwedder, EPFL Lausanne
Title: A (2 + epsilon)-approximation algorithm for preemptive weighted flow time on a single machine.
When: April 14, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/95471087976?pwd=UzFTQVBTRlFNRzNicWk4c0ZGeGZWUT09
Abstract: In a recent breakthrough in scheduling, Batra, Garg, and Kumar gave the first constant approximation algorithm for minimizing the sum of weighted flow times. Wiese and I (STOC'21) managed to improve this large unspecified constant to 2 + epsilon. I will give a very graphic presentation of the algorithmic techniques behind this.
Speaker: Maximilian Janke, TU Munich
Title: Scheduling in the Random-Order Model
When: March 31, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/95041369607?pwd=amRHU29ZS3AvSHNaQmc4MkdkelBXdz09
Abstract: We study Online Makespan Minimization, one of the most basic scheduling problems, in the random-order model. Here the jobs of a given input arrive in a uniformly random order, as opposed to the classical adversarial model, which considers worst-case orders. The random-order model originates from the Secretary Problem and has received considerable research interest in recent years.
For scheduling, the random-order model provides beyond worst-case guarantees while still not being overly pessimistic. Furthermore, it leads to a stronger performance measure, which considers 'nearly worst-case' orders. Recent results on Makespan Minimization in the random-order model outperform classical lower bounds, even using this stronger measure. This proves formally that classical worst-case sequences are highly order-reliant and should therefore be rare in practical applications.
In this talk we discuss the role of random-order arrival for Makespan Minimization. We compare recent random-order bounds with various classical online and semi-online guarantees, and introduce key techniques, which allow to profit from random-order arrival. These techniques should be relevant for further research on Random-Order Makespan Minimization and related problems.
This talk is based on the paper Scheduling in the Random-Order Model, which is joint work with Susanne Albers.
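To make the role of arrival order concrete, here is the classic greedy baseline for online makespan minimization, Graham's List Scheduling, together with two arrival orders of the same job set. This is standard background, not the algorithms from the talk:

```python
import heapq

def list_scheduling(jobs, m):
    """Graham's List Scheduling: assign each arriving job to the currently
    least-loaded of the m identical machines; return the makespan.
    This greedy is (2 - 1/m)-competitive under adversarial arrival orders."""
    loads = [0] * m
    heapq.heapify(loads)
    for p in jobs:
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)

# The same job set can yield different makespans depending on arrival order,
# which is exactly the effect the random-order model averages over.
print(list_scheduling([3, 3, 2, 2, 2], 2))  # 7 (the optimum is 6)
print(list_scheduling([3, 2, 2, 3, 2], 2))  # 6
```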
Speaker: Sebastián Pérez, Gatech
Title: Adaptive bin packing with overflow
When: March 24, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/95041369607?pwd=amRHU29ZS3AvSHNaQmc4MkdkelBXdz09
Abstract: Motivated by the allocation of virtual machines into servers in the cloud, we consider the online problem of packing items with random sizes into unit-capacity bins. Items arrive sequentially, but upon arrival an item’s actual size is unknown; only its probabilistic information is available to the decision maker. Without knowing this size, the decision maker must irrevocably pack the item into an available bin or place it in a new bin. Once packed in a bin, the decision maker observes the item’s actual size, and overflowing the bin is a possibility. An overflow incurs a large penalty cost and the corresponding bin is unusable for the rest of the process. In practical terms, this overflow models delayed services, failure of servers, and/or loss of end-user goodwill. The objective is to minimize the total expected cost given by the sum of the number of opened bins and the overflow penalty cost. In this talk, we present an online algorithm with expected cost at most a constant factor times the cost incurred by the optimal packing policy when item sizes are drawn from an i.i.d. sequence. We discuss the limits of our approach and extensions to offline settings, where distributions are known in advance but packed sequentially. Joint work with Mohit Singh and Alejandro Toriello.
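For contrast with the adaptive stochastic setting above, the classical deterministic baseline for online bin packing is First Fit, where item sizes are known on arrival and bins never overflow. A minimal sketch with hypothetical integer sizes:

```python
def first_fit(sizes, capacity):
    """Classical First Fit for online bin packing: place each arriving item
    into the first open bin where it fits, otherwise open a new bin.
    Deterministic baseline only; the talk's model reveals an item's random
    size only after it is packed, so overflow becomes possible there."""
    bins = []                      # current load of each open bin
    for s in sizes:
        for i in range(len(bins)):
            if bins[i] + s <= capacity:
                bins[i] += s
                break
        else:
            bins.append(s)         # no open bin fits: open a new one
    return len(bins)

print(first_fit([5, 7, 5, 2, 4, 2, 5, 1], 10))  # 4 bins
```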
Speaker: Sabine Limbourg, University of Liege
Title: City logistics and smart delivery
When: March 17, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/92175923247?pwd=QWlhZGpDUUVQSlBFbnNxZk5NNXlPdz09
Abstract: The current and expected growing number of people living and working in cities, together with the limited space available inside city centres, implies a greater exchange of inbound and outbound freight flows between city centres and their surrounding regions. Urban freight transport provides economic benefits to society but is also responsible for negative externalities such as congestion, air and water pollution, climate change, accidents and noise. Access restrictions are among the most widely applied measures to control urban traffic in specific areas of a city. A growing use of urban trucks based on electric, hydrogen and hybrid technologies, or of non-motorized transport such as bikes, reduces pollutant emissions, noise and road congestion by enabling night deliveries that avoid the morning and afternoon peak periods.
This seminar's objective is to discuss how a freight transport operator can efficiently distribute shipments to customers across several cities. This delivery company has several vehicles to carry products to customers or to small depots located at different points of the cities. We consider the whole distribution network, allowing us to make decisions at the level of the firm, the delivery companies and the small depots (Aguayo et al., 2020).
Then we consider a platform bringing together several freight transport operators. Freight bundling is a central characteristic of the systems in focus. Besides, pricing decisions can be simultaneously integrated by extending a bilevel programming formulation (Tawk and Limbourg, 2019). Finally, the Unmanned Aerial Vehicle, also known as a drone, is examined as a possible way to facilitate biomedical transportation. We deal with the drone network design problem for biomedical material transportation. Four location models are developed and applied to Brussels and its periphery with respect to the associated market in terms of biomedical product flows. In separate scenario-based case studies, the experiments show that charging stations are useful for extending mission ranges and gaining market share. The results also show the possibility of gradually implementing the bases without requiring any significant changes, such as closing a base (Dhote and Limbourg, 2020).
Acknowledgments
This work was supported by Wallonie-Bruxelles International.
Speaker: Chun-Hung Liu, Texas A&M University
Title: Well-quasi-ordering digraphs by the strong immersion relation
When: Wednesday, December 16, 15:00 hrs (Chilean time).
Link: https://zoom.us/j/97493917037?pwd=YkN0NkhJRHNhMXJOM0NWS3ViY0R1Zz09
Abstract: A well-quasi-ordering is a reflexive and transitive binary relation such that every infinite sequence has a non-trivial increasing subsequence. The study of well-quasi-ordering was stimulated by two conjectures of Vazsonyi from the 1940s: trees and subcubic graphs are well-quasi-ordered by the topological minor relation. It is known that the topological minor relation does not well-quasi-order all graphs. In the 1960s, Nash-Williams introduced the notion of strong immersion and conjectured that the strong immersion relation well-quasi-orders all graphs, a conjecture that generalizes both conjectures of Vazsonyi. In this talk we consider strong immersion on digraphs. Paths that change direction arbitrarily many times form an infinite antichain with respect to the strong immersion relation. We will prove that this is the only obstruction: for any integer k, digraphs with no paths that change direction at least k times are well-quasi-ordered by the strong immersion relation. Joint work with Irene Muzi.
Speaker: Gwen McKinley, UC San Diego
Title: Counting integer partitions with the method of maximum entropy
When: Wednesday, December 2, 15:00 hrs (Chilean time).
Link: https://zoom.us/j/98543137164?pwd=M1lUaUNPcm1LcnFocEZhRXMvWDRKQT09
Abstract: The problem of asymptotically counting integer partitions has a long and storied history, beginning with the pioneering work of Hardy and Ramanujan in 1918. In the work presented here, we give a probabilistic perspective on this problem, and use this approach to prove an asymptotic formula for the number of partitions of an integer n where the sums of the kth powers of the parts are also fixed, for some collection of values k. To obtain this result, we reframe the counting problem as an optimization problem, and find the probability distribution on the set of all integer partitions with maximum entropy among those that satisfy our restrictions in expectation (in essence, this is an application of Jaynes' principle of maximum entropy). This approach leads to an approximate version of our formula as the solution to a relatively straightforward optimization problem over real-valued functions. To establish more precise asymptotics, we prove a local central limit theorem.
A large portion of the talk will be devoted to outlining how our method can be used to re-derive a classical result of Hardy and Ramanujan, with an emphasis on the intuitions behind the method, and limited technical detail. This is joint work with Marcus Michelen and Will Perkins.
Speaker: José Samper, Pontificia Universidad Católica
Title: Matroids, polymatroids and submodular functions: algebra or optimization?
When: Wednesday, December 2, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/98780890672?pwd=VXpjdEM0Zk9KNWxBcDdKeFlsaEFnUT09
Abstract: Matroids, polymatroids and submodular functions are beautiful objects that have appeared in many areas of mathematics including algebra, geometry, topology and optimization. Consequently, they have been studied from multiple points of view. This oftentimes has the consequence that the same results are proven from different points of view and by very different communities. A notable example is their study in optimization and algebraic combinatorics. The goal of this talk is to tell this story, show how a small mixture of ideas from different communities leads to powerful algebraic tools, and speculate a little bit about the opposite direction.
Speaker: Santiago Armstrong, Pontificia Universidad Católica
Title: An optimal algorithm for strict circular seriation.
When: Wednesday November 25, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/91348424630?pwd=eSsvdEhlRU1EeXlWc0RxUnp1TjFKUT09
Abstract: Seriation is an exploratory combinatorial data analysis technique to reorder objects into a sequence along a one-dimensional continuum when the only available information among the objects is their pairwise similarity (or dissimilarity). Linear seriation aims at inferring an ordering (permutation) consistent with an underlying linear order of the data. In many cases, however, the data objects may be arranged around a closed continuum, yielding a circular underlying order. In a matrix representation, this can be visualized as a symmetric matrix of pairwise dissimilarities between objects where the elements of each row/column increase monotonically while moving to the right/bottom until some specific element, then decrease monotonically until the end of the row/column, and fold back from the left/top of the matrix. Many linear seriation algorithms involve computing the ball hypergraph associated to the dissimilarity. We show that a similar approach can be used in the circular case. In addition, we present an optimal algorithm for strict circular seriation, which is a natural setting for continuous data.
Speaker: Victor Verdugo, Universidad de O'Higgins
Title: Apportionment Mechanisms under Parity Constraints and the case of Chile and a new Political Constitution.
When: Wednesday November 18, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/93139273671?pwd=VWE1NEplWjRsWFJxSnRnNVVRMlZEUT09
Abstract: How many seats should be allocated to each political party in an election? This question has a long and rich history, and plays a fundamental role in ensuring appropriate levels of representation in parliaments. We consider this question in the presence of types (e.g. men and women), where apart from ensuring proportionality at the party level, we also have to find an allocation that satisfies parity across the types. We consider two different approaches: the first, following a greedy/local search paradigm, corresponds to a mechanism approved by the Chilean parliament in March 2020 to elect the representatives of the constitutional assembly tasked with designing a new political constitution for Chile (election to happen in April 2021); the second is based on the idea of integral biproportionality, introduced by Balinski and Demange in the late 80s. We analyze both mechanisms from an axiomatic and quantitative point of view. We also provide new results regarding integral biproportional solutions and the fair share, a.k.a. the matrix scaling solution, which represents the ideal (but not necessarily implementable) fractional biproportional solution.
Joint work with Claire Mathieu (CNRS & IRIF, France)
Speaker: Arturo Merino, TU-Berlin
Title: On the two-dimensional knapsack problem for convex polygons.
When: Wednesday October 28, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/95413798884?pwd=ait4K0J4WHFSZTVVMXdJN1VFVGFyZz09
Abstract: We study the two-dimensional geometric knapsack problem for convex polygons. Given a set of weighted convex polygons and a square knapsack, the goal is to select the most profitable subset of the given polygons that fits non-overlappingly into the knapsack. We allow rotating the polygons by arbitrary angles.
In this talk we will first present:
- a quasi-poly time O(1)-approximation algorithm, and
- a poly time O(1)-approximation algorithm if all input polygons are triangles.
We will then look into the setting of resource augmentation, i.e., we allow to increase the size of the knapsack by a factor of 1 + δ for some δ > 0 but compare ourselves with the optimal solution for the original knapsack. In the resource augmentation setting we present:
- a poly time O(1)-approximation algorithm, and
- a quasi-poly time algorithm that computes a solution of optimal weight.
To the best of our knowledge, these are the first results for two-dimensional geometric knapsack in which the input objects are more general than axis-parallel rectangles or circles and in which the input polygons can be rotated by arbitrary angles.
This is joint work with Andreas Wiese.
Speaker: Francisco Marmolejo, University of Oxford and IOHK.
Title: Strategic and Structural Aspects of Bitcoin Mining
When: Wednesday October 21, 14:30 hrs (Chilean time).
Link: https://zoom.us/j/92034905790?pwd=RjFlL29iQzIwQ1hkSnQrdG1uOTVXUT09
Abstract: Just over a decade ago, Satoshi Nakamoto published a landmark white paper outlining the backbone protocol to Bitcoin, arguably the world's first mainstream digital currency. Through the Nakamoto protocol and blockchain technology, users can reliably agree upon a common transaction history maintained by miners on the public Bitcoin ledger in a decentralised and anonymous fashion. This is possible in part because agents within the protocol are assumed to be strategic rather than adversarial---a fair assumption when creating a ledger for currency. The Nakamoto protocol is thus at its core a decentralised consensus protocol among strategic agents that is built from cryptographic primitives and judicious incentive engineering. In this talk I will give an overview of how game theory is an important tool to understand decentralised consensus protocols such as Bitcoin. Along the way, I will highlight recent work pertaining to strategic mining within "pools" of miners and inherent structural limitations of Bitcoin-like proof-of-work (PoW) consensus protocols.
Speaker: Santiago R. Balseiro, Columbia University.
Title: Dynamic Pricing of Relocating Resources in Large Networks
When: Wednesday October 07, 14:30 hrs (Chile time).
Link: https://zoom.us/j/96968142246?pwd=c1BleFZ3S0F3WXA1b1RGdGlYYWVRQT09
Abstract: Motivated by applications in shared vehicle systems, we study dynamic pricing of resources that relocate over a network of locations. Customers with private willingness-to-pay sequentially request to relocate a resource from one location to another, and a revenue-maximizing service provider sets a price for each request. This problem can be formulated as an infinite horizon stochastic dynamic program, but is quite difficult to solve, as optimal pricing policies may depend on the locations of all resources in the network. We first focus on networks with a hub-and-spoke structure, and we develop a dynamic pricing policy and a performance bound based on a Lagrangian relaxation. This relaxation decomposes the problem over spokes and is thus far easier to solve than the original problem. We analyze the performance of the Lagrangian-based policy and focus on a supply-constrained large network regime in which the number of spokes ($n$) and the number of resources grow at the same rate. We show that the Lagrangian policy loses no more than $O(\sqrt{\ln(n)/n})$ in performance compared to an optimal policy, thus implying asymptotic optimality as $n$ grows large. We also show that no static policy is asymptotically optimal in the large network regime. Finally, we extend the Lagrangian relaxation to provide upper bounds and policies to general networks with multiple, interconnected hubs and spoke-to-spoke connections, and to incorporate relocation times. We also examine the performance of the Lagrangian policy and the Lagrangian relaxation bound on some numerical examples, including examples based on data from RideAustin.
Joint work with David B. Brown (Duke University) and Chen Chen (University of Chicago).
Paper: https://ssrn.com/abstract=3313737
Speaker: Benjamin Moseley, Carnegie Mellon University.
When: Wednesday September 23, 14:30 hrs (Chile time).
Abstract: Combinatorial optimization often focuses on optimizing for the worst case. However, the best algorithm to use depends on the "relative inputs", which are application specific and often lack a formal definition.
The talk gives a new theoretical model for designing algorithms that are tailored to inputs for the application at hand. In the model, learning is performed on past problem instances to make predictions on future instances. These predictions are incorporated into the design and analysis of the algorithm. The predictions can be used to achieve "instance-optimal" algorithm design when the predictions are accurate and the algorithm's performance gracefully degrades when there is error in the prediction.
The talk will apply this framework to applications in online algorithm design and give algorithms with theoretical performance that goes beyond worst-case analysis. The majority of the talk will focus on load balancing on unrelated machines.
Speaker: Nicolas Stier-Moses, Co-Director of Facebook Core Data Science.
Title: Pacing Mechanisms For Ad Auctions
When: Wednesday September 30, 14:30 hrs (Chile time).
Abstract: Budgets play a significant role in real-world sequential auction markets such as those implemented by Internet companies. To maximize the value provided to auction participants, spending is smoothed across auctions so budgets are used for the best opportunities. Motivated by pacing mechanisms used in practice by online ad auction platforms, we discuss smoothing procedures that ensure that campaign daily budgets are consistent with maximum bids. Reinterpreting this process as a game between bidders, we introduce the notion of pacing equilibrium, and study properties such as existence, uniqueness, complexity and efficiency, both for the case of second and first price auctions. In addition, we connect these equilibria to more general notions of market equilibria, and study how compact representations of a market lead to more efficient approaches to compute approximate equilibria.
Papers:
Second price pacing: https://arxiv.org/abs/1706.07151
First price pacing: https://arxiv.org/abs/1811.07166
Abstractions: https://arxiv.org/abs/1901.06230
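The pacing idea in the abstract, where spending is smoothed by scaling bids, can be illustrated with a single budget-constrained bidder facing fixed competing bids in second-price auctions. The numbers and the bisection approach below are a hypothetical sketch, not the papers' equilibrium computation:

```python
def pacing_multiplier(values, competing_bids, budget):
    """Sketch: find (approximately) the largest alpha in [0, 1] such that
    bidding alpha * value in each second-price auction spends at most the
    budget. Spend is a step function of alpha, so bisection converges to
    the threshold. A single-bidder toy model, not a pacing equilibrium."""
    def spend(alpha):
        total = 0.0
        for v, b in zip(values, competing_bids):
            if alpha * v > b:      # wins the auction
                total += b         # pays the second price
        return total

    if spend(1.0) <= budget:       # budget is slack: no pacing needed
        return 1.0
    lo, hi = 0.0, 1.0              # spend(lo) feasible, spend(hi) not
    for _ in range(60):
        mid = (lo + hi) / 2
        if spend(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# With these (hypothetical) values the bidder must shade bids to alpha = 0.75
# to stay within budget: it then wins only the first auction, paying 5.
print(pacing_multiplier([10, 8], [5, 6], budget=5))
```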