Abstracts

19/6 Kristian Bredies

Accelerated and preconditioned Douglas-Rachford algorithms for the solution of variational imaging problems
(joint work with Hongpeng Sun)

We discuss basic, accelerated and preconditioned versions of the Douglas-Rachford (DR) splitting method for the solution of convex-concave saddle-point problems that arise in variational imaging. While the basic DR iteration admits weak and ergodic convergence with rate $O(1/k)$ for restricted primal-dual gaps, acceleration leads to convergence rates of $O(1/k^2)$ and $O(q^k)$ for some $0<q<1$ under appropriate strong convexity assumptions. Further, preconditioning makes it possible to replace the potentially expensive solution of a linear system in each step of the DR iteration by fast approximate solvers, without the need to control the error.

The methods are applied to non-smooth convex variational imaging problems. We discuss denoising and deconvolution with $L^2$ and $L^1$ discrepancy and total variation (TV) as well as total generalized variation (TGV) penalties. Problem-specific preconditioners are presented, results of numerical experiments are shown, and the benefits of the respective accelerated and preconditioned iteration schemes are discussed.
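
For illustration, a minimal sketch of the basic (unaccelerated, unpreconditioned) DR iteration on a toy $\ell_1$-regularized least-squares problem; the toy problem, step size and function names are illustrative and not the authors' formulation, but the per-iteration linear solve it contains is exactly the step that preconditioning targets:

    import numpy as np

    def soft_threshold(v, tau):
        # proximal operator of tau * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def douglas_rachford(A, b, lam, t=1.0, n_iter=200):
        # Basic DR iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
        # The prox of the quadratic term requires solving a linear system
        # with matrix (I + t*A^T A) in every iteration -- the expensive
        # step that preconditioning replaces by cheap approximate solves.
        m, n = A.shape
        M = np.eye(n) + t * (A.T @ A)
        rhs = t * (A.T @ b)
        z = np.zeros(n)
        for _ in range(n_iter):
            x = np.linalg.solve(M, z + rhs)           # prox of t*f at z
            y = soft_threshold(2.0 * x - z, t * lam)  # prox of t*g at 2x - z
            z = z + y - x                             # DR update of the auxiliary variable
        return x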

19/6 Zheng Qu

Restarting accelerated gradient methods with a rough strong convexity estimate

The proximal gradient method is known to converge linearly for convex composite minimization problems when the objective function satisfies a local quadratic growth condition, which greatly generalizes the strong convexity condition. Accelerated proximal gradient methods are known to converge with a worst-case complexity bound of $O(1/\sqrt{\epsilon})$. If in addition the problem is strongly convex with condition number $\kappa$, an accelerated linear convergence rate (worst-case complexity bound $O(\sqrt{\kappa}\log(1/\epsilon))$) can be achieved. These classical methods have been successfully extended to randomized coordinate updates to improve efficiency on large-scale problems.


We first show that, by using an adaptive restart strategy, the accelerated linear convergence rate can also be achieved under the local quadratic growth condition. Our method can be combined with any accelerated proximal gradient method having complexity bound $O(1/\sqrt{\epsilon})$. We also propose new restarting strategies for accelerated gradient and accelerated coordinate descent methods. In particular, we show that by changing the restarting point, the restarted method has a geometric rate of convergence for any restarting frequency, so that one can profit from restarting even without knowing the local quadratic growth condition number.
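
A minimal sketch of an accelerated proximal gradient (FISTA-type) iteration with an adaptive restart, as background for the restarting strategies discussed; the particular restart test shown is the classical gradient-mapping test of O'Donoghue and Candès, not necessarily the new restarting-point strategy of the talk, and grad_f, prox_g and the Lipschitz constant L are assumed given:

    import numpy as np

    def accelerated_prox_grad_restart(grad_f, prox_g, L, x0, n_iter=500):
        # FISTA with adaptive restart for min_x f(x) + g(x),
        # where grad_f(x) returns the gradient of f and
        # prox_g(v, step) returns the prox of step*g at v.
        x = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
            if np.dot(y - x_new, x_new - x) > 0.0:
                # restart: drop the momentum and start a new cycle
                y = x_new.copy()
                t = 1.0
            else:
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)
                t = t_new
            x = x_new
        return x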


22/5 Fabio D'Andreagiovanni

Multiband Robust Optimization: theory and applications

Over recent years, Robust Optimization (RO) has emerged as an effective and efficient methodology to tackle data uncertainty in real-world optimization problems. RO takes data uncertainty into account in the form of hard constraints that restrict the feasible set and retain only robust solutions, i.e. solutions that remain feasible even when the values of the input data change.

In this talk, I will provide an overview of my research on the theory and applications of RO. Specifically, I will present Multiband Robustness (Büsing and D'Andreagiovanni 2012), an RO model proposed to generalize and refine the classical Gamma-robustness model of Bertsimas and Sim. The main aim of the model is to provide an improved representation of arbitrary non-symmetric distributions of the uncertainty, in particular histograms, which are commonly present in real-world applications. Such a refined representation grants a reduction in the conservatism of robust solutions, while maintaining the accessibility and computational tractability that have been key factors in the success of Gamma-robustness. I will also provide an overview of applications of the Multiband model to real-world problems that I have considered in past and ongoing industrial research projects.
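
For reference, a sketch of the classical Gamma-robust counterpart that Multiband Robustness generalizes (this is the standard Bertsimas-Sim linearization; the multiband model replaces the single deviation band by a system of bands, each with its own bounds on the number of deviating coefficients, and is not reproduced here). An uncertain constraint $\sum_j \bar a_j x_j \le b$, in which at most $\Gamma$ coefficients may deviate by up to $\hat a_j$, becomes
$$\sum_j \bar a_j x_j + \Gamma z + \sum_j p_j \le b, \qquad z + p_j \ge \hat a_j |x_j| \ \ \forall j, \qquad z \ge 0, \ p \ge 0.$$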


22/5 Nelson Maculan

The Euclidean Steiner tree problem in n-space: Mixed-integer nonlinear optimization models

The Euclidean Steiner tree problem (ESTP) is to find a shortest network interconnecting $p$ given points in $n$-dimensional Euclidean space. The problem was first described in the plane, and an algorithm with very good practical efficiency has been proposed to solve this particular case. The generalization to higher dimensions was proposed in the 19th century; however, the numerical solution of the problem remains very challenging when $n \ge 3$. We present a way of formulating this problem using mixed-integer nonlinear optimization models, together with some computational results. A conjecture for given points placed at the vertices of an $n$-cube ($n=2,3,4,\dots$) is also discussed.

This is a joint work with Marcia Fampa, Virginia Costa and Luiz-Felipe R. Ribeiro da Cruz.
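
As a schematic illustration of the kind of mixed-integer nonlinear models involved (the notation and the big-$M$ linking below are illustrative; the formulations of the talk may differ), one can introduce Steiner point coordinates $x_i \in \mathbb{R}^n$, binary edge variables $y_{ij}$ describing a Steiner topology, and write
$$\min \sum_{(i,j)} d_{ij} \quad \text{s.t.} \quad d_{ij} \ge \|x_i - x_j\| - M\,(1-y_{ij}), \quad d_{ij} \ge 0, \quad y \in \mathcal{T}, \quad y_{ij} \in \{0,1\},$$
where $\mathcal{T}$ denotes the set of admissible Steiner topologies (each terminal of degree 1, each Steiner point of degree 3, the selected edges forming a tree) and $M$ bounds the largest relevant distance.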


24/4 András Sebő

What should we buy for Traveling Salesmen, for Hypertournaments and for Reliable Networks?

Starting with a beautiful theorem of Graham and Pollak (1972) about partitioning the edge-set of a graph into complete bipartite graphs (with Tverberg's elegant short proof (1982)) applied to ''hypertournaments'', continuing with recent results on reliable networks (minimum 2-edge-connected graphs, 2014-2017), and finishing with the best bound for traveling salesman paths (March 2017), we exhibit a classical structural tool occurring in novel and very useful ways.

I would like to show the tool, the way it is applied, and some ongoing research using it to improve the approximation ratio of some relevant optimization (or extremal) problems.


24/4 Leo Liberti

On the approximate solution of large dense Linear Programs

A well-known lemma of Johnson and Lindenstrauss states that, given a finite set $X$ of $n$ points in the Euclidean space $E^m$ of dimension $m$ and a small real tolerance $0<t<1$, there exist $k = O(t^{-2}\log n)$ and a mapping $f: E^m \to E^k$ such that, for all $x,y \in X$,
$$(1-t)\,\|x-y\| \;\le\; \|f(x)-f(y)\| \;\le\; (1+t)\,\|x-y\|.$$

We apply the Johnson-Lindenstrauss lemma to the columns of a Linear Program (LP), and show that the resulting projected LP approximates the original LP quite well. We showcase the application of this methodology to solving large (and rather dense) LPs. Joint work with Pierre-Louis Poirion and Vu Khac Ky.
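
A minimal numpy/scipy sketch of the idea for an LP in standard form $\min\{c^T x : Ax=b,\ x\ge 0\}$: the columns of $A$ (and the vector $b$) are projected by a Gaussian JL matrix, giving a smaller program whose value approximates that of the original with high probability. The constant in the choice of $k$ and the use of scipy's linprog are assumptions for illustration, not the authors' implementation:

    import numpy as np
    from scipy.optimize import linprog

    def solve_projected_lp(c, A, b, t=0.1, seed=0):
        # Project the equality constraints A x = b of min c^T x, x >= 0,
        # with a k x m Gaussian matrix T, k ~ O(t^-2 log n), and solve
        # the smaller projected LP  min c^T x  s.t.  (T A) x = T b, x >= 0.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        k = min(m, int(np.ceil(4.0 / t**2 * np.log(n + 1))))  # illustrative constant
        T = rng.standard_normal((k, m)) / np.sqrt(k)           # JL projector
        return linprog(c, A_eq=T @ A, b_eq=T @ b, bounds=(0, None), method="highs")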


27/3 Victor Chepoi

Every hyperbolic network has a core


We investigate the impact of negative curvature on traffic congestion in large-scale networks. We prove that every Gromov hyperbolic network $G$ admits a core, thus answering in the positive a conjecture of Jonckheere, Lou, Bonahon, and Baryshnikov (Internet Mathematics, 7 (2011)), which is based on the experimental observation by Narayan and Saniee (Physical Review E, 84 (2011)) that real-world networks with small hyperbolicity have a core congestion. Namely, we prove that for every subset $X$ of $n$ vertices of a graph $G$ with $\delta$-thin geodesic triangles there exists a vertex $m$ of $G$ such that the ball $B(m,4\delta)$ of radius $4\delta$ centered at $m$ intercepts at least one half of the total flow between all pairs of vertices of $X$, where the flow between two vertices $x,y$ of $X$ is carried by geodesic (or quasi-geodesic) $(x,y)$-paths. We show that the point $m$ can be constructed in the following way: take a median point $m^*$ of $X$ (a point minimizing the sum of distances to the points of $X$) in the injective hull of $G$, and then a point of $G$ closest to $m^*$.

Moreover, we prove a primal-dual result showing that, for any commodity graph $R$ on $X$ and any $r \ge 8\delta$, the size $\sigma_r(R)$ of the least $r$-multi-core (i.e., the number of balls of radius $r$) intercepting all pairs of $R$ is upper bounded by the maximum number of pairwise $(2r-5\delta)$-apart pairs of $R$, and that an $r$-multi-core of size $\sigma_{r-5\delta}(R)$ can be computed in polynomial time. Our result about multi-cores is based on a general Helly-type theorem for quasiconvex sets in $\delta$-hyperbolic graphs (this is our second main result). Namely, we show that for any finite collection $Q$ of pairwise intersecting $\epsilon$-quasiconvex sets of a $\delta$-hyperbolic graph $G$ there exists a single ball $B(c,2\epsilon+5\delta)$ intersecting all sets of $Q$. Using the Helly theorem for quasiconvex sets and a primal-dual approach, we show algorithmically that the minimum number of balls of radius $2\epsilon+5\delta$ intersecting all sets of a family $Q$ of $\epsilon$-quasiconvex sets does not exceed the packing number of $Q$ (the maximum number of pairwise disjoint sets of $Q$).

27/3 Nicolas Le Roux

Optimize what you care about (and stop using the log-loss)

Machine Learning systems are increasingly complex and often comprise many interconnected components. When working on a single one of these components, one should make sure that any modification to that component improves the whole system.
I will discuss how, when performing classification, the log-loss does not satisfy these guarantees. I will then present a simple modification which not only improves the classification error, but also allows the seamless integration of external constraints as well as the possibility for the system to deal with its own uncertainty.
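
A toy numerical illustration of the mismatch (the example is ours, not the talk's construction): of the two classifiers below, the one with the lower log-loss makes more classification errors.

    import numpy as np

    def log_loss(p, y):
        # mean negative log-probability assigned to the true labels
        return -np.mean(np.log(p[np.arange(len(y)), y]))

    def error_rate(p, y):
        return np.mean(p.argmax(axis=1) != y)

    y = np.array([0, 0, 0, 1])  # true labels of four examples
    p_a = np.array([[0.55, 0.45], [0.55, 0.45], [0.55, 0.45], [0.05, 0.95]])
    p_b = np.array([[0.45, 0.55], [0.95, 0.05], [0.95, 0.05], [0.30, 0.70]])

    print(log_loss(p_a, y), error_rate(p_a, y))  # ~0.46 log-loss, 0% errors
    print(log_loss(p_b, y), error_rate(p_b, y))  # ~0.31 log-loss, 25% errors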


27/2 Claudia d'Ambrosio

Classical formulations and strong valid inequalities for the pooling problem

The focus of this talk will be on a classical continuous, non-convex optimization problem arising in petroleum engineering, namely the standard pooling problem. Three different classical formulations have been proposed in the literature; their characteristics will be discussed and analysed. Moreover, strong relaxations for the pooling problem will be presented. In particular, we studied a structured non-convex subset of some special cases to derive valid nonlinear convex inequalities, which we proved define the convex hull of the non-convex subset. Preliminary computational results on instances from the literature are reported and demonstrate the utility of the inequalities.
This is a joint work with Jeff Linderoth (University of Wisconsin-Madison), James Luedtke (University of Wisconsin-Madison), Jonas Schweiger (IBM).
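
Schematically, the source of non-convexity in standard pooling formulations is the bilinear coupling between pool qualities and flows (notation illustrative, direct input-output arcs omitted): with flows $x_{il}$ from inputs to pools, flows $y_{lj}$ from pools to outputs, input qualities $\lambda_{iw}$ and pool quality variables $q_{lw}$,
$$\sum_i \lambda_{iw}\, x_{il} \;=\; q_{lw} \sum_j y_{lj} \quad \text{(quality balance at pool } l\text{)}, \qquad \sum_l q_{lw}\, y_{lj} \;\le\; \mu_{jw} \sum_l y_{lj} \quad \text{(specification at output } j\text{)},$$
where $\mu_{jw}$ is the maximum admissible value of attribute $w$ at output $j$; the products $q_{lw}\,y_{lj}$ are what the classical formulations handle in different ways.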

27/2 Joseph Salmon

Gap Safe screening rules for sparsity enforcing penalties

In the high-dimensional regression context, sparsity-enforcing penalties have proved useful to regularize the data-fitting term. A recently introduced technique, screening rules, leverages the expected sparsity of the solutions by ignoring some variables in the optimization, hence leading to solver speed-ups. When the procedure is guaranteed not to discard features wrongly, the rules are said to be safe. We propose a unifying framework that can cope with generalized linear models regularized with standard sparsity-enforcing penalties such as the $\ell_1$ or $\ell_1/\ell_2$ norms. Our technique allows one to safely discard more variables than previously proposed safe rules, particularly for low regularization parameters. Our proposed Gap Safe rules (so called because they rely on duality-gap computations) can cope with any iterative solver but are particularly well suited to block coordinate descent for many standard learning tasks: Lasso, Sparse-Group Lasso, multi-task Lasso, binary and multinomial logistic regression, etc. For all such tasks and on all tested datasets, we report significant speed-ups compared to previously proposed safe rules.
This is joint work with Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort and Vincent Leclere.
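
A minimal numpy sketch of the Gap Safe test for the plain Lasso, the simplest instance of the framework; the dual point obtained by rescaling the residual and the sphere radius follow the standard presentation of Gap Safe rules, and details of the general setting of the talk may differ:

    import numpy as np

    def gap_safe_screen_lasso(X, y, beta, lam):
        # Gap Safe screening for min_b 0.5*||y - X b||^2 + lam*||b||_1.
        # Returns a boolean mask: True means the feature can be safely
        # discarded (its coefficient is zero at every solution).
        residual = y - X @ beta
        # dual feasible point obtained by rescaling the residual
        theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
        primal = 0.5 * residual @ residual + lam * np.abs(beta).sum()
        dual = 0.5 * (y @ y) - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
        gap = max(primal - dual, 0.0)
        radius = np.sqrt(2.0 * gap) / lam
        # safe sphere test: |X_j^T theta| + radius * ||X_j|| < 1
        return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0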

16/1 Emmanuel Trélat

Turnpike property in optimal control

Turnpike refers to the fact that, over long time, the optimal solution of a given optimal control problem remains essentially close to a steady-state, itself being the optimal solution of an associated static optimization problem. I will review this property and give some results for problems in finite and infinite dimension, and variants concerning periodic turnpike or shape turnpike (co-authors: Martin Gugat, Can Zhang, Enrique Zuazua).


16/1 Francis Bach

Stochastic Variance Reduction Methods for Saddle-Point Problems 

We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods to provide the first large-scale linearly convergent algorithms for this class of problems which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities, (b) the split does need to be done with convex-concave terms. (Joint work with P. Balamurugan).

12/12 Vladimir Gurvich

Metric and ultrametric spaces of resistance


12/12 Xavier Allamigeon

Log-barrier interior-point methods are not strongly polynomial

We identify a class of path-following interior-point methods which are not strongly polynomial. This class consists of primal-dual interior-point methods whose iterates stay in the so-called ``wide'' neighborhood of the central path arising from the logarithmic barrier. It includes short-step, long-step, as well as predictor-corrector types of interior-point methods.
We establish a lower bound on the number of iterations of these methods when they are applied to some parametric families of linear programs. The lower bound is expressed in terms of the number of tropical segments in a piecewise-linear curve arising from the tropicalization of the central path. In this way, we derive that the aforementioned interior-point methods require $\Omega(2^d)$ iterations to solve a linear program in dimension $O(d)$ that we studied in a previous work.






