Fall 2016

The concise schedule is below, followed by a longer version with the abstracts.

Concise schedule:

  • Aug 23 2016, Welcome meeting, and Jessica Gronski (CU): Nuclear Norms for Collaborative Filtering
  • Aug 30 2016, Paul Constantine (Colorado School of Mines): Accelerating heuristics for nonconvex optimization with active subspaces (abstract)
  • Sep 6 2016, Derek Driggs (CU), Farhad Pourkamali-Anaraki (CU)
  • Sep 13 2016, no seminar (see Fernando Perez speak in the CS colloquium)
  • Sep 20 2016, Jacob Abernethy, U. Michigan Ann Arbor: On the equivalence of simulated annealing and interior point path following for optimization (abstract)
  • Sep 27 2016, Bo Waggoner (U Penn): What Dice Are These? (abstract)
  • Oct 4 2016, Kamalika Chaudhuri, UCSD: Challenges in Privacy-Preserving Data Analysis (abstract)
  • Oct 11 2016, James Folberth (CU) and Jake Knigge (CU and CoBank), "A Brief Introduction to Optimization Algorithms on Matrix Manifolds" and "Give me an error bar! Bootstrapping big data"
  • Oct 18 2016, Anastasios Kyrillidis (UT Austin): Finding low-rank solutions via the Burer-Monteiro approach, efficiently and provably (abstract)
  • Oct 25 2016, Adam Bloniarz (Google, Boulder): Lasso adjustments of treatment effects in randomized experiments (abstract)
  • Nov 1 2016, Jason and Jon (CU)
  • Nov 8 2016, Dan Milroy, Michelle Maiden (CU)
  • Nov 15 2016, Osman Malik (CU)
  • Nov 22 2016, Thanksgiving break, no seminar
  • Nov 29 2016, Hunaid Contractor, Peter Shaffery (CU)
  • Dec 6 2016, Lijun Chen (CU-Boulder): The Weighted Sum Rate Maximization in MIMO Interference Networks: Minimax Lagrangian Duality and Algorithm (abstract)

Abstracts

SPEAKER: Paul Constantine, Ben L. Fryrear Assistant Professor of Applied Mathematics and Statistics at Colorado School of Mines

TITLE: Accelerating heuristics for nonconvex optimization with active subspaces

TIME: 3:30 PM, Tuesday, 30 August 2016

PLACE: Newton Lab

ABSTRACT

Scientists and engineers use computer simulations to study relationships between a physical model's input parameters and its outputs. However, thorough parameter studies---e.g., constructing response surfaces, optimizing, or averaging---are challenging, if not impossible, when the simulation is expensive and the model has several inputs. To enable studies in these instances, the engineer may attempt to reduce the dimension of the model's input parameter space. Active subspaces are part of an emerging set of dimension reduction tools that identify important directions in the parameter space. I will describe methods for discovering a model's active subspace and propose strategies for exploiting the reduced dimension to enable otherwise infeasible parameter studies. For more information, see activesubspaces.org.

In particular, I will discuss how the low-dimensional structure revealed by an active subspace may be exploited to accelerate computational heuristics for optimizing a nonlinear, nonconvex function of several variables.
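
As a concrete illustration (my own sketch, not code from the talk): the standard recipe for estimating an active subspace is to sample gradients of the model and take the dominant eigenvectors of their empirical second-moment matrix. Here quad_model_grad is a hypothetical test model whose output varies only along one direction.

```python
import numpy as np

# A minimal sketch of active-subspace discovery via gradient sampling,
# assuming we can evaluate the gradient of the model f at sampled inputs.
# The important directions are the dominant eigenvectors of
# C = E[ grad f(x) grad f(x)^T ].

def quad_model_grad(x):
    # Hypothetical test model: f(x) = (a^T x)^2 varies only along a.
    a = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
    return 2.0 * (a @ x) * a

rng = np.random.default_rng(0)
m = 5                                # input dimension
grads = np.array([quad_model_grad(rng.uniform(-1, 1, m)) for _ in range(200)])
C = grads.T @ grads / len(grads)     # Monte Carlo estimate of C
eigvals, eigvecs = np.linalg.eigh(C)

# A large gap in the (ascending) eigenvalues suggests a low-dimensional
# active subspace spanned by the trailing eigenvectors.
print(eigvals)
print(eigvecs[:, -1])                # estimated active direction (about a/||a||)
```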

SPEAKER: Jacob Abernethy; Department of Electrical Engineering and Computer Science; University of Michigan, Ann Arbor

TITLE: On the equivalence of simulated annealing and interior point path following for optimization

TIME: 3:30 PM, Tuesday, 20 September 2016

PLACE: DUAN G2B21

ABSTRACT:

A well-studied deterministic algorithmic technique for convex optimization is the class of so-called “interior point methods” of Nesterov and Nemirovski, which involve taking a sequence of Newton steps along the “central path” towards the optimum. An alternative randomized method, known as simulated annealing, involves performing a random walk around the set while “cooling” the stationary distribution towards the optimum. We will show that these two methods are, in a certain sense, fully equivalent: both techniques can be viewed as different types of path following. This equivalence allows us to get an improved state-of-the-art rate for simulated annealing, and provides a new understanding of random walk methods using barrier functions.
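
A toy numerical sketch of the annealing side of this correspondence (an illustration under simplifying assumptions, not the talk's analysis): Metropolis sampling from densities proportional to exp(-f(x)/T) while cooling T, so that the iterates drift toward the minimizer much as points on a central path do. Clipping proposals to the box is a crude way to stay in the feasible set.

```python
import numpy as np

# Simulated annealing minimizes f by sampling from densities proportional
# to exp(-f(x)/T) while cooling T toward 0; the means of these stationary
# distributions trace a path toward the minimizer, loosely analogous to an
# interior-point central path.

def f(x):
    return (x - 2.0) ** 2 + 0.5 * x  # a simple convex objective on [0, 4]

rng = np.random.default_rng(1)
x, T = 0.5, 5.0
for step in range(5000):
    proposal = np.clip(x + rng.normal(0, 0.3), 0.0, 4.0)  # stay in the set
    # Metropolis accept/reject targeting exp(-f(x)/T)
    if rng.random() < np.exp(min(0.0, (f(x) - f(proposal)) / T)):
        x = proposal
    T = max(1e-3, T * 0.999)         # geometric cooling schedule

print(x)  # should be near argmin f = 1.75
```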

SPEAKER: Bo Waggoner, Department of Computer Science, University of Pennsylvania

TITLE: What Dice Are These?

TIME: 3:30 PM, Tuesday, 27 September 2016

PLACE: DUAN G2B21

ABSTRACT:

You are handed a die with n sides and asked to describe some property of it. Is it fair or biased? If biased, what is its entropy? Can you write down its distribution over sides, and how precisely? How similar is its distribution to that of a second die?

The only thing you're allowed to do is roll the die or dice. The question is how many rolls you need to answer these questions with a given accuracy. I'll describe what we know about these kinds of problems and the algorithms and techniques used. Almost none of this talk will be my own work, but I hope you'll enjoy it and learn something new.
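
For concreteness, here is a minimal sketch (my own illustration) of the naive plug-in approach to two of these questions: estimating the die's distribution and its entropy from rolls. How many rolls such estimators need for a given accuracy, and whether smarter estimators can do better, is exactly the subject of the talk.

```python
import numpy as np

# Plug-in estimation of a die's distribution and entropy from rolls.
# The plug-in estimator is simple but not always sample-optimal.

rng = np.random.default_rng(2)
n = 6
true_p = np.array([0.25, 0.25, 0.2, 0.1, 0.1, 0.1])   # hypothetical biased die
rolls = rng.choice(n, size=10_000, p=true_p)

counts = np.bincount(rolls, minlength=n)
p_hat = counts / counts.sum()                          # empirical distribution
entropy_hat = -np.sum(p_hat[p_hat > 0] * np.log2(p_hat[p_hat > 0]))

print(p_hat)          # empirical distribution over the n sides
print(entropy_hat)    # plug-in entropy estimate, in bits
```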

SPEAKER: Kamalika Chaudhuri, Assistant Professor, University of California, San Diego

TITLE: Challenges in Privacy-Preserving Data Analysis

TIME: 3:30 PM, Tuesday, 4 October 2016

PLACE: MATH 100

ABSTRACT: Machine learning algorithms increasingly work with sensitive information on individuals, and hence the problem of privacy-preserving data analysis -- how to design data analysis algorithms that operate on the sensitive data of individuals while still guaranteeing the privacy of individuals in the data -- has become of great practical importance. In this talk, we address two problems in privacy-preserving data analysis.

First, we address the problem of privacy-preserving classification and present an efficient classifier which is private in the differential privacy model of Dwork et al. Our classifier works in the ERM (empirical risk minimization) framework, and includes privacy-preserving logistic regression and privacy-preserving support vector machines.
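
As a hedged illustration of private ERM, here is a sketch of the output-perturbation mechanism (one mechanism in this line of work, not necessarily the one presented in the talk): add noise, scaled to the sensitivity of the regularized minimizer, to the non-private solution. The constants assume loss gradients bounded by 1 and data scaled to the unit ball.

```python
import numpy as np

# Output perturbation for private ERM: for strongly convex, L2-regularized
# ERM with 1-Lipschitz losses on normalized data, the L2 sensitivity of the
# minimizer is 2/(n*lam), so adding suitably scaled noise gives
# eps-differential privacy.

def private_erm_output_perturbation(w_hat, n, lam, eps, rng):
    d = w_hat.shape[0]
    # Sample noise with density proportional to exp(-eps * ||b|| / sensitivity):
    # direction uniform on the sphere, norm ~ Gamma(d, sensitivity/eps).
    sensitivity = 2.0 / (n * lam)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=sensitivity / eps)
    return w_hat + norm * direction

rng = np.random.default_rng(3)
w_hat = np.array([0.8, -0.3, 0.1])   # non-private ERM solution (assumed given)
w_priv = private_erm_output_perturbation(w_hat, n=5000, lam=0.01, eps=0.5, rng=rng)
print(w_priv)
```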

We next address the question of learning from sensitive correlated data, such as private information on users connected together in a social network, or measurements of the physical activity of a single user across time. Unfortunately, differential privacy cannot adequately address the privacy challenges in this kind of data, and as such, these challenges have been largely ignored by the existing literature. We consider a recent generalization of differential privacy, called Pufferfish, that can be used to address privacy in correlated data, and present new privacy mechanisms in this framework. This is based on joint work with Claire Monteleoni (George Washington University), Anand Sarwate (Rutgers), Yizhen Wang (UCSD) and Shuang Song (UCSD).

SPEAKER: Anastasios (Tasos) Kyrillidis, Simons Fellowship postdoctoral researcher in the WNCG group at the University of Texas at Austin

TITLE: Finding low-rank solutions via the Burer-Monteiro approach, efficiently and provably

DATE: October 18 2016

ABSTRACT:

A low-rank matrix can be written as the product of two tall matrices, so the total number of variables is much smaller. One can exploit this observation in optimization: e.g., consider the minimization of a convex function f over low-rank matrices, where the low-rank set is modeled via such a factorization. This is not the first time such a heuristic has been used in practice. The reasons for this choice are three-fold: (i) it might better model an underlying task (e.g., f may have arisen from a relaxation of a rank constraint in the first place), (ii) it might lead to computational gains, since a smaller rank means fewer variables to maintain and optimize, and (iii) it might lead to statistical “gains”, as it can prevent over-fitting in machine learning or inference problems.

Such a parameterization comes at a “cost”, though: the objective is now a bilinear function over the variables, and thus non-convex with respect to the factors. Such cases are known to be much harder to analyze. In this talk, we will discuss and address some of the issues raised by working directly with such a parameterization: How does the geometry of the problem change after this transformation? What can we say about global and local minima? Does this parameterization introduce spurious local minima? Does initialization over the factors play a key role, and how can we initialize in practice? Can we claim any convergence guarantees under mild assumptions on the objective f? And if so, at what rate?
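
To make the setup concrete, here is a minimal sketch (a toy choice of f, not the talk's algorithms): factored gradient descent for the PSD case X = UU^T with f(X) = ||X - M||_F^2, using a small random initialization. Whether and when such plain descent on the factors provably works is exactly what the talk's questions probe.

```python
import numpy as np

# Burer-Monteiro idea: replace a rank-r PSD variable X with the
# factorization X = U U^T and run gradient descent on U directly.
# The objective is non-convex in U, which is the difficulty being analyzed
# (geometry, spurious minima, initialization, convergence rates).

rng = np.random.default_rng(4)
d, r = 20, 3
A = rng.normal(size=(d, r))
M = A @ A.T                          # ground-truth rank-r PSD matrix

U = 0.1 * rng.normal(size=(d, r))    # small random initialization
step = 0.002
for _ in range(2000):
    residual = U @ U.T - M
    U -= step * 4.0 * residual @ U   # gradient of ||UU^T - M||_F^2 w.r.t. U

print(np.linalg.norm(U @ U.T - M))   # should be near zero if descent succeeded
```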

SPEAKER: Adam Bloniarz (Google, Boulder)

TITLE: Lasso adjustments of treatment effects in randomized experiments

DATE: October 25 2016

ABSTRACT:

We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the Lasso may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman-Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Notably, a variant that uses the Lasso for selection and OLS for estimation performs particularly well, choosing its smoothing parameter based on the combined performance of Lasso and OLS.
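
A minimal sketch of the flavor of estimator being studied (a simplified regression-imputation variant, not necessarily the paper's exact estimator; the function name and the fixed alpha are my own choices):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Lasso adjustment of the difference-in-means estimator: regress outcomes on
# centered covariates within each arm, then use the fitted predictions to
# reduce the variance of the treatment-effect estimate.

def lasso_adjusted_ate(X, y, treated, alpha=0.1):
    Xc = X - X.mean(axis=0)                        # center covariates
    model_t = Lasso(alpha=alpha).fit(Xc[treated], y[treated])
    model_c = Lasso(alpha=alpha).fit(Xc[~treated], y[~treated])
    # Impute both potential outcomes for every unit, average the difference,
    # and add the within-arm residual means as a correction.
    return (model_t.predict(Xc) - model_c.predict(Xc)).mean() + \
           (y[treated] - model_t.predict(Xc[treated])).mean() - \
           (y[~treated] - model_c.predict(Xc[~treated])).mean()

rng = np.random.default_rng(5)
n, p = 200, 500                                    # more covariates than units
X = rng.normal(size=(n, p))
treated = rng.random(n) < 0.5
tau = 1.0                                          # true treatment effect
y = X[:, 0] * 2.0 + tau * treated + rng.normal(scale=0.5, size=n)

print(y[treated].mean() - y[~treated].mean())      # unadjusted estimate
print(lasso_adjusted_ate(X, y, treated))           # adjusted, lower variance
```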

SPEAKER: Lijun Chen, Assistant Professor of Computer Science and Telecommunications at the University of Colorado at Boulder

TITLE: The Weighted Sum Rate Maximization in MIMO Interference Networks: Minimax Lagrangian Duality and Algorithm

DATE: Dec 6 2016

ABSTRACT: We take a new approach to the weighted sum-rate maximization in multiple-input multiple-output (MIMO) interference networks, by formulating an equivalent max-min problem. This reformulation has significant implications: the Lagrangian duality of the equivalent max-min problem provides an elegant way to establish the sum-rate duality between an interference network and its reciprocal, and more importantly, suggests a novel iterative minimax algorithm with monotonic convergence for the weighted sum-rate maximization. The design and the convergence proof of the algorithm use only general convex analysis. They apply and extend to other max-min problems with similar structure, and thus provide a general class of algorithms for such optimization problems.
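
For readers unfamiliar with the setup, here is a minimal sketch of the weighted sum-rate objective being maximized (just the objective; the talk's minimax algorithm is not reproduced here). H[i][j] denotes the channel from transmitter j to receiver i, and Q[j] the transmit covariance of link j; the channels here are real-valued for simplicity.

```python
import numpy as np

# Weighted sum rate of a MIMO interference network: link i's rate is
# log det(I + H_ii Q_i H_ii^H R_i^{-1}), where R_i collects noise plus
# interference from the other transmitters' covariances Q_j.

def weighted_sum_rate(H, Q, weights, noise_var=1.0):
    L = len(Q)                                   # number of links
    total = 0.0
    for i in range(L):
        nr = H[i][i].shape[0]
        R = noise_var * np.eye(nr)               # noise covariance
        for j in range(L):
            if j != i:
                R += H[i][j] @ Q[j] @ H[i][j].conj().T   # interference
        S = H[i][i] @ Q[i] @ H[i][i].conj().T            # desired signal
        rate = np.log2(np.linalg.det(np.eye(nr) + S @ np.linalg.inv(R)))
        total += weights[i] * rate
    return total

rng = np.random.default_rng(6)
L, nt, nr = 2, 2, 2
H = [[rng.normal(size=(nr, nt)) for _ in range(L)] for _ in range(L)]
Q = [np.eye(nt) / nt for _ in range(L)]          # unit-power covariances
print(weighted_sum_rate(H, Q, weights=[1.0, 0.5]))
```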