Fall 2018

Quick info:

  • To sign up for the mailing list (you do not need to be enrolled in the seminar), visit our Google Groups website.
  • The seminar meets Tuesdays, 3:30 to 4:30 PM in the Newton Lab (ECCR 257) in the Classroom Wing of the Engineering Center on the CU Boulder main campus.


Announcements:

  • Our Oct 2 meeting will start at 3:00 PM and has been moved to DLC 170
  • Our Sept 18 meeting has been moved to Sept 19 at 3:00 PM in ECOT 226

List of Talks

  • Oct 16 - "Exploiting Low-Dimensional Structure in Optimization Under Uncertainty" (Jeffrey Hokanson)
  • Oct 9 - "Fast Rates for Unbounded Losses: from ERM to Generalized Bayes" (Nishant Mehta)
  • Oct 2 - "Fast Algorithms and Community Software for Physical Prediction, Inference, and Design" (Jed Brown)
  • Sept 25 - "Online Optimization with Feedback" (Emiliano Dall'Anese)
  • Sept 19 - "Learning from Large-Scale Spatiotemporal Data" (Rose Yu)
  • Sept 11 - "CHOMP: Gradient Optimization Techniques for Efficient Motion Planning" (Carl Mueller), "Low-rank Tucker decomposition of large tensors using TensorSketch" (Osman Malik)
  • Sept 4 - "Chance Constraints for Smart Buildings and Smarter Grids" (Kyri Baker)
  • Aug 28 - Organizational


Abstracts

Oct 16

Title: Exploiting Low-Dimensional Structure in Optimization Under Uncertainty

Speaker: Jeffrey Hokanson

Abstract: In computational science, optimization under uncertainty (OUU) provides a new methodology for building designs that are reliable under a variety of conditions, with improved efficiency over a traditional, safety-factor-based approach. However, the resulting optimization problems can be challenging. For example, chance constraints bounding the probability that an arbitrary function exceeds a threshold are difficult to handle; in the absence of exploitable structure, they require estimation via Monte Carlo, which is both noisy and expensive. In this talk I present joint work with Paul Constantine on solving OUU problems with chance constraints using a response surface approach, with application to a multiphysics model of a jet nozzle.

Our approach requires solving new optimization problems at several steps: building response surfaces from (1) polynomial ridge approximations with samples chosen using (2) a new sampling strategy, and (3) exploiting structure to replace chance constraints with bounding linear inequality constraints. For (1), we show how a polynomial ridge approximation can be constructed by solving an optimization problem over the Grassmann manifold using a Gauss-Newton approach. For (2), we build a new sampling strategy based on a variable metric induced by the Lipschitz matrix: a new analog of the scalar Lipschitz constant that can reduce the effective intrinsic dimension. We show how the Lipschitz matrix can be estimated from either samples or gradients using a semidefinite program. For (3), we show how exploiting the low-dimensional structure in our response surfaces allows us to conservatively approximate the chance constraints with inequalities that incorporate a data-driven safety factor. This yields a linear program approximating the original OUU problem. This approach requires few expensive function evaluations to construct a design that, although not optimal, satisfies the chance constraints with a small probability of failure in our jet nozzle application.
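
As a rough illustration of item (2), the sketch below estimates a squared Lipschitz matrix from samples via a semidefinite program, posed here with the cvxpy modeling package. Minimizing the trace subject to all pairwise sample constraints is one plausible formulation consistent with the abstract, not necessarily the exact program used in the talk.

    import cvxpy as cp
    import numpy as np

    def squared_lipschitz_matrix(X, f):
        # X: (N, m) sample points, f: (N,) function values.
        # Estimate a PSD matrix M = L^T L such that, for every sample pair,
        # (f_i - f_j)^2 <= (x_i - x_j)^T M (x_i - x_j), minimizing tr(M).
        N, m = X.shape
        M = cp.Variable((m, m), PSD=True)
        constraints = [
            (f[i] - f[j]) ** 2 <= cp.quad_form(X[i] - X[j], M)
            for i in range(N) for j in range(i + 1, N)
        ]
        cp.Problem(cp.Minimize(cp.trace(M)), constraints).solve()
        return M.value

The dominant eigendirections of the estimated matrix then indicate which directions the function is most sensitive to, which is the sense in which the Lipschitz matrix can reduce the effective intrinsic dimension for sampling.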


Oct 9

Title: Fast Rates for Unbounded Losses: from ERM to Generalized Bayes

Speaker: Nishant Mehta

Abstract: I will present new excess risk bounds for randomized and deterministic estimators, discarding boundedness assumptions to handle general unbounded loss functions like log loss and squared loss under heavy tails. These bounds have a PAC-Bayesian flavor in both derivation and form, and their expression in terms of the information complexity forms a natural connection to generalized Bayesian estimators. The bounds hold with high probability and a fast $\tilde{O}(1/n)$ rate in parametric settings, under the recently introduced 'central' condition (or various weakenings of this condition with consequently weaker results) and a type of 'empirical witness of badness' condition. The former conditions are related to the Tsybakov margin condition in classification and the Bernstein condition for bounded losses, and they help control the lower tail of the excess loss. The 'witness' condition is new and suitably controls the upper tail of the excess loss. These conditions and our techniques revolve tightly around a pivotal concept, the generalized reversed information projection, which generalizes the reversed information projection of Li and Barron. Along the way, we connect excess risk (a KL divergence in our language) to a generalized Rényi divergence, generalizing previous results connecting Hellinger distance to KL divergence.
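
Schematically, and with details simplified relative to the talk, the generalized Bayesian estimator and the flavor of the resulting bound can be written as

    \[
      \hat\pi_n(f) \;\propto\; \pi(f)\,\exp\!\Big(-\eta \sum_{i=1}^n \ell_f(Z_i)\Big),
      \qquad
      \mathbb{E}_{f\sim\hat\pi_n}\!\big[R(f) - R(f^*)\big]
      \;\le\; \tilde O\!\left(\frac{\mathrm{KL}(\hat\pi_n \,\|\, \pi) + \log(1/\delta)}{\eta\, n}\right)
      \quad\text{with probability at least } 1-\delta,
    \]

where $\pi$ is a prior, $\eta$ a learning rate, $\ell_f$ the loss of predictor $f$, and $R$ the risk; the KL term plays the role of the information complexity mentioned above. This is only a caricature of the actual theorems, which require the central and witness conditions to obtain the fast rate.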

Oct 2

Title: Fast Algorithms and Community Software for Physical Prediction, Inference, and Design

Speaker: Jed Brown

Abstract: Physically-based computational models are the foundation of modern science and engineering, providing the only path to reliable prediction and inference in the presence of sparse and indirect observations as well as deeper understanding of interacting physical processes and scales. This talk presents a holistic approach to advancing capability, robustness, efficiency, and productivity as well as fostering open source software communities.

Sept 25

Title: Online Optimization with Feedback

Speaker: Emiliano Dall'Anese

Abstract: The talk focuses on the design and analysis of running (i.e., online) algorithmic solutions to control systems or networked systems based on performance objectives and engineering constraints that may evolve over time. The time-varying convex optimization formalism is leveraged to model optimal operational trajectories of the systems, as well as explicit local and network-level operational constraints. Departing from existing batch and feed-forward optimization approaches, the design of the algorithms capitalizes on a running implementation of primal-dual projected-gradient methods; the gradient steps are, however, suitably modified to accommodate actionable feedback from the system -- hence, the term "online optimization with feedback." By virtue of this approach, the resultant running algorithms can cope with model mismatches in the algebraic representation of the system states and outputs, they avoid pervasive measurements of exogenous inputs, and they naturally lend themselves to a distributed implementation. Under suitable assumptions, analytical convergence claims are presented in terms of dynamic regret. Furthermore, when the synthesis of the feedback-based online algorithms is based on a regularized Lagrangian function, Q-linear convergence to solutions of the time-varying optimization problem is shown.
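
A minimal sketch of this kind of running primal-dual iteration with measurement feedback is below; the cost, constraint map, and box projection are placeholders, and the update is a generic projected-gradient scheme rather than the specific algorithms analyzed in the talk.

    import numpy as np

    def online_primal_dual(grad_f, jac_g, measure_g, x0, lam0,
                           alpha=0.05, steps=500, lo=-1.0, hi=1.0):
        # grad_f(x, k): gradient of the (time-varying) cost at step k
        # jac_g(x):     model-based Jacobian of the constraint map g
        # measure_g(x): measured value of g(x) fed back from the system; using
        #               the measurement in place of a model evaluation of g is
        #               what makes this "optimization with feedback"
        x, lam = np.array(x0, float), np.array(lam0, float)
        for k in range(steps):
            g_hat = measure_g(x)                                   # system feedback
            x = np.clip(x - alpha * (grad_f(x, k) + jac_g(x).T @ lam), lo, hi)
            lam = np.maximum(0.0, lam + alpha * g_hat)             # projected dual ascent
        return x, lam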

Sept 19

Title: Learning from Large-Scale Spatiotemporal Data

Speaker: Rose Yu

Abstract: In many real-world applications, such as the Internet of Things (IoT), transportation, and physics, machine learning is applied to large-scale spatiotemporal data. Such data is often nonlinear and high-dimensional, and exhibits complex spatial and temporal correlations. In this talk, I will demonstrate how to efficiently learn from such data. In particular, I will present some recent results on 1) Low-Rank Tensor Regression for spatiotemporal causal inference and 2) Diffusion Convolutional RNNs for spatiotemporal forecasting, applied to real-world traffic and climate data. I will also discuss opportunities and challenges of learning from large-scale spatiotemporal data.
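
To give a concrete flavor of the second item, a simplified, single-direction diffusion convolution on a graph signal can be sketched as follows; the actual Diffusion Convolutional RNN uses bidirectional random walks and embeds this filter inside recurrent gates, so treat this only as an illustration of the basic operation.

    import numpy as np

    def diffusion_conv(X, W, theta):
        # X: (num_nodes, num_features) graph signal
        # W: (num_nodes, num_nodes) nonnegative weighted adjacency matrix
        # theta: (K,) filter coefficients; output = sum_k theta[k] * (D^-1 W)^k X
        P = W / W.sum(axis=1, keepdims=True)    # random-walk transition matrix
        Z, out = X.astype(float), np.zeros(X.shape)
        for k, t in enumerate(theta):
            if k > 0:
                Z = P @ Z                        # one more diffusion step
            out += t * Z
        return out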

Sept 11

Title: CHOMP: Gradient Optimization Techniques for Efficient Motion Planning

Speaker: Carl Mueller

Abstract: Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique converges over a wider range of input paths and is able to optimize higher-order dynamics of trajectories than previous path optimization strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. The effectiveness of our proposed method is demonstrated in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot.
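
The core covariant update can be sketched in a few lines; the metric below is a generic finite-difference smoothness metric and the objective gradient is left abstract, so this is a stripped-down illustration of the preconditioning idea rather than the full CHOMP planner.

    import numpy as np

    def smoothness_metric(n):
        # A = K^T K, where K is a discrete second-difference operator; taking
        # gradient steps in the metric A spreads updates smoothly along the path.
        K = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        return K.T @ K

    def covariant_step(xi, grad_U, eta=10.0):
        # xi: (n, d) discretized trajectory (n waypoints in d dimensions)
        # grad_U(xi): Euclidean gradient of the obstacle + smoothness objective
        # One covariant update: xi <- xi - (1/eta) * A^{-1} grad_U(xi)
        A = smoothness_metric(xi.shape[0])
        return xi - np.linalg.solve(A, grad_U(xi)) / eta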

Title: Low-rank Tucker decomposition of large tensors using TensorSketch

Speaker: Osman Malik

Abstract: Many real datasets have more than two dimensions and are therefore better represented using tensors, or multi-way arrays, rather than matrices. In the same way that methods such as the singular value decomposition can help in the analysis of data in matrix form, tensor decompositions are important tools when working with tensor data. As multidimensional datasets grow larger and larger, there is an increasing need for methods that can handle them, even on modest hardware. I will present results from recent research where we develop randomized algorithms for computing the Tucker decomposition of a tensor. Our algorithms, which utilize sketching and only need a single pass over the data, are suitable for decomposing large tensors when the decomposition we seek is of low rank.
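
To give a flavor of the approach, here is a simplified randomized Tucker (HOSVD-style) sketch using Gaussian sketching matrices; the algorithms in the talk use TensorSketch and need only a single pass over the data, whereas this stand-in takes a second pass to form the core.

    import numpy as np

    def unfold(X, mode):
        # mode-n unfolding: bring the chosen mode to the front, flatten the rest
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def randomized_tucker(X, ranks, oversample=5, seed=0):
        rng = np.random.default_rng(seed)
        factors = []
        for n, r in enumerate(ranks):
            Xn = unfold(X, n)
            Y = Xn @ rng.standard_normal((Xn.shape[1], r + oversample))  # sketch
            Q, _ = np.linalg.qr(Y)
            factors.append(Q[:, :r])             # orthonormal basis for mode n
        G = X.astype(float)
        for n, U in enumerate(factors):          # core: project onto each factor
            G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=1), 0, n)
        return G, factors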

Sept 4

Title: Chance Constraints for Smart Buildings and Smarter Grids

Speaker: Kyri Baker (University of Colorado Boulder, Civil, Environmental and Architectural Engineering)

Abstract: Evolving energy systems are introducing heightened levels of stress on the electric power grid. Fluctuating renewable energy sources, dynamic electricity pricing, and new loads such as plug-in electric vehicles are transforming the operation of the grid, from the high-voltage transmission grid down to individual buildings. Grid overvoltages, instabilities, and overloading issues are increasing, but stochastic predictive optimization and control can help alleviate these undesirable conditions.

This talk presents optimization techniques leveraging chance (probabilistic) constraints, along with different ways to incorporate chance constraints into optimization problems, including distributionally robust and joint chance constraint reformulations. Applications in smart buildings and distribution grids with high integration of solar energy are shown to benefit from chance-constrained optimization formulations, reducing grid voltage issues, conserving energy, and allowing buildings and the grid to interact in new ways.
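
As background for the kind of constraint discussed here, a single (individual) chance constraint and its standard reformulation under a Gaussian assumption take the form

    \[
      \Pr\big\{ a(\xi)^\top x \le b \big\} \;\ge\; 1 - \epsilon
      \quad\Longleftrightarrow\quad
      \bar a^\top x \;+\; \Phi^{-1}(1-\epsilon)\,\sqrt{x^\top \Sigma\, x} \;\le\; b,
      \qquad a(\xi) \sim \mathcal{N}(\bar a, \Sigma),
    \]

which is a second-order cone constraint. The distributionally robust and joint reformulations mentioned in the abstract relax the Gaussian assumption or couple several such constraints; this textbook single-constraint example is only meant to fix ideas.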