Spring 2017

List of speakers, in reverse chronological order (see below for full abstracts):

  • May 8 (MONDAY), Jennifer Wortman Vaughan, Senior Researcher, Microsoft Research New York City, Title: Crowd Behavior and Implications on Research
  • May 2, Nathaniel Mathews, Department of Applied Mathematics, University of Colorado Boulder, Title: Emulating Expensive PDEs for Inversion Frameworks via Kernel Regression
  • April 25, Steffen Borgwardt, Assistant Professor, Department of Mathematics, University of Colorado, Denver. Title: The Hirsch Conjecture is true for Network-Flow Polytopes
  • April 18, Claire Monteleoni, Assistant Professor of Computer Science, George Washington University, Title: Machine Learning Algorithms for Climate Informatics, Sustainability and Social Good
  • April 11, Paris Smaragdis, Associate Professor of Computer Science and Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Title: Machine Listening: Making Computers Understand Sound
  • April 4, Ali Pezeshki, Department of Electrical and Computer Engineering, and Department of Mathematics, Colorado State University, Title: Compressed Sensing, Sparse Recovery, and Image Inversion: Cautionary Notes
  • Mar 21, Paul Constantine, Department of Applied Mathematics and Statistics, Colorado School of Mines, Title: Active Subspaces: Emerging Ideas for Dimension Reduction in Computational Science and Engineering Models
  • Mar 14, Abtin Rahimian, Courant Institute of Mathematical Sciences, New York University, Title: Fast Algorithms for Structured Matrices in Simulations of Physical Systems
  • Mar 7, Alexandra Kolla, Assistant Professor of Computer Science, University of Illinois Urbana-Champaign, Title: The Sound of Graphs
  • Feb 28, Lisa Natale, Department of Ecology & Evolutionary Biology, University of Colorado Boulder, Title: How near is near enough?: Proximity-based social networks to aid California condor conservation
  • Feb 28, Joseph Benzaken, Department of Applied Mathematics, University of Colorado Boulder, Title: A Parametric Framework for Propagation, Analysis, and Control of Geometric Variation in Engineering Design
  • Feb 21, Nathan Heavner, Department of Applied Mathematics, University of Colorado Boulder, Title: Randomized Algorithms for Analysis of High Dimensional Data
  • Feb 21, Gregor Robinson, Department of Applied Mathematics, University of Colorado Boulder, Title: Pursuing Intuitive Models for a Weather Mystery: MCMC for Selecting High-Dimensional Dynamics
  • Feb 14, Shudong Hao, Department of Computer Science, University of Colorado Boulder, Title: Learning Low-Resource Languages for Emergent Incidents
  • Feb 7, William Kleiber, Department of Applied Mathematics, University of Colorado Boulder, Title: Simulation of High-Resolution Random Fields
  • Jan 31, Mohit Iyyer, Department of Computer Science, University of Maryland, College Park, Title: Deep Learning for Creative Language Understanding
  • Jan 24, Satyen Kale, Google Research NYC, Title: Online Boosting Algorithms

To receive announcements about talks, please sign up at the Google Groups page!

Welcome to the SOML / StatOptML (Statistics, Optimization, and Machine Learning) seminar for Spring 2017.

Note that all talks are in the Newton Lab (ECCR 257) unless otherwise specified.

May 8 (MONDAY), Jennifer Wortman Vaughan, Senior Researcher, Microsoft Research New York City

Time/Location: 2:00 - 3:00, ECCR 257, Newton Lab (Note the time change!!)

Title: Crowd Behavior and Implications on Research

Abstract: Over the last decade, crowdsourcing has been used to harness the power of human computation to solve tasks that are notoriously difficult to solve with computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business. The machine learning community was early to embrace crowdsourcing as a tool for quickly and inexpensively obtaining the vast quantities of labeled data needed to train systems. However, crowdsourcing platforms are often treated as a black box, with little attention paid to the crowd itself. I will describe two recent projects aimed at understanding who crowdworkers are, how they behave, and what this should teach us about best practices for interacting with the crowd. The first examines the causal effects of financial incentives on the quality of crowdwork, focusing on performance-based payments, bonus payments awarded to workers for producing high-quality work. The second opens up the black box of crowdsourcing to uncover that crowdworkers should not be viewed as independent contractors, but rather as a network with a rich communication structure. Together, these research studies can teach us how to most effectively interact with the crowd. This talk is based on joint work with Mary Gray, Chien-Ju Ho, Alex Slivkins, Sid Suri, and Ming Yin.

May 2, Nathaniel Mathews, Department of Applied Mathematics, University of Colorado Boulder

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: Emulating Expensive PDEs for Inversion Frameworks via Kernel Regression

Abstract: Two varieties of observables are available for studying the solar corona: line-integrated coronal observations and photospheric images. However, the region between the photosphere and the base of the corona is computationally prohibitive to simulate within an inversion framework. Instead, we will examine a technique to emulate the distribution of runs from an extant solver. Fitting an optimal kernel against which to integrate the photospheric data will be the prime topic, but we will also touch on RBF smoothing and on improving results via columnar linear regression techniques.
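
For readers who want a concrete starting point, here is a minimal kernel-regression emulator sketch. It is not the speaker's code: the Gaussian (RBF) kernel, the length scale, the ridge penalty, and the stand-in "solver" below are all assumptions chosen for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=0.5):
    """Gaussian (RBF) kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

# Hypothetical training data: inputs x_i and expensive-solver outputs y_i.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 3))     # 50 precomputed runs, 3 input parameters
y_train = np.sin(X_train).sum(axis=1)          # stand-in for the expensive PDE solve

# Kernel ridge regression: solve (K + lambda I) alpha = y once, offline.
lam = 1e-6
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

def emulator(X_new):
    """Cheap surrogate for the solver, usable inside an inversion loop."""
    return rbf_kernel(X_new, X_train) @ alpha

print(emulator(rng.uniform(-1, 1, size=(5, 3))))
```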

April 25, Steffen Borgwardt, Assistant Professor, Department of Mathematics, University of Colorado, Denver

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: The Hirsch Conjecture is true for Network-Flow Polytopes

Abstract: The study of the combinatorial diameter of polyhedra is a classical topic in the theory of linear programming. While transportation and network-flow problems are at the core of the field of optimization, for more than 50 years it has been open whether the Hirsch conjecture holds for the corresponding polytopes. We prove that it does hold for all network-flow polytopes.
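
For context (a standard statement, not part of the abstract): the Hirsch conjecture bounds the combinatorial diameter of a d-dimensional polytope P with f facets by

```latex
\[
  \operatorname{diam}(P) \;\le\; f - d ,
\]
% i.e., any two vertices of P can be joined by a path of at most f - d edges
% in the vertex-edge graph of the polytope.
```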

April 18, Claire Monteleoni, Assistant Professor of Computer Science, George Washington University

Time/Location: 3:30 - 4:30, DLC 170, Discovery Learning Center (Note the room change!!)

Title: Machine Learning Algorithms for Climate Informatics, Sustainability and Social Good

Abstract: Despite the scientific consensus on climate change, drastic uncertainties remain. Crucial questions about regional climate trends, changes in extreme events, such as heat waves and mega-storms, and understanding how climate varied in the distant past, must be answered in order to improve predictions, assess impacts and vulnerability, and inform sustainable strategies for mitigation and adaptation. Machine learning can help answer such questions and shed light on climate change.

Monteleoni will give an overview of her climate informatics research: machine learning for the study of climate science. Similar to the case of bioinformatics, climate informatics provides a data-rich scientific domain in which machine learning can make a major impact. Further, questions in climate science give rise to new challenges for the design of machine learning algorithms. More broadly, any real-world data source can be massive, high-dimensional, streaming, time-varying, raw (unlabeled), sparse, or private; or combine these and other attributes. Monteleoni will survey her research program on developing machine learning algorithms to address such challenges, and applying them to societally impactful problems. She will center the discussion on her contributions to improving predictions of climate change trends from ensembles of climate model simulations, and improving the understanding of climate extremes. Along the way, she will describe her work on machine learning algorithms for non-stationary spatiotemporal data, and scaling up unsupervised learning to big data.

April 11, Paris Smaragdis, Associate Professor of Computer Science and Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

Time/Location: 3:30 - 4:30, DLC 170, Discovery Learning Center (Note the room change!!)

Title: Machine Listening: Making Computers Understand Sound

Abstract: Enabling machines to perceive the world using various modalities is one of the holy grails of artificial intelligence. In this talk I will present some research on creating machines that do so by listening. I will discuss some of the unique difficulties in this field and present a thread of research which spans a range of computational disciplines relating to signal processing, machine learning, and cryptography. This research will be introduced in the context of classic audio problems such as time/frequency analysis, music transcription, source separation, recognition in mixtures, and more. I'll show how this work generalizes and finds applications in other domains, what its practical implications are, and what it takes to move it from the whiteboard to the real world.

April 4, Ali Pezeshki, Department of Electrical and Computer Engineering, and Department of Mathematics, Colorado State University

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: Compressed Sensing, Sparse Recovery, and Image Inversion: Cautionary Notes

Abstract: Broadly speaking, there are two classical principles for inverting the kinds of images that are measured in optics, electromagnetics, and acoustics. The first principle is one of matched filtering, wherein a sequence of rank-one subspaces, or one-dimensional test images, is matched to the measured image by correlating or phasing in frequency, wavenumber, Doppler, and/or delay. The second principle is one of parameter estimation in a separable linear model, wherein a modal representation for the fields is posited and estimates of linear parameters (complex amplitudes) and nonlinear mode parameters (frequency, wavenumber, delay, and/or Doppler) are extracted, usually based on maximum likelihood or some variation on linear prediction. An important limitation of the classical principles is that any subsampling of the measured image has consequences for resolution (bias) and for variability (variance).

Compressed sensing theory stands in contrast to the classical principles. It states that complex baseband data may be compressed before processing, when it is known a priori that the field to be imaged is sparse in a known dictionary, and it suggests that subsampling has manageable consequences for image inversion. Moreover, the compression step in compressed sensing typically employs randomly drawn linear combinations, which stand in stark contrast to the linearly phased combinations that are used to form narrow bands and focused beams in time/space series analysis.
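
As a minimal illustration of the compressed-sensing pipeline sketched above (and not of the analysis results discussed in the talk), the toy example below recovers a sparse vector from random Gaussian measurements by solving a LASSO problem with iterative soft thresholding. The dimensions, sparsity level, and regularization weight are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                          # ambient dimension, measurements, sparsity (assumed)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random compression (Gaussian sensing matrix)
y = A @ x_true                                # compressed, noiseless measurements

def ista(A, y, lam=1e-3, iters=2000):
    """Iterative soft thresholding for the LASSO: min 0.5||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_hat = ista(A, y)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```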

But how does the performance of compressed sensing compare with that of classical methods of modal analysis? This is the general question that we discuss in this talk. More specifically, we discuss recent analytical and numerical results that answer the following questions:

1. What is the sensitivity of compressed sensing to mismatch between the physical model that generated the data and the mathematical model that is assumed in the inversion algorithm?

2. What is the impact of compressive sampling on the Fisher information matrix, Cramer-Rao bound (CRB), and Kullback-Leibler divergence for estimating nonlinear parameters?

3. What is the impact of compressive sampling on the SNR thresholds at which the mean-squared error (MSE) in estimating parameters deviates sharply from the CRB?

The results are from joint work with Pooria Pakrooh, Yuejie Chi, Louis Scharf, Doug Cochran, Stephen Howard, Robert Calderbank, and Edwin Chong.

Mar 21, Paul Constantine, Department of Applied Mathematics and Statistics, Colorado School of Mines

Time/Location: 3:30 - 4:30, DLC 170, Discovery Learning Center (Note the room change!!)

Title: Active Subspaces: Emerging Ideas for Dimension Reduction in Computational Science and Engineering Models

Abstract: Scientists and engineers use computer simulations to study relationships between a physical model's inputs and its output predictions. However, thorough parameter studies -- e.g., constructing response surfaces, optimizing, or averaging -- are challenging, if not impossible, when the simulation is expensive and the model has several inputs. To enable parameter studies in these cases, the engineer may attempt to reduce the dimension of the model's input parameter space. Active subspaces are part of an emerging set of subspace-based dimension reduction tools that identify important directions in the input parameter space. Constantine will (i) describe computational methods for discovering a model's active subspaces, (ii) propose strategies for exploiting the reduced dimension to enable otherwise infeasible parameter studies, and (iii) review results from several science and engineering applications. For more information, visit: activesubspaces.org.
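
A rough sketch of the basic active-subspace computation (the test function, its gradient, and the Monte Carlo sampling below are assumptions; see activesubspaces.org for the actual software): estimate the matrix C = E[∇f ∇fᵀ] from sampled gradients, and look for gaps in its eigenvalue spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 10                                   # number of input parameters (assumed)

def grad_f(x):
    """Gradient of a hypothetical model f(x) = exp(a^T x); stands in for adjoints or finite differences."""
    a = np.linspace(1.0, 0.1, m)
    return a * np.exp(a @ x)

# Estimate C = E[grad f grad f^T] by Monte Carlo over the input distribution.
X = rng.uniform(-1, 1, size=(500, m))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)

# Eigendecomposition of C; a large gap in the spectrum reveals the active subspace.
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
print("leading eigenvalues:", eigvals[:4])
W1 = eigvecs[:, :1]                      # one-dimensional active subspace in this example
print("active direction:", W1.ravel())
```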

Mar 14, Abtin Rahimian, Courant Institute of Mathematical Sciences, New York University

Time/Location: 3:30 - 4:30, DLC 170, Discovery Learning Center (Note the room change!!)

Title: Fast Algorithms for Structured Matrices in Simulations of Physical Systems

Abstract: Real-world complex phenomena are typically characterized by interacting physical processes, uncertain parameters, dynamic boundaries, and close coupling over a wide span of spatial and temporal scales. Predictive computational models of such phenomena inherit these characteristics and require many novel algorithmic components. In this talk, I will identify some common features and challenges in physical modeling, focusing on cellular hemodynamics and cell biomechanics, and outline algorithms that enable predictive simulations of these processes. I will discuss some recent advances in efficiently solving large linear systems arising from the discretization of such models using the Tensor-Train decomposition.
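
As a toy illustration of the tensor-train (TT) format mentioned at the end, here is a minimal TT-SVD sketch on a small dense tensor. This is only the textbook decomposition under an assumed truncation tolerance, not the fast structured-matrix algorithms of the talk.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a d-way array T into tensor-train cores via sequential truncated SVDs."""
    shape = T.shape
    d = len(shape)
    cores, r = [], 1
    M = T.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r_new = max(1, int(np.sum(S > eps * S[0])))           # truncated TT rank
        cores.append(U[:, :r_new].reshape(r, shape[k], r_new))
        M = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        r = r_new
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

# A rank-one 4-way tensor compresses to TT ranks of 1.
x = np.linspace(0, 1, 8)
T = np.einsum('i,j,k,l->ijkl', x, x + 1, np.sin(x), np.cos(x))
print([c.shape for c in tt_svd(T)])
```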

Mar 7, Alexandra Kolla, Assistant Professor of Computer Science, University of Illinois Urbana-Champaign

Time/Location: 3:30 - 4:30, DLC 170, Discovery Learning Center (Note the room change!!)

Title: The Sound of Graphs

Abstract: In this talk, Kolla will discuss major implications that linear algebraic techniques have in understanding and resolving hard computational and graph theoretical questions, as well as unifying various areas of mathematics and computer science. She will focus on two representative examples, stemming from two key areas of computer science, namely computational complexity and robust network design respectively:

• Resolving the infamous Unique Games Conjecture and its implications for the theory of inapproximability

• Constructing optimal expander graphs, and their implications for fault-tolerant network design and clustering.

Kolla will show how, via the prism of spectral graph theory, those seemingly unrelated questions can be seen to be deeply connected. She will present techniques she developed to tackle both problems, which span a wide range of areas such as linear algebra, convex optimization, group theory, harmonic analysis and probability theory.
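
A small illustration of the spectral viewpoint (a generic computation, not the talk's constructions): the "sound" of a graph is the spectrum of its Laplacian, and the spectral gap of the normalized Laplacian certifies expansion. The cycle graph below is an assumed example of a poor expander.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for an adjacency matrix A with no isolated vertices."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# Adjacency matrix of a cycle on n vertices: connected, but with a tiny spectral gap.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

lam = np.sort(np.linalg.eigvalsh(normalized_laplacian(A)))
print("smallest eigenvalues:", lam[:3])   # lam[0] ~ 0; lam[1] is the spectral gap
```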

Feb 28, Lisa Natale, Department of Ecology & Evolutionary Biology, University of Colorado Boulder

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: How near is near enough?: Proximity-based social networks to aid California condor conservation

Abstract: When the California condor (Gymnogyps californianus) reached a direly low count in the 1980s, the U.S. Fish & Wildlife Service undertook an ardent recovery effort. While the program has been successful in raising numbers, condor populations are far from self-sustaining. Lead poisoning is a potent threat, especially for inland populations. Observed condor behavior suggests that the social structure of populations may influence individual birds' risk for poisoning. We are interested in exploring this potential relationship, and we seek to start by mapping the birds' social network using a rich spatio-temporal dataset containing observations of individual birds across many years. Building from the common assumption made in the construction of ecological proximity-based social networks, namely that co-location implies a social connection between individuals, we ask: on what timescale are two birds observed in the same location considered co-located (and hence socially linked)? I will briefly present two approaches to this problem and hope to receive more ideas to try from those in attendance.
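
A minimal sketch of the co-location rule in question, with hypothetical column names, made-up observations, and a fixed one-hour window (the real analysis would vary and justify the window):

```python
import itertools
import pandas as pd

# Hypothetical observations: one row per (bird, site, timestamp).
obs = pd.DataFrame({
    "bird": ["A", "B", "A", "C", "B"],
    "site": ["ridge", "ridge", "carcass", "carcass", "carcass"],
    "time": pd.to_datetime(["2016-06-01 10:00", "2016-06-01 10:20",
                            "2016-06-02 09:00", "2016-06-02 09:40",
                            "2016-06-02 15:00"]),
})

def colocation_edges(obs, window_hours=1.0):
    """Count pairs of birds seen at the same site within `window_hours` of each other."""
    edges = {}
    for _, group in obs.groupby("site"):
        rows = list(group.sort_values("time").itertuples())
        for r1, r2 in itertools.combinations(rows, 2):
            dt = abs((r2.time - r1.time).total_seconds()) / 3600.0
            if r1.bird != r2.bird and dt <= window_hours:
                key = tuple(sorted((r1.bird, r2.bird)))
                edges[key] = edges.get(key, 0) + 1
    return edges

print(colocation_edges(obs, window_hours=1.0))   # weighted edge list of the social network
```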

Feb 28, Joseph Benzaken, Department of Applied Mathematics, University of Colorado Boulder

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: A Parametric Framework for Propagation, Analysis, and Control of Geometric Variation in Engineering Design

Abstract: There is a great need for efficient, robust, and informative design space exploration tools throughout the engineering design cycle. In early-stage design, such a tool helps the end-user gain understanding of how changes in design features affect overall system response, narrow down the admissible design space through the application of design constraints, and determine optimality criteria for a given design problem. In late-stage design, it helps the end-user quantify, propagate, and ultimately control the effect of uncertain geometric variations after a final design has been selected. Despite the obvious need for effective design space exploration tools, current approaches are largely ad hoc and application-specific, and they require frequent user interaction with both Computer Aided Design (CAD) and Computer Aided Engineering (CAE) software packages.

In this talk, I introduce a parametric framework for propagation, analysis, and control of geometric variation in engineering design. The isogeometric paradigm naturally yields the concept of a "geometric family" where, rather than treating each engineering model as an independent entity, we instead parametrize geometries belonging to the same "family" by a set of shared design parameters. A subset of geometries belonging to the same "family" are selected through a collocation-like method and analyzed. Subsequently, technologies emerging from the uncertainty quantification community provide the necessary tools for representing the entirety of the design space from these collocated samples.

This methodology addresses the issues of manufacturing uncertainty in late-stage design by understanding the effects of perturbations in design parameters on the corresponding solution field, allowing the specification of geometric tolerances such that the resulting product remains in compliance with the predefined system requirements such as allowable stress or maximum displacement. This application is exemplified through the linear-elastic analysis of a square plate with a hole and an L-bracket, where we consider the influence of perturbations in stochastic geometric parameters on the resulting displacement field.
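
A toy version of the collocation idea described above, reduced to one assumed design parameter and a stand-in scalar response (the actual framework works with full isogeometric models, vector-valued solution fields, and multiple parameters): evaluate the model only at a few Gauss-Legendre nodes, fit a Legendre expansion, and then query the design space cheaply.

```python
import numpy as np
from numpy.polynomial import legendre

def response(theta):
    """Stand-in for an expensive analysis of a geometry with design parameter theta in [-1, 1]."""
    return np.exp(0.3 * theta) + 0.1 * np.sin(4 * theta)

# Collocation: evaluate the model only at Gauss-Legendre nodes.
nodes, _ = legendre.leggauss(7)
samples = response(nodes)

# Build a degree-6 Legendre surrogate from the collocated samples.
coeffs = legendre.legfit(nodes, samples, deg=6)

# The surrogate now represents the whole (1-D) design space; query it anywhere.
theta_grid = np.linspace(-1, 1, 5)
print(legendre.legval(theta_grid, coeffs))
print(response(theta_grid))                 # compare with the true model
```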

Feb 21, Nathan Heavner, Department of Applied Mathematics, University of Colorado Boulder

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: Randomized Algorithms for Analysis of High Dimensional Data

Abstract: Determining low-rank approximations of matrices is a problem that occurs commonly in areas of data mining and machine learning, such as Principal Component Analysis, Latent Semantic Indexing, and the PageRank algorithm, to name a few. We will discuss a method for computing an approximate singular value decomposition which is faster than classical deterministic methods and executes efficiently in parallel computing environments. Of particular interest is a procedure involving randomized projections which efficiently computes an approximate basis for the numerical range of a matrix. Further applications of the randomized range finder for other problem types in data analysis will be briefly discussed.
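
A compact sketch of the randomized range finder and the approximate SVD built on top of it, in the generic style of this literature; the matrix dimensions, target rank, and oversampling parameter below are assumptions for illustration.

```python
import numpy as np

def randomized_range_finder(A, rank, oversample=10, seed=0):
    """Return an orthonormal Q whose columns approximately span the range of A."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))   # random test matrix
    Y = A @ Omega                                                  # sketch of the range
    Q, _ = np.linalg.qr(Y)
    return Q

def randomized_svd(A, rank):
    """Approximate truncated SVD: A ~ U diag(s) Vt with `rank` terms."""
    Q = randomized_range_finder(A, rank)
    B = Q.T @ A                                                    # small projected matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Quick check on an exactly low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 500))
U, s, Vt = randomized_svd(A, rank=30)
print(np.allclose(A, U * s @ Vt, atol=1e-6))
```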

Feb 21, Gregor Robinson, Department of Applied Mathematics, University of Colorado Boulder

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: Pursuing Intuitive Models for a Weather Mystery: MCMC for Selecting High-Dimensional Dynamics

Abstract: The Madden-Julian Oscillation (MJO) is a complex multiscale atmospheric phenomenon that directly impacts the monsoon for a majority of Earth's human population and has cascading effects on global climate. The mechanisms behind its formation are unclear, despite decades of thorough research. Detailed multiphysics weather models capture some of the MJO's dynamics with reasonable accuracy, but are too complicated to isolate the mechanisms responsible. A number of simplified models capture some of the qualitative features, but observation has failed to narrow the variety of simple models that point to dramatically different physics. It is often desirable to confront this kind of model comparison problem with a Bayesian framework. Bayesian model comparison not only allows one to assign relative probabilities to models, but also avoids overfitting by gracefully penalizing models with a large number of parameters. Unfortunately, long sample times hinder the application of Markov Chain Monte Carlo (MCMC) that is generally used to approximate the analytically intractable integrals required of most Bayesian problems. This talk will discuss why we still want to use Bayesian model comparison for the MJO, describe a simple way to parallelize MCMC without loss of accuracy, and touch on some ongoing work to mitigate the difficulties of performing MCMC in extremely high-dimensional search spaces.
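
For readers unfamiliar with the basic machinery, here is a minimal random-walk Metropolis sampler. The two-dimensional Gaussian log-posterior is an assumed stand-in, far simpler than the high-dimensional model-comparison setting of the talk.

```python
import numpy as np

def log_post(theta):
    """Assumed log-posterior: a correlated 2-D Gaussian stands in for a real model's posterior."""
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def random_walk_metropolis(log_post, theta0, n_steps=20000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta, lp = np.asarray(theta0, float), log_post(np.asarray(theta0, float))
    samples = np.empty((n_steps, len(theta)))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(len(theta))
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

samples = random_walk_metropolis(log_post, theta0=np.zeros(2))
print("posterior mean estimate:", samples[5000:].mean(axis=0))   # discard burn-in
```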

Feb 14, Shudong Hao, Department of Computer Science, University of Colorado Boulder

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: Learning Low-Resource Languages for Emergent Incidents

Abstract: Social networks and social media have become popular platforms for communication. During a disaster, these platforms become even more important, as people send out messages looking for help. For languages that few people outside the region understand and for which few resources are available, however, it is much more difficult to understand the messages and provide assistance quickly. Traditional machine translation systems usually require huge language resources and a long time to train an accurate model; both requirements limit their usage in this situation. Multilingual topic models can learn consistent representations across languages without using huge datasets, which helps people understand the messages and respond to them quickly.

In this presentation, I will briefly review current multilingual topic models and talk about how to increase language resources for training them with a native informant within a short time, so that they can be applied to emergent incidents.

Feb 7, William Kleiber, Department of Applied Mathematics, University of Colorado Boulder

Time/Location: 2:30 - 3:30, ECAD 100, Clark Conference Room (Engineering Dean's Office) (Note the time AND room change!)

Title: Simulation of High-Resolution Random Fields

Abstract: Simulation of spatial random fields is an essential goal in most geostatistical analyses. This talk will focus on two approaches to generating high resolution simulations of nonstationary Gaussian random fields. The first relies on spatial deformation of a stationary solution that can then be generated using extant approaches such as circulant embedding. The second generates approximate realizations for an arbitrary given covariance function. The procedure rests on sequential conditional simulations based on topologically connected regions in the domain. We suggest initializing this consecutive conditioning approach with an initial coarse resolution simulation over the domain to reduce bias of correlations at long distances. We provide a theoretical justification for the local prediction step. Both approaches are illustrated on climatological datasets.
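
A one-dimensional sketch of the circulant embedding idea used for the stationary building block (an assumed exponential covariance on a regular grid; the two-dimensional and nonstationary machinery in the talk is considerably richer):

```python
import numpy as np

def circulant_embedding_1d(n, dx, cov, seed=0):
    """Simulate a stationary Gaussian process on a regular 1-D grid of n points."""
    rng = np.random.default_rng(seed)
    m = 2 * (n - 1)                                   # embed the covariance in a circulant matrix
    lags = dx * np.minimum(np.arange(m), m - np.arange(m))
    c = cov(lags)                                     # first row of the circulant embedding
    eigs = np.fft.fft(c).real                         # eigenvalues of the circulant matrix
    if np.any(eigs < -1e-10):
        raise ValueError("embedding not nonnegative definite; enlarge m")
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    field = np.fft.fft(np.sqrt(np.maximum(eigs, 0) / m) * z)
    return field.real[:n]                             # the imaginary part is an independent realization

cov = lambda h: np.exp(-np.abs(h) / 0.3)              # assumed exponential covariance, range 0.3
x = circulant_embedding_1d(n=512, dx=1.0 / 511, cov=cov)
print(x[:5])
```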

Jan 31, Mohit Iyyer, Department of Computer Science, University of Maryland, College Park

Time/Location: 2:30 - 3:30, ECCR 257, Newton Lab (Note the time change!)

Title: Deep Learning for Creative Language Understanding

Abstract: Creative language--the sort found in novels, film, and comics--contains a wide range of linguistic phenomena, from increased syntactic complexity (e.g., metaphors, sarcasm) to high-level discourse structures such as narrative and character arcs. In this talk, I explore how we can use deep learning to understand, generate, and answer questions about creative language. In particular, I present neural architectures for two different tasks involving creative language understanding: 1) modeling dynamic fictional relationships in novels and 2) predicting dialogue and artwork from comic book panels. I also propose a method to disentangle an author's writing style from the content of their words by strategically weakening parts of the network architecture. These tasks are motivated through quiz bowl, a trivia game that contains many questions about novels, paintings, and comics, for which I present deep models that are competitive with human players. I conclude by discussing future plans to build more engaging conversational agents by leveraging systems for creative language understanding and question answering.

Jan 24, Satyen Kale, Google Research NYC

Time/Location: 3:30 - 4:30, ECCR 257, Newton Lab

Title: Online Boosting Algorithms

Abstract: We initiate the study of boosting in the online setting, where the task is to convert a "weak" online learner into a "strong" online learner. The notions of weak and strong online learners directly generalize the corresponding notions from standard batch boosting. For the classification setting, we develop two online boosting algorithms. The first algorithm is an online version of boosting-by-majority, and we prove that it is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. The second algorithm is adaptive and parameter-free, albeit not optimal.

For the regression setting, we give an online gradient boosting algorithm which converts a weak online learning algorithm for a base class of regressors into a strong online learning algorithm which works for the linear span of the base class. We also give a simpler boosting algorithm for regression that obtains a strong online learning algorithm which works well for the convex hull of the base class, and prove its optimality.
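
A toy caricature of the online boosting protocol for regression, with online least-squares base learners and squared loss. This is only a simplified sketch under assumed learning rates and shrinkage, not the algorithms or guarantees of the paper; note that a sum of linear base learners is still linear, so the example illustrates the protocol rather than any gain in expressive power.

```python
import numpy as np

class OnlineLinearLearner:
    """Weak online regressor: linear model updated by online gradient descent on squared loss."""
    def __init__(self, dim, lr=0.05):
        self.w, self.lr = np.zeros(dim), lr
    def predict(self, x):
        return self.w @ x
    def update(self, x, target):
        self.w -= self.lr * (self.predict(x) - target) * x

class ToyOnlineGradientBoosting:
    """Each stage is trained online on the residual left by the partial sum of earlier stages."""
    def __init__(self, dim, n_stages=5, shrinkage=0.5):
        self.stages = [OnlineLinearLearner(dim) for _ in range(n_stages)]
        self.shrinkage = shrinkage
    def predict(self, x):
        return self.shrinkage * sum(s.predict(x) for s in self.stages)
    def update(self, x, y):
        partial = 0.0
        for s in self.stages:
            s.update(x, y - self.shrinkage * partial)   # fit the current residual
            partial += s.predict(x)

# Stream of examples from a noisy linear target, following the online protocol:
# predict first, suffer the loss, then update.
rng = np.random.default_rng(0)
booster, losses = ToyOnlineGradientBoosting(dim=3), []
w_true = np.array([1.0, -2.0, 0.5])
for _ in range(5000):
    x = rng.standard_normal(3)
    y = w_true @ x + 0.1 * rng.standard_normal()
    losses.append((booster.predict(x) - y) ** 2)
    booster.update(x, y)
print("avg loss, first vs last 1000 rounds:", np.mean(losses[:1000]), np.mean(losses[-1000:]))
```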