Keynote Speakers

Andre Cire

University of Toronto

Carla Gomes

Cornell University

Vinod Nair

Google Brain

Decision Diagrams for Deterministic and Stochastic Optimization

Andre Cire, University of Toronto

Abstract: In this talk we will discuss alternative solution techniques for discrete and stochastic optimization based on decision diagrams (DDs). A DD, in our context, is a graph-based extended formulation of an optimization problem that exposes network structure, leading to novel bounding and branching mechanisms that complement classical model-based approaches. We will investigate the principles of DD modeling for combinatorial problems and develop the intrinsic connections between DDs and (approximate) dynamic programming. We will then leverage links with mathematical programming and polyhedral theory to propose stronger formulations, cutting-plane methods, and new decomposition approaches for difficult combinatorial and stochastic discrete problems. The talk will highlight examples in routing, scheduling, and planning, while also emphasizing new applications and future research in the area.
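As a minimal, self-contained illustration (a sketch, not code from the talk), the following builds an exact decision diagram for a small 0/1 knapsack instance: each layer maps a dynamic-programming state (the remaining capacity) to the best profit of any path reaching it, and merging nodes with equal states is what keeps the diagram compact relative to a full search tree. The function name `knapsack_dd` and the toy instance are hypothetical.

```python
def knapsack_dd(weights, values, capacity):
    """Build an exact decision diagram layer by layer.

    Each layer is a dict mapping a state (remaining capacity) to the
    best profit of any root-to-node path; identical states are merged,
    which is the key compression step in DD-based optimization.
    """
    layer = {capacity: 0}  # root node: full capacity, zero profit
    for w, v in zip(weights, values):
        nxt = {}
        for cap, profit in layer.items():
            # 0-arc: skip the item, state unchanged
            nxt[cap] = max(nxt.get(cap, 0), profit)
            # 1-arc: take the item if it fits
            if w <= cap:
                nxt[cap - w] = max(nxt.get(cap - w, 0), profit + v)
        layer = nxt
    # Longest root-to-terminal path = optimal knapsack value
    return max(layer.values())
```

Relaxed and restricted diagrams, which yield the bounds discussed in the talk, arise from the same construction when the layer width is capped (merging or dropping states), but the exact version above already shows the DD-to-dynamic-programming connection.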

Bio: Andre Augusto Cire is an Associate Professor in Operations Management and Analytics at the University of Toronto, cross-appointed between the Department of Management at the Scarborough campus and the Rotman School of Management. His research focuses on both the methodology and practice of optimization, specifically mathematical programming and dynamic programming for scheduling, healthcare, and supply chain problems. He completed his Ph.D. in Operations Research at Carnegie Mellon University in 2014, and has received the Research Excellence Award at the University of Toronto Scarborough, the INFORMS Computing Society Best Student Paper Award, and the Gerald L. Thompson Doctoral Dissertation Award at Carnegie Mellon University. Andre currently serves as an associate editor for the Network Optimization area at the INFORMS Journal on Computing and in several senior roles at conferences such as AAAI, CP, and CPAIOR.

Combining Reasoning and Learning for Discovery

Carla Gomes, Cornell University

Abstract: Artificial Intelligence (AI) is a rapidly advancing field inspired by human intelligence. AI systems are now performing at human and even superhuman levels on various tasks, such as image identification and face and speech recognition. The tremendous AI progress that we have witnessed in the last decade has been largely driven by advances in deep learning and hinges heavily on the availability of large, annotated datasets to supervise model training. However, often we only have access to small datasets and incomplete data. Our approach amplifies a few data examples with human intuition, detailed reasoning from first principles, and prior knowledge to drive discovery. I will talk about our work on AI for accelerating the discovery of new solar-fuel materials, which was featured in Nature Machine Intelligence in a cover article entitled "Automating crystal-structure phase mapping by combining deep learning with constraint reasoning." In this work, we propose an approach called Deep Reasoning Networks (DRNets), which seamlessly integrates deep learning and reasoning via an interpretable latent space for incorporating prior knowledge and tackling challenging problems. DRNets require only modest amounts of (unlabeled) data, in sharp contrast to standard deep learning approaches. DRNets reach super-human performance for crystal-structure phase mapping, a core, long-standing challenge in materials science, enabling the discovery of solar-fuel materials. DRNets provide a general framework for integrating deep learning and reasoning to tackle challenging problems. For an intuitive demonstration of our approach in a simpler domain, we also solve variants of the Sudoku problem. The article "DRNets can solve Sudoku, speed scientific discovery" provides a perspective on DRNets for a general audience. DRNets are part of SARA, the Scientific Reasoning Agent for materials discovery.
Finally, I will also talk about the effectiveness of a novel curriculum-learning-with-restarts strategy for boosting a reinforcement learning framework. We show how such a strategy can outperform specialized solvers for Sokoban, a prototypical AI planning problem.

Bio: Carla Gomes is the Ronald C. and Antonia V. Nielsen Professor of Computing and Information Science and the director of the Institute for Computational Sustainability at Cornell University. Gomes received a Ph.D. in computer science in the area of artificial intelligence from the University of Edinburgh. Her research area is Artificial Intelligence with a focus on large-scale constraint reasoning, optimization, and machine learning. Recently, Gomes has become deeply immersed in research on scientific discovery for a sustainable future and, more generally, in research in the new field of Computational Sustainability. Computational Sustainability aims to develop computational methods to help solve some of the key challenges concerning environmental, economic, and societal issues in order to help put us on a path towards a sustainable future. Gomes is the lead PI of an NSF Expeditions in Computing award. Gomes has (co-)authored over 150 publications, which have appeared in venues spanning Nature, Science, and a variety of conferences and journals in AI and Computer Science, including five best paper awards. Gomes was named the "most influential Cornell professor" by a Merrill Presidential Scholar (2020), and she was also the recipient of the Association for the Advancement of Artificial Intelligence (AAAI) Feigenbaum Prize (2021) for "high-impact contributions to the field of artificial intelligence, through innovations in constraint reasoning, optimization, the integration of reasoning and learning, and through founding the field of Computational Sustainability, with impactful applications in ecology, species conservation, environmental sustainability, and materials discovery for energy." Gomes is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the Association for Computing Machinery (ACM), and a Fellow of the American Association for the Advancement of Science (AAAS).

Deep Learning and Neural Network Accelerators for Combinatorial Optimization

Vinod Nair, Google Brain

Abstract: Deep Learning has been used to construct heuristics for challenging combinatorial optimization problems. It has two advantages: a) expressive neural network models can learn custom heuristics from data by exploiting the structure in a given application's distribution of problem instances, and b) once learned, the models can be executed on accelerators such as GPUs and TPUs, which offer high throughput for computations such as matrix multiplication. In this talk we'll present two works that illustrate these advantages. In the first work, we apply Deep Learning to solving Mixed Integer Programs by learning a Branch-and-Bound variable selection heuristic and a primal heuristic. We show results on datasets from real-world applications, including two production applications at Google. In the second work, we use a class of neural networks called Restricted Boltzmann Machines to define a stochastic search heuristic for Maximum Satisfiability that is well-suited to run on a large-scale TPU cluster. Results on a subset of problem instances from annual MaxSAT competitions for the years 2018 to 2021 show that the approach achieves better results than competition solvers with the same wall-clock budget across all four years.
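To make advantage (b) concrete, here is a toy sketch (not the solver from the talk) of the block-Gibbs sampling step of a Restricted Boltzmann Machine. The weights below are random placeholders; in the talk's setting they would be derived from a MaxSAT instance. The point of the sketch is that both half-steps reduce to dense matrix multiplies, which is why this kind of stochastic search kernel maps well onto GPUs and TPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: 8 visible units and 4 hidden units with random weights.
# (In an RBM-based MaxSAT heuristic, visible units would correspond to
# Boolean variables and the weights would encode the clauses.)
W = rng.normal(size=(8, 4))   # visible-to-hidden weights
b_v = np.zeros(8)             # visible biases
b_h = np.zeros(4)             # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block-Gibbs sweep: sample hidden given visible, then visible
    given hidden. Each half-step is a dense matrix multiply followed by
    elementwise sampling, a pattern accelerators execute efficiently."""
    h = (rng.random(4) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(8) < sigmoid(h @ W.T + b_v)).astype(float)
    return v

# Run a short chain from a random binary assignment.
v = rng.integers(0, 2, size=8).astype(float)
for _ in range(100):
    v = gibbs_step(v)
```

Because each sweep is independent of chain length and identical in shape, many chains can be batched into a single larger matrix multiply, which is how this style of search exploits a TPU cluster's throughput.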

Bio: Vinod Nair is a researcher at DeepMind focusing on machine learning for combinatorial optimization. Before joining DeepMind, and more recently Google Brain, he worked at Microsoft Research. He received his Ph.D. in deep learning from the University of Toronto in 2010.