Andrés Gómez
Assistant Professor
Department of Industrial & Systems Engineering
University of Southern California
310B Olin Hall, 3650 McClintock Avenue, Los Angeles, CA 90089
Contact: gomezand@usc.edu
BIO
Andrés Gómez received his B.S. in Mathematics and B.S. in Computer Science from the Universidad de los Andes (Colombia) in 2011 and 2012, respectively. He then obtained his M.S. and Ph.D. in Industrial Engineering and Operations Research from the University of California, Berkeley, in 2014 and 2017, respectively. From 2017 to 2019, Dr. Gómez was an Assistant Professor in the Department of Industrial Engineering at the University of Pittsburgh, and since 2019 he has been an Assistant Professor in the Department of Industrial and Systems Engineering at the University of Southern California. Dr. Gómez's research focuses on developing new theory and tools for challenging optimization problems arising in finance, machine learning and statistics.
RESEARCH
The past two decades have witnessed an explosion in the use of optimization to tackle problems arising in data analysis, finance, statistics and, more recently, machine learning. These new application domains demand faster, more scalable and more precise algorithms, yet classical optimization tools and techniques have struggled to keep up. In particular, while there has been enormous progress in solving convex optimization problems, most inference problems become non-convex once priors or interpretability conditions are imposed. Decision-makers are thus faced with a dichotomy: either construct a (crude) convex approximation of the problem, which can be solved efficiently but may yield sub-optimal or even poor solutions, or tackle the non-convex problem directly to obtain optimal solutions, at the expense of large, often prohibitive, computational times.
Dr. Gómez's goal is to bridge the gap between these two extremes. His research focuses on systematically constructing strong or ideal convex relaxations of difficult problems. Such relaxations can be used both to obtain high-quality solutions quickly and to solve problems to optimality efficiently, yielding the best of both worlds; a small illustrative example appears after the list below. His research uses ideas from the following disciplines:
Discrete optimization (combinatorial, submodularity)
Mixed-integer optimization (branch-and-bound, lifting, disjunctive programming)
Convex optimization (quadratic, conic)
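To give a flavor of what a strong relaxation looks like, consider the classical perspective reformulation, presented here as a generic textbook example rather than as a summary of any particular result. Suppose a binary indicator z switches a continuous variable x on or off, so that x must vanish whenever z = 0, and the quadratic cost x^2 is bounded by an epigraph variable t:

\[
  S \;=\; \bigl\{(x,z,t)\in\mathbb{R}\times\{0,1\}\times\mathbb{R}_{+} \;:\; x^2 \le t,\; x(1-z)=0 \bigr\},
  \qquad
  \operatorname{conv}(S) \;=\; \bigl\{(x,z,t)\in\mathbb{R}\times[0,1]\times\mathbb{R}_{+} \;:\; x^2 \le t\,z \bigr\}.
\]

Dropping integrality naively yields only the weak constraints x^2 ≤ t, 0 ≤ z ≤ 1, whereas the perspective inequality x^2 ≤ tz remains convex (a rotated second-order cone constraint), costs nothing when z ∈ {0,1}, and forces x = 0 whenever z = 0, so it describes the convex hull of S exactly. Strengthening relaxations in this spirit, for far more general structures, is what narrows the gap between the two extremes described above.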
FUNDING
Mixed-Integer Quadratic Optimization: Structural Results and Practical Relaxations. Air Force Office of Scientific Research, Grant No. FA9550-22-1-0369, 2022-2025.
Convexification of Mixed-Integer Quadratic Optimization Problems. Google Research Scholar Program, 2022.
Collaborative Research: CDS&E: Scalable Inference for Spatio-Temporal Markov Random Fields. National Science Foundation, Grant No. 2152777, 2022-2025.
Collaborative Research: CIF: Small: Convexification-based Decomposition Methods for Large-Scale Inference in Graphical Models. National Science Foundation, Grant No. 2006762, 2020-2023.
EAGER: Transforming Additive Nanomanufacturing with Machine Learning. National Science Foundation, Grant No. 1930582, 2019-2021.
Advancing Fractional Combinatorial Optimization: Computation and Applications. National Science Foundation, Grant No. 1818700, 2018-2021.