An important thrust of research in the ACES lab is learning-based optimization, which integrates adaptive and intelligent methods into traditional optimization techniques. In conventional optimization, the focus often lies in balancing exploration and exploitation within a defined set of constraints and search spaces. Commonly used algorithms navigate these spaces by exploring various possibilities and exploiting promising solutions as they arise. Stochastic methods and random-search algorithms, while successful in practice, offer only limited insight into the structure of the search space. By incorporating learning into this process, we believe we can significantly boost efficiency: continually learning and adapting within the optimization space allows us to dynamically refine our search strategies, leading to more effective and efficient outcomes.
Our philosophy centers on leveraging adaptive methods to enhance the search process as we explore and learn the optimization landscape. This approach enables us to conduct more efficient searches based on the knowledge we accumulate, rather than relying on random exploration. Transfer learning is another significant advantage of learning-based methods, as recent research has shown it can reduce the overall cost of optimization. We employ a variety of techniques, including hypergraph neural networks, distributed computing, reinforcement learning (RL), adaptive sampling, and Gaussian sampling, to develop effective optimization methods. By learning the intricacies of the optimization space, our methods can identify optimal paths and solutions more effectively. In both static and dynamic settings, our aim is to push the boundaries of traditional optimization, creating intelligent systems that autonomously learn, adapt, and improve, leading to innovative and efficient solutions.
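As a rough illustration of the adaptive-sampling idea (a minimal sketch, not the AdaNS algorithm or any specific method from our papers), the following Python snippet searches a toy two-dimensional hyperparameter space with a Gaussian proposal that is recentered and tightened as better points are found, instead of sampling uniformly at random. The objective function and all parameter values here are hypothetical placeholders.

```python
import numpy as np

# Toy objective over a 2-D hyperparameter space; in practice this would be a
# costly evaluation such as training a model with the given settings.
def objective(x):
    return np.sum((x - np.array([0.3, -1.2])) ** 2)

rng = np.random.default_rng(0)
mean = np.zeros(2)      # current center of the Gaussian proposal
scale = 1.0             # proposal spread, shrunk as evidence accumulates
best_x, best_f = mean, objective(mean)

for step in range(30):
    # Draw candidates around the current best region rather than uniformly.
    candidates = rng.normal(mean, scale, size=(16, 2))
    values = np.array([objective(c) for c in candidates])
    i = values.argmin()
    if values[i] < best_f:
        best_x, best_f = candidates[i], values[i]
    # Adapt: recenter on the best point found so far and tighten the search.
    mean = best_x
    scale *= 0.9

print("best point:", best_x, "value:", best_f)
```

The key contrast with pure random search is that each round of evaluations informs where the next round samples, which is the behavior our learning-based methods exploit at much larger scale.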
Building on these foundations, our current research extends reinforcement learning–based optimization to more challenging combinatorial optimization (CO) and constraint satisfaction problem (CSP) settings. Here, we are developing RL agents and graph neural network (GNN)–based policies that learn to navigate highly structured discrete solution spaces, capturing dependencies that are typically intractable for classical solvers. By coupling RL with expressive GNN architectures, we aim to create policies that generalize across problem instances, adapt to evolving constraints, and scale to large graphs and real-world optimization tasks. This work pushes learning-based optimization beyond continuous or smooth landscapes, establishing a foundation for intelligent solvers that can reason over complex constraints, combinatorial structure, and dynamic objectives.
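To make the GNN-policy idea concrete, here is a minimal, self-contained sketch (not the architecture used in HypOp or our other work): a single untrained message-passing layer scores the nodes of a small random graph, and a constructive decoder greedily builds a maximal independent set from those scores while masking out infeasible neighbors. In our research the weights would be trained with RL; here they are random, and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small random graph as an adjacency matrix (undirected, no self-loops).
n = 8
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T

# One layer of message passing: each node combines its own features with a
# mean aggregate of its neighbors', then receives a scalar score.
# The weights below are random stand-ins for parameters learned via RL.
x = np.stack([A.sum(1), np.ones(n)], axis=1)            # degree + bias feature
W_self, W_nbr = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
deg = np.maximum(A.sum(1, keepdims=True), 1.0)
h = np.tanh(x @ W_self + (A @ x) / deg @ W_nbr)         # node embeddings
scores = h @ rng.normal(size=4)                         # per-node scores

# Constructive decoding: repeatedly pick the highest-scoring feasible node
# and mask out its neighbors, yielding a maximal independent set.
feasible = np.ones(n, dtype=bool)
solution = []
while feasible.any():
    v = int(np.argmax(np.where(feasible, scores, -np.inf)))
    solution.append(v)
    feasible[v] = False
    feasible[A[v] > 0] = False

print("independent set:", solution)
```

The same pattern of learned node scores plus feasibility-aware decoding extends to other CO and CSP tasks, with the policy trained so that its scores lead the decoder toward high-quality solutions across problem instances.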
Some of our results in this direction are published in the HypOp and AdaNS papers.
An overview of HypOp, an optimization tool for distributed constrained combinatorial optimization.
Hypergraph modeling and distributed computing technique used in HypOp.
Overview of AdaNS adaptive sampling methodology for hyperparameter customization.