This talk will provide an overview of recent advances in large language models (LLMs) and their applications in operations research, including leveraging LLMs to generate and formulate optimization models and develop novel optimization algorithms. A particular focus will be on using LLMs to enhance the interpretability of optimization models for practitioners. One of the key challenges in deploying optimization models is ensuring that practitioners can effectively understand and interpret their results. To address this, we have developed OptiChat, a software platform designed to assist users by diagnosing infeasibilities, conducting sensitivity analyses, generating counterfactual explanations, and responding to general queries. OptiChat has been rigorously tested on a diverse dataset of optimization-related queries. The talk will conclude with a discussion on both the opportunities and limitations of applying LLMs in optimization.
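Infeasibility diagnosis of the kind OptiChat performs typically rests on first isolating an irreducible infeasible subsystem (IIS), a minimal set of mutually conflicting constraints that an LLM can then explain in plain language. A minimal sketch of the classic deletion filter, using toy interval constraints on a single variable (this is an illustrative stand-in, not OptiChat's actual API):

```python
def feasible(constraints):
    """Interval constraints (lo, hi) on one variable x are jointly
    satisfiable iff the tightest lower bound meets the tightest upper bound."""
    return max(lo for lo, _ in constraints) <= min(hi for _, hi in constraints)

def deletion_filter(constraints):
    """Classic deletion filter: drop each constraint in turn; if the rest
    is still infeasible, that constraint is not needed in the conflict.
    What survives is an irreducible infeasible subsystem (IIS)."""
    core = list(constraints)
    for c in constraints:
        rest = [d for d in core if d != c]
        if rest and not feasible(rest):
            core = rest
    return core

# Four bounds on x; only x <= 8 and x >= 11 actually conflict.
print(deletion_filter([(0, 10), (5, 8), (9, 12), (11, 15)]))
# -> [(5, 8), (11, 15)]
```

In practice a solver's IIS routine plays the role of `deletion_filter`, and the names of the surviving constraints are what get handed to the language model for a natural-language explanation.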
Bio: Can obtained his bachelor’s degree in Chemical Engineering from Tsinghua University, China, and completed his PhD in Chemical Engineering at Carnegie Mellon University. His PhD research focused on stochastic mixed-integer nonlinear programming and long-term expansion planning of power systems. He then spent a year as a postdoctoral researcher at Polytechnique Montréal, using machine learning techniques to accelerate optimization algorithms. He joined the Davidson School of Chemical Engineering at Purdue University as an assistant professor in Fall 2022. His research group focuses on optimization, machine learning, and applications in sustainable energy systems. His group won Air Liquide’s global scientific challenge on data sharing for decarbonization in 2023, an Amazon Research Award in 2024, and an NSF CAREER Award in 2025.
While optimization technologies have advanced significantly in solving discrete combinatorial problems, formally modeling these problems still requires expertise and remains a major bottleneck for wider adoption. Recent works have therefore explored the use of Large Language Models (LLMs) to assist in the modeling process by translating problem descriptions into executable constraint models. These studies have highlighted the potential of LLMs and shown that prompt engineering, inference-time compute methods, and related strategies can further improve performance. However, evaluating such systems remains difficult: existing benchmarks often do not reflect the diversity of real-world combinatorial problems, while assessing the correctness of generated constraint models can itself be challenging. In this talk, I will focus on these evaluation challenges and present DCP-Bench-Open, a new benchmark we have developed for assessing LLM-based constraint modeling on a broad and diverse collection of well-known problems. I will show how it enables systematic evaluation across modeling systems with different abstraction levels and syntactic requirements, and what this reveals about the strengths and limitations of current LLMs for combinatorial modeling. The talk will conclude with a discussion of how the use of LLMs may help transform constraint modeling and solving into a more interactive process, making it more accessible and expanding its applicability to a wider range of real-world problems.
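One common workaround when assessing generated models without ground-truth access is to judge a candidate model through its solutions: a hand-written checker for the problem validates any assignment the candidate produces. A minimal sketch under illustrative assumptions (the toy problem and all names here are hypothetical, not part of DCP-Bench-Open):

```python
def check_rooks(solution):
    """Checker for a toy problem: place n non-attacking rooks, one per row,
    so the row-indexed list of columns must be a permutation."""
    return len(set(solution)) == len(solution)

def verify_model(candidate_solutions, checker):
    """Accept a candidate model only if every solution it emits passes the
    problem checker. Note this is necessary, not sufficient, evidence:
    a model can pass while still missing constraints or solutions."""
    return all(checker(s) for s in candidate_solutions)

print(verify_model([[0, 2, 1], [1, 0, 2]], check_rooks))  # True
print(verify_model([[0, 0, 1]], check_rooks))             # False
```

The caveat in the comment is exactly why the abstract calls correctness assessment challenging: solution checking catches over-constrained or wrong models only when they emit a bad solution, and cannot distinguish an under-constrained model from a correct one.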
Bio: Dimos Tsouros is a Lecturer and Researcher in Artificial Intelligence at the University of Western Macedonia. He obtained his Ph.D. in 2021 under the supervision of Kostas Stergiou, focusing on the integration of Constraint Programming and Machine Learning to assist the modeling process. His Ph.D. research received an honorable mention at the 2022 Doctoral Dissertation Award of the Association for Constraint Programming. From 2022 to 2025, he was a postdoctoral researcher in the DTAI research group at KU Leuven. His research lies at the intersection of Constraint Programming and Machine Learning, with a focus on interactive constraint acquisition and explainable constraint solving. More recently, he has been exploring the use of large language models in this direction, with a focus on supporting constraint modeling. He contributes to the CPMpy modeling library and leads the development of PyConA. His work aims to develop human-aware AI systems that learn from users and provide meaningful explanations.
Louis-Martin Rousseau
Polytechnique Montréal, Canada
Large language models are increasingly used to translate natural-language problem descriptions into formal combinatorial optimization models. As these pipelines become more autonomous, two questions become central: how to generate correct models reliably, and how to verify their correctness without ground-truth access. We address both through the lens of framework diversity. We present a cross-framework iterative generation pipeline in which multiple modeling frameworks are built simultaneously from a shared decomposition plan. Cross-framework coherence checking and consensus mechanisms allow correct constraint patterns to propagate across frameworks, substantially improving accuracy over independent single-framework generation, with the strongest gains for frameworks less represented in LLM training data. We then investigate LLM-as-a-Judge for combinatorial optimization, showing that grouped cross-framework evaluation enhances judging consistency through comparative reasoning, significantly reducing false negatives compared to individual assessment. We also identify and quantify positional and self-preference biases in LLM-based evaluation, and demonstrate that using LLM judges to select the best solution across frameworks approaches virtual-best accuracy. Together, these results establish framework diversity as a unifying principle that strengthens both the generation and evaluation stages of autonomous modeling pipelines.
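One way to picture the consensus mechanism described above, in a purely illustrative form (this is not the authors' pipeline): if several frameworks independently model the same problem, agreement on the optimal objective value is evidence of correctness, and the majority value can flag outlier models for another repair iteration.

```python
from collections import Counter

def consensus_objective(results):
    """results: {framework_name: objective_value, or None on failure}.
    Returns the majority objective value and the frameworks that
    disagree with it (candidates for cross-framework repair)."""
    values = [v for v in results.values() if v is not None]
    majority, _ = Counter(values).most_common(1)[0]
    outliers = [f for f, v in results.items() if v != majority]
    return majority, outliers

res = {"MiniZinc": 42, "CPMpy": 42, "OR-Tools": 40, "Gurobi": 42}
print(consensus_objective(res))  # (42, ['OR-Tools'])
```

A real pipeline would compare solutions and constraint structure, not just objective values, and would feed the outlier's model back through the shared decomposition plan; this sketch only conveys the voting idea.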
Bio: Louis-Martin Rousseau is a Full Professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal and a member of CIRRELT. He holds a Ph.D. in Computer Science and Operations Research from Université de Montréal (2002). He was among the first researchers to investigate the hybridization of operations research methods and constraint programming techniques from artificial intelligence. Since 2016, he has held the Tier 1 Canada Research Chair in Healthcare Analytics and Logistics (HANALOG). His research pursues a unified agenda: bridging machine learning and combinatorial optimization to support complex operational decisions under uncertainty. On the methodological side, this translates into embedding learned components — such as reinforcement learning, graph neural networks, and predictive models — directly into optimization solvers to improve their search and bounding mechanisms. On the applied side, healthcare serves as the primary proving ground, where his group leverages these hybrid approaches to improve scheduling, resource allocation, and patient outcome prediction across cancer treatment, home care, and hospital logistics.
Constraint Programming (CP) and its high-level modeling languages have long promised to be a powerful paradigm for solving complex combinatorial problems. Yet, their adoption by non-experts remains limited due to the inherent difficulty of modeling: the richness of global constraints, the subtleties of formulation, and the expertise required to design efficient models. In this master class lecture, I will present CP-Model-Zoo, a tutoring-oriented system that aims to bridge this gap by leveraging a large collection of expert-written CP models accumulated over time. Given a natural language description of a problem, CP-Model-Zoo retrieves the most relevant model from its database, allowing users to benefit directly from expert knowledge without requiring manual labeling or deep modeling expertise. Experimental results demonstrate that our approach achieves strong accuracy in retrieving appropriate models, even when the input descriptions vary in clarity and expertise level. I will also briefly introduce the concept of Retrieval-Augmented Generation (RAG) and the associated techniques.
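Retrieval of the kind CP-Model-Zoo performs can be sketched with a minimal baseline: score each stored model's description against the user's query and return the nearest one. Bag-of-words cosine similarity here stands in for whatever representation the actual system uses; all file names and descriptions are illustrative.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, zoo):
    """zoo: {model_name: natural-language description}. Returns the
    stored model whose description best matches the query."""
    qv = Counter(query.lower().split())
    return max(zoo, key=lambda m: cosine(qv, Counter(zoo[m].lower().split())))

zoo = {
    "nqueens.mzn": "place queens on a chessboard so none attack each other",
    "knapsack.mzn": "select items with weights and values to maximize value under capacity",
}
print(retrieve("pack items into a bag to maximize total value", zoo))
# -> knapsack.mzn
```

In a RAG setting, the retrieved expert model (or its most relevant fragments) is then placed in the LLM's context, so the generated answer is grounded in vetted formulations rather than produced from scratch.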
Bio: Pierre Schaus is a Professor of Computer Science at UCLouvain. He obtained his Ph.D. in 2009 under the supervision of Yves Deville, working on global constraints and bin packing in Constraint Programming. After research stays at Brown University under the supervision of Pascal Van Hentenryck and industrial experience at Dynadec and N-SIDE in Belgium, he returned to UCLouvain in 2012 as a professor. He has contributed to the development of several open-source CP solvers, including OscaR, MiniCP, and MaxiCP. His recent research focuses on CP and DP solvers, their applications, and exact machine learning algorithms.
Language models are becoming a primary interface for interacting with machines, yet they still struggle with the structure and reasoning required for formal modeling. This talk presents three complementary efforts aimed at closing this gap: Text2Model, Text2Zinc, and Learn2Zinc.
Text2Model formalizes the text‑to‑model translation task and introduces a suite of publicly available modeling copilots using zero‑shot prompting, chain‑of‑thought reasoning, knowledge‑graph representations, grammar‑guided validation, and agentic decomposition. These strategies form a solver‑agnostic pipeline with an online leaderboard for evaluating execution and solution accuracy.
Text2Zinc provides the first cross‑domain, paradigm- and solver‑agnostic dataset covering both satisfaction and optimization problems. It unifies instances from major OR and CP sources into a consistent schema with curated MiniZinc models, metadata, and an interactive editor with an AI assistant for ongoing curation.
Learn2Zinc explores whether language models can learn a domain‑specific modeling language through targeted fine‑tuning. Using cross‑model bootstrapping, it teaches small models to generate syntactically valid MiniZinc despite zero pretraining exposure, achieving high execution accuracy while highlighting a persistent gap in deeper constraint‑modeling reasoning.
Bio: Serdar Kadıoğlu is an Adjunct Associate Professor in the Department of Computer Science at Brown University and Group Vice President of Artificial Intelligence in the AI Center of Excellence at Fidelity Investments. He previously led the Advanced Constraint Technology group at Oracle and worked at Adobe. He has established strategic partnerships across industry and academia, including collaborations with Amazon, NVIDIA, Carnegie Mellon University, and Harvard Business School. His dual academic-industry career spans the design and deployment of learning and reasoning systems across technical and financial products, with a strong emphasis on open-source innovation.