LLMs meet Constraint Solving
CP/SAT 2025 Workshop
Monday August 11, 2025
Glasgow, Scotland
The workshop will be held at the University of Glasgow's Gilmorehill Campus in the west end of Glasgow, on the ground floor of the Wolfson Medical Building (Yudowitz Room).
More information regarding the venue can be found at: https://cp2025.a4cp.org/venue.html
9:00 - 9:40: Invited talk
Florian Régin: Neurosymbolic CP
9:40 - 10:30: Session "Assisting constraint modeling with LLMs"
Augustin Crespin, Ioannis Kostis, Hélène Verhaeghe, Pierre Schaus
"CP-Model-Zoo: A Natural Language Query System for Constraint Programming Models"
Adam Kuca, Phong Le, Nguyen Dang
"Automated Constraint Model Modification with Large Language Models"
10:30 - 11:00: Coffee break
11:00 - 12:30: Session "LLMs and Solvers"
Haoyu Wang, Dan Roth
"LLMs as Translators, Not Thinkers: Structured Output Enables Stronger NP-Hard Problem Solving"
Wilfred Springer
"HiGHS-MCP: Enabling Large Language Models to Solve Optimization Problems with HiGHS"
Roberto Amadini, Simone Gazza
"From Portfolio Solvers to Agentic Solvers"
12:30 - 13:30: Lunch break
13:30 - 14:10: Joint talk with PTHG workshop
Stefan Szeider: Neural Meets Symbolic: Synergies Between Language Models and Constraint Reasoning
14:10 - 15:00: Session "Generating and evaluating constraint models"
Yuliang Song, Eldan Cohen
Emma Frechette, Louis-Martin Rousseau, Tias Guns
"LLM-as-a-Judge For Combinatorial Optimization Modelisations from Natural Language"
15:00 - 15:30: Coffee break
15:30 - 16:00:
Kostis Michailidis, Dimos Tsouros, Tias Guns
"Benchmarking LLMs for CP model generation"
16:00 - 17:30: Joint Panel with PTHG workshop on "Are LLMs the missing piece to get us to the Holy Grail?"
Title: Neurosymbolic CP
Abstract: Many real-world systems evolve over time: they are dynamic processes, often modeled by state graphs. To ensure that these systems behave correctly, formal verification methods are used, such as model checking (MC), which allows us to verify that a property always holds or that an undesirable property never occurs. Traditionally, these approaches rely on formal logic, as in constraint programming (CP). But with the emergence of large language models (LLMs), new possibilities are arising. The goal then becomes to combine the generative capabilities of LLMs with the rigor of logical methods, in what is known as a neuro-symbolic approach. During my PhD, I developed GenCP, a method capable of dynamically constructing a constraint programming model from a specification. This approach demonstrated competitive performance, even outperforming traditional CP solvers like Choco and OR-Tools on classic CP problems such as N-Queens and All Interval Series. Subsequently, GenCP was integrated into a neuro-symbolic approach: the LLM was incorporated to automatically generate sentences that satisfy logical constraints. The originality of this approach lies in using the LLM to dynamically define the domain of constrained variables, making the cooperation between logic and LLM both simple and effective.
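As a rough illustration of the idea described in this abstract, the following minimal Python sketch shows a generator standing in for the LLM proposing candidate values for each variable's domain on the fly, with symbolic constraint checks filtering them during backtracking search. All names (propose_next_words, satisfies_constraints, search) and the fixed vocabulary stub are hypothetical; this is not the GenCP implementation.

# Stand-in for an LLM call: propose candidate next words given the current prefix.
def propose_next_words(partial_sentence):
    vocabulary = ["solvers", "reason", "about", "constraints", "reliably"]
    return [w for w in vocabulary if w not in partial_sentence]

# Illustrative symbolic constraints: bounded total length, no repeated words.
def satisfies_constraints(partial_sentence, max_chars=40):
    text = " ".join(partial_sentence)
    return len(text) <= max_chars and len(set(partial_sentence)) == len(partial_sentence)

# Backtracking search in which each variable's domain is generated dynamically.
def search(partial_sentence, n_words):
    if len(partial_sentence) == n_words:
        return partial_sentence
    for word in propose_next_words(partial_sentence):  # dynamically proposed domain
        candidate = partial_sentence + [word]
        if satisfies_constraints(candidate):           # symbolic filtering
            result = search(candidate, n_words)
            if result is not None:
                return result
    return None

print(search([], 4))  # prints a 4-word "sentence" that satisfies the constraints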
Title: Neural Meets Symbolic: Synergies Between Language Models and Constraint Reasoning
Submission deadline: June 20, 2025
Notification: June 25, 2025
Workshop date: August 11, 2025
Large Language Models (LLMs) have shown remarkable capabilities in natural language processing, opening up exciting opportunities in many domains and applications, such as chatbots and code generation. However, their reliance on statistical patterns makes them prone to errors and hallucinations and leaves them without formal guarantees, hindering their application in domains that require structured reasoning and reliability.
On the other hand, constraint-solving technologies such as CP/SAT/OR provide a framework for modeling and solving combinatorial problems with provable guarantees on correctness, optimality, and explainability. Yet, the need for expert knowledge in formalizing problems and selecting appropriate solving techniques often limits the wider adoption of such technologies.
By combining the strengths of LLMs and constraint solving, new opportunities arise: LLMs can assist in model formulation, constraint acquisition, and interactive solving, while constraint solvers can provide formal verification, control mechanisms, and structured reasoning for LLMs. Recent research has already shown promising results in both directions, from LLM-assisted constraint modeling to using CP/SAT/OR solvers for guiding and validating LLM outputs.
The LLM-Solve workshop aims to bring together researchers working at this exciting frontier to discuss challenges, synergies, and future research directions, and to help shape the future of LLM-powered constraint solving and constraint-driven LLMs.
The LLM-Solve 2025 workshop aims to bring together researchers exploring the intersection of Large Language Models (LLMs) and Constraint Solving (CP, SAT, SMT, MIP, and related paradigms). This workshop provides a platform to discuss recent advances, challenges, and opportunities in combining LLMs and constraint solving.
The workshop covers both directions of this interaction:
LLMs for Constraint Solving: Investigating how LLMs can be used to tackle challenges in constraint solving research, i.e., to assist in constraint modeling, solving, and explanation; this includes automated constraint acquisition, solver configuration and selection, solver heuristics, automated solver code generation, and natural-language-based solving and solution refinement.
Constraint Solving for LLMs: Exploring how constraint-solving techniques can improve LLM reasoning, verification, and safety, with applications in structured reasoning, formal verification, and more.
The topics of interest include (but are not restricted to):
LLMs for constraint modeling and acquisition
LLM-guided solver heuristics and search strategies
Hybrid approaches combining LLMs and constraint solvers
Constraint solvers for improving LLM reasoning and verification
SAT/CP-based methods for controlling or guiding LLM outputs
LLM-driven explanations and interactive constraint solving
Benchmarks, datasets and evaluation methodologies for LLM-Constraint integration
Real-world applications involving the use of constraint-solving technologies and LLMs.
This workshop welcomes contributions from both theoretical and applied perspectives, fostering discussions between researchers in CP, SAT, AI, OR, and NLP who are interested in bridging the gap between constraint solving and natural language processing with LLMs.
Authors are invited to submit contributions in the form of extended abstracts (maximum 2 pages, not including references). Authors may optionally append a technical report of up to 10 pages after the extended abstract. Submissions can be published journal/conference papers, original work, work in progress with preliminary results, or position papers.
Contributions should be submitted in the form of a PDF file, following the LIPIcs guidelines: https://submission.dagstuhl.de/series/details/5#author
The workshop co-chairs will select the papers to be presented at the workshop according to their suitability to the workshop's aims. All presenters and attendees are expected to register for the CP/SAT workshop day.
Submission web page: https://openreview.net/group?id=a4cp.org/CP/2025/Workshop/LLM-Solve
The workshop co-chairs are:
Tias Guns (KU Leuven, Belgium), https://people.cs.kuleuven.be/~tias.guns/
Serdar Kadıoglu (Brown University, USA), https://skadio.github.io/
Stefan Szeider (TU Wien, Austria), https://www.ac.tuwien.ac.at/people/szeider/
Dimos Tsouros (KU Leuven, Belgium), https://dimostsouros.github.io/