Large Language Models Can Solve Real-World Planning Rigorously with Formal Verification Tools
Yilun Hao, Yongchao Chen, Yang Zhang, Chuchu Fan
Massachusetts Institute of Technology
NAACL 2025 Main (Oral)
[Paper] [Dataset & Code]
See our closely related follow-up work, which extends this framework into a zero-shot, general-purpose planner for realistic planning problems:
Abstract
Large Language Models (LLMs) struggle to directly generate correct plans for complex multi-constraint planning problems, even with self-verification and self-critique. For example, on TravelPlanner, a U.S. domestic travel planning benchmark proposed by Xie et al. (2024), the best-performing LLM, OpenAI o1-preview, finds viable travel plans only 10% of the time even when given all the needed information. In this work, we address this limitation with an LLM-based planning framework that formalizes and solves complex multi-constraint planning problems as constrained satisfiability problems, which are then handed to sound and complete satisfiability solvers. We start with TravelPlanner as the primary use case and achieve a success rate of 93.9%. We demonstrate the framework's robustness by showing its effectiveness on diverse paraphrased prompts. More importantly, the framework has strong zero-shot generalizability: it successfully handles unseen constraints in our newly created unseen international travel dataset and generalizes well to new domains such as routing and task-allocation problems. Moreover, when a user's input query is infeasible, our framework can identify the unsatisfiable core, provide failure reasons, and offer personalized modification suggestions. We show that our framework can modify and solve an average of 81.6% and 91.7% of unsatisfiable queries from two datasets, respectively, and demonstrate through ablations that all key components of our framework are effective and necessary.
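To make the "formalize as a constrained satisfiability problem" step concrete, here is a minimal hand-written sketch of the kind of encoding a solver such as Z3 consumes. In the actual framework the LLM generates this code from the natural language query; the hotel and flight names, prices, and budget below are purely hypothetical illustration data, not from the paper or its datasets.

```python
# Toy query: "3-day trip, total budget $600, pick exactly one hotel and one flight."
# Hand-written illustration of an SMT encoding; all names and prices are hypothetical.
from z3 import Bool, If, Sum, Solver, sat, is_true

hotels  = {"Inn A": 90, "Inn B": 140}   # price per night (hypothetical)
flights = {"F100": 210, "F200": 330}    # round-trip price (hypothetical)
days, budget = 3, 600

pick_hotel  = {h: Bool(f"hotel_{h}") for h in hotels}
pick_flight = {f: Bool(f"flight_{f}") for f in flights}

s = Solver()
s.add(Sum([If(v, 1, 0) for v in pick_hotel.values()]) == 1)   # exactly one hotel
s.add(Sum([If(v, 1, 0) for v in pick_flight.values()]) == 1)  # exactly one flight
cost = Sum([If(pick_hotel[h], hotels[h] * days, 0) for h in hotels] +
           [If(pick_flight[f], flights[f], 0) for f in flights])
s.add(cost <= budget)                                          # budget constraint

if s.check() == sat:
    m = s.model()
    plan = [name for name, var in {**pick_hotel, **pick_flight}.items() if is_true(m[var])]
    print("Feasible plan:", plan)
```

Because the solver is sound and complete, any plan it returns satisfies every encoded constraint, and if it reports unsatisfiability, no plan exists for that encoding.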
Given a natural language query, the LLM (1) generates steps to formulate it as an SMT problem and (2) generates the corresponding code that encodes the problem and calls the solver. If the solver cannot find a solution, the LLM receives the unsatisfiability reasons from the solver, collects information, analyzes the current situation, and interactively offers suggestions for modifying the query. The LLM then updates the code based on these suggestions and calls the solver again to find a feasible plan.
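The feedback loop above relies on the solver reporting why a query is infeasible. A minimal sketch of that mechanism, not the paper's actual generated code, is shown below: named assertions let Z3 return an unsatisfiable core, which serves as the "failure reason" that the LLM turns into modification suggestions. The constraint names and numbers are hypothetical.

```python
# Toy infeasible query: ">= 3 nights at $150/night within a $400 budget."
# Named assertions expose the conflicting constraints via the unsat core.
from z3 import Int, Solver, unsat

nights = Int("nights")
budget = 400                                              # hypothetical tight budget

s = Solver()
s.assert_and_track(nights >= 3, "min_three_nights")       # user wants at least 3 nights
s.assert_and_track(nights * 150 <= budget, "stay_within_budget")  # $150 per night

if s.check() == unsat:
    core = s.unsat_core()                                 # e.g., [min_three_nights, stay_within_budget]
    print("Conflicting constraints:", [str(c) for c in core])
    # The framework feeds these named constraints back to the LLM, which proposes a
    # concrete fix (e.g., raise the budget or shorten the stay), updates the generated
    # code, and calls the solver again until a feasible plan is found or the user declines.
```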
@article{hao2024large,
  title   = {Large Language Models Can Plan Your Travels Rigorously with Formal Verification Tools},
  author  = {Hao, Yilun and Chen, Yongchao and Zhang, Yang and Fan, Chuchu},
  journal = {arXiv preprint arXiv:2404.11891},
  year    = {2024}
}