Harnessing Large Language Models for Planning: A Lab on Strategies for Success and Mitigation of Pitfalls
February 21, 2024 [10:45 am to 12:30 pm] | Vancouver Convention Centre – West Building | Vancouver, BC, Canada
In the rapidly evolving landscape of artificial intelligence, Large Language Models (LLMs) have emerged as potent tools with an impressive aptitude for understanding and generating human-like text. Their relevance to AI planning is particularly noteworthy given the similarity between planning tasks and code-related tasks, a forte of LLMs: planning problems are typically written in the Planning Domain Definition Language (PDDL), whose Lisp-like syntax resembles programming code. This makes planning fertile ground for exploring the capabilities of LLMs in devising effective and efficient plans.

This lab delves into the nuances of using LLMs for planning, offering participants a comprehensive understanding of the techniques integral to the functioning of these models. Participants will be introduced to supervised fine-tuning and a range of prompting techniques, and will critically analyze which approaches significantly enhance planning capabilities.

At the heart of the lab is a hands-on session in which participants work closely with 'Plansformer', our proprietary fine-tuned model developed explicitly for planning tasks. This session provides a comparative analysis of current state-of-the-art LLMs, including GPT-4, GPT-3.5, BARD, and Llama2, offering insights into their respective strengths and weaknesses in planning. We will also briefly explain and demonstrate how neuro-symbolic approaches can correct invalid plans generated by LLMs.
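To make the prompting idea concrete, the sketch below assembles a few-shot prompt for plan generation. It is a minimal illustration only: the PDDL fragments, the instruction wording, and the helper names are placeholders for this writeup, not the lab's actual materials.

```python
# Hypothetical few-shot prompt construction for LLM-based plan generation.
# The example problem/plan pair and all strings are illustrative.
EXAMPLE = """\
; problem: block a on the table, arm empty; goal: holding a
(:init (clear a) (ontable a) (handempty))
(:goal (holding a))
; plan:
(pick-up a)"""

def build_prompt(problem_pddl, shots=(EXAMPLE,)):
    """Prepend an instruction and worked examples to a new PDDL problem."""
    demos = "\n\n".join(shots)
    return (
        "You are a PDDL planner. Given a problem, output only a valid plan.\n\n"
        f"{demos}\n\n{problem_pddl}\n; plan:"
    )

prompt = build_prompt(
    "(:init (clear b) (ontable b) (handempty))\n(:goal (holding b))"
)
```

The resulting string would then be sent to whichever model is under study (GPT-4, Llama2, etc.); varying the number and choice of demonstrations is one of the prompting knobs the lab compares.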
Goal of the Lab
Gain a comprehensive understanding of the potential and limitations of LLMs in planning.
Examine the performance of various language-modeling strategies (causal, masked, and seq2seq) in the context of plan generation.
Understand the common failures LLMs encounter when generating plans for PDDL planning problems.
Discover strategies for repairing incorrect plans generated by LLMs by integrating neuro-symbolic techniques, resulting in valid plans.
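The last goal, catching invalid LLM-generated plans with symbolic checks, can be sketched as a simulate-and-validate loop. The action models below are a hand-written blocks-world fragment chosen for this illustration; they are not taken from the lab's materials.

```python
# Minimal symbolic plan validator: simulate a candidate plan over
# STRIPS-style action models and reject it if any precondition fails
# or the goal does not hold at the end. Action names are illustrative.
ACTIONS = {
    # name: (preconditions, add effects, delete effects)
    "pickup-a":  ({"clear-a", "ontable-a", "handempty"},
                  {"holding-a"},
                  {"clear-a", "ontable-a", "handempty"}),
    "stack-a-b": ({"holding-a", "clear-b"},
                  {"on-a-b", "clear-a", "handempty"},
                  {"holding-a", "clear-b"}),
}

def validate(plan, init, goal):
    """Return True iff `plan` is applicable from `init` and achieves `goal`."""
    state = set(init)
    for act in plan:
        pre, add, dele = ACTIONS[act]
        if not pre <= state:          # precondition violated: invalid plan
            return False
        state = (state - dele) | add  # apply effects
    return goal <= state

init = {"clear-a", "clear-b", "ontable-a", "ontable-b", "handempty"}
goal = {"on-a-b"}

# A correct plan passes; a plan that skips a step (a typical LLM failure
# mode) is caught by the symbolic check.
assert validate(["pickup-a", "stack-a-b"], init, goal)
assert not validate(["stack-a-b"], init, goal)
```

In a neuro-symbolic pipeline, a rejected plan would be fed back (e.g., with the failed precondition) to the LLM or handed to a classical planner for repair, which is the complementarity the lab demonstrates.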