Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing
Figure: Comparison between hand-crafted prompts and Evoke-generated prompts.
How does Evoke work?
The workflow comprises three steps. First, the LLM-Author edits the prompts from previous iterations, taking into account both past edits and the feedback from the LLM-Reviewer. Second, the LLM-Reviewer scores the revised prompts, and the top-n candidates with the highest scores are retained for the next iteration. The LLM-Reviewer is supported by a memory module that stores the edit history, the prompts themselves, and the task accuracy of past prompts. Finally, the task accuracy of each instruction is computed and recorded in the memory module.
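The interaction between the two roles can be summarized as a search loop. Below is a minimal Python sketch of that loop, assuming hypothetical callables `author_edit`, `reviewer_score`, and `eval_accuracy` that stand in for the actual LLM calls and evaluation harness (these names are illustrative, not the paper's API):

```python
from typing import Callable, List

def evoke_search(
    initial_prompts: List[str],
    author_edit: Callable[[str, list], str],      # LLM-Author: revise a prompt given memory
    reviewer_score: Callable[[str, list], float], # LLM-Reviewer: score a revised prompt
    eval_accuracy: Callable[[str], float],        # task accuracy of a prompt on a dev set
    iterations: int = 5,
    top_n: int = 3,
) -> str:
    """Hedged sketch of the Evoke reviewer-author prompt-editing loop."""
    candidates = list(initial_prompts)
    memory: list = []  # memory module: past prompts and their task accuracy

    for _ in range(iterations):
        # Step 1: the LLM-Author edits each candidate, conditioned on
        # past edits and the LLM-Reviewer's feedback held in memory.
        revised = [author_edit(p, memory) for p in candidates]

        # Step 2: the LLM-Reviewer scores the revisions; keep the top-n.
        ranked = sorted(revised, key=lambda p: reviewer_score(p, memory), reverse=True)
        candidates = ranked[:top_n]

        # Step 3: compute task accuracy for each surviving prompt and
        # record it in memory so the next round of edits can use it.
        for p in candidates:
            memory.append({"prompt": p, "accuracy": eval_accuracy(p)})

    # Return the best-performing prompt found during the search.
    return max(memory, key=lambda m: m["accuracy"])["prompt"]
```

In this framing, the reviewer's scores steer the search while the memory module keeps the author from repeating edits that have already failed.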
Results
We evaluate how Evoke improves the performance of LLMs across a range of tasks:
1) Instruction Induction
2) Big-Bench Instruction Induction (BBII)
3) Adversarial SST-2 and QQP
4) Named Entity Recognition
For example, on the challenging logical fallacy detection task from BBII, Evoke scores above 80, while both APE and human-written prompts score below 20.
This is because Evoke is adept at conceptualizing the core definition of a task, decomposing a complex task into smaller subtasks, and curating relevant demonstrations accompanied by detailed explanations.
To demonstrate the power of Evoke, we show the generated prompt for logical fallacy detection below.
We observe that Evoke significantly outperforms all baselines across all of these tasks. The gain is especially pronounced on the adversarially constructed datasets.
The tasks above are all sentence-level classification tasks, e.g., deciding whether a sentence carries positive or negative sentiment. Here, we show that Evoke can also handle more fine-grained tasks, such as token-level named entity recognition; a sketch of token-level scoring appears below.
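To make "token-level" concrete, here is a hedged sketch of how such a task is scored: micro-averaged token F1 over BIO tags, in contrast to the per-sentence accuracy used for the classification tasks above. The tagging scheme and the example sentence are illustrative, not drawn from the paper:

```python
def token_f1(gold_tags, pred_tags):
    """Micro-averaged F1 over non-'O' tokens for one sentence."""
    tp = sum(1 for g, p in zip(gold_tags, pred_tags) if g == p and g != "O")
    pred_pos = sum(1 for p in pred_tags if p != "O")  # tokens the model tagged as entities
    gold_pos = sum(1 for g in gold_tags if g != "O")  # tokens that are truly entities
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Usage: "Alice visited Paris" -> gold vs. model-predicted BIO tags.
gold = ["B-PER", "O", "B-LOC"]
pred = ["B-PER", "O", "O"]      # the model misses the location entity
print(token_f1(gold, pred))     # ~0.67: perfect precision, but recall is only 0.5
```

A single wrong tag thus changes the score, which is why token-level tasks are a stricter test of a prompt than sentence-level classification.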