LLM4PLC: Large Language Models for Verifiable Programming of PLCs

We propose LLM4PLC, a user-guided, iterative pipeline that leverages user feedback and external verification tools -- including grammar checkers, compilers, and SMV verifiers -- to guide the LLM's generation. We further enhance the LLM's generation potential through prompt engineering and model fine-tuning via the creation and use of LoRAs. We validate the system on a Fischertechnik Manufacturing TestBed (MFTB), illustrating how LLMs can evolve from generating structurally flawed code to producing verifiably correct programs for industrial applications. We run a complete test suite on GPT-3.5, GPT-4, Code Llama-34B, and a fine-tuned Code Llama-34B model. The proposed pipeline improved the generation success rate from 47% to 72%, and the Survey-of-Experts code-quality score from 2.25/10 to 7.75/10.
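
For illustration, here is a minimal Python sketch of the verification-guided loop described above. It assumes a hypothetical query_llm helper plus locally installed MATIEC (iec2c) and nuXmv binaries; the file names, prompts, and iteration bound are illustrative, not the paper's exact implementation.

```python
import subprocess

MAX_ITERATIONS = 5  # illustrative bound on repair attempts

def query_llm(prompt: str) -> str:
    """Hypothetical helper: sends the prompt to the chosen model
    (GPT-3.5/4, Code Llama-34B, or the fine-tuned variant) and
    returns generated IEC 61131-3 Structured Text."""
    raise NotImplementedError

def run_tool(cmd: list[str]) -> tuple[bool, str]:
    """Run an external checker; report pass/fail and its output."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def generate_verified_st(description: str) -> str | None:
    prompt = f"Write an IEC 61131-3 Structured Text program for:\n{description}"
    for _ in range(MAX_ITERATIONS):
        code = query_llm(prompt)
        with open("candidate.st", "w") as f:
            f.write(code)
        # Gate 1: grammar/compile check with MATIEC's iec2c.
        ok, log = run_tool(["iec2c", "candidate.st"])
        if not ok:
            prompt = f"This program failed to compile:\n{log}\nFix it:\n{code}"
            continue
        # Gate 2: model checking. Assumes an SMV model of the program
        # (its derivation is elided here) is in candidate.smv; nuXmv
        # exits 0 even for false specs, so inspect the report text.
        ok, log = run_tool(["nuXmv", "candidate.smv"])
        if ok and "is false" not in log:
            return code  # every external check passed
        prompt = f"A specification failed:\n{log}\nRevise the program:\n{code}"
    return None  # no verified program; return control to the engineer
```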

Increased Pass Rate

Compilation success rate increases when using LLM4PLC

Decreased Errors

Fewer syntax and logic errors per file when using LLM4PLC

Faster Development

Much easier engineering iteration when using LLM4PLC

Stage 1: Create Natural Language Description and Constraints

Stage 2: LLM4PLC Generates Structured Text and SMV Models
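
As a purely illustrative example (not taken from the paper), Stage 2 emits paired artifacts of roughly this shape: a Structured Text program plus an SMV model whose temporal property pins down the intended behavior.

```python
# Illustrative Stage 2 outputs, embedded as Python strings; the real
# MFTB programs and models are substantially larger.
GENERATED_ST = """
FUNCTION_BLOCK Conveyor
VAR_INPUT  AtPickup : BOOL; AtDropoff : BOOL; END_VAR
VAR_OUTPUT Motor : BOOL; END_VAR
IF AtPickup AND NOT AtDropoff THEN
    Motor := TRUE;
ELSIF AtDropoff THEN
    Motor := FALSE;
END_IF;
END_FUNCTION_BLOCK
"""

GENERATED_SMV = """
MODULE main
VAR
    at_pickup : boolean; at_dropoff : boolean; motor : boolean;
ASSIGN
    next(motor) := case
        at_pickup & !at_dropoff : TRUE;
        at_dropoff              : FALSE;
        TRUE                    : motor;
    esac;
-- the motor must eventually stop once a block reaches the drop-off
LTLSPEC G (at_dropoff -> F !motor)
"""
```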

Stage 3: LLM4PLC Automatically Resolves Errors and Logic Flaws
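
A sketch of the Stage 3 feedback step, again illustrative: raw tool output is distilled into a focused repair prompt before the LLM is re-queried (the function and prompt wording are assumptions, not the paper's exact prompts).

```python
import re

def build_repair_prompt(code: str, tool_log: str) -> str:
    """Keep only the actionable lines from a compiler or nuXmv log
    (error messages, failed specifications) and wrap them with the
    rejected program into a repair request."""
    issues = [
        line for line in tool_log.splitlines()
        if re.search(r"error|is false", line, re.IGNORECASE)
    ]
    feedback = "\n".join(issues) or tool_log.strip()
    return (
        "The Structured Text program below was rejected by the "
        "verification tools.\n"
        f"Reported problems:\n{feedback}\n\n"
        f"Program:\n{code}\n\n"
        "Return a corrected program that resolves every reported problem."
    )
```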

Stage 4: LLM4PLC Deploys Verified Program on PLC
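
Finally, a sketch of the Stage 4 gate, under the same assumptions as the loop above: only a program that has passed every check is handed to the (vendor-specific, here hypothetical) upload step.

```python
def deploy(description: str) -> None:
    """Deployment is gated on full verification."""
    code = generate_verified_st(description)  # loop sketched earlier
    if code is None:
        raise RuntimeError("No verified program within the iteration "
                           "budget; engineer review is required.")
    upload_to_plc(code)

def upload_to_plc(code: str) -> None:
    """Hypothetical placeholder: in practice the verified .st file is
    imported into the vendor's engineering tool and downloaded to
    the controller."""
    raise NotImplementedError
```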

Neutral State

All blocks sit in their designated starting positions before the block-moving program is run

Naive Prompt

Unverified code results in pieces falling (indicated by bright green arrows)

Optimized Prompt

The code verifier helps achieve correct movement of the blocks (indicated by the bright green arrow)

Demo

To the left is a demo of how our PLC model behaves when given the verified, optimized code.