8:30 am – 8:40 am
Welcome & Introduction
8:40 am – 9:10 am
━━
Deming Chen, University of Illinois at Urbana-Champaign
Title: Proof2Silicon: Prompt Repair for Verified Code and Hardware Generation via LLMs
Large Language Models (LLMs) have demonstrated impressive capabilities in automated code generation but frequently produce code that fails formal verification, an essential requirement for hardware and safety-critical domains. To overcome this fundamental limitation, we previously proposed PREFACE, a model-agnostic framework based on reinforcement learning (RL) that iteratively repairs the prompts provided to frozen LLMs, systematically steering them toward formally verifiable Dafny code without costly fine-tuning. This work presents Proof2Silicon, an end-to-end synthesis framework that embeds the PREFACE flow to generate correctness-by-construction hardware directly from natural-language specifications. Proof2Silicon operates by: (1) leveraging PREFACE’s verifier-driven RL agent to iteratively optimize prompts, ensuring Dafny code correctness; (2) automatically translating verified Dafny programs into synthesizable high-level C using Dafny’s Python backend and PyLog; and (3) employing Vivado HLS to produce RTL implementations. Evaluated on a challenging 100-task benchmark, PREFACE’s RL-guided prompt optimization consistently improved Dafny verification success rates across diverse LLMs by up to 21%. Crucially, Proof2Silicon achieved an end-to-end hardware synthesis success rate of up to 72%, generating RTL designs through Vivado HLS synthesis flows. These results demonstrate a robust, scalable, and automated pipeline for LLM-driven, formally verified hardware synthesis, bridging natural-language specification and silicon realization.
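To make the pipeline’s first stage concrete, here is a minimal Python sketch of a verifier-in-the-loop prompt-repair loop in the spirit of PREFACE. The `generate_dafny` and `repair_prompt` callables (the frozen LLM and the RL-driven prompt editor) are hypothetical stand-ins, not PREFACE’s actual interfaces; only the `dafny verify` CLI invocation is a real command.

```python
import subprocess
import tempfile

def verify_dafny(code: str) -> tuple[bool, str]:
    """Run the Dafny verifier on a candidate program (requires the `dafny` CLI)."""
    with tempfile.NamedTemporaryFile(suffix=".dfy", mode="w", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["dafny", "verify", path],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def prompt_repair_loop(spec, generate_dafny, repair_prompt, max_iters=10):
    """Verifier-in-the-loop prompt repair: keep editing the prompt until the
    frozen LLM emits Dafny code that verifies, or the budget is exhausted."""
    prompt = f"Write formally verified Dafny code for: {spec}"
    for _ in range(max_iters):
        code = generate_dafny(prompt)             # frozen LLM; no fine-tuning
        ok, feedback = verify_dafny(code)         # verdict doubles as RL reward
        if ok:
            return code                           # correct by construction
        prompt = repair_prompt(prompt, feedback)  # RL agent picks a prompt edit
    return None                                   # verification budget exhausted
```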
9:10 am – 9:40 am
━━
Igor L. Markov, Synopsys
Title: Navigating Current Limitations to AI Technologies when Designing EDA Products
AI technologies support impressive demos and stir the imagination of researchers and product developers. However, implementation projects run into a variety of limitations in practice. In this talk, I will outline how Synopsys dealt with these limitations and developed a line of EDA products that improve the productivity of IC designers.
9:40 am – 10:00 am
Coffee Break
10:00 am – 10:30 am
━━
Bing Li, University of Siegen
Title: LLM-Assisted Testbench and Circuit Generation
Large Language Models (LLMs) have great potential to boost circuit design productivity by automating design generation and verification in electronic design automation (EDA). This talk introduces methods for automatically generating circuit designs and testbenches from specifications to reduce manual engineering effort. Techniques covered include scenario decomposition, which improves testbench coverage, and simulation-based code generation. Future opportunities for applying LLMs across the EDA flow will also be discussed.
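As one illustration of the scenario-decomposition idea, the Python sketch below splits a specification into verification scenarios and requests one directed testbench per scenario; the prompts and the `llm` callable are assumptions made for exposition, not the talk’s actual implementation.

```python
def generate_testbenches(spec: str, llm) -> list[str]:
    """Scenario decomposition: cover each functional scenario with its own
    directed testbench rather than one monolithic testbench. `llm` is any
    callable mapping a prompt string to a completion string (hypothetical)."""
    scenarios = llm(
        "List, one per line, the distinct functional scenarios (reset, "
        "corner cases, error conditions) a testbench must cover for:\n" + spec
    ).splitlines()
    testbenches = []
    for scenario in scenarios:
        testbenches.append(llm(
            f"Write a SystemVerilog testbench exercising this scenario:\n"
            f"{scenario}\nDesign-under-test specification:\n{spec}"
        ))  # one targeted testbench per scenario improves coverage
    return testbenches
```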
10:30 am – 11:00 am
━━
Qi Sun, Zhejiang University
Title: FabGPT for Smart IC Manufacturing: Overcoming Challenges in Data, Multimodality, and Deployment
Smart IC manufacturing is crucial for advancing modern technology, enabling the production of smaller, faster, and more efficient electronic devices. However, unlike many other smart manufacturing domains, IC manufacturing faces significant challenges, including data scarcity, multimodal integration, and edge deployment difficulties. In this talk, we introduce FabGPT, an innovative Large Language Model (LLM)-based framework that effectively addresses these issues. FabGPT supports diverse data types and improves defect detection even with limited data, while optimized edge AI solutions enhance deployment efficiency. An LLM-based framework like FabGPT can enable more robust and scalable IC manufacturing solutions, paving the way for more reliable and efficient production processes.
11:00 am – 11:30 am
━━
Zhengyuan Shi, The Chinese University of Hong Kong
Title: Beyond LLMs: Making Foundation Models Understand Circuits
The integration of large language models (LLMs) into electronic design automation (EDA) has revolutionized aspects of the design workflow, from natural-language interfaces for tool scripting to automated HDL code generation. However, LLMs’ inherent limitations in deep circuit understanding remain a critical bottleneck.
To address these gaps, we propose the Large Circuit Model (LCM), an AI-native foundation model tailored for circuit design. Unlike LLMs, LCM is trained on massive-scale, multi-modal circuit data, including functional specifications, RTL code, gate-level netlists, and physical layouts. By aligning representations across design stages through functional equivalence constraints, LCM learns to encode circuit topology, functional relationships, and PPA trade-offs directly from data. This enables cross-stage reasoning: for example, predicting post-layout PPA from RTL descriptions, or exploiting design semantics to speed up traditional verification. By moving beyond textual tokens, a combined LLM and LCM approach has the potential to implement a "shift-left" design methodology and achieve closed-loop design automation.
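One plausible reading of the functional-equivalence alignment is a contrastive objective over paired views of the same circuit. The PyTorch sketch below is an illustrative assumption (the loss form and the RTL/netlist pairing are ours), not the LCM training code.

```python
import torch
import torch.nn.functional as F

def alignment_loss(rtl_emb: torch.Tensor, netlist_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style alignment: rtl_emb[i] and netlist_emb[i] encode the same
    (functionally equivalent) circuit at two design stages; all other pairs
    in the batch serve as negatives."""
    rtl = F.normalize(rtl_emb, dim=-1)
    net = F.normalize(netlist_emb, dim=-1)
    logits = rtl @ net.T / temperature                   # pairwise similarities
    targets = torch.arange(len(rtl), device=rtl.device)  # i-th views match
    # Symmetric loss: RTL -> netlist retrieval and netlist -> RTL retrieval
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```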