9:00 AM - 9:10 AM
Welcome & Introduction
9:10 AM - 10:40 AM
Section I: Large Models for EDA
9:10 AM - 9:40 AM
━━
Siddharth Garg
New York University
Foundational Models for Hardware Design: What, Why and How?
9:40 AM - 10:10 AM
━━
Zhiyao Xie
Hong Kong University of Science and Technology
AI for EDA Paradigms: Prediction, Generation, and Generalization
In recent years, AI-assisted chip design, also known as AI/ML for EDA, has demonstrated great potential by reusing knowledge from prior circuit designs. In this talk, I will categorize existing AI for EDA techniques into three paradigms: 1) predictive solutions; 2) generative solutions; and 3) generalized pre-train and fine-tune solutions. I will trace the evolution of these paradigms and present the advantages of each. For each paradigm, I will present several of our research works as case studies. These works cover all major design stages, including RTL, netlist, and layout, and target applications including timing, power, design generation, and functionality reasoning.
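To make the third paradigm concrete, here is a minimal sketch of pre-training followed by task-specific fine-tuning, assuming a generic pretrained circuit encoder; the module names and checkpoint path are illustrative, not the speaker's actual models.

```python
import torch
import torch.nn as nn

class CircuitEncoder(nn.Module):
    """Stand-in for an encoder pretrained on unlabeled circuit data."""
    def __init__(self, in_dim=64, hid_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
    def forward(self, x):
        return self.net(x)

class TaskHead(nn.Module):
    """Lightweight head fine-tuned per downstream task (timing, power, ...)."""
    def __init__(self, hid_dim=128):
        super().__init__()
        self.out = nn.Linear(hid_dim, 1)
    def forward(self, h):
        return self.out(h)

encoder, head = CircuitEncoder(), TaskHead()
# encoder.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint
opt = torch.optim.Adam(head.parameters(), lr=1e-3)      # fine-tune the head only
x, y = torch.randn(32, 64), torch.randn(32, 1)          # placeholder features/labels
loss = nn.functional.mse_loss(head(encoder(x)), y)
loss.backward()
opt.step()
```

The appeal of this paradigm is that the expensive encoder is trained once and reused, while each new task (timing, power, etc.) only needs a small labeled set for its head.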
10:10 AM - 10:40 AM
━━
Qiang Xu
Chinese University of Hong Kong
Large Circuit Models: The Dawn of AI-Native EDA
Within the Electronic Design Automation (EDA) domain, AI-driven solutions have emerged as formidable tools, yet they typically augment rather than redefine existing methodologies. In this talk, we argue for a paradigm shift toward AI-native EDA and discuss its potential. Specifically, we first discuss some of our recent works on circuit representation learning and their applications. Next, we advocate for creating large circuit models (LCMs) to elevate EDA to new heights. LCMs are crafted to harmonize and extract insights from circuit designs at various stages, including functional specifications, RTL designs, circuit netlists, and physical layouts. We will then articulate how future designs can benefit from this new EDA paradigm.
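One ingredient an LCM plausibly needs is aligning representations of the same design across stages. Below is a minimal contrastive-alignment sketch between an RTL (text) encoder and a netlist (graph) encoder; the InfoNCE pairing and the encoders are illustrative assumptions, not the speaker's published method.

```python
import torch
import torch.nn.functional as F

def info_nce(z_rtl, z_netlist, tau=0.07):
    """Matching (RTL, netlist) pairs of the same design attract;
    mismatched pairs within the batch repel."""
    z1 = F.normalize(z_rtl, dim=-1)
    z2 = F.normalize(z_netlist, dim=-1)
    logits = z1 @ z2.t() / tau           # [B, B] cross-stage similarity matrix
    labels = torch.arange(z1.size(0))    # diagonal entries are positive pairs
    return F.cross_entropy(logits, labels)

z_rtl = torch.randn(16, 256)   # embeddings from an RTL encoder (placeholder)
z_net = torch.randn(16, 256)   # embeddings from a netlist encoder (placeholder)
loss = info_nce(z_rtl, z_net)
```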
10:40 AM - 11:00 AM
Coffee Break
11:00 AM - 12:00 PM
Section II: Graph Learning for EDA
11:00 AM - 11:20 AM
━━
Zhanguang Zhang
Huawei Noah's Ark Lab
The Graph's Apprentice: Teaching an LLM for RTL-Level PPA Prediction
Logic synthesis is a crucial phase in the circuit design process, responsible for transforming hardware description language (HDL) designs into optimized netlists. However, traditional logic synthesis methods are computationally intensive, restricting their iterative use in refining chip designs. Recent advancements in large language models (LLMs), particularly those fine-tuned on programming languages, present a promising alternative. In this work, we introduce VeriDistill, the first end-to-end machine learning model that directly processes raw Verilog code to predict circuit quality-of-result metrics. Our model employs a novel knowledge distillation method, transferring low-level circuit insights via graphs into the LLM-based predictor. Experiments show that VeriDistill outperforms state-of-the-art baselines on large-scale Verilog datasets and demonstrates robust performance when evaluated on out-of-distribution datasets.
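A minimal sketch of the distillation idea described above: a graph-based teacher with access to the low-level netlist supervises an LLM-based student that sees only raw Verilog. The module names, dimensions, and the MSE alignment term are illustrative assumptions, not VeriDistill's exact code.

```python
import torch
import torch.nn as nn

class GraphTeacher(nn.Module):   # stand-in for a GNN over the synthesized netlist
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, g_feats):
        return self.proj(g_feats)

class LLMStudent(nn.Module):     # stand-in for an LLM encoder over Verilog tokens
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.qor_head = nn.Linear(dim, 1)   # predicts a QoR metric (e.g., delay)
    def forward(self, tok_feats):
        h = self.proj(tok_feats)
        return h, self.qor_head(h)

teacher, student = GraphTeacher(), LLMStudent()
g, t, y = torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 1)
with torch.no_grad():
    z_teacher = teacher(g)       # frozen low-level circuit embedding
z_student, y_hat = student(t)
loss = nn.functional.mse_loss(y_hat, y) \
     + 0.5 * nn.functional.mse_loss(z_student, z_teacher)  # distillation term
```

At inference time only the student runs, so the expensive synthesis step needed to build the teacher's graph is avoided entirely.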
11:20 AM - 11:40 AM
━━
Nan Wu
George Washington University
Unlocking the Power of Directed Graphs: Benchmarking Directed Graph Representation Learning for Circuit Design
Circuits can be represented as directed graphs, yet directed graph representation learning (DGRL) remains underexplored despite the widespread use of graph learning for circuit evaluation and optimization. A key challenge has been the lack of comprehensive and user-friendly benchmarks for DGRL models. In this talk, I will introduce our benchmark covering a diverse set of DGRL models, employing various graph neural networks (GNNs) and graph transformers (GTs) as backbones, enhanced by positional encodings (PEs) tailored for directed graphs. The results highlight that bidirected (BI) message passing neural networks (MPNNs) and robust PEs significantly boost model performance. Additionally, our analysis of out-of-distribution (OOD) performance underscores the critical need to improve OOD generalization in DGRL models. The benchmark, developed with a modular codebase, simplifies the evaluation process for both hardware and machine learning practitioners.
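A minimal sketch of bidirected (BI) message passing on a directed circuit graph: one aggregation over fan-in edges, one over fan-out edges, then a combine step. It is written with plain PyTorch scatter ops; the benchmark itself would use full GNN libraries.

```python
import torch
import torch.nn as nn

class BiDirLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fwd = nn.Linear(dim, dim)   # messages along edge direction
        self.rev = nn.Linear(dim, dim)   # messages against edge direction
        self.combine = nn.Linear(3 * dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index            # [2, E] directed edges src -> dst
        # Aggregate fan-in messages at dst and fan-out messages at src.
        agg_in = torch.zeros_like(x).index_add_(0, dst, self.fwd(x)[src])
        agg_out = torch.zeros_like(x).index_add_(0, src, self.rev(x)[dst])
        return torch.relu(self.combine(torch.cat([x, agg_in, agg_out], dim=-1)))

x = torch.randn(5, 32)                             # 5 gates, 32-dim features
edge_index = torch.tensor([[0, 1, 2], [3, 3, 4]])  # wires: 0->3, 1->3, 2->4
h = BiDirLayer(32)(x, edge_index)
```

The point of the BI variant is that a plain directed MPNN only lets information flow from inputs to outputs; adding the reverse direction lets a gate's embedding also reflect its downstream context.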
11:40 AM - 12:00 PM
━━
Guohao Dai
Shanghai Jiao Tong University
Release of an Acceleration Library for AI-Based EDA Models
Machine learning-based methods have shown promising results in Electronic Design Automation (EDA), in areas such as RTL code generation, logic synthesis result prediction, layout design optimization, and circuit congestion prediction, enabling a shift-left in the overall EDA flow. However, with the increasing parameter counts of AI models (large language models (LLMs) typically have 7B or more parameters) and the growing scale of EDA data (real-world circuits often exceed millions of gates), there is an urgent need for acceleration solutions that support more efficient training and inference of AI-based EDA models. Our library provides acceleration from two directions: optimizations custom-designed for circuit-specific characteristics and optimizations for general-purpose AI model designs. Through system-level and CUDA kernel-level optimizations, we achieve an average 1.34×-1.54× speedup for GNN-based EDA models and a 1.56× speedup for LLM-based EDA models on an NVIDIA RTX 3090, and we support most EDA models built on GCN, GIN, and GAT, such as DeepGate4, LOSTIN, and PreRoutGNN. In future work, we will extend the library to more EDA applications and more EDA domain-specific AI models, such as graph diffusion models and large circuit models, serving as a basic acceleration library to support the development of AI for EDA.
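As a generic illustration of one kernel-level choice such a library makes, the sketch below expresses GCN neighbor aggregation as a single sparse-dense matmul (one fused SpMM kernel) instead of a gather/scatter pair. This is plain PyTorch standing in for the idea; it is not the released library's API.

```python
import torch

N, E, D = 1000, 5000, 64
src = torch.randint(0, N, (E,))
dst = torch.randint(0, N, (E,))
x = torch.randn(N, D)

# Gather/scatter formulation: two memory-bound kernels.
agg1 = torch.zeros(N, D).index_add_(0, dst, x[src])

# SpMM formulation: adjacency as a sparse COO matrix, one matmul kernel
# that CUDA backends can optimize far more aggressively for large circuits.
A = torch.sparse_coo_tensor(torch.stack([dst, src]),
                            torch.ones(E), (N, N)).coalesce()
agg2 = torch.sparse.mm(A, x)

assert torch.allclose(agg1, agg2, atol=1e-4)  # same aggregation result
```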
12:00 PM - 1:00 PM
Section III: AI for Compiler
12:00 PM - 12:20 PM
━━
Meng Li
Peking University
Compiler Optimization for Efficient Transformer Inference
Recent years have witnessed the fast evolution of Transformer models in fields such as computer vision and natural language processing. Though promising, Transformer models usually require substantial computation and memory. In this talk, I will discuss some of our recent works on efficient Transformer inference through compiler optimizations, including tiling, prefetching, and caching. We will also discuss interesting future directions for compiler optimizations targeting Transformers.
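To illustrate the tiling idea mentioned above, here is a minimal blocked matmul: each tile of the operands is loaded once and reused before eviction, improving cache/SRAM locality. NumPy stands in for generated kernel code, and the tile size is an arbitrary assumption.

```python
import numpy as np

def tiled_matmul(a, b, tile=64):
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Each (tile x tile) block is reused across the inner
                # products it participates in, unlike a naive loop nest.
                c[i:i+tile, j:j+tile] += (a[i:i+tile, p:p+tile]
                                          @ b[p:p+tile, j:j+tile])
    return c

a, b = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(tiled_matmul(a, b), a @ b)
```

In a real compiler the tile size would be chosen from the target's cache or scratchpad capacity, and prefetching/caching passes would overlap the next tile's loads with the current tile's compute.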
12:20 PM - 12:40 PM
━━
Sitao Huang
University of California, Irvine
Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models
High-level synthesis (HLS) allows hardware designers to create hardware designs in high-level programming languages like C/C++/OpenCL, which greatly improves hardware design productivity. However, existing HLS flows require programmers' hardware design expertise and rely on their manual code transformations and directive annotations to guide compiler optimizations. Optimizing HLS designs thus demands non-trivial HLS expertise and a tedious, iterative code-optimization process, making automated HLS code optimization a pressing need. Recently, large language models (LLMs) trained on massive code and programming tasks have demonstrated remarkable proficiency in comprehending code, showing the ability to handle domain-specific programming queries directly without labor-intensive fine-tuning. In this work, we propose a novel retrieval-augmented LLM-based approach to effectively optimize HLS programs.
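A minimal sketch of the retrieval-augmented flow described above: embed the input HLS kernel, retrieve similar already-optimized examples, and assemble them into a prompt for the LLM. The `embed` placeholder, the example corpus, and the pragma-style output are illustrative assumptions, not the authors' code.

```python
import numpy as np

def embed(code: str) -> np.ndarray:
    """Placeholder embedding; a real flow would use a code-embedding model."""
    rng = np.random.default_rng(abs(hash(code)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

# Retrieval corpus: (source kernel, expert-optimized version with directives).
corpus = [
    ("for (int i = 0; i < N; i++) c[i] = a[i] + b[i];",
     "for (int i = 0; i < N; i++) {\n#pragma HLS UNROLL factor=4\n"
     "  c[i] = a[i] + b[i];\n}"),
]
corpus_vecs = np.stack([embed(src) for src, _ in corpus])

def build_prompt(query_code: str, k: int = 1) -> str:
    sims = corpus_vecs @ embed(query_code)       # cosine similarity (unit vectors)
    top = np.argsort(-sims)[:k]                  # k nearest optimized examples
    shots = "\n\n".join(f"Before:\n{corpus[i][0]}\nAfter:\n{corpus[i][1]}"
                        for i in top)
    return f"{shots}\n\nOptimize this HLS kernel similarly:\n{query_code}"

prompt = build_prompt("for (int i = 0; i < N; i++) d[i] = a[i] * b[i];")
# `prompt` would then be sent to an LLM to propose directives/rewrites.
```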
12:40 PM - 1:00 PM
━━
Ying Wang
Chinese Academy of Sciences
Multi-Modal Approaches for LLM-Assisted Chip Design Automation
While Large Language Models (LLMs) have shown promise in automating HDL code generation, current approaches face limitations when dealing with the complexities of advanced hardware architectures. To explore the potential of natural language-based chip design flows, we employed the ChipGPT framework in the 1st OpenDACs Contest, a processor design competition held among senior EE students in China. The ChipGPT framework leverages an agent-based system to provide end-to-end chip design capabilities, encompassing task decomposition, synthesis, simulation, placement and routing, and optimization strategies.
One key lesson learned from the contest is that while natural language interfaces can generate Verilog code and EDA scripts, they struggle to address the spatial and architectural complexities of sophisticated hardware systems. In contrast, visual context, which encapsulates spatial relationships and architectural details, plays a crucial role in fully capturing hardware complexity, surpassing the capabilities of natural language alone.
To address these limitations, we introduce a multi-modal approach for LLM-assisted chip design, incorporating both visual and linguistic inputs. Our contributions include an open-source benchmark for multi-modal generative models that synthesize Verilog for both simple and complex hardware modules. Additionally, we present a visual and natural language Verilog query framework, enabling efficient and intuitive multi-modal queries for hardware design. By comparing our multi-modal generative AI approach with natural language-only methods, we demonstrate significant improvements in accuracy, particularly on complex design tasks. Our work underscores the importance of rich modalities in enhancing LLM-assisted chip design workflows.
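A minimal sketch of a multi-modal query in the spirit described above: pairing a block-diagram image with a natural-language spec and asking a vision-language model for Verilog. The message schema follows the OpenAI chat API; treating this as the contest framework's actual interface, and the file/model names, are assumptions.

```python
import base64
from openai import OpenAI

client = OpenAI()  # requires an API key in the environment
with open("alu_block_diagram.png", "rb") as f:   # hypothetical diagram file
    img_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Generate synthesizable Verilog for the ALU shown; "
                     "the diagram fixes the port widths and pipeline stages."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)  # candidate Verilog for downstream checks
```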