Training Large-scale Foundation Models on Emerging AI Chips 

Tuesday, 8 August, from 2:00pm to 5:00pm in room 202B

Abstract

Foundation models such as ChatGPT and GPT-4 have garnered significant interest from both academia and industry due to their emergent capabilities, such as few-shot prompting, multi-step reasoning, instruction following, and model calibration. Such capabilities were previously attainable only with specially designed models, such as those using knowledge graphs, but can now be achieved at a much larger scale with foundation models. As the capabilities of foundation models have increased, so too have their sizes, at a rate much faster than Moore's law. For example, the BERT-large model, released in 2018, had 334M parameters, while the Pathways Language Model (PaLM), released in 2022, was trained with 540B parameters, an increase of more than three orders of magnitude in just four years. Training foundation models requires massive computing power. For instance, training a BERT model on a single state-of-the-art machine with multiple A100 GPUs can take several days, while training GPT-3 on a large multi-instance GPU cluster can take several months to complete the estimated 3×10^23 FLOPs.
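As a rough sanity check on the growth figures above, the short Python snippet below recomputes the scale-up from BERT-large to PaLM using the publicly reported parameter counts; it is only an illustration of the arithmetic, not part of the tutorial material.

```python
# Back-of-the-envelope check of the model-size growth quoted above.
import math

bert_large_params = 334e6   # BERT-large (2018), ~334M parameters
palm_params = 540e9         # PaLM (2022), ~540B parameters

growth = palm_params / bert_large_params
print(f"Growth factor: {growth:,.0f}x")                  # ~1,617x
print(f"Orders of magnitude: {math.log10(growth):.1f}")  # ~3.2, i.e. more than three
```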

This tutorial provides an overview of the latest progress in supporting foundation model training and inference with new AI chips. It reviews progress on the modeling side, with an emphasis on the transformer architecture, and presents the system architecture supporting training and serving foundation models. This includes programming frameworks such as PyTorch and TensorFlow, graph compilers, 3D parallelism, and accelerators such as NVIDIA H100 GPUs, Google TPUs, and AWS Trainium. Finally, the tutorial presents our experience training foundation models on these different systems.
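To make the systems stack above concrete, the minimal sketch below shows only the data-parallel axis of 3D parallelism using PyTorch's DistributedDataParallel; tensor and pipeline parallelism, the other two axes, are typically layered on top with additional libraries. The model, hyperparameters, and loss here are illustrative placeholders and are not taken from the tutorial itself.

```python
# Minimal data-parallel training step with torch.distributed (one axis of 3D parallelism).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Tiny stand-in for one transformer block; a real foundation model would
    # additionally be sharded with tensor and pipeline parallelism.
    model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).cuda()
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    batch = torch.randn(8, 128, 512, device="cuda")  # (batch, seq_len, d_model), random placeholder data

    out = model(batch)
    loss = out.float().pow(2).mean()  # placeholder loss for illustration
    loss.backward()                   # DDP all-reduces gradients across ranks here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=8 train_sketch.py`, each process holds a full model replica, and the gradient all-reduce in the backward pass keeps the replicas in sync; tensor and pipeline parallelism would further split the model itself across devices.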

Slides 

You can download the slides here.

Panel Discussion

To wrap up the tutorial, we will have a joint 30-minute panel discussion with the speakers, Professor Yiran Chen from Duke University, and Professor Yizhou Sun from UCLA, about recent advancements in AI hardware, the importance of software-hardware co-design for new AI chips, and the democratization of AI research and training.

Speakers' Bios