Talk Title: Emergence of unexpected complex skills in LLMs: Some theory and experiments

Talk Abs: It has been discovered that as LLMs are scaled up (both in number of parameters and in size of training data), they spontaneously acquire new and complex skills. Our paper (Arora and Goyal '23) gave a mathematical analysis of this phenomenon. Under a plausible framework for the structure of the training dataset, it shows rigorously that the LLM will be able to combine $k$-tuples of elementary skills when solving new tasks, where $k$ roughly doubles with each order of magnitude of scaling.
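
As a rough, hedged illustration of the scaling claim (not a formula from the paper; $k_0$ and $s_0$ denote a baseline composition ability and baseline scale, introduced here only for illustration):

```latex
% Illustrative only: "k roughly doubles with each order of magnitude of scale"
k(s) \;\approx\; k_0 \cdot 2^{\,\log_{10}(s / s_0)}
```

so a 10x increase in scale $s$ (parameters or data) roughly doubles the size $k$ of skill-tuples the model can combine.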

This talk will report on subsequent experiments, based upon the SKILLMIX eval, that verify this prediction, including the prediction that LLMs can combine skills at test time despite never having seen the same combination during training. Another recent experiment of special interest involved training on data generated by querying GPT-4, which exhibited random subsets of up to $k$ skills. The resulting trained model displayed new capabilities at combining skills that were not seen **at all (in any combination)** during training. This is of interest in discussions of alignment and safety, where it has been implicitly assumed that filtering all "objectionable behaviors" out of the training data would keep the model free of such behaviors.
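
To make the setup concrete, here is a minimal Python sketch of a SKILLMIX-style evaluation loop. The skill list, topics, prompt wording, and the `model`/`judge` callables are illustrative placeholders, not the paper's actual eval.

```python
import random

# Minimal sketch of a SKILLMIX-style evaluation loop (illustrative only).
SKILLS = ["metaphor", "red herring", "modus ponens", "self-serving bias"]
TOPICS = ["gardening", "sewing", "dueling"]

def skillmix_prompt(k, rng):
    """Sample k skills and a topic; build the generation prompt."""
    skills = rng.sample(SKILLS, k)
    topic = rng.choice(TOPICS)
    prompt = (f"Write a short (2-3 sentence) piece of text about {topic} "
              f"that illustrates ALL of these skills: {', '.join(skills)}.")
    return skills, prompt

def graded_correct(text, skills, judge):
    """Ask a judge model whether every skill is present; `judge` is a
    stand-in callable mapping a prompt string to a yes/no answer."""
    question = (f"Does the following text correctly exhibit each of the "
                f"skills {skills}? Answer yes or no.\n\n{text}")
    return judge(question).strip().lower().startswith("yes")

def skillmix_score(model, judge, k, n_trials=100):
    """Fraction of trials in which `model` combines all k sampled skills."""
    rng = random.Random(0)
    hits = 0
    for _ in range(n_trials):
        skills, prompt = skillmix_prompt(k, rng)
        hits += graded_correct(model(prompt), skills, judge)
    return hits / n_trials
```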

(Based upon "A Theory for Emergence of Complex Skills in Language Models" and "SKILLMIX: A Flexible and Expandable Family of Evaluations for AI models", and a paper in progress.)

Talk Title: Imitation learning, Model Predictive Control, and data-driven learning and control of constrained dynamic systems

Talk Abs: Over the past few years, Imitation Learning (IL) has become a topic of intense focus in the Reinforcement Learning (RL) literature. In its simplest form, imitation learning is an approach that tries to learn an expert policy by querying samples from an expert (usually a human). Recent work in imitation learning has shown that having an expert controller that is both suitably smooth and stable enables much stronger guarantees on the performance of the approximating learned controller. Constructing such smoothed expert controllers for arbitrary systems remains challenging, especially in the presence of input and state constraints. I will discuss some of our recent results that show how such a smoothed expert can be designed for a general class of systems using a log-barrier-based relaxation of a standard Model Predictive Control (MPC) optimization problem. Time permitting, I will also discuss some of our recent work on using generative AI techniques such as Denoising Diffusion Probabilistic Models (DDPMs) to generate trajectories and stabilizing controllers for multi-modal robotics tasks.
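
As a hedged illustration of the core idea, the sketch below relaxes the hard input constraints of a toy MPC problem with a log-barrier term, yielding a smooth policy. The dynamics, costs, and solver are stand-ins, not the construction or guarantees from the talk.

```python
import numpy as np
from scipy.optimize import minimize

# Toy double-integrator MPC with hard input constraints |u| <= u_max
# replaced by a log-barrier term (illustrative stand-in system).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time dynamics: x+ = A x + B u
B = np.array([0.0, 0.1])
H, mu, u_max = 10, 1e-2, 1.0             # horizon, barrier weight, input bound

def rollout(x0, u_seq):
    xs, x = [x0], x0
    for u in u_seq:
        x = A @ x + B * u
        xs.append(x)
    return np.array(xs)

def barrier_cost(u_seq, x0):
    if np.any(np.abs(u_seq) >= u_max):    # outside the barrier's domain
        return np.inf
    xs = rollout(x0, u_seq)
    stage = np.sum(xs ** 2) + 0.1 * np.sum(u_seq ** 2)  # quadratic MPC cost
    # Log-barrier: -mu * sum of logs of the slack in each input constraint.
    barrier = -mu * np.sum(np.log(u_max - u_seq) + np.log(u_max + u_seq))
    return stage + barrier

def smoothed_expert(x0):
    """Smoothed 'expert' policy: solve the relaxed problem, apply first input."""
    res = minimize(barrier_cost, np.zeros(H), args=(x0,), method="Nelder-Mead")
    return res.x[0]

print(smoothed_expert(np.array([1.0, 0.0])))  # control input from state (1, 0)
```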


Yingbin Liang

Talk Title: Theory on Training Dynamics of Transformers

Talk Abs: Transformers, as foundation models, have recently revolutionized many machine learning (ML) applications such as natural language processing, computer vision, and robotics. Alongside their tremendous experimental successes, there arises a compelling inquiry into the theoretical foundations of the training dynamics of transformer-based ML models; particularly, why transformers trained by the common routine of gradient descent can achieve the desired performance. In this talk, I will present our recent results along this direction on two case studies: linear regression in in-context learning and masked image modeling in self-supervised learning. For both problems, we analyze the convergence of the training process over one-layer transformers and characterize the optimality of the attention models upon convergence. Our numerical results further corroborate these theoretical insights. Lastly, I will discuss future directions and open problems in this actively evolving field.
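
For concreteness, here is a minimal sketch (under simplifying assumptions, not the talk's exact model) of gradient-descent training of a one-layer linear-attention predictor on in-context linear regression tasks:

```python
import numpy as np

# Sketch: GD training of a one-layer linear-attention model on in-context
# linear regression. The architecture is a simplified stand-in.
rng = np.random.default_rng(0)
d, n_ctx, lr, steps = 5, 20, 0.05, 2000
W = np.zeros((d, d))  # the attention parameter being trained

def sample_tasks(batch=64):
    w = rng.normal(size=(batch, d))                 # per-task weight vectors
    X = rng.normal(size=(batch, n_ctx, d))          # context inputs x_i
    y = np.einsum("bnd,bd->bn", X, w)               # context labels <w, x_i>
    xq = rng.normal(size=(batch, d))                # query input
    yq = np.einsum("bd,bd->b", xq, w)               # query label
    return X, y, xq, yq

for t in range(steps):
    X, y, xq, yq = sample_tasks()
    h = np.einsum("bn,bnd->bd", y, X) / n_ctx       # h = (1/n) sum_i y_i x_i
    pred = np.einsum("bd,de,be->b", xq, W, h)       # linear-attention readout
    err = pred - yq
    # Gradient of the mean squared loss with respect to W: E[err * xq h^T].
    grad = np.einsum("b,bd,be->de", err, xq, h) / len(err)
    W -= 2 * lr * grad
    if t % 500 == 0:
        print(t, np.mean(err ** 2))                 # training loss decreases
# Upon convergence, W is close to a scaled identity, consistent with the
# view that the trained attention implements a (preconditioned) GD step.
```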

Talk Title: Knowledge Distillation as Semiparametric Inference

Talk Abs: More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Knowledge distillation alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. To explain and enhance this phenomenon, we cast knowledge distillation as a semiparametric inference problem with the optimal student model as the target, the unknown Bayes class probabilities as nuisance, and the teacher probabilities as a plug-in nuisance estimate. By adapting modern semiparametric tools, we derive new guarantees for the prediction error of standard distillation and develop two enhancements, cross-fitting and loss correction, to mitigate the impact of teacher overfitting and underfitting on student performance. We validate our findings empirically on both tabular and image data and observe consistent improvements from our knowledge distillation enhancements.
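
A minimal sketch of the cross-fitting idea, with off-the-shelf scikit-learn models standing in for the teacher and student (the paper's analysis applies to general teacher/student pairs, and typically uses soft-label losses rather than the hard-label shortcut below):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def cross_fit_teacher_probs(X, y, n_splits=2):
    """Cross-fitting: the teacher probabilities for each row come from a
    teacher that never saw that row, mitigating teacher overfitting."""
    probs = np.zeros((len(X), len(np.unique(y))))
    for train_idx, held_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        teacher = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
        probs[held_idx] = teacher.predict_proba(X[held_idx])
    return probs

def distill(X, teacher_probs):
    """Student mimics the teacher's (cross-fitted) labels; hard argmax
    targets are used here purely for brevity."""
    student = LogisticRegression(max_iter=1000)
    return student.fit(X, teacher_probs.argmax(axis=1))

# Usage sketch on synthetic tabular data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
student = distill(X, cross_fit_teacher_probs(X, y))
```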

Talk Title: Transformers learn in-context by implementing gradient descent

Talk Abs: We study the theory of in-context learning, for which we investigate how Transformers can implement learning algorithms in their forward pass. We show that a linear-attention Transformer naturally learns to implement gradient descent, which enables it to learn linear functions in-context. More generally, we show that a (nonlinear-attention-based) Transformer can implement functional gradient descent with respect to some RKHS metric, which allows it to learn a broad class of nonlinear functions in-context. We show that the RKHS metric is determined by the choice of attention activation, and that the optimal choice of attention activation depends in a natural way on the class of functions that need to be learned.
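
The linear case can be checked numerically in a few lines: one pass of (unnormalized) linear attention over the context reproduces one gradient-descent step on the in-context least-squares loss. The construction below is a standard illustration of this equivalence, not the talk's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 4, 16, 0.1
X = rng.normal(size=(n, d))          # context inputs x_i
w_star = rng.normal(size=d)
y = X @ w_star                       # context labels y_i = <w*, x_i>
x_q = rng.normal(size=d)             # query token

# One GD step on L(w) = 1/(2n) * sum_i (<w, x_i> - y_i)^2, starting at w0 = 0.
w0 = np.zeros(d)
w1 = w0 - eta * (X.T @ (X @ w0 - y)) / n
gd_prediction = w1 @ x_q

# Linear attention: queries x_q, keys x_i, values y_i, identity activation.
attn_prediction = (eta / n) * np.sum(y * (X @ x_q))

print(np.allclose(gd_prediction, attn_prediction))  # True
```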

Talk Title: Capitalizing on Generative AI: Diffusion Models Towards High-Dimensional Generative Optimization 

Talk Abs: Diffusion models represent a significant breakthrough in generative AI, operating by progressively transforming random noise distributions into structured outputs, with adaptability for specific tasks through guidance or fine-tuning. In this presentation, we delve into the statistical aspects of diffusion models and establish their connection to theoretical optimization frameworks. In the first part, we explore how unconditional diffusion models efficiently capture complex high-dimensional data, particularly when low-dimensional structures are present. We present the first efficient sample complexity bound for diffusion models that depends on the small intrinsic dimension, effectively addressing the challenge of the curse of dimensionality. Moving to the second part, we leverage our understanding of diffusion models to introduce a pioneering optimization method termed "generative optimization." Here, we harness diffusion models as data-driven solution generators to maximize an unknown objective function. We introduce innovative reward guidance techniques that incorporate the target function value to guide the diffusion model. Theoretical analysis in the offline setting demonstrates that the generated solutions yield higher function values on average, with optimality gaps aligning with off-policy bandit regret. Moreover, these solutions maintain fidelity to the intrinsic structures within the training data, suggesting a promising avenue for optimization in complex, structured spaces through generative AI.
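
As a hedged, Langevin-style sketch of the reward-guidance idea (not the talk's exact sampler): the learned score is nudged by the gradient of a reward model during the reverse process. Both `score_model` and `reward_grad` below are toy stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_model(x):
    # Toy stand-in for a learned score network: grad log p(x) for N(0, I) data.
    # (Time dependence of the true diffusion score is suppressed in this toy.)
    return -x

def reward_grad(x):
    # Toy stand-in for the gradient of a reward model, f(x) = -||x - mu||^2.
    mu = np.ones_like(x)
    return -2.0 * (x - mu)

def guided_step(x, step=0.02, guidance=0.5):
    # Reward-guided (Langevin-style) update: data score + reward gradient.
    drift = score_model(x) + guidance * reward_grad(x)
    return x + step * drift + np.sqrt(2 * step) * rng.normal(size=x.shape)

# Usage: start from noise and iterate; samples settle between the data
# prior's mode (0) and the reward peak (mu), trading fidelity against reward.
x = rng.normal(size=8)
for _ in range(500):
    x = guided_step(x)
print(x.round(2))
```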


Contact the organizers: workshopbgpt@gmail.com