Towards AGI
  • Home
  • Schedule
  • Projects
  • Topics&Papers
    • Adversarial Robustness
    • Alignment and Safety
    • CompPsych-FoMo
    • Compression and Fast Inference
    • Continual Learning at Scale
    • Emergence & Phase Transitions in ML
    • Foundation Models
    • Generalization (iid and ood)
    • High Performance Computing
    • Knowledge Fusion
    • Neural Scaling Laws
    • Out-of-Distribution Generalization
    • Scaling Laws in Nature
    • State Space Models
    • Time Series Foundation Models
  • Reading Group

Continual Learning at Scale


Continual Learning: Surveys

  • A comprehensive survey of continual learning: Theory, method and application

  • Embracing Change: Continual Learning in Deep Neural Networks 

  • How to reuse and compose knowledge for a lifetime of tasks: A survey on continual learning and functional composition

  • Towards Continual Reinforcement Learning: A Review and Perspectives 

  • Continual Learning with Deep Architectures (tutorial by Irina Rish and Vincenzo Lomonaco, ICML 2021)

  • Continual lifelong learning with neural networks: A review

  • Continual learning: A comparative study on how to defy forgetting in classification tasks

  • A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning 

  • Class-incremental learning: survey and performance evaluation

  • Measuring Catastrophic Forgetting in Neural Networks 

  • Never-Ending Learning (tutorial by Tom Mitchell and Partha Talukdar, ICML 2019)

  • Continual task learning in natural and artificial agents

  • An Introduction to Lifelong Supervised Learning

Book (the first book on the topic): Lifelong Machine Learning
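Most of the surveys above organize continual-learning methods into rehearsal, regularization, and architectural families. As a concrete illustration of the rehearsal family, here is a minimal reservoir-sampling replay buffer — an illustrative sketch only, not code from any paper above; the class name and API are invented for this example:

```python
import random

class ReservoirReplayBuffer:
    """Fixed-size memory that keeps a uniform random sample of every
    example seen so far (reservoir sampling), a common rehearsal
    baseline in the continual-learning literature."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Overwrite a stored slot with probability capacity / n_seen,
            # which keeps the buffer a uniform sample of the whole stream.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During training, a batch from the current task is typically interleaved with a `sample()` from the buffer, so gradients keep reflecting earlier tasks.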


Continual Learning @ Scale

  • Continual Pre-Training of Large Language Models: How to re-warm your model? (ICML ES-FoMo workshop 2023)

  • Human-Timescale Adaptation in an Open-Ended Task Space

  • Fine-tuned Language Models are Continual Learners (aka Continual T0) (EMNLP 2022)

  • Effect of scale on catastrophic forgetting in neural networks (ICLR 2022)

  • An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

  • Intelligent Learning Rate Distribution to Reduce Catastrophic Forgetting in Transformers

  • Improving language models fine-tuning with representation consistency targets

  • Foundational Models for Continual Learning: An Empirical Study of Latent Replay

  • Effects of Model and Prior Learning Scale on Catastrophic Forgetting

  • Don't Stop Learning: Towards Continual Learning for the CLIP Model
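The re-warming question studied in the first paper above concerns the learning-rate schedule when pre-training resumes on new data: keep decaying from the old schedule's final value, or restart ("re-warm") the warmup. A minimal sketch of the standard warmup-then-cosine schedule whose restart is at issue — the function name and parameter values are illustrative assumptions, not the paper's code:

```python
import math

def rewarmed_cosine_lr(step, warmup_steps, total_steps, max_lr, min_lr):
    """Linear warmup to max_lr, then cosine decay to min_lr.
    Restarting this schedule from step 0 when continuing pre-training
    on a new corpus is what 're-warming' refers to."""
    if step < warmup_steps:
        # Linear warmup phase.
        return max_lr * (step + 1) / warmup_steps
    # Cosine decay phase; clamp progress t to [0, 1].
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * min(t, 1.0)))
```

The trade-off the paper examines is that re-warming lets the model adapt quickly to the new data but also transiently increases forgetting of the original corpus.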

Continual Learning in NLP

  • Continual Lifelong Learning in Natural Language Processing: A Survey (COLING 2020)

  • Drinking from a Firehose: Continual Learning with Web-scale Natural Language (TPAMI 2023)

  • Pretrained Language Model in Continual Learning: A Comparative Study (ICLR 2022)

  • Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

  • LAMOL: LAnguage MOdeling for Lifelong Language Learning (ICLR 2020)

  • TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models (2022)

  • Towards Continual Knowledge Learning of Language Models

  • Continual Pre-training of Language Models (ICLR 2023)

  • Episodic memory in lifelong language learning (NeurIPS 2019)

  • Recall and learn: Fine-tuning deep pretrained language models with less forgetting

  • Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora

  • Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning (NeurIPS 2021)

  • On Anytime Learning at Macroscale (CoLLas 2022)



Misc

  • Learning to prompt for continual learning

  • Wide neural networks forget less catastrophically

  • Architecture matters in continual learning

  • A Closer Look at Rehearsal-Free Continual Learning

  • Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

  • The Effect of Task Ordering in Continual Learning

  • Continual Pre-Training Mitigates Forgetting in Language and Vision

  • Online Continual Learning with Declarative Memory
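Several of the entries above (e.g. the rehearsal-free paper) concern regularization-based alternatives to replay. The classic example of that family is the Elastic Weight Consolidation (EWC) penalty; below is a minimal sketch — the names and flat-list signature are illustrative, and a real implementation would operate on model parameter tensors:

```python
def ewc_penalty(params, old_params, fisher, lam):
    """EWC-style quadratic regularizer: penalize moving parameters that
    were important for earlier tasks, where importance is the (diagonal)
    Fisher information estimated after training on those tasks."""
    return 0.5 * lam * sum(
        f * (p - p0) ** 2 for p, p0, f in zip(params, old_params, fisher)
    )
```

The penalty is added to the new task's loss, so parameters with near-zero Fisher values remain free to change while important ones are anchored near their old values.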

