Learning compatible representations and updating large pre-trained models
Abstract
Learning compatible representations aims to learn feature representations that can be used interchangeably over time whenever a model undergoes updates. We discuss the challenge of maintaining compatibility and review the different solutions proposed so far. We focus on stationary representations learned with a d-Simplex fixed classifier, which achieve state-of-the-art performance in compatible representation learning. We examine two distinct learning scenarios: the first considers fine-tuning a model initialized from scratch, to assess the ability to learn incoming tasks while maintaining compatibility; the second considers fine-tuning a large pre-trained model that is occasionally replaced by an improved version, to verify the capability of exploiting the improved model while maintaining compatibility with the previously learned representation.
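As a minimal illustration of the d-Simplex fixed classifier mentioned above, the sketch below constructs, for K classes, the K vertices of a regular simplex in R^{K-1} and uses them as fixed (non-trainable) class prototypes; all class vectors end up pairwise equiangular. The function name and construction details are illustrative, not taken from the paper.

```python
import numpy as np

def d_simplex_vertices(num_classes: int) -> np.ndarray:
    """Illustrative construction: K unit-norm vertices of a regular
    (K-1)-simplex in R^{K-1}, usable as fixed classifier weights."""
    K = num_classes
    d = K - 1
    V = np.eye(d)                       # first K-1 vertices: standard basis
    alpha = (1.0 - np.sqrt(K)) / d      # last vertex: alpha * (1, ..., 1)
    W = np.vstack([V, alpha * np.ones((1, d))])   # shape (K, d)
    W -= W.mean(axis=0)                 # center the simplex at the origin
    W /= np.linalg.norm(W, axis=1, keepdims=True) # unit-norm prototypes
    return W

W = d_simplex_vertices(10)
G = W @ W.T
off_diagonal = G[~np.eye(10, dtype=bool)]
# Every pair of class prototypes has the same cosine similarity,
# equal to -1/(K-1) for a regular simplex.
print(np.allclose(off_diagonal, -1.0 / 9.0))
```

Because the classifier weights are fixed in this maximally separated configuration, the geometry of the class prototypes does not change across model updates, which is what makes the learned representations stationary.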