Antoniou, Antreas, et al. “Data Augmentation Generative Adversarial Networks.” ArXiv:1711.04340 [Cs, Stat], 21 Mar. 2018, arxiv.org/abs/1711.04340. Accessed 9 May 2022.
Brock, Andrew, et al. “Large Scale GAN Training for High Fidelity Natural Image Synthesis.” ArXiv:1809.11096 [Cs, Stat], 25 Feb. 2019, arxiv.org/abs/1809.11096.
Cubuk, Ekin D., et al. “RandAugment: Practical Automated Data Augmentation with a Reduced Search Space.” ArXiv:1909.13719 [Cs], 13 Nov. 2019, arxiv.org/abs/1909.13719. Accessed 9 May 2022.
Hendrycks, Dan, et al. “AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty.” ArXiv:1912.02781 [Cs, Stat], 17 Feb. 2020, arxiv.org/abs/1912.02781. Accessed 9 May 2022.
Huang, Sheng-Wei, et al. “AugGAN: Cross Domain Adaptation with GAN-Based Data Augmentation.” Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Jiang, Liming, et al. “TSIT: A Simple and Versatile Framework for Image-To-Image Translation.” ArXiv:2007.12072 [Cs, Eess], 25 July 2020, arxiv.org/abs/2007.12072.
Karras, Tero, et al. “A Style-Based Generator Architecture for Generative Adversarial Networks.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019, 10.1109/cvpr.2019.00453. Accessed 10 Nov. 2020.
Karras, Tero, et al. “Progressive Growing of GANs for Improved Quality, Stability, and Variation.” ArXiv:1710.10196 [Cs, Stat], 2017, arxiv.org/abs/1710.10196.
Liang, Weixin, and James Zou. “MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts.” Openreview.net, 29 Sept. 2021, openreview.net/forum?id=MTex8qKavoS. Accessed 9 May 2022.
Nichol, Alex, et al. “GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models.” ArXiv:2112.10741 [Cs], 8 Mar. 2022, arxiv.org/abs/2112.10741. Accessed 9 May 2022.
Nielsen, C., and M. Okoniewski. “GAN Data Augmentation through Active Learning Inspired Sample Acquisition.” Semantic Scholar, 2019, www.semanticscholar.org/paper/GAN-Data-Augmentation-Through-Active-Learning-Nielsen-Okoniewski/abb6ef0832a587b444a5033ea741b08c953862ef. Accessed 9 May 2022.
Oord, Aaron van den, et al. “Neural Discrete Representation Learning.” ArXiv:1711.00937 [Cs], 30 May 2018, arxiv.org/abs/1711.00937.
Radford, Alec, et al. “Learning Transferable Visual Models from Natural Language Supervision.” ArXiv:2103.00020 [Cs], 26 Feb. 2021, arxiv.org/abs/2103.00020.
Ramesh, Aditya, et al. “Zero-Shot Text-To-Image Generation.” ArXiv:2102.12092 [Cs], 26 Feb. 2021, arxiv.org/abs/2102.12092.
Reimers, Nils, and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks.” ArXiv:1908.10084 [Cs], 2019, arxiv.org/abs/1908.10084.
Sandfort, Veit, et al. “Data Augmentation Using Generative Adversarial Networks (CycleGAN) to Improve Generalizability in CT Segmentation Tasks.” Scientific Reports, vol. 9, no. 1, 15 Nov. 2019, 10.1038/s41598-019-52737-x. Accessed 13 Aug. 2020.
Tsimpoukelli, Maria, et al. “Multimodal Few-Shot Learning with Frozen Language Models.” ArXiv:2106.13884 [Cs], 3 July 2021, arxiv.org/abs/2106.13884. Accessed 9 May 2022.
Vahdat, Arash, and Jan Kautz. “NVAE: A Deep Hierarchical Variational Autoencoder.” ArXiv:2007.03898 [Cs, Stat], 8 July 2020, arxiv.org/abs/2007.03898.
Vaswani, Ashish, et al. “Attention Is All You Need.” ArXiv:1706.03762 [Cs], 2017, arxiv.org/abs/1706.03762.
Wang, Peng, et al. “Unifying Architectures, Tasks, and Modalities through a Simple Sequence-To-Sequence Learning Framework.” ArXiv:2202.03052 [Cs], 7 Feb. 2022, arxiv.org/abs/2202.03052. Accessed 9 May 2022.
Xiao, Zhisheng, et al. “VAEBM: A Symbiosis between Variational Autoencoders and Energy-Based Models.” ArXiv:2010.00654 [Cs, Stat], 4 Nov. 2021, arxiv.org/abs/2010.00654. Accessed 9 May 2022.
Zhang, Hongyi, et al. “Mixup: Beyond Empirical Risk Minimization.” ArXiv:1710.09412 [Cs, Stat], 27 Apr. 2018, arxiv.org/abs/1710.09412.
Zhang, Yuxuan, et al. “DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort.” ArXiv:2104.06490 [Cs], 19 Apr. 2021, arxiv.org/abs/2104.06490. Accessed 9 May 2022.
Cai, Han, et al. “Once for All: Train One Network and Specialize It for Efficient Deployment.” ArXiv:1908.09791 [Cs], 2019, arxiv.org/abs/1908.09791.