Plan moving forward (after Checkpoint)

We need to measure the compression performance of the algorithms we are interested in.
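To keep these measurements consistent across algorithms, we could standardize on a few simple metrics: compression ratio, bits per pixel, and PSNR for lossy methods. A minimal stdlib-only sketch (the function names are our own, just for illustration):

```python
import math

def compression_ratio(original_bytes, compressed_bytes):
    """How many times smaller the compressed file is (higher is better)."""
    return original_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, width, height):
    """Compressed size normalized by image dimensions (lower is better)."""
    return compressed_bytes * 8 / (width * height)

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB between two flat pixel lists.

    Identical images give infinity; typical lossy JPEG lands around 30-50 dB.
    """
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

Using the same three numbers for JPEG, the autoencoder, the GAN model, and QOI would let us put everything on one comparison table.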

JPEG: After finishing our analysis of the algorithms that make up JPEG, study variations of the compression quantization, i.e., how stronger or weaker quantization trades greater compression against the resulting data loss.
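One concrete way to study this is through the quality knob, which in libjpeg scales the baseline quantization tables: larger table entries mean coarser quantization and more data loss. A sketch of that scaling rule, assuming libjpeg's standard formula (quality below 50 uses 5000/quality, otherwise 200 - 2*quality; `BASE_LUMA` here is just the first row of the Annex K luminance table, for brevity):

```python
# First row of the JPEG Annex K baseline luminance quantization table.
BASE_LUMA = [16, 11, 10, 16, 24, 40, 51, 61]

def scale_factor(quality):
    """libjpeg-style mapping from quality (1-100) to a percentage scale."""
    quality = max(1, min(100, quality))
    return 5000 // quality if quality < 50 else 200 - 2 * quality

def scaled_table(base, quality):
    """Scale a base quantization table; entries are clamped to 1..255."""
    s = scale_factor(quality)
    return [min(255, max(1, (v * s + 50) // 100)) for v in base]
```

Quality 50 leaves the base table unchanged, quality 100 collapses it to all 1s (near-lossless), and low qualities inflate the entries, which is exactly the greater-compression / greater-loss trade-off we want to chart.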

We were able to get access to the Great Lakes computing cluster, so if we decide to keep pursuing autoencoders, or find or design other models, we can modify our architecture and train using its resources.

Image compression using GANs is at the cutting edge of machine learning for image compression. In a paper by Mentzer et al., they compress high-resolution images and generate very high-quality reconstructions. In a user study they conducted, users preferred their compressed images over BPG and JPEG at similar compression ratios. Their project page has a demo: https://hific.github.io/. Their trained model is available through the TensorFlow Compression library, so we could run some tests with this method of compression too. Other information about compression using generative models: https://www.tensorflow.org/tutorials/generative/data_compression. There is a lot of material on generative-model-based data compression if we choose to explore this route.

Optional (Low Priority)

Find a pretrained autoencoder to see if others have had better results.

Implement QOI (Quite OK Image Format) lossless compression (since implementing PNG ourselves is too hard) and demonstrate its massive speed improvement and comparable compression rate relative to PNG. (https://phoboslab.org/log/2021/11/qoi-fast-lossless-image-compression)
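Part of why QOI looks tractable is that its core is just two cheap ideas: a 64-slot index of recently seen pixels keyed by a small hash, and run-length encoding of repeated pixels. A simplified sketch of those two ops (not a spec-complete encoder; real QOI adds diff/luma ops, a header, and byte-level chunk tags, and `sketch_encode` is our own illustrative name):

```python
def qoi_hash(px):
    """QOI's index hash: maps an RGBA pixel to one of 64 slots."""
    r, g, b, a = px
    return (r * 3 + g * 5 + b * 7 + a * 11) % 64

def sketch_encode(pixels):
    """Encode a list of RGBA tuples into symbolic (op, payload) pairs."""
    index = [None] * 64
    prev = (0, 0, 0, 255)  # QOI's defined starting "previous pixel"
    run = 0
    out = []
    for px in pixels:
        if px == prev:
            run += 1
            if run == 62:                  # QOI caps a run at 62
                out.append(("RUN", run))
                run = 0
            continue
        if run:                            # flush any pending run
            out.append(("RUN", run))
            run = 0
        h = qoi_hash(px)
        if index[h] == px:
            out.append(("INDEX", h))       # 1-byte back-reference
        else:
            index[h] = px
            out.append(("RGBA", px))       # fall back to the full pixel
        prev = px
    if run:
        out.append(("RUN", run))
    return out
```

Because every op is a fixed small amount of work per pixel with no entropy coding, this structure is also where QOI's speed advantage over PNG's DEFLATE comes from.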