CycleGAN for Facial Expression Recognition
By Michelle Lin and Fatemeh Ghezloo
References
[1] Zhu, Xinyue, et al. "Emotion classification with data augmentation using generative adversarial networks." Pacific-Asia conference on knowledge discovery and data mining. Springer, 2018, pp. 349–360.
[2] Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." Proceedings of the IEEE international conference on computer vision. 2017.
[3] Mao, Xudong, et al. "Least squares generative adversarial networks." Proceedings of the IEEE international conference on computer vision. 2017.
[4] Chawla, Nitesh V., et al. "SMOTE: synthetic minority over-sampling technique." Journal of artificial intelligence research 16 (2002): 321-357.
[5] Goh, Siong Thye, and Cynthia Rudin. "Box drawings for learning with imbalanced data." Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. 2014.
Image sources:
Header: https://acart.com/wp-content/uploads/2017/04/facial-recognition-img2.jpg
Six facial expressions drawing: https://miro.medium.com/max/1200/1*L93DIgkABLyRb0tX08T-MA.jpeg
Six facial expressions examples images: https://cbim.rutgers.edu/component/content/article?id=141:expression-recognition
CycleGAN training diagram: https://arxiv.org/abs/1711.00648
Generator structure: https://arxiv.org/abs/1711.00648
Discriminator structure: https://arxiv.org/abs/1711.00648
CNN structure: https://arxiv.org/abs/1711.00648