(Journal of Computational and Cognitive Engineering, 2022)
Recent advances in deep learning techniques such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have achieved breakthroughs in semantic image inpainting, the task of reconstructing missing pixels in an image. While more effective than conventional approaches, deep learning models require large datasets and substantial computational resources for training, and inpainting quality varies considerably with the size and diversity of the training data. To address these problems, we present an inpainting strategy called Comparative Sample Augmentation, which enhances the quality of the training set by filtering out irrelevant images and constructing additional images using information from the regions surrounding the target image. Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying levels of diversity, while enhancing inpainting quality as measured by qualitative and quantitative metrics for a large class of deep models, with little need for model-specific tuning.
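The abstract does not give implementation details, but the two ingredients it names, filtering irrelevant images and constructing extra samples from the surrounding region, can be sketched as follows. This is a minimal illustration, not the authors' method: the histogram distance, the threshold, and the jitter-based augmentation are all assumptions chosen for simplicity.

```python
import numpy as np

def histogram_distance(a, b, bins=16):
    """L1 distance between normalized grayscale intensity histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
    return np.abs(ha - hb).sum()

def filter_training_set(candidates, context, threshold=8.0):
    """Keep only candidate images whose intensity statistics resemble
    the region surrounding the hole in the target image (hypothetical
    relevance criterion)."""
    return [img for img in candidates
            if histogram_distance(img, context) < threshold]

def augment_from_context(context, n_samples=4, rng=None):
    """Construct additional training samples by lightly perturbing the
    surrounding region (brightness shift plus pixel noise)."""
    rng = np.random.default_rng(rng)
    samples = []
    for _ in range(n_samples):
        jitter = rng.normal(0.0, 0.02, size=context.shape)
        shift = rng.uniform(-0.05, 0.05)
        samples.append(np.clip(context + jitter + shift, 0.0, 1.0))
    return samples
```

In this sketch, the filter discards candidates whose global intensity distribution differs sharply from the masked image's surroundings, and the augmenter supplies extra in-distribution samples when the filtered set is small.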
(Proceedings of IJCAI, 2020)
As effective complements to human judgment, artificial intelligence techniques have begun to aid human decisions in complicated social decision problems across the world. Automated machine learning/deep learning (ML/DL) classification models, through quantitative modeling, have the potential to improve upon human decisions in a wide range of social resource allocation problems such as Medicaid and the Supplemental Nutrition Assistance Program (SNAP, commonly known as Food Stamps). However, given the limitations of ML/DL model design, these algorithms may fail to leverage various factors in decision making, resulting in improper decisions that allocate resources to individuals who are not in the greatest need of them. In view of this issue, we propose the strategy of fairgroups, based on the legal doctrine of disparate impact, to improve fairness in prediction outcomes. Experiments on various datasets demonstrate that our fairgroup construction method effectively boosts fairness in automated decision making while maintaining high prediction accuracy.
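The fairgroup construction itself is not specified in the abstract, but the legal doctrine it builds on, disparate impact, is commonly operationalized via the four-fifths rule: a protected group's selection rate should be at least 80% of the most favored group's rate. The sketch below illustrates only that criterion; the function names and data layout are assumptions for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, where selected is
    0/1; returns the selection rate for each group."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Minimum group selection rate divided by the maximum; a value
    below 0.8 signals disparate impact under the four-fifths rule."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

A fairness-aware pipeline in this spirit would monitor this ratio on model outputs and adjust group composition or decision thresholds whenever it falls below 0.8.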