Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, text, and other data. However, GAN training presents many challenges; notably, it can be very resource-intensive. A further potential weakness of GANs is that the discriminator DNN typically provides only a single value (its loss) as corrective feedback to the generator DNN. By contrast, we propose a new class of GAN, which we refer to as LogicGAN, that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators. Specifically, we modify the gradient descent process using an xAI system that specifies why the discriminator made the classification it did, and this richer corrective feedback helps the generator better fool the discriminator. Using our approach, we observe that LogicGANs trained on the MNIST dataset achieve a 12.73% improvement in data efficiency over standard GANs while maintaining the same quality as measured by Fréchet Inception Distance. Further, we argue that LogicGAN gives users greater control over how models learn than standard GANs do.
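To make the mechanism concrete, the following is a minimal PyTorch sketch of an xAI-guided generator update of the kind described above, not the paper's exact implementation. A plain saliency map of the discriminator's output stands in for the xAI system, and the function name `xai_guided_generator_step`, along with the assumption that `D` maps `(batch, C, H, W)` images to a `(batch, 1)` score, are illustrative.

```python
import torch


def xai_guided_generator_step(G, D, z, criterion, opt_G):
    """One generator update in which the gradient reaching the generated
    image is reweighted, per pixel, by an explanation of the
    discriminator's decision.

    A saliency map |dD(x)/dx| stands in for the xAI system here; any
    attribution method yielding a per-pixel map could be swapped in.
    Assumes D outputs a (batch, 1) score for (batch, C, H, W) images.
    """
    fake = G(z)

    # Explanation pass: why did D classify `fake` as it did?
    # torch.autograd.grad returns only the input gradient, leaving
    # D's parameter gradients untouched.
    x = fake.detach().requires_grad_(True)
    saliency = torch.autograd.grad(D(x).sum(), x)[0].abs()
    # Normalize each example's map to [0, 1] so it acts as a soft mask.
    saliency = saliency / (saliency.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)

    # Hook the generated image so the backward gradient is scaled
    # elementwise by the explanation: pixels the discriminator relied
    # on receive stronger corrective feedback.
    fake.register_hook(lambda grad: grad * saliency)

    # Standard generator objective: make D label fakes as real.
    real_labels = torch.ones(z.size(0), 1, device=z.device)
    loss_G = criterion(D(fake), real_labels)

    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```

The key design point this sketch illustrates is that the explanation enters only through the backward pass: the forward computation and loss are those of a standard GAN, while the gradient hook redistributes the single-loss feedback across the pixels the discriminator actually relied on.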