Sell me the Blackbox! Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers. Mohammadi B., Malik N., Derdenger T. and Srinivasan K.
Peer-Reviewed Conferences
Marketing Science Conference, 2022. (Virtual)
Marketing Strategy and Policy (MSP) 2022.
Conference on Artificial Intelligence, Machine Learning, and Business Analytics, HBS Dec 2022.
Marketing Science Conference, Miami, Florida. June 2023.
Workshop on Information Systems and Economics (WISE), Hyderabad, India. Dec 2023.
The paper shows theoretically that fully explaining AI predictions to end users may ultimately leave those same consumers worse off.
Abstract: Recent AI algorithms are black-box models whose decisions are difficult to interpret. eXplainable AI (XAI) seeks to address this lack of interpretability and trust by explaining to customers the AI decisions that affect them, e.g., the decision to reject a loan application. The common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. This paper challenges this notion through a game-theoretic model of a policy-maker who maximizes social welfare, firms in duopoly competition that maximize profits, and heterogeneous consumers. The results show that XAI regulation may be redundant. In fact, mandating fully transparent XAI may make firms and consumers worse off. This reveals a trade-off between maximizing welfare and receiving explainable AI outputs. We also discuss managerial implications for policy-makers and firms.
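As an illustrative aside (not taken from the paper), the structure of the game described in the abstract can be sketched in stylized form. The notation below, with a mandated transparency floor $\bar{e}$, firm explanation levels $e_i$, prices $p_i$, demand $D_i$, and consumer surplus $CS$, is assumed for illustration only and need not match the paper's actual model.

\begin{align*}
\text{Firm } i \in \{1,2\}: \quad & \max_{p_i,\; e_i \ge \bar{e}} \ \pi_i = p_i \, D_i(p_1, p_2, e_1, e_2), \\
\text{Policy-maker:} \quad & \max_{\bar{e} \in [0,1]} \ W(\bar{e}) = CS(\bar{e}) + \pi_1(\bar{e}) + \pi_2(\bar{e}).
\end{align*}

In this stylized reading, the paper's headline result corresponds to cases where $W(1) < W(\bar{e}^{*})$ for some optimal mandate $\bar{e}^{*} < 1$, i.e., forcing fully transparent explanations ($\bar{e} = 1$) is not the welfare-maximizing policy.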