Working Papers
[P1] Enhancing the Wisdom of AI-Assisted Crowds: Theory and Experimental Evidence. [SSRN] (Under review at Management Science.)
Angshuman Pal, Asa B. Palley, Ville A. Satopää.
Abstract: Accurate forecasts are critical for managerial decision making. Such forecasts may be generated by human experts or by artificial intelligence (AI) technologies. A decision maker can benefit from the distinct advantages that each source may offer by providing AI assistance to the experts, allowing them to augment the information contained in the AI forecast by incorporating their own knowledge about the variable of interest. When multiple experts are available, accuracy can be further improved by utilizing the wisdom of crowds, forming a consensus by averaging each of their AI-assisted forecasts. However, the potential accuracy of a crowd of AI-assisted forecasters may be limited by two structural characteristics. First, because the AI assistance is valuable to each expert at an individual level, the opinion of the AI can end up being overrepresented in the crowd’s consensus. Second, the experts may fail to appropriately utilize the AI assistance when forming their forecasts, either under- or over-emphasizing the information it provides. Using a stylized Bayesian model of information aggregation, we develop a procedure that can recover the most accurate consensus forecast given all information collectively observed by the AI and every expert in the crowd. This procedure works by pivoting the average AI-assisted forecast either toward or away from the crowd’s average initial forecast. We test the performance of the proposed aggregation method in three laboratory experiments and find that it matches, and in many cases outperforms, the accuracy of the AI-assisted crowd, the AI advice itself, and the unassisted crowd of forecasters.
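The pivoting idea described above can be illustrated with a minimal sketch. Here `pivoted_consensus` and the weight `theta` are hypothetical names for illustration only; the paper derives the actual pivoting weight from its Bayesian model, which is not reproduced here.

```python
import statistics

def pivoted_consensus(initial_forecasts, assisted_forecasts, theta):
    """Pivot the crowd's average AI-assisted forecast either toward
    (theta < 0) or away from (theta > 0) the crowd's average initial
    forecast. `theta` is an illustrative weight, not the paper's
    model-derived value."""
    assisted_mean = statistics.mean(assisted_forecasts)
    initial_mean = statistics.mean(initial_forecasts)
    return assisted_mean + theta * (assisted_mean - initial_mean)
```

With `theta = 0` this reduces to the plain AI-assisted crowd average; a nonzero `theta` corrects for the AI's opinion being overrepresented in (or underused by) the individual AI-assisted forecasts.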
Work In Progress
[P2] The Interpretable Data-Driven Newsvendor: Prescriptive Policies for Human-Algorithm Collaboration
Angshuman Pal, Rodney P. Parker, Asa B. Palley.
Abstract: Data-driven solutions to the feature-based newsvendor problem are useful to supply chain planners when the underlying demand distribution is unknown. Interpretable policies such as decision tree–based prescriptions that organize products into operationally meaningful categories are attractive in such settings. We study a class of interpretable, tree-based newsvendor policies that can be completely learned via empirical risk minimization. We develop a regret decomposition that separates approximation, estimation, and learning effects, and use it to characterize how the optimal complexity of an interpretable policy should scale with available data. These results provide ex ante guidance on how much segmentation a manager should allow when designing a data-driven ordering policy. We also study the ex post reliability of leaf-level prescriptions after the policy has been deployed. Conditioning on the learned tree, we develop observable, leaf-specific measures of risk that capture finite-sample uncertainty, instability of the prescribed order quantity, and intrinsic randomness of the demand environment. We develop a method for diagnostic assessment to determine when an algorithmic prescription can be trusted and when additional discretion in the form of human intervention may be warranted. Our main results highlight how interpretability can enable transparent decision rules, along with explicit managerial control over the risks inherent in data-driven operational decisions.
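As background for the leaf-level prescriptions discussed above, a standard sample-average-approximation (SAA) newsvendor order within a single segment (leaf) is the critical-fractile empirical quantile of the demand observations routed to that leaf. This is a textbook sketch of that baseline, not the paper's policy class or its risk diagnostics; the function name is illustrative.

```python
import math

def saa_order_quantity(demands, cu, co):
    """SAA newsvendor order for one segment (leaf): the smallest
    observed demand whose empirical CDF reaches the critical
    fractile cu / (cu + co).

    demands: historical demand observations in the leaf
    cu: per-unit underage (lost-sale) cost
    co: per-unit overage (holding/salvage) cost
    """
    fractile = cu / (cu + co)
    d = sorted(demands)
    # index of the smallest order with empirical CDF >= fractile
    k = math.ceil(fractile * len(d)) - 1
    return d[max(k, 0)]
```

Note that with few observations in a leaf this empirical quantile can be unstable, which is the kind of finite-sample risk the leaf-specific diagnostics in the abstract are meant to surface.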