Sustaining cooperation in indefinitely repeated games is challenging when the partner's actions are observed with noise. Communication has been identified as a key factor in sustaining cooperation because it reduces the uncertainty players perceive regarding the partner's strategy. However, empirical evidence on this perception remains limited. First, it is unclear how noise in monitoring affects perceived strategic uncertainty, and whether communication affects perceived strategic uncertainty differently at different noise levels. Second, it is unclear whether and how communication and noise affect players' preferences regarding strategic uncertainty. In this study, I conduct laboratory experiments with an indefinitely repeated Prisoner's Dilemma. Treatments vary along two dimensions: the possibility of communication and the level of noise. I elicit subjective beliefs about the partner's cooperative action as a measure of perceived strategic uncertainty, and I elicit subjects' certainty equivalents for the stage game to measure their attitudes toward strategic uncertainty. The results suggest that it is the opportunity to communicate, rather than low noise, that reduces perceived strategic uncertainty; this effect is larger when monitoring is accurate. Further analysis of preferences shows that although communication does not affect subjects' aversion to strategic uncertainty, it helps sustain cooperation among those who are more averse to the uncertainty associated with cooperation than to that associated with defection.
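As a concrete illustration of this environment, here is a minimal Python sketch of an indefinitely repeated Prisoner's Dilemma with noisy monitoring. The payoff matrix, the continuation probability delta, the noise level epsilon, and the signal-following strategy are all placeholder assumptions for illustration, not the experiment's actual parameters or design.

```python
import random

# Hypothetical payoffs for (own action, partner action); placeholders only.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
           ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

def noisy_signal(action, epsilon):
    """Return the partner's action, flipped with probability epsilon."""
    flip = {"C": "D", "D": "C"}
    return flip[action] if random.random() < epsilon else action

def play_supergame(delta=0.9, epsilon=0.2):
    """Both players follow 'imitate the last signal' (noisy tit-for-tat);
    the game continues each round with probability delta."""
    signals = ["C", "C"]           # each player's last signal of the partner
    totals = [0, 0]
    while True:
        actions = signals[:]       # respond to the (possibly wrong) signal
        p1, p2 = PAYOFFS[tuple(actions)]
        totals[0] += p1
        totals[1] += p2
        signals = [noisy_signal(actions[1], epsilon),
                   noisy_signal(actions[0], epsilon)]
        if random.random() > delta:   # ends with probability 1 - delta
            return totals
```

Even with both players intending to cooperate, noise occasionally flips a signal and triggers retaliation, which is exactly the strategic uncertainty the belief elicitation targets.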
Similarity and Consistency in Algorithm-Guided Exploration (with Fabian Dvorak, Ludwig Danwitz, Sebastian Fehrler, Lars Hornuf, Hsuan Yu Lin and Bettina von Helversen)
Algorithm-based decision support systems play an increasingly important role in decisions involving exploration tasks, such as product searches, portfolio choices, and human resource procurement. These tasks often involve a trade-off between exploration and exploitation, which can be highly dependent on individual preferences. In an online experiment, we study whether the willingness of participants to follow the advice of a reinforcement learning algorithm depends on the fit between their own exploration preferences and the algorithm’s advice. We vary the weight that the algorithm places on exploration rather than exploitation, and model the participants’ decision-making processes using a learning model comparable to the algorithm’s. This allows us to measure the degree to which one’s willingness to accept the algorithm’s advice depends on the weight it places on exploration and on the similarity between the exploration tendencies of the algorithm and the participant. We find that the algorithm’s advice affects and improves participants’ choices in all treatments. However, the degree to which participants are willing to follow the advice depends heavily on the algorithm’s exploration tendency. Participants are more likely to follow an algorithm that is more exploitative than they are, possibly interpreting the algorithm’s relative consistency over time as a signal of expertise. Similarity between human choices and the algorithm’s recommendations does not increase humans’ willingness to follow the recommendations. Hence, our results suggest that the consistency of an algorithm’s recommendations over time is key to inducing people to follow algorithmic advice in exploration tasks.
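To illustrate what "weight on exploration" can mean in a reinforcement learning algorithm, the sketch below uses softmax action selection on a multi-armed bandit, where the temperature tau plays the role of the exploration weight. This is a generic stand-in under stated assumptions: the paper's actual algorithm, task, and parameterization are not given here, and all names and values are hypothetical.

```python
import math
import random

def softmax_choice(values, tau):
    """Pick an arm via softmax; higher tau means more exploration.
    `tau` stands in for the treatment-varied exploration weight."""
    weights = [math.exp(v / tau) for v in values]
    r = random.random() * sum(weights)
    for arm, w in enumerate(weights):
        r -= w
        if r <= 0:
            return arm
    return len(weights) - 1

def run_bandit(true_means, tau=0.3, alpha=0.1, rounds=100):
    """Simple value-learning agent on a stochastic multi-armed bandit."""
    values = [0.0] * len(true_means)
    for _ in range(rounds):
        arm = softmax_choice(values, tau)
        reward = random.gauss(true_means[arm], 1.0)
        values[arm] += alpha * (reward - values[arm])  # incremental update
    return values
```

With small tau the agent almost always exploits the currently best-looking option, producing the consistent recommendations over time that participants appear to read as expertise; with large tau its choices scatter across arms.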
In infinitely or indefinitely repeated games with noisy signals about others' actions, sustaining cooperation is difficult. Theoretical work shows that cooperation can be maintained if the signals are correlated and the degree of correlation depends on the actions taken. In this study, we implement such an information structure in a laboratory experiment and investigate whether subjects are able to sustain cooperation by conditioning their behavior on it. A substantial number of subjects adopt strategies that account for the correlation, but this does not increase cooperation compared to a control treatment without correlation, because subjects behave more leniently when signals are independent.
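A minimal sketch of an action-dependent correlated signal structure of this kind, assuming a simple common-noise mechanism: with a profile-dependent probability RHO the two players' signal errors share one noise draw, otherwise they are drawn independently. The values of EPSILON and RHO are illustrative and do not reproduce the experiment's information structure.

```python
import random

EPSILON = 0.2   # probability a signal is wrong (placeholder)
# Probability that the two players' noise draws are shared, as a
# function of the action profile; illustrative numbers only.
RHO = {("C", "C"): 0.8, ("C", "D"): 0.2, ("D", "C"): 0.2, ("D", "D"): 0.2}

def flip(a):
    return "D" if a == "C" else "C"

def draw_signals(a1, a2):
    """Each player observes a noisy signal of the partner's action.
    With probability RHO[profile] both signals share one noise draw,
    so their errors are perfectly correlated; otherwise the errors
    are drawn independently."""
    if random.random() < RHO[(a1, a2)]:
        wrong = random.random() < EPSILON            # common noise draw
        errors = (wrong, wrong)
    else:
        errors = (random.random() < EPSILON,         # independent draws
                  random.random() < EPSILON)
    s1 = flip(a2) if errors[0] else a2               # player 1's signal of a2
    s2 = flip(a1) if errors[1] else a1               # player 2's signal of a1
    return s1, s2
```

Under this structure, whether the two signals agree is itself informative about the action profile, which is the feature a correlation-conditioning strategy can exploit.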
Public Trust in Organizations (with Sebastian Fehrler and Volker Hahn)
Public trust is critical to the functioning of many organizations. Therefore, it is important to understand the institutional characteristics that influence it. Specifically, we ask whether an individualistic or a collectivistic structure makes an organization more trustworthy and whether communication increases public trust. To address these questions, we study repeated versions of a basic trust game in which the trustee is an organization where decisions are made either by an individual or by a collective. A game-theoretic analysis implies that public trust may or may not emerge for a collectivistic structure with overlapping terms, but that it is impossible to achieve for an organization dominated by an individual. Empirically, we find that an individualistic organization can achieve levels of cooperation similar to those of a long-lived collectivistic organization. This may indicate that cooperation is driven primarily by intrinsic behavioral motivation rather than by a desire to influence the future decisions of others.
Integrating Semantic Communication and Human Decision-Making into an End-to-End Sensing-Decision Framework (with Edgar Beck, Hsuan-Yu Lin, Patrick Rückert, Bettina von Helversen-Helversheim, Sebastian Fehrler, Kirsten Tracht, Armin Dekorsy)
As early as 1949, Weaver defined communication in a very broad sense to include all procedures by which one mind or technical system can influence another, thus establishing the idea of semantic communication. With the recent success of machine learning in expert assistance systems, where sensed information is wirelessly provided to a human to assist task execution, the need to design effective and efficient communications has become increasingly apparent. In particular, semantic communication aims to convey the meaning behind the sensed information that is relevant for Human Decision-Making (HDM). Regarding the interplay between semantic communication and HDM, many questions remain, such as how to model the entire end-to-end sensing-decision-making process, how to design semantic communication for HDM, and which information should be provided to the HDM. To address these questions, we propose to integrate semantic communication and HDM into one probabilistic end-to-end sensing-decision framework that bridges communications and psychology. In our interdisciplinary framework, we model the human through an HDM process, allowing us to explore how feature extraction from semantic communication can best support HDM, both in theory and in simulations. In this sense, our study reveals a fundamental design trade-off between maximizing the relevant semantic information and matching the cognitive capabilities of the HDM model. Our initial analysis shows how semantic communication can balance the level of detail with human cognitive capabilities while demanding less bandwidth, power, and latency.
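The level-of-detail trade-off can be illustrated with a toy information-theoretic calculation: coarser quantization of a sensed variable conveys less semantic information but may better match a decision-maker who distinguishes only a few categories. The distributions and quantizer below are made up for illustration and are not part of the paper's framework.

```python
import math
from collections import Counter

def mutual_information(joint):
    """I(X;Y) in bits from a dict {(x, y): p}."""
    px, py = Counter(), Counter()
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def quantize_joint(n_levels):
    """Joint distribution of a uniform state on 0..7 and its n-level
    quantization (the 'message' shown to the human)."""
    joint = {}
    for s in range(8):
        m = s * n_levels // 8          # coarse bucket of the state
        joint[(s, m)] = joint.get((s, m), 0) + 1 / 8
    return joint

for levels in (2, 4, 8):
    bits = mutual_information(quantize_joint(levels))
    print(f"{levels} levels: {bits:.3f} bits")
```

The 2-, 4-, and 8-level quantizers transmit 1, 2, and 3 bits respectively; which point on this curve is best depends on how many categories the HDM model can actually use, which is the matching problem the framework formalizes.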
The causal effect of generative AI on performance and over-reliance in administrative text production. Evidence from a lab-in-the-field experiment with German senior and future civil servants (with Mareike Sirman-Winkler and Markus Tepe)