Research Interests
Primary: Gender and Gendered Language, Economics of Artificial Intelligence and Robotics
Secondary: Behavioral Economics, Experimental Methodology, Networks, Contests
Publications
How Strength Asymmetries Shape Multi-Sided Conflicts, joint with Sebastián Cortes-Corrales, Economic Theory, 2024
An older working paper version, which includes a proof that no closed-form solutions to the model exist for non-trivial graphs, can be found here: Generalising Conflict Networks
Reproducibility in Management Science, by Fišar, M., Greiner, B., Huber, C., Katok, E., Ozkes, A., and the Management Science Reproducibility Collaboration, Management Science, 70(3), pp. 1343-1356, 2024. Note: Contributed as a member of the Management Science Reproducibility Collaboration.
Prosocial behavior among human workers in robot-augmented production teams—A field-in-the-lab experiment, joint with Benedikt Renner and Louis Schäfer, Frontiers in Behavioral Economics (Section Culture and Ethics), 2, 1220563, 2023, preregistration
Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs, joint with Christoph Huber, Anna Dreber, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Utz Weitzel, Felix Holzmeister and others, Proceedings of the National Academy of Sciences (PNAS), 120(23), e2215572120, 2023
Working Papers
Adaptivity and Revealed Robot Aversion in Human-Robot Collaboration: A Field-in-the-Lab Experiment, joint with Louis Schäfer
We study human-robot collaboration in a controlled experiment run in a realistic production environment. Participants completed a sequential task in pairs, where one worker (Worker 1) decided whether or not to pass intermediate components to a coworker. Depending on the treatment, the coworker was either another human participant or a physical industrial robot. The coworker setup was either static or adaptive, with adaptive coworkers’ productivity being influenced by Worker 1’s performance in the task. We find strong evidence of robot aversion: workers were significantly less likely to pass intermediate products to their coworkers in the robot treatments than in the human treatments, even though overall productivity was identical across treatments. In a subsequent responsibility attribution task, participants also attributed greater responsibility to the robots, indicating a systematic bias in the social evaluation of machine coworkers. Adaptivity only marginally affected these outcomes. Our results demonstrate that cooperation and responsibility attribution in hybrid teams depend not only on performance but also on social perceptions of artificial agents, highlighting behavioural frictions that may constrain the effective integration of robots into human work environments.
Do Personalized AI Predictions Change Subsequent Decision-Outcomes? The Impact of Human Oversight, joint with Eva Groos and Christina Strobel (under review)
Regulators of artificial intelligence (AI) emphasize the importance of human autonomy and oversight in AI-assisted decision-making (European Commission, Directorate General for Communications Networks, Content and Technology, 2021; 117th Congress, 2022). Predictions are the foundation of all AI tools; thus, if AI can predict our decisions, how might these predictions influence our ultimate choices? We examine how salient, personalized AI predictions affect decision outcomes and investigate the role of reactance, i.e., an adverse reaction to a perceived reduction in individual freedom. We trained an AI tool on previous dictator game decisions to generate personalized predictions of dictators’ choices. In our AI treatment, dictators received this prediction before deciding. In a treatment involving human oversight, a previous participant (a ‘human overseer’) decided whether the dictator was shown the AI prediction. In the baseline, participants did not receive the prediction. We find that participants sent less to the recipient when they received a personalized prediction, with the strongest reduction occurring when the human overseer intentionally withheld the AI’s prediction. Our findings underscore the importance of considering human reactions to AI predictions when assessing the accuracy and impact of these tools, as well as the potential adverse effects of human oversight.
Gendered Language, Economic Behavior, and Norm Compliance, joint with Petra Nieken and Karoline Ströhlein (submitted)
We conducted a controlled experiment to examine how the gender frame of instructions—male, female, or gender-inclusive—along with the presence or absence of a prescriptive norm (norm salience) influences norm compliance. Participants played three standard two-player economic games focusing on sharing, cooperation, and honesty. Whereas we find no clear ordering in norm compliance across our male, gender-inclusive, and female gender frames, we do find a statistically significant negative effect of the gender-inclusive frame on norm compliance for almost all participants and a statistically significant negative effect of the male gender frame on men's norm compliance. Additionally, norm salience does not appear to increase norm compliance. These findings contribute to the debate on gender-inclusive language by highlighting its unintended behavioral effects.
Previously, this project circulated as two separate papers, The Effects of Gendered Language on Norm Compliance and He, She, They? The Impact of Gendered Language on Economic Behavior.
We analyse the correlation between job satisfaction and automatability, i.e., the degree to which an occupation can be, or is at risk of being, replaced by computerised equipment. Using multiple survey datasets matched with various measures of automatability from the literature, we find a negative and statistically significant correlation that is robust to controlling for worker and job characteristics. Depending on the dataset, a one standard deviation increase in automatability is associated with a drop in job satisfaction of about 0.64% to 2.61% for the average worker. Unlike other studies, we provide evidence that this result is not mainly driven by fear of job loss; rather, monotony and low perceived meaning of the job drive both automatability and low job satisfaction.
Conference Proceedings (peer-reviewed)
Seeing Is Feeling: Emotional Cues in Others’ Heart Rate Visualizations, joint with Anke Greif-Winzrieth, Verena Dorner, Fabian Wüst, and Christof Weinhardt
Learning Factory Labs as Field-in-the-Lab Environment – An Experimental Concept for Human-Centred Production Research, joint with Magnus Kandler, Louis Schäfer, Gisela Lanza, Petra Nieken, and Karoline Ströhlein
Decision Experiments in the Learning Factory: A Proof of Concept, joint with Karoline Ströhlein, Magnus Kandler, Petra Nieken, Louis Schäfer, and Gisela Lanza
Human-Oriented Design of Andon-Boards 4.0 – Promoting Decentralized Decisions on the Shopfloor and Acceptance by Employees, joint with Magnus Kandler, Karoline Ströhlein, Sebastian Riedinger, Petra Nieken, and Gisela Lanza
Work in Progress
The Gender of Opportunity: How Gendered Job Titles Affect Job Seeker Attraction, joint with Petra Nieken and Martin Trenkle (draft in preparation)
Feedback in the Factory—A Novel Field-in-the-Lab Experiment, joint with Magnus Kandler, Gisela Lanza, Petra Nieken, and Karoline Ströhlein (draft in preparation)
Assessing the Human Premium: Task Allocation Preferences in a Hybrid Workforce, joint with Aleksandr Alekseev and Mikhail Anufriev (data gathering)
FrISBEE — A Framework for Integrating Sensordata in Behavioral Economic Experiments, joint with Fabian Wüst, Anke Greif-Winzrieth, Petra Nieken, and Niklas Busse (beta version and first conceptual paper to be released soon)
Do Personalized AI Predictions Change Subsequent Decision-Outcomes? Artificial versus Swarm Intelligence, joint with Christina Strobel (data gathering)
Predict, Advise, or Perform?—The Role of Automated Systems in Human Interaction, joint with Emike Nasamu and Mengjie Wang (data gathering)
Audience Effects in the Overconfidence Gender Gap, joint with Kevin Grubiak
Exploring Distributional Biases in Responses of Generative AI, joint with Petra Nieken, Abdolkarim Sadrieh, and Frederic Sadrieh