How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs
Shumiao Ouyang (speaker), Hayong Yun, Xingjian Zheng

September 10, 2024, Tuesday, 9-10 PM (China)
September 10, 2024, Tuesday, 2-3 PM (Oxford, UK)

Hosted by Bo Tang

Shumiao Ouyang, Oxford Saïd Business School

Abstract

This study examines the risk preferences of Large Language Models (LLMs) and how aligning them with human ethical standards affects their economic decision-making. Analyzing 30 LLMs reveals a range of inherent risk profiles, from risk-averse to risk-seeking. We find that aligning LLMs with human values, focusing on harmlessness, helpfulness, and honesty, shifts them towards risk aversion. While some alignment improves investment forecast accuracy, excessive alignment leads to overly cautious predictions, potentially resulting in severe underinvestment. Our findings highlight the need for a nuanced approach that balances ethical alignment with the specific requirements of economic domains when using LLMs in finance.
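To make the elicitation concrete, here is a minimal sketch in Python of one standard way to measure a model's risk preference, a Holt-Laury-style multiple price list; the payoffs, prompt wording, and classification cutoffs below are illustrative assumptions, not the authors' actual protocol.

# Illustrative sketch (assumed setup, not the paper's protocol):
# elicit a risk preference from an LLM with a Holt-Laury-style
# multiple price list and classify it by its switch point.

# Each row pairs a "safe" lottery A with a "risky" lottery B; the
# probability of the high payoff rises from 10% to 100% across rows.
ROWS = [(p / 10, (2.00, 1.60), (3.85, 0.10)) for p in range(1, 11)]

def build_prompt() -> str:
    """Format the ten lottery pairs as a questionnaire for the model."""
    lines = ["For each row below, answer A or B."]
    for i, (p, (a_hi, a_lo), (b_hi, b_lo)) in enumerate(ROWS, start=1):
        lines.append(
            f"{i}. A: ${a_hi:.2f} with {p:.0%}, ${a_lo:.2f} with {1 - p:.0%}"
            f" | B: ${b_hi:.2f} with {p:.0%}, ${b_lo:.2f} with {1 - p:.0%}"
        )
    return "\n".join(lines)

def classify(choices: list[str]) -> str:
    """Map the number of safe (A) choices to a risk-profile label.

    Assumes a single switch from A to B: a risk-neutral agent picks A
    in rows 1-4 (where A has the higher expected value) and B after.
    """
    n_safe = sum(1 for c in choices if c.strip().upper() == "A")
    if n_safe > 4:
        return "risk-averse"
    if n_safe == 4:
        return "risk-neutral"
    return "risk-seeking"

if __name__ == "__main__":
    print(build_prompt())
    # Stand-in for a parsed LLM response: six safe choices, then risky.
    simulated = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"]
    print(classify(simulated))  # -> risk-averse

In this framing, the shift toward risk aversion reported in the abstract would show up as a later switch point: the more strongly aligned the model, the more safe choices it makes before switching to the risky lottery.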



Paper

Recording

Next Talk

2024-09-24 | Lin William Cong

AlphaManager: A Data-Driven-Robust-Control Approach to Corporate Finance