Broadly, my research sits at the nexus of machine learning and economics. I am particularly interested in the concept of intelligence, a phenomenon that transcends the boundaries between human and machine, individual and society. This fascination naturally calls for an interdisciplinary approach. In my own research, I analyse intelligence with a complementary set of tools, bridging economic theory, statistical inference and computational modelling. Viewed through this lens, my research unfolds across three interconnected strands:
The first strand studies cooperative intelligence. It asks how we can institutionally forge stable collaboration among self-interested agents to achieve Pareto improvements in overall welfare. To answer this question, I draw inspiration from social choice theory, game theory and mechanism design, synthesising them within emerging collaboration contexts, some of which do not yet exist in reality. My work thus departs from conventional game-theoretic analysis in economics and pivots towards what could be termed 'constructive economics'. This endeavour echoes the aphorism, often attributed to Abraham Lincoln, that 'the best way to predict the future is to create it.'
The second strand attends to aligned intelligence. It asks how we can effectively align AI agents with human preferences, and evaluate the fidelity of such alignment, so that the former can safely serve as proxies for the latter in performing routine tasks. This question underlies the growing enthusiasm for using large language models in social science research to test social theories and inform policy design. My exploration of the human-AI alignment problem is, at its core, theory-driven: I examine the extent to which values can withstand logic. This sets my work apart from conventional studies of pluralistic alignment, an agenda initially propounded as an ideal for future AI systems. The same theoretical orientation informs my engineering principles: I design systems guided by the goal of parsimony, favouring models that can scale economically. Given the immense carbon footprint of modern AI architectures, frugality has become less a virtue than an imperative.
The third strand centres on AI-augmented societal intelligence. It asks how emergent technologies, such as LLMs, are reshaping the landscape of human opportunity and influencing society's allocation of trust and resources. Examining this question invites an extension of principled empirical methods from the social sciences to decision-making environments in which AI is an integral part. More importantly, the goal of such investigations is not merely to reveal the shifting dynamics but to translate the insights into better designs for future systems. This is precisely where interdisciplinary research, and the scholars who pursue it, becomes indispensable.
In short, my goal is to build intelligent systems that are not just coherent in logic but also humane in consequences. A selection of my research output can be found here.
Bingchen Wang*, Zi-Yu Khoo, and Bryan Kian Hsiang Low
arXiv preprint. Extended version currently under review.
📄 Paper 📦 Code (will be released upon acceptance)
Large language models (LLMs) have demonstrated promise in emulating human-like responses across a wide range of tasks. In this paper, we propose a novel alignment framework that treats LLMs as agent proxies for human survey respondents, affording a cost-effective and steerable solution to two pressing challenges in the social sciences: the rising cost of survey deployment and the growing demographic imbalance in survey response data. Drawing inspiration from the theory of revealed preference, we formulate alignment as a two-stage problem: constructing diverse agent personas called endowments that simulate plausible respondent profiles, and selecting a representative subset to approximate a ground-truth population based on observed data. To implement the paradigm, we introduce P2P, a system that steers LLM agents toward representative behavioral patterns using structured prompt engineering, entropy-based sampling, and regression-based selection. Unlike personalization-heavy approaches, our alignment approach is demographic-agnostic and relies only on aggregate survey results, offering better generalizability and parsimony. Beyond improving data efficiency in social science research, our framework offers a testbed for studying the operationalization of pluralistic alignment. We demonstrate the efficacy of our approach on real-world opinion survey datasets, showing that our aligned agent populations can reproduce aggregate response patterns with high fidelity and exhibit substantial response diversity, even without demographic conditioning.
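The selection stage lends itself to a compact numerical illustration. The sketch below is a hypothetical rendering, not the paper's actual P2P implementation: the persona response distributions, shapes, and the clip-and-renormalise projected-gradient rule are all invented for illustration. It fits non-negative weights over candidate persona response distributions so that the pooled responses approximate an observed aggregate, mirroring the idea of regression-based selection against aggregate survey results only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_personas, n_options = 50, 4

# Row i: persona i's answer distribution over a survey question's options,
# e.g. estimated by sampling an LLM under that persona prompt (illustrative).
P = rng.dirichlet(np.ones(n_options), size=n_personas)

# Observed aggregate answer shares from the real survey (the "ground truth").
target = np.array([0.4, 0.3, 0.2, 0.1])

# Regression-based selection: minimise ||P^T w - target||^2 over the simplex,
# using a simple clip-and-renormalise projected-gradient heuristic.
w = np.full(n_personas, 1.0 / n_personas)   # start from uniform weights
for _ in range(20000):
    residual = P.T @ w - target             # pooled answers minus target
    w -= 0.02 * (P @ residual)              # gradient step on the weights
    w = np.clip(w, 0.0, None)               # keep weights non-negative
    w /= w.sum()                            # renormalise to sum to one

pooled = P.T @ w  # the aligned agent population's aggregate response pattern
```

Because the procedure only matches aggregate shares, it needs no demographic labels on the personas, which is the parsimony the abstract emphasises; sparsity in `w` can then be read as selecting a representative subset.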
Bingchen Wang*, Zhaoxuan Wu, Fusheng Liu, and Bryan Kian Hsiang Low
In Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), Philadelphia, USA, February 2025. [4.6% Acceptance Rate - Oral Presentation].
📄 Paper 📦 Code 🪧 Poster 🎬 Live Recording
Collaborative machine learning (CML) provides a promising paradigm for democratizing advanced technologies by enabling cost-sharing among participants. However, the potential for rent-seeking behaviors among parties can undermine such collaborations. Contract theory presents a viable solution by rewarding participants with models of varying accuracy based on their contributions. Yet, unlike monetary compensation, using models as rewards introduces unique challenges, particularly due to the stochastic nature of these rewards when contribution costs are privately held information. This paper formalizes the optimal contracting problem within CML and proposes a transformation that simplifies the non-convex optimization problem into one that can be solved through convex optimization algorithms. We conduct a detailed analysis of the properties that an optimal contract must satisfy when models serve as the rewards, and we explore the potential benefits and welfare implications of these contract-driven CML schemes through numerical experiments.
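The screening logic behind such contracts can be illustrated with a toy two-type example. Everything below is a hypothetical sketch in the spirit of the paper, not its formulation: the functional forms, parameters, and brute-force grid search are invented. A coordinator posts a menu of (contribution, model-accuracy) pairs, and agents with privately known costs self-select, subject to individual rationality (IR) and incentive compatibility (IC):

```python
import itertools
import numpy as np

costs = {"low": 0.5, "high": 1.0}   # private per-unit contribution cost
probs = {"low": 0.5, "high": 0.5}   # coordinator's prior over cost types

def agent_utility(acc, q, c):
    """Value of the model reward minus the cost of contributing q data (toy)."""
    return 2.0 * np.sqrt(acc) - c * q

def coordinator_value(q, acc):
    """Benefit of collected data minus a convex cost of the promised accuracy."""
    return 1.5 * q - acc ** 2

grid_q = np.linspace(0.0, 2.0, 11)  # candidate contribution levels
grid_a = np.linspace(0.0, 1.0, 11)  # candidate accuracy rewards

best, best_val = None, -np.inf
for qL, aL, qH, aH in itertools.product(grid_q, grid_a, repeat=2):
    uL = agent_utility(aL, qL, costs["low"])
    uH = agent_utility(aH, qH, costs["high"])
    # IR: each type weakly prefers its offer to opting out.
    if uL < 0 or uH < 0:
        continue
    # IC: no type gains by taking the offer meant for the other type.
    if uL < agent_utility(aH, qH, costs["low"]):
        continue
    if uH < agent_utility(aL, qL, costs["high"]):
        continue
    val = (probs["low"] * coordinator_value(qL, aL)
           + probs["high"] * coordinator_value(qH, aH))
    if val > best_val:
        best, best_val = (qL, aL, qH, aH), val
```

The grid search stands in for the paper's convex reformulation; its point is only to show why IC constraints bind when costs are private, so that model accuracy, rather than money, must be distorted to elicit truthful contributions.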
June 2021
Paper: BW_MPhilThesis.pdf Code: here
Parameter non-constancy is a prevalent issue in empirical economic research. The advent of a new technology, the implementation of an intervention policy or an unexpected event such as a global pandemic can all shift the parameters of a time series model. Traditionally, parameter shifts are detected through diagnostic tests after a model has been selected and estimated. Yet this approach has many drawbacks, including, inter alia, a high labour cost in the subsequent modelling of the parameter non-constancy. As an alternative, this essay provides an analysis of multiplicative indicator saturation (MIS), which is designed to capture parameter shifts during the model selection process. Theoretical analysis of the method is conducted using the split-half selection algorithm pioneered by Hendry et al. (2008). Monte Carlo simulations evaluate the performance of MIS in a multi-path block search algorithm under different settings, followed by a comparison with other well-established saturation methods. The essay also introduces a variant of MIS, Parsimonious MIS, which improves performance in a partial change model where parameter shifts affect one of the regressors. Based on the analysis, directions for future research are suggested.
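The core mechanics of MIS with split-half selection can be sketched in a few lines. The simulation below is a simplified illustration, not the thesis's exact algorithm: the data-generating process, the coarse grid of candidate break dates, and the fixed critical value are all assumptions made for brevity. Multiplicative step indicators x_t·1{t ≥ j} are selected half at a time by t-statistic, then re-estimated jointly:

```python
import numpy as np

rng = np.random.default_rng(1)
T, true_break = 120, 60
x = rng.normal(1.0, 1.0, T)
beta = np.where(np.arange(T) < true_break, 1.0, 2.0)  # coefficient shift at t = 60
y = beta * x + 0.3 * rng.normal(size=T)

# Multiplicative step indicators x_t * 1{t >= j} at candidate break dates.
candidates = list(range(10, T, 10))
MIS = np.column_stack([x * (np.arange(T) >= j) for j in candidates])

def select(cols, crit=2.58):
    """OLS of y on [x] + the given indicator columns; keep significant ones."""
    X = np.column_stack([x, MIS[:, cols]])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (T - X.shape[1])
    tstat = b / np.sqrt(s2 * np.diag(XtX_inv))
    return [c for c, t in zip(cols, tstat[1:]) if abs(t) > crit]

# Split-half search: select within each half of the indicator set,
# then re-estimate jointly on the union of survivors.
half = len(candidates) // 2
kept = select(list(range(half))) + select(list(range(half, len(candidates))))
kept = select(kept)
detected = [candidates[c] for c in kept]
```

The final joint re-estimation matters: an indicator just before the true break can look significant when selected on its own half, but is typically discarded once the correct break date enters the regression.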
Oxford, England, UK
Insights on research practices: