Workshop on Social Choice and Learning Algorithms

at AAMAS 2024 in Auckland, New Zealand

May 7, 2024

The inaugural workshop connecting research topics in social choice and learning algorithms. 

About the Workshop

Motivated by growing interest in the similarities between problems in learning and social choice, SCaLA-24 aims to bring together researchers across these domains to highlight the benefits of collaboration. Recent work has explored theoretical bounds on the learnability of common voting rules alongside experimental evaluations of those bounds, has shown how neural networks can improve the properties of voting rules or be used for automated mechanism design, and has raised many open questions.

The goal of this workshop is to highlight new connections between social choice and learning algorithms. We seek contributions that demonstrate how either one of these fields can be used to strengthen the other and, more broadly, that combine aspects of the two domains in novel ways. We are interested in a broad range of topics from both disciplines.

Schedule and accepted papers are now listed on the Program page.

Keynote Speakers:

Haifeng Xu - University of Chicago

Rethinking Online Content Ecosystems in the Era of Generative AIs


Abstract: The open Internet is all about content, which used to be created, consumed, shared, and evaluated entirely by humans. In the past few decades, many successful platforms like TikTok and Google have formed online content ecosystems that, on one hand, match user interests to the right content and, on the other hand, incentivize content creators to generate fresh and high-quality content. However, recent significant advances in generative AI (GenAI) are revolutionizing the way content is created: GenAI-generated content can be much less costly and possibly even more creative, though potentially of lower quality and dependent on the available amount of human-created content to sustainably refine GenAI model training. This is fundamentally changing the underlying logic and incentives of existing content ecosystems. If not addressed properly, this may distort incentives for human creators and lead to the potentially disastrous outcome of driving humans out of the system. In this talk, I will present our recent work that integrates machine learning, multi-agent system modeling, and incentive design to (a) understand the economics and evolutionary dynamics of these online content ecosystems and (b) study how GenAI-based content creation could reshape these ecosystems. I will conclude with many open questions. We believe a thorough understanding of these questions is critical to ensure our Internet ecosystem steers safely and sustainably into the era of GenAI.


Bio: Haifeng Xu is an assistant professor in Computer Science at the University of Chicago and a visiting research scientist at Google Research. He studies the economics of data and machine learning, including designing learning algorithms for multi-agent decision making and designing markets for data and ML algorithms. He publishes regularly at leading machine learning and computational economics conferences, and serves as an area chair or senior program committee member for major venues such as ICML, EC, AAAI, and IJCAI. His research has been recognized by multiple awards, including an AI2050 Early Career Fellowship, an IJCAI Early Career Spotlight, a Google Faculty Research Award, the ACM SIGecom Dissertation Award (honorable mention), the IFAAMAS Distinguished Dissertation Award (runner-up), and multiple best paper awards; his work has been generously supported by agencies including NSF, ARO, ONR, Schmidt Science, and Google Research.



Fernando P. Santos - University of Amsterdam

How to Aggregate Reputations to Stabilize Fair Cooperation?


Abstract: Prosociality is puzzling: prosocial individuals contribute to benefiting others, yet they must incur a cost to do so. How can we explain such cooperative behaviors? While this is a central question in evolutionary biology and the behavioral sciences, AI today is intimately connected with the challenge of understanding prosociality. Advancing socially intelligent AI requires 1) agents that learn to cooperate with each other and 2) systems that encourage human prosociality. In this talk, I will present research challenges in the field of prosocial dynamics in AI. I will pay special attention to models, based on evolutionary game theory and multiagent reinforcement learning, that study how reputations and indirect reciprocity can stabilize cooperation among learning agents. Given the need to aggregate the assessments of multiple agents into a single reputation value, I will argue that understanding cooperation under indirect reciprocity can benefit from synergies between social choice and learning dynamics.


Bio: Fernando P. Santos is an Assistant Professor at the Informatics Institute of the University of Amsterdam. His research lies at the interface of AI and complex systems. He is interested in understanding cooperation and collective dynamics in multiagent systems, and in designing fair/prosocial AI. Before joining IvI, Fernando completed his PhD in Computer Science and Engineering at Instituto Superior Técnico (University of Lisbon) and was a James S. McDonnell postdoctoral fellow at Princeton University.


Topics

We invite papers that explore at least one aspect of learning and at least one aspect of social choice, including, but not limited to, the following topics:

Social Choice

Learning

Important Dates

Submission - Feb 5, 2024 (EXTENDED: Feb 12, 2024)

Acceptance notification - March 5, 2024

Camera ready paper - April 15, 2024

Workshop - May 7, 2024

Format

The workshop takes place at AAMAS 2024 in Auckland, New Zealand. It will be a one-day meeting consisting of technical sessions, keynote talks, and social sessions aimed at fostering further collaboration.

At the moment, AAMAS 2024 is planned as a fully in-person event, and SCaLA-24 plans to follow the same format. If you would like to submit your work but may not be able to attend in person, please contact one of the organizers.