The workshop comprises four main sessions dedicated to different facets of algorithmic fairness in automated decision-making systems. The first session revolves around bias and privacy preservation for automated decision-making systems, looking at the effect of positive feedback loops. In the second session, the focus shifts to transparency in marketing and e-commerce, bringing insights from Amazon, one of the most popular e-commerce companies. The talks in the third session focus on fairness on online platforms, looking at how algorithms discriminate against minority groups. The last session is dedicated to the study and analysis of fairness metrics across different applications.
8:45 - 9:00
Giulia De Pasquale & Valentina Breschi
Workshop opening
Bias and privacy preservation for decision making
9:00 - 9:30
Elena Beretta
Computer vision technologies have become an integral part of our lives, driving advancements in diverse fields such as healthcare, security and creative industries. However, as these systems increasingly influence critical decisions, they raise pressing concerns about fairness, bias and representation. The cultural and social implications of computer vision are profound, as biases embedded in training data and model design can reinforce stereotypes and perpetuate inequalities. This talk examines how representation in computer vision is shaped by data, model architectures and training methodologies. Through an exploration of bias in contemporary vision models, we investigate how algorithmic decisions reflect and amplify social inequalities, even in cases where datasets have been curated to promote diversity. The discussion extends beyond technical solutions to consider the broader ethical and epistemological stakes of visual AI, advocating for interdisciplinary approaches that prioritize inclusivity, accountability and critical engagement in the development of computer vision technologies.
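To make the notion of disparate performance concrete, the sketch below audits group representation and per-group accuracy on a hypothetical set of model evaluations; the records, group labels and numbers are invented for illustration and are not from the talk.

```python
# Illustrative audit of group representation and per-group error rates in a
# labeled evaluation set. All names ("group", "correct") and the data itself
# are hypothetical placeholders, not material from the talk.
from collections import Counter

# Hypothetical evaluation records: demographic group + whether a vision
# model classified the sample correctly.
records = [
    {"group": "A", "correct": True}, {"group": "A", "correct": True},
    {"group": "A", "correct": True}, {"group": "B", "correct": False},
    {"group": "B", "correct": True}, {"group": "A", "correct": False},
]

counts = Counter(r["group"] for r in records)
print("representation:", {g: n / len(records) for g, n in counts.items()})

# Accuracy gaps across groups are one simple, commonly used signal of
# disparate performance, even when a dataset was curated for diversity.
for g in counts:
    group = [r for r in records if r["group"] == g]
    acc = sum(r["correct"] for r in group) / len(group)
    print(f"accuracy({g}) = {acc:.2f}")
```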
9:30 - 10:00
Ming Cao
It is generally believed that fairness and privacy awareness are conflicting goals in algorithmic design. We, however, show that it is possible to jointly consider the two in some optimization settings. The corresponding results are then discussed in the context of robotic decision-making algorithms, which increasingly face legal, ethical and social constraints as robots interact more with human peers. We then look into how robots can navigate human work environments while respecting possible fairness and privacy requirements. Such scenarios are increasingly common, for instance, when robots transport sensitive objects in crowded spaces. To address this, we propose a new framework for mobile robot navigation that leverages vision language models in adaptive path planning. Experimental results demonstrate that our framework is effective in human-robot shared public environments.
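As a toy illustration of privacy-aware navigation (a simple grid planner, not the vision-language-model framework proposed in the talk), the sketch below penalizes routes that pass near a hypothetical privacy-sensitive zone, so the planned path trades distance against exposure.

```python
# Toy illustration: shortest-path planning on a grid where cells near a
# privacy-sensitive zone incur an extra traversal cost. The zone, penalty
# weight, and grid are invented for demonstration.
import heapq

GRID_W, GRID_H = 8, 8
SENSITIVE = {(3, 3), (3, 4), (4, 3), (4, 4)}   # hypothetical sensitive zone
PRIVACY_PENALTY = 5.0                           # assumed weight, tunable

def cell_cost(cell):
    # Base cost 1 per step, plus a penalty for cells adjacent to the zone.
    x, y = cell
    near = any(abs(x - sx) <= 1 and abs(y - sy) <= 1 for sx, sy in SENSITIVE)
    return 1.0 + (PRIVACY_PENALTY if near else 0.0)

def plan(start, goal):
    # Dijkstra's algorithm over the 4-connected grid.
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < GRID_W and 0 <= nxt[1] < GRID_H and nxt not in seen:
                heapq.heappush(frontier, (cost + cell_cost(nxt), nxt, path + [nxt]))
    return float("inf"), []

# An edge-hugging route has the same length as a central one but avoids the
# penalty region entirely, so the planner prefers it.
cost, path = plan((0, 0), (7, 7))
print(f"cost={cost:.1f}, route: {path}")
```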
Transparency in marketing and e-commerce
10:00 - 10:30
Niklas Karlsson
Programmatic advertising is a core element of the business for companies such as Amazon, Google, Meta, and many others. It involves the automated allocation of ad impressions to different advertisers. At a high level, there are sellers (of impressions), buyers, and optimization providers. A seller wants to monetize Internet traffic on their web pages. On the other hand, the buyers (advertisers) compete over the opportunity to show their ads to different users. In the middle are the optimization providers, negotiating the impression allocation on behalf of the buyers, but with a simultaneous self-interest to make a profit for themselves. The first critical challenge is to define an optimization problem and impression allocation mechanism that can be considered fair to all parties and that permits algorithmic fairness. This talk will discuss such an optimization problem. The problem is well-defined and foundationally sound, but challenging to solve due to model uncertainties, nonlinearities, time-variance, and noise. We will discuss some state-of-the-art techniques for modularizing the problem and for justifying fairness.
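For background on the allocation mechanisms involved, the following minimal sketch runs a second-price auction for a single impression, a standard building block of programmatic advertising; the advertiser names and bids are invented, and this is not the specific optimization problem formulated in the talk.

```python
# Toy second-price auction for a single impression: the highest bidder wins
# but pays the second-highest bid. Illustrative background only.
def second_price_auction(bids):
    """bids: dict advertiser -> bid. Returns (winner, price_paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical bids from three advertisers for one impression.
winner, price = second_price_auction({"adv_a": 2.5, "adv_b": 1.8, "adv_c": 3.1})
print(winner, price)  # adv_c wins and pays 2.5 (the second-highest bid)
```

A classical argument for the fairness of this rule is incentive compatibility: under a second-price auction, bidding one's true value is a dominant strategy for every buyer.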
Coffee Break 10:30 - 11:00
11:00 - 11:30
Karl H. Johansson & Zifan Wang
Market transparency plays a crucial role in shaping the dynamics and outcomes of multi-agent interactions. In this talk, we explore how the level of market transparency influences multi-agent learning in online convex games. Specifically, we consider three distinct settings. First, in the no-transparency setting, agents cannot observe others’ actions but only receive zeroth-order information (function evaluations), yet we show that all agents achieve no-regret learning. Second, in the full transparency setting, where agents observe others’ actions and obtain first-order gradient information, we show that such richer information yields improved regret guarantees for each agent. Third, in the asymmetric transparency setting, some agents enjoy first-order information while others rely solely on zeroth-order. We theoretically show that this asymmetric transparency benefits those with richer information, creating individual performance disparities and potential unfairness. Nevertheless, from a group-level perspective, the overall performance of the asymmetric transparency setting can surpass that of the no-transparency one. We illustrate these findings through classical and risk-averse Cournot games, where multiple firms produce the same product and each firm selects its own production level, demonstrating how variations in market transparency can influence both individual and collective outcomes.
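A minimal simulation of the asymmetric setting, under assumed demand and cost parameters: in a two-firm Cournot game, firm 0 updates with exact first-order gradients while firm 1 must estimate its gradient from noisy payoff evaluations (here via a two-point finite difference for simplicity; bandit game dynamics typically rely on one-point estimates).

```python
# Sketch (assumed parameters, not the talk's exact model) of a two-firm
# Cournot game with asymmetric transparency: firm 0 has first-order
# (gradient) feedback, firm 1 only zeroth-order (payoff) feedback.
import numpy as np

a, b, c = 10.0, 1.0, 2.0           # inverse demand p = a - b*(q0+q1), unit cost c

def profit(i, q):                   # firm i's payoff at joint production q
    return q[i] * (a - b * q.sum() - c)

def grad(i, q):                     # exact partial derivative for firm i
    return a - c - b * q.sum() - b * q[i]

rng = np.random.default_rng(0)
q, eta, delta = np.array([1.0, 1.0]), 0.05, 0.1
for t in range(500):
    g0 = grad(0, q)                                # first-order information
    qp, qm = q.copy(), q.copy()                    # zeroth-order: two payoff
    qp[1] += delta; qm[1] -= delta                 # queries, finite difference
    g1 = (profit(1, qp) - profit(1, qm)) / (2 * delta) + rng.normal(0, 0.1)
    q = np.clip(q + eta * np.array([g0, g1]), 0.0, None)

print("learned quantities:", q)     # Nash equilibrium: q_i = (a-c)/(3b) ~ 2.67
```

Both firms approach the Nash quantities, but the zeroth-order firm's iterates fluctuate around it due to estimation noise, a small-scale analogue of the individual performance disparity the talk analyzes.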
Preventing Bias Against Minorities
11:30 - 12:00
Ana-Andreea Stoica
Ranking algorithms have recently come under scrutiny for preventing minority groups from reaching higher ranking slots in applications like search and recommendation, thus reducing their visibility. In this talk, I will describe our recent work in diagnosing when and how algorithms that use network information may further bias outcomes against minority groups. We focus on two famous algorithms, PageRank and HITS, and analyze them empirically and theoretically, using a generative network model with multiple communities. We find that HITS amplifies pre-existing bias in homophilic networks, as compared to PageRank. Through a novel theoretical analysis of network models that are predisposed to bias against minority groups, we identify the level of homophily present in the network as the root cause of bias amplification in HITS. This work is joint with Augustin Chaintreau and Nelly Litvak and was published at The Web Conference ’24.
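The flavor of the empirical analysis can be reproduced in a few lines (using a standard stochastic block model rather than the paper's exact generative model): rank the nodes of a homophilic two-community graph by PageRank and by HITS authority scores, and compare the minority's share of the top slots with its population share.

```python
# Illustrative experiment (not the paper's exact setup): generate a homophilic
# two-community graph, then compare the minority's share of the top-ranked
# nodes under PageRank and under HITS authority scores.
import networkx as nx

sizes = [80, 20]                        # majority and minority community sizes
probs = [[0.10, 0.01], [0.01, 0.10]]    # homophilic: dense within, sparse across
G = nx.stochastic_block_model(sizes, probs, seed=42)

pagerank = nx.pagerank(G)
_, authorities = nx.hits(G, max_iter=500)   # HITS returns (hubs, authorities)

def minority_share_topk(scores, k=20):
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(node >= sizes[0] for node in top) / k  # minority node ids >= 80

print("minority share of top-20 (population share is 0.20):")
print("  PageRank:", minority_share_topk(pagerank))
print("  HITS    :", minority_share_topk(authorities))
```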
12:00 - 12:30
Stefania Ionescu
Two-sided matching markets have long existed to pair agents in the absence of regulated exchanges. A common example is school choice, where a matching mechanism uses student and school preferences to assign students to schools. In such settings, forming preferences is both difficult and critical. Data-driven decision support systems can predict future outcomes based on historical data to help students form their preferences. Although often deployed together, these matching and prediction mechanisms are usually analyzed separately. But what happens if we consider the market as a whole? In this talk, I present a new type of strategic behavior of schools targeting the prediction mechanism by leveraging the repeated nature of the market. This strategic behavior is not merely an attack on the reported data but a change in the very school-student interaction. I will discuss when such strategies are optimal and what their consequences are. In the context of our workshop, this is an example of how introducing and improving data-driven decision support systems can incentivize new strategic behaviors that ultimately cause social inequalities independent of individual potential.
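For readers unfamiliar with the matching side, below is a minimal student-proposing deferred acceptance (Gale-Shapley) mechanism of the kind used in school choice; the preferences and capacities are invented, and the prediction mechanism the talk targets sits on top of, not inside, this procedure.

```python
# Minimal student-proposing deferred acceptance (Gale-Shapley), the classic
# mechanism behind many school-choice systems; all data here is invented.
def deferred_acceptance(student_prefs, school_prefs, capacity):
    rank = {s: {st: r for r, st in enumerate(p)} for s, p in school_prefs.items()}
    unmatched = list(student_prefs)             # students yet to be placed
    next_choice = {st: 0 for st in student_prefs}
    held = {s: [] for s in school_prefs}        # tentative acceptances
    while unmatched:
        st = unmatched.pop()
        school = student_prefs[st][next_choice[st]]  # propose to next choice
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacity[school]:
            unmatched.append(held[school].pop())     # reject the worst-ranked
    return held

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s1", "s3", "s2"], "B": ["s2", "s1", "s3"]}
print(deferred_acceptance(students, schools, {"A": 1, "B": 2}))
# {'A': ['s1'], 'B': ['s2', 's3']} -- a stable matching
```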
Lunch Break 12:30 - 13:30
How to measure fairness?
13:30 - 14:00
Wenjun Mei
The concept of “fairness” has always been a critical consideration in allocation problems and has been widely studied by researchers from different backgrounds. In economics, “fairness” is often understood as how uneven a distribution is, characterized by metrics such as the Gini coefficient or Jain’s index. In operations research, researchers adopt various subjective definitions of fairness, such as utilitarianism, equal opportunity, and egalitarianism. Is one of these definitions of fairness “better” than the others? What kind of fairness do people prefer? How can allocation fairness be leveraged to influence human behavior? Current discussions of such issues remain qualitative and philosophical, because the notion of allocation fairness has never been fully quantified. In this talk, we conceptualize fairness as the attempt to achieve a Pareto-optimal trade-off between efficiency and equity, and propose a mathematical theory that quantifies this trade-off. More specifically, we find that utilitarianism, equal opportunity, and egalitarianism are, in fact, optimizing the same objective function with one varying parameter. Moreover, as that parameter increases, the resulting optimal solution monotonically decreases in efficiency and increases in equity. By leveraging such monotonicity, we introduce a partial-order structure over the set of all feasible allocations, enabling the measurement of any allocation’s tendency in the efficiency-equity trade-off, as well as how optimal it is in achieving that trade-off. Our new theory not only generates optimal allocation schemes under any desired tendency, but could also help unveil hidden patterns of human behavior that could not be quantitatively discussed before.
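One standard parametric family with exactly this interpolation property is alpha-fairness (utilitarian at alpha = 0, proportionally fair at alpha = 1, egalitarian as alpha grows); whether it matches the talk's construction is an assumption. The sketch below uses a toy budget-allocation problem with assumed agent efficiencies and exhibits the monotone efficiency-equity trade-off.

```python
# Sketch of the one-parameter family idea via alpha-fairness. The feasible
# set (a fixed budget split across agents with assumed efficiencies) is a
# toy, not the talk's general setting.
import numpy as np
from scipy.optimize import minimize

w = np.array([3.0, 2.0, 1.0])           # agent efficiencies: utility_i = w_i * x_i
B = 1.0                                  # total resource budget

def alpha_welfare(x, alpha):
    u = np.maximum(w * x, 1e-9)
    if np.isclose(alpha, 1.0):
        return np.sum(np.log(u))         # alpha = 1: proportional fairness
    return np.sum(u ** (1 - alpha)) / (1 - alpha)

def jain(u):                             # Jain's fairness index, in [1/n, 1]
    return u.sum() ** 2 / (len(u) * (u ** 2).sum())

for alpha in [0.0, 1.0, 2.0, 5.0]:       # 0 = utilitarian, large = egalitarian
    res = minimize(lambda x: -alpha_welfare(x, alpha), x0=np.full(3, B / 3),
                   bounds=[(0, B)] * 3,
                   constraints={"type": "eq", "fun": lambda x: x.sum() - B})
    u = w * res.x
    print(f"alpha={alpha:>3}: efficiency={u.sum():.3f}  Jain={jain(u):.3f}")
```

As alpha increases, total utility (efficiency) falls while Jain's index (equity) rises, the monotonicity the talk's partial order is built on.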
14:00 - 14:30
Giulia De Pasquale
We study fairness in social influence maximization, whereby one seeks to select seeds that spread a given piece of information throughout a network, ensuring balanced outreach among different communities (e.g. demographic groups). In the literature, fairness is often quantified in terms of the expected outreach within individual communities. In this talk, we demonstrate that such fairness metrics can be misleading, since they overlook the stochastic nature of information diffusion processes. When information diffusion occurs in a probabilistic manner, multiple outreach scenarios can occur. As such, outcomes such as “in 50% of the cases, no one in group 1 gets the information, while everyone in group 2 does, and in the other 50%, it is the opposite”, which always result in largely unfair realizations, are classified as fair by a variety of fairness metrics in the literature. We tackle this problem by designing a new fairness metric, mutual fairness, that captures variability in outreach through optimal transport theory. We propose a new seed-selection algorithm that optimizes both outreach and mutual fairness, and we show its efficacy on several real datasets. We find that our algorithm increases fairness with only a minor decrease (and at times, even an increase) in efficiency.
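The abstract's 50/50 example can be checked numerically; the sketch below simulates that two-scenario diffusion and shows that expectation-based metrics report balanced (seemingly fair) outreach even though every realization is maximally unbalanced, the gap that mutual fairness is designed to capture.

```python
# Reproduces the abstract's toy example: with probability 0.5 all of group 2
# is reached and none of group 1, and vice versa. Expected outreach is 0.5
# for both groups (looks "fair"), yet every single realization is maximally
# unbalanced. Probabilities are taken from the example; run count is assumed.
import numpy as np

rng = np.random.default_rng(1)
runs = 10_000
# outreach fractions (group 1, group 2) per stochastic diffusion realization
outcomes = np.array([(0.0, 1.0) if rng.random() < 0.5 else (1.0, 0.0)
                     for _ in range(runs)])

print("expected outreach per group:", outcomes.mean(axis=0))  # ~[0.5 0.5]
print("mean per-realization gap   :",
      np.abs(outcomes[:, 0] - outcomes[:, 1]).mean())          # exactly 1.0
```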
Panel discussion
14:30 - 15:00
Panelists: Ming Cao, Ana-Andreea Stoica, Niklas Karlsson
Moderator: Valentina Breschi
15:00 - 15:10
Giulia De Pasquale
Closing remarks