Columbia Workshop on Fairness in Operations and AI

2023

• November 30 - December 1 •

The deployment of AI systems in online platforms has thrived due to direct access to consumer data, the capability to implement personalization, and the ability to run algorithms in real time. A serious concern caused by this deployment in domains such as advertising, pricing, and marketing is that users from protected groups may be harmfully discriminated against unless AI algorithms directly take into account fairness considerations. Such discrimination has been exposed in several business contexts, e.g., discriminatory targeting in housing and job ad auctions, discriminatory personalized pricing for loans and ride-hailing services, and disparate treatment of social network users by marketing campaigns to exclude certain protected groups. This workshop brings together experts from academia, industry, and public policy to explore, uncover, and address fairness issues in operations and AI. This two-day conference is brought to you by Columbia University's Department of Industrial Engineering and Operations Research and Columbia University's Data Science Institute.


The full program is available here

When: Thursday, Nov 30 - Friday, Dec 1

Where: Davis Auditorium (CEPSR/Schapiro Building)

530 W 120th St, New York, NY 10027

REGISTRATION 

The registration fee is $25 for students/postdocs and $100 for everyone else.

The registration link is here

Poster Session

There will be a poster session. Those wishing to apply for the poster session must fill out this form by 11/23. Acceptance notifications will be sent along with instructions (bring your own poster).

SPEAKERS & PANELISTS

Organizing committee

PROGRAM 

DAY 1 | Thursday, November 30, 2023


1:30 PM - 2:15 PM Registration
Check-In: Davis Auditorium (Schapiro/CEPSR)

Address: 530 W 120th St, New York, NY 10027 - 4th Floor


Food and Beverages Throughout the Event: 

DSI Suite (Mudd Building) - 4th Floor

Next door to Davis Auditorium


2:15 PM - 2:30 PM Opening Remarks

With Shih-Fu Chang, Dean, Columbia Engineering;

Morris A. and Alma Schapiro Professor of Engineering; 

and Professor of Electrical Engineering and Computer Science


2:30 PM - 3:30 PM Session 1: Fairness in Markets


3:30 PM - 4:00 PM Break

Location: DSI Suite (Mudd Building) - 4th Floor


4:00 PM - 5:00 PM Session 2: Fairness in Recommendations



DAY 2 | Friday, December 1, 2023


9:00 AM - 10:00 AM Breakfast

Location: DSI Suite (Mudd Building) - 4th Floor


10:00 AM - 11:00 AM Session 3: Applications of Fairness


11:00 AM - 11:30 AM Break

Location: DSI Suite (Mudd Building) - 4th Floor


11:30 AM - 12:30 PM Session 4: Incentives and Fairness


12:30 PM - 1:45 PM Lunch & Poster Session

Location: DSI Suite (Mudd Building) - 4th Floor


1:45 PM - 2:15 PM Session 5: Fairness in Optimization


2:15 PM - 3:00 PM Panel Discussion: Bridging Academia and Practice


3:00 PM - 3:30 PM Break & Poster Session

Location: DSI Suite (Mudd Building) - 4th Floor


3:30 PM - 4:30 PM Session 6: Fairness in Pricing and Assortment Planning

4:30 PM Event Ends

Talks, abstracts, and bios

Christian Kroer

Best of Many Worlds Guarantees for Online Fair Allocation and Online Fisher Markets


We consider the problem of fairly allocating a set of goods to a set of individuals when the goods are arriving sequentially over time. In the offline setting, it is well-known that the competitive equilibrium from equal incomes (CEEI) solution concept leads to strong fairness and efficiency properties. Our goal will be to develop algorithms that can, asymptotically, compete with the hindsight CEEI allocation. We will study two allocation methods: greedy Nash welfare maximization and an online learning procedure called PACE, which is based on first-price auctions and adaptive learning of a price-per-utility multiplier for each individual. For the learning algorithm, we will show that it achieves mean-square convergence to the hindsight-optimal CEEI utilities under stochastic inputs, as well as approximate convergence under nonstationary inputs with bounded nonstationarity. For both algorithms, we show that they achieve asymptotic competitive-ratio guarantees with respect to hindsight CEEI utilities in an adversarial setting with mild assumptions on the inputs. Combining these results, we show that PACE achieves the first best-of-many-worlds guarantee for online fair allocation, by yielding guarantees under stochastic, nonstationary, and adversarial inputs.
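To make the pacing idea concrete, below is a minimal, hypothetical Python sketch of an online first-price allocation rule with adaptively learned price-per-utility multipliers, in the spirit of (but not identical to) the PACE procedure mentioned above; the specific multiplier update, budgets, and utility distribution are illustrative assumptions, not the algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 5, 10_000                 # individuals, sequentially arriving goods
budgets = np.ones(n) / n         # equal budgets, as under CEEI
values = rng.uniform(0.1, 1.0, size=(T, n))  # values[t, i]: utility of good t to individual i

util = np.zeros(n)               # cumulative realized utility per individual
beta = np.ones(n)                # price-per-utility multipliers (pacing)

for t in range(T):
    bids = beta * values[t]                  # first-price bids
    winner = int(np.argmax(bids))
    util[winner] += values[t, winner]
    # Illustrative update (an assumption, not necessarily the exact PACE rule):
    # pace each multiplier toward budget per unit of average utility received.
    avg_util = util / (t + 1)
    beta = budgets / np.maximum(avg_util, 1e-6)

print("time-averaged utilities:", np.round(util / T, 4))
```

The hindsight CEEI benchmark and the convergence guarantees discussed in the talk are, of course, beyond this toy simulation.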


Bio: Christian Kroer is an Assistant Professor of Industrial Engineering and Operations Research at Columbia University, as well as a member of the Data Science Institute at Columbia. His research interests are at the intersection of operations research, economics, and computation, with a focus on how optimization and AI methods enable large-scale economic solution concepts. He obtained his Ph.D. in computer science from Carnegie Mellon University, and spent a year as a postdoc with the Economics and Computation team at Facebook Research. He is the recipient of an ONR Young Investigator award and an NSF CAREER award.

Matthew Leisten

Economic Inequality and Market Power


I develop a framework to flexibly and tractably model the joint effects of economic inequality and market power on prices and consumer welfare in a single market. The key is to characterize consumers based on two dimensions: “wealth” and “tastes.” Increases in wealth causally increase demand by changing price sensitivity as consumers' shadow value for a dollar declines. Tastes affect demand as well, and may be correlated with wealth, but are not causally linked to wealth. I demonstrate the usefulness of the framework with two basic test cases. First, I augment my framework with a set of axioms that implies a measure of inequality-adjusted consumer surplus that diverges from the usual measure employed in industrial organization. Second, I study the causal effect of distributions over wealth on monopoly markups and provide sufficient conditions for increases in inequality to cause markups to increase.
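As a purely illustrative companion to the second test case, the toy computation below (not the talk's model; the demand form, wealth distributions, and parameters are all assumptions) shows how changing wealth dispersion, holding mean wealth fixed, can move the monopoly price when price sensitivity declines in wealth.

```python
import numpy as np

# Toy illustration: consumers with wealth w have price sensitivity alpha(w) = 1/w
# and logit purchase probability exp(tau - alpha(w) * p) / (1 + exp(...)).
def demand(price, wealth, taste=1.0):
    alpha = 1.0 / wealth
    u = taste - alpha * price
    return np.mean(np.exp(u) / (1.0 + np.exp(u)))

def monopoly_price(wealth, grid=np.linspace(0.01, 10, 2000)):
    profits = [p * demand(p, wealth) for p in grid]   # zero marginal cost
    return grid[int(np.argmax(profits))]

equal   = np.full(1000, 2.0)                              # everyone has wealth 2
unequal = np.r_[np.full(900, 1.0), np.full(100, 11.0)]    # same mean wealth, more inequality

print("monopoly price, equal wealth:  ", round(monopoly_price(equal), 2))
print("monopoly price, unequal wealth:", round(monopoly_price(unequal), 2))
```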


Bio: Matthew Leisten is a staff economist at the Federal Trade Commission working on antitrust. His research spans several topics in industrial organization and information economics, including algorithmic pricing and collusion, optimal dynamic regulation, and information acquisition by firms. His PhD is from Northwestern University.

Negin Golrezaei

Interpolating Item and User Fairness in Multi-Sided Recommendations


Today's online platforms rely heavily on algorithmic recommendations to bolster user engagement and drive revenue. However, such algorithmic recommendations can impact the diverse stakeholders involved, namely the platform, items (sellers), and users (customers), each with their own objectives. In such multi-sided platforms, finding an appropriate middle ground becomes a complex operational challenge. Motivated by this, we formulate a novel fair recommendation framework, called Problem (FAIR), that not only maximizes the platform's revenue, but also accommodates varying fairness considerations from the perspectives of items and users. Our framework's distinguishing trait lies in its flexibility: it allows the platform to specify any definitions of item/user fairness that are deemed appropriate, as well as decide the "price of fairness" it is willing to pay to ensure fairness for other stakeholders. We further examine Problem (FAIR) in a dynamic online setting, where the platform needs to learn user data and generate fair recommendations simultaneously in real time, two tasks that are often at odds. In the face of this additional challenge, we devise a low-regret online recommendation algorithm, called FORM, that effectively balances learning with performing fair recommendation. Our theoretical analysis confirms that FORM proficiently maintains the platform's revenue while ensuring the desired levels of fairness for both items and users. Finally, we demonstrate the efficacy of our framework and method via several case studies on real-world data.
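To give a feel for trading off revenue against item- and user-side fairness, here is a static toy linear program of my own (not Problem (FAIR) or the FORM algorithm; the parameters, exposure floors, and utility floors are assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# x[i, j] = probability of recommending item j to user type i (flattened row-major)
w = np.array([0.5, 0.5])                      # user-type proportions
r = np.array([[1.0, 0.6, 0.2],                # platform revenue r[i, j]
              [0.9, 0.5, 0.3]])
u = np.array([[0.2, 0.7, 0.9],                # user utility u[i, j]
              [0.3, 0.8, 0.6]])
item_floor = np.array([0.1, 0.2, 0.2])        # minimum expected exposure per item
user_floor = np.array([0.4, 0.4])             # minimum expected utility per user type

n_types, n_items = r.shape
c = -(w[:, None] * r).ravel()                 # maximize revenue -> minimize its negative

# Each user type's recommendation probabilities sum to one.
A_eq = np.zeros((n_types, n_types * n_items))
for i in range(n_types):
    A_eq[i, i * n_items:(i + 1) * n_items] = 1.0
b_eq = np.ones(n_types)

# Item fairness: sum_i w_i x[i, j] >= item_floor[j]; user fairness: sum_j u[i, j] x[i, j] >= user_floor[i]
A_ub, b_ub = [], []
for j in range(n_items):
    row = np.zeros(n_types * n_items)
    for i in range(n_types):
        row[i * n_items + j] = -w[i]
    A_ub.append(row); b_ub.append(-item_floor[j])
for i in range(n_types):
    row = np.zeros(n_types * n_items)
    row[i * n_items:(i + 1) * n_items] = -u[i]
    A_ub.append(row); b_ub.append(-user_floor[i])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), A_eq=A_eq, b_eq=b_eq)
print("revenue:", round(-res.fun, 3))
print("recommendation probabilities:\n", res.x.reshape(n_types, n_items).round(3))
```

In the dynamic setting described in the abstract, the user utilities u would be unknown and learned online, which is where the learning-versus-fairness tension arises.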


Bio: Negin Golrezaei is an Associate Professor of Operations Management at MIT Sloan School of Management and holds the KDD Career Development Professorship in Communications and Technology. Her research focuses on machine learning, statistical learning theory, mechanism design, and optimization algorithms, with applications in revenue management, pricing, and online markets. Previously, she worked as a postdoctoral fellow at Google Research in New York, collaborating with the Market Algorithm team on innovative mechanisms and algorithms for online marketplaces. She earned her BSc and MSc degrees in electrical engineering from Sharif University of Technology, Iran, and a PhD in operations research from USC in 2017. Negin also serves as an associate editor for journals like Production and Operations Management (POM), Operations Research Letters (ORL), and Naval Research Logistics (NRL). She has received numerous awards, including the 2021 Young Investigator Award at ONR, the 2018 Google Faculty Research Award, the 2017 George B. Dantzig Dissertation Award, and several others recognizing her excellence in research and teaching.

Nikhil Garg

Recommendations in High-stakes Settings: Diversity and Monoculture


Algorithmic recommendation systems -- historically developed for settings such as movies, songs, and media content -- are now well-integrated into online matching platforms for high-stakes settings such as labor, education, and dating. With this integration comes a renewed focus on challenges such as diversity (are you showing users a diverse set of options?) and monoculture (what are the consequences of everyone using the same algorithm?). I'll describe some of our work in this space, emphasizing how OR techniques are essential to designing more efficient and equitable algorithms for such platforms. Joint work with Kenny Peng and many others.


Bio: Nikhil Garg is an Assistant Professor of Operations Research and Information Engineering at Cornell Tech as part of the Jacobs Technion-Cornell Institute. He uses algorithms, data science, and mechanism design approaches to study democracy, markets, and societal systems at large. Nikhil has received the INFORMS George Dantzig Dissertation Award, an honorable mention for the ACM SIGecom Dissertation Award, and several other best paper awards, and was named to Forbes 30 Under 30 in Science.

Phebe Vayanos

Learning Optimal and Fair Policies for Allocating Scarce Housing Resources to People Experiencing Homelessness


We study the problem of allocating scarce housing resources of different types to individuals experiencing homelessness based on their observed covariates. We propose a framework for evaluating fairness in such resource allocation systems and present a set of incompatibility results that raise hard moral trade-offs. In view of these trade-offs, we devise an approach for eliciting the moral priorities of stakeholders and show promising results from our deployment on Amazon Mechanical Turk. We leverage administrative observational data from the Los Angeles Homeless Management Information System to learn a provably optimal online housing allocation policy that best aligns with stakeholder preferences and fairness constraints while satisfying resource limitations. Our policies simultaneously improve the fairness, efficiency, and transparency of the current system. This is work in partnership with LAHSA, the Los Angeles Homeless Services Authority.


Bio: Phebe Vayanos is an Associate Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California. She is also a Co-Director of CAIS, the Center for Artificial Intelligence in Society at USC. Her research is focused on Operations Research and Artificial Intelligence, and in particular on optimization and machine learning. Her work is motivated by problems that are important for social good, such as those arising in public housing allocation, public health, and biodiversity conservation. Prior to joining USC, she was a lecturer in the Operations Research and Statistics Group at the MIT Sloan School of Management and a postdoctoral research associate in the Operations Research Center at MIT. She is a recipient of the NSF CAREER award and the INFORMS Diversity, Equity, and Inclusion Ambassador Program Award.

Justin Brookman

Emerging Regulatory Approaches to AI


The rapid advent of generative AI has spurred conversations among policymakers about what to do --- if anything --- to restrict, delay, promote, explain, or even ban certain applications. The last year alone has seen a raft of different proposals from the White House, Congress, independent agencies, state legislatures, and perhaps most consequentially the European Union. This talk will look at some of the harms identified by consumer protection advocates that come from AI and discuss how some of the leading regulatory approaches would seek to constrain those harms while preserving the consumer benefits from the use of artificial intelligence.


Bio: Justin Brookman is the Director of Technology Policy for Consumer Reports. Justin is responsible for helping the organization continue its groundbreaking work to shape the digital marketplace in a way that empowers consumers and puts their data privacy and security needs first. This work includes using CR research to identify critical gaps in consumer privacy, data security, and technology law and policy. Justin also builds strategies to enable CR and partner organizations to evaluate the privacy and security of products and services.


Prior to joining CR, Brookman was Policy Director of the Federal Trade Commission’s Office of Technology Research and Investigation. At the FTC, Brookman conducted and published original research on consumer protection concerns raised by emerging technologies such as cross-device tracking, smartphone security, and the internet of things. He also helped to initiate and investigate enforcement actions against deceptive or unfair practices, including actions against online data brokers and digital tracking companies.


He previously served as Director of Consumer Privacy at the Center for Democracy & Technology, a digital rights nonprofit, where he coordinated the organization’s advocacy for stronger protections for personal information in the U.S. and Europe.

Rad Niazadeh

Markovian Search with Socially Aware Constraints


We study a general class of constrained sequential search problems for selecting multiple candidates from a pool that belongs to different societal groups. We focus on search processes under ex-ante constraints primarily motivated by inducing societally desirable outcomes, such as attaining demographic parity among different groups, achieving diversity through quotas, or subsidizing disadvantaged groups within budget. We start with a canonical search model, known as the Pandora's box model [Weitzman, 1979], under a single affine constraint on the probability of selection and inspection of each candidate. We show that the optimal policy for such a constrained problem retains the index-based structure of the optimal policy for the unconstrained one, but potentially randomizes between two policies that are dual-based adjustments of the unconstrained problem; thus, they are easy to compute and economically interpretable. Building on these insights, we consider the richer class of search processes, such as search with rejection and multistage search, that can be modeled by joint Markov scheduling (JMS) [Dumitriu et al., 2003; Gittins, 1979]. Imposing general affine and convex ex-ante constraints, we give a primal-dual algorithm to find a near-feasible and near-optimal policy. This algorithm, too, randomizes over index-based policies; this time, over a polynomial number of policies whose indices are dual-based adjustments to the Gittins indices of the unconstrained JMS. Our algorithmic developments, while involving many intricacies, rely on a simple yet powerful observation: there exists a relaxation of the Lagrange dual function of these constrained optimization problems that admits index-based policies akin to the original unconstrained ones. Using a numerical study, we investigate the implications of imposing various constraints, the price of imposing them in terms of utilitarian loss, and whether they induce their intended societally desirable outcomes. Our numerical results suggest a dichotomy: even with a moderate amount of inherent quality asymmetry and with strong notions of fairness such as demographic parity, the "price of fairness" is not drastic; yet, in a severely asymmetric population, imposing strong notions of fairness can lead to unintended consequences in terms of efficiency loss. Lastly, our numerical results suggest that imposing socially aware constraints, such as demographic parity or quotas, can even improve the true utility of the search as measured by long-term outcomes.
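For readers unfamiliar with the unconstrained benchmark, the sketch below computes Weitzman reservation indices and runs the classic index policy for the Pandora's box model [Weitzman, 1979] on a toy instance; the distributions and costs are made up, and the constrained, randomized dual-adjusted policies from the talk are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)

def weitzman_index(values, probs, cost, lo=0.0, hi=100.0, iters=60):
    """Reservation value sigma solving E[(V - sigma)^+] = cost, found by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        gain = np.sum(probs * np.maximum(values - mid, 0.0))
        lo, hi = (mid, hi) if gain > cost else (lo, mid)
    return (lo + hi) / 2

# Three boxes with discrete prize distributions and inspection costs (toy numbers).
boxes = [
    {"values": np.array([0.0, 10.0]), "probs": np.array([0.5, 0.5]), "cost": 1.0},
    {"values": np.array([2.0, 6.0]),  "probs": np.array([0.3, 0.7]), "cost": 0.5},
    {"values": np.array([4.0]),       "probs": np.array([1.0]),      "cost": 0.2},
]
for b in boxes:
    b["sigma"] = weitzman_index(b["values"], b["probs"], b["cost"])

# Index policy: open boxes in decreasing order of sigma; stop as soon as the best
# prize seen so far exceeds the next box's index, then keep the best prize.
order = sorted(boxes, key=lambda b: -b["sigma"])
best, total_cost = 0.0, 0.0
for b in order:
    if best >= b["sigma"]:
        break
    total_cost += b["cost"]
    best = max(best, rng.choice(b["values"], p=b["probs"]))

print("indices:", [round(b["sigma"], 2) for b in order])
print("selected prize:", best, "| inspection cost paid:", total_cost)
```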


Bio: Rad Niazadeh is an Assistant Professor of Operations Management and Asness Junior Faculty Fellow at The University of Chicago Booth School of Business. He is also part of the faculty at the Toyota Technological Institute at Chicago (TTIC) by a courtesy appointment. Prior to joining Chicago Booth, he was a visiting researcher at Google Research NYC and a postdoctoral fellow in the Computer Science Department at Stanford University. He finished his PhD in Computer Science (with a minor in Applied Mathematics) at Cornell University. Rad primarily studies the theory and applications of online algorithms and learning, as well as data-driven sequential decision making and mechanism design. His research aims to design market algorithms and mechanisms for real-time operations of online marketplaces and e-commerce platforms, as well as operations of governmental agencies and non-profit organizations. Rad has received several awards for his research, including the INFORMS Auctions and Market Design 2021 Rothkopf Junior Researcher Paper Award (first place), the INFORMS Revenue Management and Pricing Dissertation Award (honorable mention), the Service Science Best Student Paper Award (third place), and the Google PhD Fellowship (in Market Algorithms).

Chara Podimata

The Disparate Effects of Recommending to Strategic Users


Recommendation systems are pervasive in the digital economy. An important assumption in many deployed systems is that user consumption reflects user preferences in a static sense: users consume the content they like with no other considerations in mind. However, as we document in a large-scale online survey, users do choose content strategically to influence the types of content they get recommended in the future.


We model this user behavior as a two-stage noisy signalling game between the recommendation system and users: the recommendation system initially commits to a recommendation policy and presents content to users during a cold-start phase, which users strategically choose to consume in order to affect the types of content they will be recommended in the subsequent recommendation phase. We show that in equilibrium, users engage in behaviors that accentuate their differences from users with different preference profiles. In addition, (statistical) minorities, out of fear of losing exposure to their minority content, may not consume content that is liked by mainstream users. We next propose three interventions that may improve recommendation quality (both on average and for minorities) when taking strategic consumption into account: (1) adopting a recommendation policy that uses preferences from a prior, (2) communicating to users that universally liked ("mainstream") content will not be used as a basis for recommendations, and (3) serving content at the start that is sufficiently personalized yet expected to be liked. Finally, we describe a methodology to inform applied theory modeling with survey results.
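A deliberately stylized toy of my own (not the talk's signalling game; the utilities and the cold-start rule are hypothetical) illustrates the minority-user phenomenon described above: if the system simply recommends whichever content type was consumed more in the cold-start phase, a minority user who truthfully consumes abundant mainstream content they mildly like gets locked into mainstream recommendations, while strategically skipping it preserves their preferred niche content.

```python
# Toy illustration; all numbers and the recommendation rule are assumptions.
UTILITIES = {"niche": 1.0, "mainstream": 0.8}   # a minority user likes both, niche more

def recommended_type(cold_start_consumption):
    """System recommends the type consumed most during the cold-start phase."""
    counts = {t: cold_start_consumption.count(t) for t in UTILITIES}
    return max(counts, key=counts.get)

# Mainstream content is abundant in the cold-start slate; niche content is scarce.
truthful = ["niche", "mainstream", "mainstream"]   # consume everything you like
strategic = ["niche"]                              # skip mainstream to protect niche exposure

for label, behavior in [("truthful", truthful), ("strategic", strategic)]:
    rec = recommended_type(behavior)
    print(f"{label:9s} -> recommended: {rec}, per-item utility afterwards: {UTILITIES[rec]}")
```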



Bio: Chara is an Assistant Professor of OR/Stat at MIT. Her research interests are in incentive-aware machine learning, social aspects of computing, online learning, and mechanism design. Recently, Chara has started thinking about policy questions related to AI and recommendation systems. Before MIT, she was a FODSI postdoctoral fellow at UC Berkeley and received her PhD in EconCS from Harvard, advised by Yiling Chen. Outside of research, she spends her time adventuring with her pup (Terra), running, and crocheting.

Karmel S. Shehadeh

A Unified Framework for Analyzing and Optimizing a Class of Convex Inequity Measures


We present a unified framework for analyzing a new parameterized class of convex inequity measures suitable for optimization contexts. First, we introduce a new class of order-based inequity measures and discuss their properties. Then, we introduce our proposed class of convex inequity measures, discuss their properties in an absolute and relative sense, and derive an equivalent dual representation of these measures as a robustified order-based inequity measure over their dual sets. Importantly, this dual representation renders a unified mathematical expression and an alternative geometric characterization for convex inequity measures through their dual sets. In addition, we use the dual representation to develop a unified framework for optimization problems with a convex inequity measure objective or constraint, including reformulations and solution methods. Finally, we provide stability results on the choice of convex inequity measures in the objective of optimization models. Our numerical results demonstrate the computational efficiency of our proposed framework over existing ones.
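Since the talk's parameterized class is not reproduced here, the snippet below simply evaluates two classic convex, order-based inequity measures (the Gini mean difference and the range) on a hypothetical outcome vector, in both absolute and mean-normalized (relative) form, to fix ideas about the objects being optimized.

```python
import numpy as np

# Illustrative only: the talk defines its own parameterized class of convex
# inequity measures; these are just two standard convex examples.
def gini_mean_difference(x):
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :]).mean()

def range_deviation(x):
    x = np.asarray(x, dtype=float)
    return x.max() - x.min()

outcomes = np.array([2.0, 3.0, 5.0, 10.0])    # e.g., service levels across groups
for name, f in [("Gini mean difference", gini_mean_difference),
                ("range", range_deviation)]:
    absolute = f(outcomes)
    relative = absolute / outcomes.mean()     # relative version divides by the mean
    print(f"{name}: absolute={absolute:.3f}, relative={relative:.3f}")
```

In an optimization model, such a measure would typically appear either as a constraint (e.g., inequity(x) <= epsilon) or as a penalty term in the objective, which is the setting the unified framework above addresses.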


Bio: Dr. Karmel S. Shehadeh is an Assistant Professor of Industrial and Systems Engineering at Lehigh University. Before joining Lehigh, she was a Presidential and Dean Postdoctoral Fellow at the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. She holds a doctoral degree in Industrial and Operations Engineering from the University of Michigan, a master's degree in Systems Science and Industrial Engineering from Binghamton University, and a bachelor's in Biomedical Engineering from Jordan University of Science and Technology. Her methodological research expertise and interests include optimization under uncertainty and mixed-integer programming. Her application areas include healthcare operations and analytics, facility location, fair decision-making, and transportation. Professional recognition of her work includes an INFORMS Minority Issues Forum Paper Award (winner), the Junior Faculty Interest Group Paper Prize (finalist), the Service Science Best Cluster Paper Award (finalist), and Lehigh's Alfred Noble Robinson Faculty Award.

Adam Elmachtoub

Embedding Fairness into Pricing Algorithms


Price discrimination algorithms, which offer different prices to customers based on differences in their valuations, have become common practice. While they allow sellers to increase their profits, they also raise several fairness concerns, e.g., by charging higher prices (or denying access) to protected groups when they have higher (or lower) valuations than the general population. In this paper, we consider the problem of setting prices for different groups under fairness constraints. We consider different notions of fairness related to prices, access, and consumer surplus under two fundamental settings: an unconstrained monopolist and a vehicle sharing system.
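A minimal numerical illustration (my own toy with linear demand and zero cost, not the paper's model) of one price-fairness notion: constraining the gap between group prices and observing the profit impact.

```python
import numpy as np

# Two groups with linear demand d_g(p) = a_g - b_g * p and a price-fairness
# constraint |p1 - p2| <= delta. All parameters are illustrative assumptions.
a = np.array([10.0, 6.0])   # demand intercepts per group
b = np.array([1.0, 1.0])    # price sensitivities

def profit(p):
    return float(np.sum(p * np.maximum(a - b * p, 0.0)))

grid = np.linspace(0, 10, 201)
best = {}
for delta in [np.inf, 1.0, 0.0]:            # unconstrained, partial, uniform price
    cand = [(profit(np.array([p1, p2])), p1, p2)
            for p1 in grid for p2 in grid if abs(p1 - p2) <= delta]
    best[delta] = max(cand)

for delta, (pi, p1, p2) in best.items():
    print(f"delta={delta}: prices=({p1:.2f}, {p2:.2f}), profit={pi:.2f}")
```

With these toy numbers, the unconstrained prices are 5 and 3, while forcing a uniform price (delta = 0) yields a price of 4 for both groups and a modest profit loss; the talk studies this kind of trade-off under richer fairness notions and settings.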


Bio: Adam Elmachtoub is an Associate Professor of Industrial Engineering and Operations Research at Columbia University, where he is also a member of the Data Science Institute. His research spans two major themes: (i) designing machine learning and personalization methods to make informed decisions in industries such as retail, logistics, and travel, and (ii) pricing algorithms for modern e-commerce and service systems. He received his B.S. degree from Cornell and his Ph.D. from MIT, both in operations research. He spent one year as a postdoc at the IBM T.J. Watson Research Center working in the area of Smarter Commerce. He has received an NSF CAREER Award, an IBM Faculty Award, and the Great Teacher Award from the Society of Columbia Graduates, and was named to Forbes 30 Under 30 in Science.

Ozge Sahin

A Simple Way Towards Fair Assortment Planning: Algorithms and Welfare Implications


Online retailers and department stores function as marketplaces for millions of sellers, and consumers rely on the platform's assortment and display decisions to examine different sellers (products) and make purchase decisions. Traditionally, the primary objective of these marketplaces in assortment planning is to maximize revenue, which may create unfairness among sellers. To address this issue, we propose fairness constraints that ensure fair market exposure for all sellers. These constraints guarantee each seller a minimum market exposure, which may depend on the seller's reputation, product quality, and price, among other features. We show that the optimal solution with fairness constraints is to randomize over at most n nested assortments, where n is the number of sellers (or products), and that the optimal solution can be found in polynomial time. When there are business constraints, including a cardinality constraint on the assortments and a limit on the number of different assortments, we first characterize the structure of the optimal solution and then propose efficient heuristics. We further explore the impact of fairness constraints on consumer welfare and show that it always increases when such constraints are imposed. We also show that when fairness constraints induce new sellers to enter the platform, all involved parties may benefit, resulting in a win-win-win situation. Even when there is no new seller entry, we identify cases in which the total welfare improves.
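As a rough illustration of randomizing over nested assortments, here is a toy MNL instance of my own; the attraction weights, revenues, exposure floors, and the particular LP formulation are assumptions rather than the talk's exact model.

```python
import numpy as np
from scipy.optimize import linprog

# Products sorted by revenue; nested assortments S_k = {top k products}; choose a
# distribution over these assortments maximizing expected revenue subject to a
# minimum expected exposure (choice probability) for every product.
r = np.array([10.0, 7.0, 4.0])     # revenues, in decreasing order
v = np.array([1.0, 1.5, 2.0])      # MNL attraction weights (outside option weight = 1)
floor = np.array([0.05, 0.10, 0.15])

n = len(r)
assortments = [np.arange(k + 1) for k in range(n)]   # nested: {0}, {0,1}, {0,1,2}

# choice[k, i]: probability product i is chosen when assortment S_k is offered
choice = np.zeros((n, n))
for k, S in enumerate(assortments):
    denom = 1.0 + v[S].sum()
    choice[k, S] = v[S] / denom
revenue = choice @ r                                  # expected revenue of each assortment

# Maximize sum_k alpha_k * revenue_k s.t. sum_k alpha_k * choice[k, i] >= floor_i,
# with alpha a probability vector over the nested assortments.
res = linprog(c=-revenue,
              A_ub=-choice.T, b_ub=-floor,
              A_eq=np.ones((1, n)), b_eq=np.array([1.0]))
alpha = res.x
print("probabilities over nested assortments:", alpha.round(3))
print("expected revenue:", round(float(revenue @ alpha), 3))
```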


Bio: Ozge Sahin is a Professor of Operations Management and Business Analytics at the Johns Hopkins Carey Business School. She received her Ph.D. and M.S. degrees in Operations Research from Columbia University. She is the Faculty Director of Innovation Field Projects in the Johns Hopkins Carey Business School MBA Program. She teaches Operations Management, Business Analytics, and Advanced Business Analytics in the master's and MBA programs.


Her research interests include pricing, marketplace analytics with an emphasis on consumer behavior, and strategic capacity and supply chain management. Some of her recent research projects include analysis of pricing strategies with search costs, nonlinear promotion optimization, biases and heuristics for sequential decision problems, assortment optimization, and fairness. Ozge has published papers in academic journals including Management Science, Operations Research, and Manufacturing and Service Operations Management. She serves as an Associate Editor for Manufacturing and Service Operations Management, Operations Research, and Naval Research Logistics. She is also the Department Editor of the Revenue Management and Pricing area of the Decision Sciences Journal. She has served as a consultant to companies including Lucent Technologies, Amadeus SAS, and Amazon, and has been an Amazon Scholar in the Pricing Research and Machine Learning group at Amazon since 2019.