Fairness in Design

As the artificial intelligence (AI) industry booms and its systems increasingly shape our lives, we are beginning to realize that these systems are not as impartial as we once thought. Even though machines make seemingly logical decisions, biases and discrimination can creep into the data and models and cause harmful outcomes. With our tool, we aim to lower the barrier to bringing fairness into the design discussion, so that more design teams can make better, more informed decisions about fairness in their application scenarios.

The FID online tool can be used during requirements engineering to help the design team envision potential stakeholder scenarios pertaining to fairness notions.

A demo video can be found at https://youtu.be/nnowNLss_wQ.

In this section, we describe the two types of fairness, namely group and individual fairness, and walk through the use of FID step by step.
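As a rough formalization (our notation, following common usage in the fairness literature rather than the cards themselves): group fairness requires that some statistic of the model's predictions be equal across protected groups, while individual fairness requires that similar individuals be treated similarly.

```latex
% Group fairness, illustrated with statistical parity: the positive
% prediction rate is equal across values of the protected attribute A.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \forall a, b

% Individual fairness, as in Dwork et al.'s fairness through awareness:
% individuals close under a task-specific similarity metric d receive
% close distributions over outcomes under the mapping M, measured by D.
D\bigl(M(x), M(x')\bigr) \le d(x, x')
```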

Step-by-Step Guide to FID

  1. The first step involves the whole team identifying the application scenario they will use the tool for. It can be the actual product they are working on or a fictional one. For the best effect, members should be specific and elaborate on the component of the product whose fairness they want to assess. This makes the fairness criteria more relatable to the application scenario and aids the discussion later. As the tool can be used at any stage of the design process, product teams may choose to focus on an area where they have found, or suspect, fairness issues. This allows them to think about fairness concerns in a more open environment, away from the technical details. Teams developing complex AI systems can also consider breaking the system down into individual modules before starting, so that the discussion stays focused and effective.

  2. The team then chooses the application card that is most relevant to their application scenario. There are five categories to choose from: 1) life-critical systems; 2) industrial and commercial uses; 3) office, home, and entertainment; 4) exploratory, creative, and collaborative applications; 5) sociotechnical applications. They are adapted from Shneiderman’s classification of usability motivations in the Human-Computer Interaction (HCI) literature. Similarly, in ethical AI design, different AI applications have different motivations and design requirements that affect how fairness is viewed. Hence, each card includes thought-provoking statements and questions that guide the team in their later evaluation of the fairness criteria, as well as application examples so that the team can more easily identify which category their application scenario belongs to. Fig 2 shows an example of an application card.

  3. With the application scenario and application card as background, the team is given the definitions of direct and indirect stakeholders to read and understand. Each member takes a reflection card and writes a stakeholder role in the space provided. Members have the flexibility to decide whether they want to write a direct or an indirect stakeholder role. This will be their identity for the remaining steps.

  4. Once every member has written their stakeholder role, each draws one fairness card and records it on the reflection card as well. This will be the fairness metric they reflect on later. There are ten fairness cards in total, corresponding to the most widely used definitions identified in the machine-learning literature. One side of each card contains the fairness metric’s name, definition, and mathematical formula; the flip side contains an illustration of how the metric works under a fixed scenario, the same across all cards. As the example card in Fig 2 shows, the scenario is a bank using a predictive model to determine whether a loan application is approved or denied (a code sketch after this guide illustrates two of the metrics on such a scenario). The cards use two color schemes, orange for group fairness and blue for individual fairness, and the fairness type is also labeled at the top right-hand corner to make the distinction clear. We included the mathematical formulae because some engineers may understand the metrics more easily by referring to them. Since the cards are drawn from the existing literature on fairness in AI, the tool is flexible enough to include new and better fairness metrics as they are operationalized.

  5. After every member has a fairness card, they write their thoughts on the reflection card. With the application card as a guide, they are asked to consider how the fairness metric would impact them from the perspective of their stakeholder role. Two questions on the card prompt the team in their thought process: what could go right, and what could go wrong, with the fairness metric they have drawn. The team must remember to think from the standpoint of their stakeholder role and not as themselves.
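To make the two most common card metrics concrete, below is a minimal sketch (ours, not part of FID; the data, function names, and the skewed approval rates are all illustrative assumptions) that measures statistical parity and equal opportunity on synthetic data mirroring the cards’ bank-loan scenario.

```python
import numpy as np

def statistical_parity_diff(y_pred, group):
    """Difference in approval rates between groups 0 and 1.
    Statistical parity asks P(Y_hat=1 | A=0) == P(Y_hat=1 | A=1);
    a value near zero indicates parity."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true positive rates (approval rates among
    applicants who actually repay) between groups 0 and 1.
    Equal opportunity asks
    P(Y_hat=1 | Y=1, A=0) == P(Y_hat=1 | Y=1, A=1)."""
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return tpr(0) - tpr(1)

# Toy loan data: y_true = applicant actually repays, y_pred = model
# approves the loan, group = a binary protected attribute.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
# A deliberately skewed model that approves group 0 more often.
y_pred = (rng.random(n) < np.where(group == 0, 0.55, 0.45)).astype(int)

print("statistical parity diff:", statistical_parity_diff(y_pred, group))
print("equal opportunity diff :",
      equal_opportunity_diff(y_true, y_pred, group))
```

Both functions return a signed gap between the two groups; a team could read a large gap as exactly the kind of disparity the corresponding card asks stakeholders to reflect on.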

Statistical Parity

Demographic Parity

Equal Opportunity

Equalized Odds

Test Fairness

Treatment Equality

Counterfactual Fairness

Fairness in Relational Domain

Fairness Through Awareness

Fairness Through Unawareness
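For reference, the following sketch shows how several of the listed metrics are commonly formalized in the machine-learning fairness literature; the cards’ exact formulas may differ, and counterfactual fairness and fairness in relational domains rest on causal machinery not compactly reproduced here (fairness through awareness was sketched earlier in this section).

```latex
% Notation: \hat{Y} predicted label, Y true label, S model score,
% A protected attribute, X the remaining features; FN_a and FP_a are
% the false negatives and false positives within group A = a.

% Statistical / demographic parity: equal positive prediction rates.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b)

% Equal opportunity: equal true positive rates.
P(\hat{Y} = 1 \mid Y = 1, A = a) = P(\hat{Y} = 1 \mid Y = 1, A = b)

% Equalized odds: equal true and false positive rates.
P(\hat{Y} = 1 \mid Y = y, A = a) = P(\hat{Y} = 1 \mid Y = y, A = b),
  \quad y \in \{0, 1\}

% Test fairness (calibration): equal outcome rates at each score value.
P(Y = 1 \mid S = s, A = a) = P(Y = 1 \mid S = s, A = b)

% Treatment equality: equal ratios of false negatives to false positives.
\frac{FN_a}{FP_a} = \frac{FN_b}{FP_b}

% Fairness through unawareness: the protected attribute is not an input.
\hat{Y} = f(X), \quad A \notin X
```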