Coded Fairness Project | Master Thesis

Enabling a bias-sensitive development process of machine learning systems

––

Coded Fairness Project Overview

The Coded Fairness Project combines methods that promote bias-sensitive development of machine learning systems with discursive approaches on how such efforts can be supported within a company. The methods can then be applied in the form of a workshop by the people involved in a system's development. The enclosed booklet provides background information, hints and instructions to support the implementation.


Machine learning is being used in more and more industries to solve complex problems. And although machine learning often provides real added value, its use can also have undesirable side effects. Discrimination by algorithms is already an everyday problem: old prejudices deeply rooted in our society are transferred into a new medium and scaled. The cause lies in the biases found within the algorithms.

Our solution includes a toolkit, consisting of printable templates for method posters and an accompanying booklet to conduct a method workshop.

-> What are biases and how can the Coded Fairness Project help?

In addition to an analog implementation of the methods, we offer a digital one. By providing a Miro board template, we make it easy to conduct the methods not only collaboratively and digitally, but also remotely. (Miro is a collaborative online whiteboard application.) Both versions are available free of charge on our website and in the Miroverse, the official Miro template library.

Our Approach

To orient the toolkit, we derived four basic principles from our research: awareness, responsibility, inclusion and testing. Based on these principles, we developed various methods and iterated them through collaboration with experts from disciplines such as psychology and computer science, as well as through user testing.

The field of human-centered design encompasses many promising approaches and experiences that may also be applied to products developed in other disciplines. We want to utilize and bring together these different perspectives to add another piece to the puzzle surrounding biases in machine learning, because in our eyes this is an issue that needs to be addressed across disciplines.


Core Principles

The Coded Fairness Toolkit

Based on these core principles, we then developed the main component of our measures: the Coded Fairness Toolkit, a set of 14 methods designed to help developers prevent biases in their systems.

Coded Fairness Project Packaging

The methods draw on a variety of disciplines, including psychology, sociology, computer science, human-centered design and futuring methods. By combining a factual and a personal approach within the methods, a new, productive level of bias prevention can be reached.


Coded Fairness Project Methods
-> Analog method posters and collaborative Miro board template to conduct the methods digitally

Booklet

The booklet enclosed with the methods provides the facilitator with important background information, hints and instructions for the individual methods. By including recommendations for both the opening and the closing of the workshop, we enable even inexperienced facilitators to conduct it.


Booklet

Website

Website and Communication

On the landing page we want to draw attention to the issue and dangers of biases in machine learning systems. The website serves both as a communication tool that shows different examples of discrimination by algorithms and as a product page. All assets of the Coded Fairness Toolkit can be downloaded there for free in English and German. -> codedfairnessproject.com

-> Mockup: Apple iPhone 11 & Macbook Pro Mockup (PSD) - www.unblast.com

Creating Motivation and Validation

However, companies often lack the motivation to fix biases: because doing so is costly and time-consuming, it rarely offers an economic reward. With two labels and employee certificates, we therefore want to create a basis for discussion on how companies can be motivated not only to implement and work with our methods, but also to address the general issue of harmful biases in machine learning.

-> Mockup: Flyer psd created by CosmoStudio - www.freepik.com

Certificate

Concept Overview

Coded Fairness Project Method Modules
-> Methods and their modules

Unlike most of the solutions available on the market, the Coded Fairness Toolkit is based on an approach that engages people while offering concrete measures that can be implemented. By linking these two dimensions, the toolkit enables sustainable growth in the development of fair algorithms. The modular composition of the toolkit also allows for individual application.


Coded Fairness Project Booklet
-> Coded Fairness Toolkit – Booklet

The methods set out to increase developers' awareness of biases and convey the importance of a responsible and bias-sensitive approach. This may entail making the process more inclusive by diversifying the team, but also encouraging conversations with users that can help uncover possible injustices. Finally, strategies to continuously examine the algorithm for harmful behaviour are provided and explained.
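As an illustration of what such a continuous examination could look like in code, the following minimal Python sketch checks a model's binary predictions for demographic parity between two groups. The function, the sample data and the 0.1 tolerance are hypothetical assumptions for this example and are not part of the toolkit itself.

    # A minimal sketch of one possible automated fairness check.
    # Assumes binary predictions (1 = favourable outcome) and a single
    # protected attribute; the threshold of 0.1 is purely illustrative.

    def demographic_parity_difference(predictions, groups):
        """Largest gap in positive prediction rates across groups."""
        rates = {}
        for group in set(groups):
            members = [p for p, g in zip(predictions, groups) if g == group]
            rates[group] = sum(members) / len(members)
        values = list(rates.values())
        return max(values) - min(values)

    # Hypothetical predictions for applicants in groups "a" and "b".
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    gap = demographic_parity_difference(preds, groups)
    if gap > 0.1:  # tolerance is context-dependent and must be set by the team
        print(f"Warning: demographic parity gap of {gap:.2f} exceeds tolerance")

In practice, a check like this could run automatically with every retraining, flagging when the gap between groups exceeds what the team has deemed acceptable.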

Coded Fairness Project Components
-> Coded Fairness artifacts overview (Mockup: Apple iPhone 11 & Macbook Pro Mockup (PSD) - www.unblast.com; Frame Mockup: Flyer psd created by CosmoStudio - www.freepik.com)

Our set of methods and the other artifacts are presented in the context of the fictitious organisation "Coded Fairness Project".

Coded Fairness Project Concept Overview
-> Coded Fairness Project – Concept and Overview

Coded Fairness Project Method Root Tree
-> Method root tree – development of the method versions

Work Process Impressions

For our research and the evaluation of our methods, we spoke with a wide range of experts, as we were particularly interested in feedback from different disciplines: for instance psychology, data science, workshop design and ML fairness. The versions of our toolkit were developed iteratively. Through expert interviews and user testing, we gathered extensive feedback, which we used to adapt, extend or partially discard methods.

The workshop design itself was also crafted carefully. Here we focused on different aspects, such as the positioning of the methods within the workshop, the workshop experience through the creation of a safe space, and the opening and closing exercises. Furthermore, we wanted to create easy-to-use toolkit components and design a great user experience for both the workshop participants and the facilitator.

Project Info

––

Team

Mike Lehmann
Vera Schindler-Zins
Marina Rost

General

3rd semester (M.A.)
Summer semester 2021

Master Thesis

My Role

User Research
Product Concept
Strategy
UX Design
Layout and illustrations

Supervision

Prof. Benedikt Groß
Florian Geiselhart