Machine learning is being used in more and more industries to solve complex problems. Although it often adds real value, its use can also have undesirable side effects. Discrimination by algorithms is already an everyday problem; as a result, old prejudices deeply rooted in our society are transferred and scaled into a new medium. The reason for this is the biases found within the algorithms.
Our solution is a toolkit consisting of printable templates for method posters and an accompanying booklet for conducting a method workshop.
In addition to an analog implementation of the methods, we offer a digital implementation. By providing a Miro board template, we not only make it easy to conduct the methods collaboratively and digitally, but also to conduct them remotely. (Miro is a collaborative online whiteboard application.) Both versions are available free of charge on our website and the Miroverse, which is the official Miro template library.
To orient the toolkit, we derived four basic principles from our research: awareness, responsibility, inclusion, and testing. Based on these principles, we developed various methods, which we iterated through collaboration with experts from disciplines such as psychology and computer science, as well as through user testing.
The field of human-centered design encompasses many promising approaches and experiences that may also be applied to products being developed in other disciplines. We want to utilize and bring together these different perspectives to add another piece to the puzzle surrounding biases in machine learning - because in our eyes, this is an issue that needs to be covered across the spectrum.
Based on these core principles, we then developed the main component of our measures, the Coded Fairness Toolkit. This is a set consisting of a total of 14 methods designed to help developers prevent biases in their systems.
The methods draw on a variety of disciplines, including psychology, sociology, computer science, human-centered design and futuring methods. By combining a factual and a personal approach, the methods make bias prevention both rigorous and engaging.
The booklet enclosed with the methods provides the facilitator with important background information, hints and instructions for the individual methods. By providing recommendations for both the opening and ending of the workshop, we also enable even inexperienced facilitators to conduct our workshop.
On the landing page we want to draw attention to the issue and the dangers of biases in machine learning systems. The website serves both as a communication tool showing different examples of discrimination by algorithms and as a product page. All the assets of the Coded Fairness Toolkit can be downloaded there for free in English and German. -> codedfairnessproject.com
However, there is often a lack of motivation to fix biases: because of the high costs and time required, doing so rarely offers companies an economic reward. To address this, we propose two labels and an employee certificate as a basis for discussion on how companies can be motivated not only to implement and work with our methods, but also to approach the general issue of harmful biases in machine learning.
Unlike most of the solutions available on the market, the Coded Fairness Toolkit is based on an approach that engages people while offering concrete measures that can be implemented. By linking these two dimensions, the toolkit enables sustainable growth in the development of fair algorithms. The modular composition of the toolkit also allows for individual application.
The methods set out to increase the developers' awareness of biases and convey the importance of a responsible and bias-sensitive approach. This may entail making the process more inclusive by diversifying the team, but also encouraging conversations with users that can help uncover possible injustices. Finally, strategies to continuously examine the algorithm for harmful behaviour are provided and explained.
Our set of methods and the other artifacts are presented in the context of the fictitious organisation "Coded Fairness Project".
For our research and the evaluation of our methods, we spoke with a wide range of experts, as we were especially interested in feedback from different disciplines: psychology, data science, workshop design, and ML fairness. The versions of our toolkit were developed iteratively. Through expert interviews and user testing, we gathered extensive feedback, which we used to adapt, extend, or partially discard methods.
The workshop design itself was also carefully considered. Here we focused on different aspects, such as the positioning of the methods within the workshop, the workshop experience through the creation of a safe space, and the beginning and ending exercises. Furthermore, we wanted to create easy-to-use toolkit components and design a great user experience for both the workshop participants and the facilitator.
3rd semester (M.A.)
summer semester 2021
Layout and illustrations
Prof. Benedikt Groß