Stanford Computational Policy Lab
Defining and Designing Fair Algorithms

Machine learning algorithms are increasingly used to guide decisions by human experts, including judges, doctors, and managers. Researchers and policymakers, however, have raised concerns that these systems might inadvertently exacerbate societal biases. To measure and mitigate such potential bias, there has recently been an explosion of competing mathematical definitions of what it means for an algorithm to be fair. But there's a problem: nearly all of the prominent definitions of fairness suffer from subtle shortcomings that can lead to serious adverse consequences when used as an objective. In this tutorial, we illustrate the problems that lie at the foundation of this nascent field of algorithmic fairness, drawing on ideas from machine learning, economics, and legal theory. In doing so, we hope to offer researchers and practitioners a path forward for the field.
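To make the tension between competing definitions concrete, here is a minimal sketch (not from the tutorial itself) that computes two widely used fairness criteria on a hypothetical toy dataset: demographic parity (equal positive-decision rates across groups) and false-positive-rate parity (an error-rate criterion). All names and data below are illustrative assumptions.

```python
def rate(flags):
    """Fraction of truthy values in a list."""
    return sum(flags) / len(flags)

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups 'a' and 'b'."""
    a = [d for d, g in zip(decisions, groups) if g == "a"]
    b = [d for d, g in zip(decisions, groups) if g == "b"]
    return abs(rate(a) - rate(b))

def fpr_gap(decisions, outcomes, groups):
    """Absolute difference in false positive rates between groups 'a' and 'b'."""
    def fpr(grp):
        # Decisions made for individuals whose true outcome was negative.
        neg = [d for d, y, g in zip(decisions, outcomes, groups)
               if g == grp and y == 0]
        return rate(neg)
    return abs(fpr("a") - fpr("b"))

# Hypothetical data: eight individuals, two groups with different base rates.
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
outcomes  = [1, 1, 1, 0, 1, 0, 0, 0]   # true outcome
decisions = [1, 1, 1, 0, 1, 0, 0, 0]   # a perfectly accurate classifier

print(demographic_parity_gap(decisions, groups))   # 0.5 (violated)
print(fpr_gap(decisions, outcomes, groups))        # 0.0 (satisfied)
```

Even a perfectly accurate classifier satisfies error-rate parity here while violating demographic parity, simply because the groups' base rates differ. This kind of incompatibility is one example of the shortcomings the tutorial examines.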


Presented at EC 2018 and ICML 2018.

View on Google Slides to download ("File" > "Download as...").


- Marissa Gerchick, Sam Corbett-Davies, Elan Dagenais, and Sharad Goel
- Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Proceedings of the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2017)
- Sam Corbett-Davies, Sharad Goel, and Sandra González-Bailón. The New York Times
- Sam Corbett-Davies, Emma Pierson, Avi Feller, and Sharad Goel. The Washington Post
- Camelia Simoiu, Sam Corbett-Davies, and Sharad Goel. Annals of Applied Statistics, Vol. 11, 2017