Stanford Computational Policy Lab
Blind Charging

Mitigating bias in charging decisions with automated race redaction.

Prosecutors have nearly absolute discretion to charge or dismiss criminal cases. There is concern, however, that these high-stakes judgments, like many other decisions in the criminal justice system, may suffer from explicit or implicit racial bias.

To reduce potential bias in charging decisions, we designed a new algorithm that automatically redacts race-related information from free-text case narratives. In a first-of-its-kind initiative, we deployed this algorithm at the San Francisco District Attorney’s Office to help prosecutors make race-blind charging decisions, where it is now being used to review incoming felony cases.
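The page does not spell out how the redaction works internally. As a rough illustration of the underlying idea, the sketch below replaces race-correlated spans in a free-text narrative (explicit race labels, physical descriptors, neighborhood names) with neutral placeholder tags. The term lists, placeholder names, and overall keyword-based approach are hypothetical simplifications for illustration only; the deployed system must handle far more than a fixed lexicon.

```python
# Hypothetical sketch of placeholder-based redaction of a case narrative.
# Term lists below are illustrative, NOT the lab's actual lexicon.
import re

EXPLICIT_RACE_TERMS = ["white", "black", "hispanic", "latino", "asian"]
PHYSICAL_DESCRIPTORS = ["dark-skinned", "light-skinned", "dreadlocks"]
NEIGHBORHOODS = ["Bayview", "Mission District"]  # race-correlated locations


def _pattern(terms):
    # Match whole words or phrases, case-insensitively.
    return re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                      re.IGNORECASE)


REDACTIONS = [
    (_pattern(EXPLICIT_RACE_TERMS), "[RACE/ETHNICITY]"),
    (_pattern(PHYSICAL_DESCRIPTORS), "[PHYSICAL DESCRIPTOR]"),
    (_pattern(NEIGHBORHOODS), "[LOCATION]"),
]


def redact(narrative: str) -> str:
    """Replace race-related spans in a free-text narrative with placeholders."""
    for pattern, placeholder in REDACTIONS:
        narrative = pattern.sub(placeholder, narrative)
    return narrative


if __name__ == "__main__":
    report = "Officers detained a Black male with dreadlocks near the Bayview."
    print(redact(report))
    # -> "Officers detained a [RACE/ETHNICITY] male with [PHYSICAL DESCRIPTOR]
    #     near the [LOCATION]."
```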

Our redaction algorithm obscures race-related information nearly to the theoretical limit: the redacted case files leak no more information about an individual's race than can be inferred from the alleged crime alone. We are working to expand to other jurisdictions in California in 2020, with an eye toward a wider open-source release of our tool.
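The page does not describe the evaluation protocol, but one way to make "leaks no more than the alleged crime alone" concrete is to compare how well race can be predicted from the redacted narratives versus from the charge category by itself. The sketch below is an assumed setup for illustration (binary race indicator, TF-IDF features, logistic regression, cross-validated AUC), not the lab's actual methodology.

```python
# Hypothetical leakage check: if the AUC from redacted text is no higher than
# the AUC from the charge category alone, the redacted narratives add little
# race-identifying information beyond the charge itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder


def race_predictability(redacted_texts, charge_categories, race_labels):
    """Return cross-validated AUC for predicting a binary race label from
    (a) the redacted narratives and (b) the charge category alone."""
    text_model = make_pipeline(TfidfVectorizer(),
                               LogisticRegression(max_iter=1000))
    auc_text = cross_val_score(text_model, redacted_texts, race_labels,
                               scoring="roc_auc", cv=5).mean()

    charge_model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                                 LogisticRegression(max_iter=1000))
    charges = [[c] for c in charge_categories]  # 2-D input for the encoder
    auc_charge = cross_val_score(charge_model, charges, race_labels,
                                 scoring="roc_auc", cv=5).mean()

    return auc_text, auc_charge
```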

A hypothetical example of our redaction algorithm obscuring information that could be used to infer an individual’s race.