It may come as a surprise, but the original data accountability standard was set when President Richard Nixon signed the Fair Credit Reporting Act in 1970. Nixon, not otherwise known for his commitment to transparency, enacted into law the right for citizens to examine and challenge the data used to make algorithmic decisions about them.
In the decades since, algorithms have become ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to. Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality.
Earlier this year, ProPublica published Machine Bias, a revealing article on how software promoted by the Justice Department’s National Institute of Corrections and used across the US to predict future criminals is biased against black defendants.
“ProPublica obtained more than 7,000 risk scores and compared predicted recidivism to actual recidivism. The publication found the scores were wrong 40 percent of the time and were biased against black defendants, who were falsely labeled future criminals at almost twice the rate of white defendants.”
The Wisconsin Supreme Court addressed the use of such risk scores in sentencing this year: “The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a pre-sentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy.”
This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to ensure the appropriateness and accuracy of algorithmic results.