Reminiscent of Minority Report, top investigative outlets have been reporting on statistical models used to predict recidivism rates of criminals within the United States (1, 2). Private firms with proprietary algorithms are being used in criminal courts around the country (1). The major implication of these risk assessments is their influence on sentencing guidelines (3). In an article by ProPublica, a company called Northpointe is described as providing risk assessments for judges in several states and jurisdictions. What the judges are presented with are categorical ratings such as “Low”, “Medium”, or “High” risk of recidivism (1). The article proceeds to explore the bias that is introduced along ethnic lines. Predictably, the system is negatively biased towards Black and Latino defendants and positively biased towards White defendants (1). Northpointe is not willing to share how its algorithms function, nor which data they were trained on (1). The algorithms, intended to make the justice system fairer, do just the opposite by introducing an ethnic bias.
It is conceivable that, given the right technical sophistication, data, and implementation, these algorithms could “accurately” predict the risk of recidivism. That is to say, the algorithms could be as accurate as any probabilistic model can be about a diverse and in many ways unpredictable population. In a reality where the rule of law states “innocent until proven guilty”, how can probabilities about the risk of recidivism enter into argumentation drawn along such thick and bold binary lines? How can a judge or jury reason about probability without really understanding it? When presented with words like “low”, “medium”, and “high” risk, it is questionable whether individuals in the courtroom are asking themselves where the boundaries between these categories lie, how confident the model is, and how representative the training data is of the given jurisdiction.
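To make that concern concrete, here is a minimal sketch of how a continuous risk score might be collapsed into the categorical labels a judge sees. The score scale and the cut points below are invented for illustration; Northpointe does not disclose its actual method.

```python
# Hypothetical sketch: the [0, 1] score scale and the cut points are
# assumptions for illustration, not Northpointe's undisclosed method.
def risk_label(score: float, low_cut: float = 0.33, high_cut: float = 0.66) -> str:
    """Map a continuous risk score in [0, 1] to a categorical label."""
    if score < low_cut:
        return "Low"
    if score < high_cut:
        return "Medium"
    return "High"

# Two defendants whose scores differ by a hair land in different buckets,
# and the label carries no information about the model's confidence or
# how representative its training data was.
print(risk_label(0.659))  # "Medium"
print(risk_label(0.661))  # "High"
```

The labels arrive in the courtroom stripped of everything a statistician would ask about: the underlying score, the placement of the boundaries, and the uncertainty around both.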
Statistical analyses have been used in courtrooms for some time now. DNA testing and forensic evidence are where statistics are used most frequently (4). Again, misinterpretation of the data is prevalent because judges and juries do not view the statistical evidence in the same light. Commonly, a fallacy called the “prosecutor’s fallacy” is committed. Say, for example, that there is a murder case on an island of 1,000 individuals. Police identify a DNA fragment that could be found in 0.4 percent of the population. A resident who has done a commercial DNA kit has his DNA subpoenaed, and the DNA is a match. The judge and jury conclude that since only 0.4 percent of the population could match this DNA, the probability that this individual is the murderer is 99.6 percent. This is a misinterpretation: with 1,000 individuals on the island, statistically about 4 of them could match the DNA fragment. Absent other evidence, the suspect is just one of those 4, which means the probability that he is the murderer is only 25 percent.
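The arithmetic of the island example can be written out directly. This sketch uses the numbers from the text; the uniform prior over matching individuals (no other evidence singling anyone out) is an assumption made for illustration.

```python
# Island example from the text: 1,000 residents, DNA fragment found in
# 0.4% of the population.
population = 1000
match_rate = 0.004

# Expected number of residents who would match the fragment.
expected_matches = population * match_rate  # 4.0

# Prosecutor's fallacy: treating the rarity of the match as the
# probability of guilt, P(guilty | match) = 1 - 0.004.
fallacious_p_guilty = 1 - match_rate

# Correct reasoning (assuming no other evidence): the suspect is just
# one of the ~4 expected matches, so P(guilty | match) = 1 / 4.
correct_p_guilty = 1 / expected_matches

print(f"Fallacious estimate: {fallacious_p_guilty:.1%}")  # 99.6%
print(f"Correct estimate:    {correct_p_guilty:.1%}")     # 25.0%
```

The fallacy is confusing the probability that a random innocent person matches (0.4 percent) with the probability that a matching person is innocent (75 percent here).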
Again, an argument could be made that judges and juries could learn about Bayesian inference and come to better interpret forensic evidence. But when the discussion of forensic evidence is reduced to a statement of “reasonable degree of scientific certainty”, the binary nature of the moral universe in which courtrooms exist is brought to the fore. There is guilt or innocence. Judges and juries have to stand behind the decisions they make. How they reason about the evidence in front of them has a lot to do with personal values and morals, and I would argue it can only temporarily deal with relative confidence. The sentence is a hard line and a declaration, whereas the evidence is only relative. They do not necessarily fit well together.
Do the ways in which evidence is evaluated in the courtroom reflect a culture, or are they a necessary consequence of the function of the judicial system, namely, to say who is guilty and who is innocent on incomplete and relative information?