Artificial intelligence has established itself as the leading emerging technology of our time. As a result, industries of all kinds now deploy machine learning algorithms alongside human workers to complete job-related tasks, and law enforcement is no exception.
Primarily, the justice system uses risk assessment tools to help judges determine sentence lengths and probation options. Law enforcement also utilizes these tools to predict and prevent criminal activity through high-tech image detection and facial recognition systems. ML models can even be trained to predict possible recidivism among convicts.
These advancements can potentially change how the entire justice system functions. But decision-making in law enforcement carries immense responsibility, since these decisions determine the fates of individuals. Are current AI tools competent enough to make unbiased and fair predictions? Do they actually help or hinder the process? Read on to find out.
The historical development of Artificial Intelligence (AI) in criminal justice marks a significant transition in how justice systems operate and manage public safety. AI’s journey began in the mid-1950s, when John McCarthy, often credited as the father of AI, defined it as “the science and engineering of making intelligent machines.” This foundational concept set the stage for AI’s role in transforming law enforcement and criminal justice over the decades. AI’s capacity for pattern recognition, a critical aspect of human intelligence, has been particularly influential in applications ranging from crime prediction to public safety and policing.
Examples of AI in law enforcement can be grouped into four main categories. These include:
Supporters of these AI models claim that criminal justice decision-making models can produce fairer outcomes and correct flaws in the traditional judicial system. For instance, these models consider the offender’s age, sex, nationality, criminal history, type of crime, and probation status. The model is then trained to predict recidivism: whether the offender is likely to relapse and commit another crime.
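To make the approach concrete, below is a minimal sketch of such a recidivism classifier in Python. The dataset, feature names, and labels are hypothetical stand-ins for a subset of the factors listed above; production risk assessment tools are proprietary and trained on far larger datasets.

```python
# A minimal, hypothetical recidivism classifier.
# All data below is made up for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Each row is one offender; "reoffended" is the training label.
df = pd.DataFrame({
    "age":             [19, 34, 27, 45, 52, 23],
    "prior_offenses":  [3, 0, 1, 5, 0, 2],
    "violent_offense": [1, 0, 0, 1, 0, 1],
    "on_probation":    [1, 0, 1, 1, 0, 0],
    "reoffended":      [1, 0, 0, 1, 0, 1],
})

model = LogisticRegression().fit(df.drop(columns="reoffended"), df["reoffended"])

# Score a new case: the predicted probability serves as a "risk score".
new_case = pd.DataFrame([{"age": 30, "prior_offenses": 2,
                          "violent_offense": 0, "on_probation": 1}])
print(f"Predicted recidivism risk: {model.predict_proba(new_case)[0, 1]:.2f}")
```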
The most prominent benefits of deploying AI in the justice system include the following:
The integration of Machine Learning (ML) and Artificial Intelligence (AI) in the justice system, particularly through Risk Assessment Instruments (RAIs), is transforming judicial decision-making. RAIs such as the Public Safety Assessment (PSA) use algorithms to predict a defendant’s potential for future misconduct, influencing crucial decisions like pretrial incarceration. These tools assess factors like age and past misconduct to generate risk scores, which judges use to set release conditions. AI in criminal justice, including RAIs, promises greater consistency, accuracy, and transparency; studies have shown that statistical models can reduce detention rates without increasing pretrial misconduct, promoting uniformity in judicial decisions.
However, the use of AI in criminal justice faces significant challenges, including potential biases, lack of individualization, and transparency issues. In Loomis v. Wisconsin, for instance, the defendant challenged the use of the COMPAS RAI in sentencing, raising concerns about opaque algorithms and biased data. If the historical data used to train an RAI is biased, the AI may perpetuate those biases, which makes the careful selection of training outcomes essential. To address these challenges, experts recommend maintaining human oversight of AI-assisted decisions, ensuring transparency and fairness, and continuously evaluating and refining AI algorithms, especially those used to predict the effectiveness of supportive interventions that could mitigate misconduct risks.
Tools that assist decision-makers in the justice system have been in use since the 20th century. Known as risk assessment instruments (RAIs), these algorithms predict which convicts pose the greatest threat to society and whether they are likely to re-offend. The resulting data is later used in court hearings for sentencing, supervision, or release decisions.
Below are some examples of widely used risk assessment tools in the justice system:
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a risk assessment tool that uses artificial intelligence to predict violent recidivism, general recidivism, and pretrial release risk. It has also been used in general sentencing decisions, such as determining the length of a sentence.
Several jurisdictions use COMPAS, including New York, California, and Broward County, Florida. According to the software’s official guide, COMPAS takes the following factors into account to determine an offender’s likelihood of committing another crime:
The Public Safety Assessment (PSA) is an RAI used to predict an individual’s risk of future misconduct. It helps courts decide whether to detain a defendant before trial.
The algorithm uses several factors, including the individual’s age and history of misconduct, to produce three risk scores: the risk of committing a new crime, the risk of committing a new violent crime, and the risk of failing to appear for a court hearing. The system doesn’t interview the defendant personally; instead, it answers nine factor questions by looking at existing court records.
These scores are then translated into release-condition recommendations, with higher risks corresponding to stricter release conditions. However, the judge is still empowered to make the final decision regardless of the algorithm’s output.
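Here is a minimal sketch of how such a score-to-recommendation mapping might look. The 1–6 scale and the condition tiers below are illustrative assumptions, not the PSA’s actual decision framework.

```python
# Hypothetical mapping from PSA-style risk scores to release conditions.
# The cut-offs and tiers below are invented for illustration.
def recommend_conditions(fta_score: int, nca_score: int) -> str:
    """Map failure-to-appear and new-criminal-activity scores (1-6) to a tier."""
    worst = max(fta_score, nca_score)
    if worst <= 2:
        return "release on recognizance"
    if worst <= 4:
        return "release with supervision (court-date reminders, check-ins)"
    return "strictest conditions; refer to a detention hearing"

# The output is only a recommendation: the judge makes the final call.
print(recommend_conditions(fta_score=2, nca_score=5))
```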
The Level of Service/Case Management Inventory (LS/CMI) is a data processing system that helps organize and store information about an offender’s case and supervision. The assessment consists of 43 questions covering the individual’s criminal history and related risk factors. Justice system employees use this data to set levels of probation, parole supervision, and prison sentences.
Current risk assessment tools have caused some controversy, with several experts claiming that they exacerbate the bias already prevalent in decision-making. The problem is that ML models are trained on historical data, such as criminal records and information about previous court cases. If this data contains biased human decisions from the past, the algorithm inherits that bias, gets trained on faulty data, and produces biased results.
One of the most prominent examples of biased decision-making is the discrimination against African-Americans in convictions for marijuana possession. Research estimates that all racial groups consume marijuana at roughly equal rates, yet African-Americans have been charged at much higher rates than other demographics. A model that relies on these historical records will therefore unfairly place African-Americans in a high-risk group.
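The toy simulation below illustrates the mechanism. Both groups offend at the same (made-up) underlying rate, but one group’s offenses are recorded far more often; a model trained on those arrest records then reproduces the enforcement disparity as a “risk” gap.

```python
# Toy simulation of label bias: equal offense rates, unequal arrest rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
offense = rng.random(n) < 0.10       # identical true offense rate (10%)

# Biased labels: group A's offenses are recorded four times as often.
record_rate = np.where(group == 0, 0.8, 0.2)
arrested = offense & (rng.random(n) < record_rate)

# A model trained on arrest records learns the enforcement bias,
# not the underlying behavior.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
probs = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted risk -- group A: {probs[0]:.3f}, group B: {probs[1]:.3f}")
```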
The limitations of machine learning algorithms include two types of biases:
The existence of bias leaves the true impact of AI-assisted tools somewhat ambiguous. Critics also present other arguments, specifically against COMPAS, calling it incompetent for the following reasons:
Artificial Intelligence (AI) has significantly transformed the legal landscape, especially in legal research, evidence analysis, and digital forensics. The roots of AI in the legal field date back to the early 1960s, with the development of early systems for legal document review and information retrieval. Modern applications include Natural Language Processing (NLP) for efficiently extracting relevant information from legal databases, improving the speed and accuracy of legal research. This represents a major shift toward more data-driven and efficient legal processes.
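As a rough illustration of the retrieval idea, the sketch below ranks a tiny, made-up corpus of case summaries against a query using TF-IDF similarity. Real legal research systems rely on far larger databases and more sophisticated NLP models, so this is only a sketch of the principle.

```python
# A minimal sketch of keyword-based legal document retrieval using TF-IDF.
# The corpus and query are hypothetical stand-ins for a legal database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Defendant challenged the use of a risk assessment algorithm at sentencing.",
    "The court held that pretrial detention requires an individualized hearing.",
    "Probation conditions were modified after a supervision review.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(corpus)

query = "algorithmic risk assessment in sentencing"
query_vec = vectorizer.transform([query])

# Rank documents by cosine similarity to the query, best match first.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {corpus[idx]}")
```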
The concept of AI judges and legal assistants has evolved from a speculative idea to a practical reality, offering the potential to expedite legal proceedings and reduce human biases in decision-making. However, this integration raises critical issues such as accountability in AI-driven decisions and the potential displacement of human legal professionals. While AI can streamline legal processes, it is essential that it complements human expertise. Human judges continue to play a vital role, providing context, moral and ethical judgment, and interpreting nuances that AI lacks. This human touch is particularly crucial in intellectual property law, where the understanding of creators’ and inventors’ rights and societal values is key. Human judges are also essential in overseeing AI-driven decisions, ensuring fairness and maintaining public trust in the legal system.
Addressing bias and ethical considerations in the application of Artificial Intelligence (AI) in criminal justice is a critical challenge. AI systems, including those used in crime prediction, public safety, law enforcement, and policing, are only as good as the data they’re trained on. Consequently, there’s a risk that these systems may perpetuate existing biases and discriminatory practices, leading to unfair outcomes, especially for marginalized groups. The lack of transparency in AI decision-making processes further complicates identifying and correcting errors or biases.
The ethical debate surrounding AI in criminal justice also involves the potential for these technologies to perpetuate or exacerbate biases from the past. PredPol, a machine-learning algorithm used to classify areas as “high-risk” for crime, inadvertently focused on predominantly Latino and Black neighborhoods, highlighting the risk of reinforcing racial profiling in policing. Similarly, COMPAS has been scrutinized for producing biased risk assessments that disproportionately impact Black defendants. Such issues underscore the need for careful reconsideration and reform of AI tools in criminal justice to prevent the continuation of existing systemic biases.
The future of machine learning algorithms in the justice system remains uncertain. But if everyone involved, from executive management to software developers, works to eliminate the current challenges, ML models could truly revolutionize decision-making in the courts. Here are four recommendations that could help strengthen the role of AI in criminal justice:
Artificial intelligence in criminal justice is used to prevent crime and ensure public safety. Facial recognition systems and other advanced technologies are now part of many police departments’ investigative toolkits, and courts use risk assessment tools that help judges determine sentencing terms, bail, and probation.
However, the quality of these tools remains hotly debated. Critics claim that the algorithms are trained on biased data and therefore produce discriminatory outcomes. Yet if designed properly, these tools can help people make informed decisions and restore equity in the justice system.
Try our real-time predictive modeling engine and create your first custom model in five minutes – no coding necessary!