
Exploring the Role of Artificial Intelligence in Modern Criminal Justice

Published: December 9, 2022
Editor: Ani Mosinyan, Plat.AI
Reviewer: Alek Kotolyan, Plat.AI

Artificial intelligence has established itself as the leading emerging technology and, to many, the future of humankind. As a result, various industries implement machine learning algorithms that work alongside human labor to complete job-related tasks, and law enforcement is no exception.


Primarily, the justice system uses risk assessment tools to help judges determine sentence length and probation options. Law enforcement also utilizes these tools to predict and prevent criminal activity using hi-tech image detection and facial recognition systems. ML models can even be trained to predict possible recidivism in convicts.

These advancements can potentially change how the entire justice system functions. But making decisions in law enforcement assumes immense responsibilities, as they decide the fates of individuals. Are current AI tools competent enough to make unbiased and fair predictions? Do they actually help or hinder the process? Read on to find out more.

Historical Development and Evolution of AI in Criminal Justice

The historical development and evolution of Artificial Intelligence (AI) in criminal justice have marked a significant transition in how justice systems operate and manage public safety. AI’s journey in this field began in the mid-1950s when John McCarthy, often credited as the father of AI, defined it as “the science and engineering of making intelligent machines.” This foundational concept set the stage for AI’s role in transforming law enforcement and criminal justice over the decades. AI’s capacity for pattern recognition, a critical aspect of human intelligence, has been particularly influential in various criminal justice applications, from crime prediction and public safety to law enforcement.

AI in Law Enforcement and Public Safety 

Examples of AI in law enforcement can be grouped into four main categories. These include:

  • Facial Recognition: These tools are used to ensure public safety through video and image analysis. High-resolution cameras can provide investigators with information about a person’s behavior. For example, Janus uses a robust model that connects to a video surveillance system. It transmits data from the AI camera to a centralized server warning of abnormal or dangerous behavior.
  • Crime Forecasting: Several law enforcement agencies use machine learning algorithms to predict the types of individuals most likely to commit crimes. For instance, AI-powered tools have become integral to the Chicago Police Department’s Violence Reduction Strategy. They gather information and construct social networks that help them identify potentially high-risk individuals.
    Several courts utilize these tools to assess risks associated with defendants, like the possibility of committing another crime or failing to appear in court. They detect patterns in human behavior that allow the court to make informed decisions about bail, sentencing, and parole. Even though the judge makes the final decision, the model’s outcome can guide them toward a more objective and less biased verdict.
  • Automated Threat Detection: Automated Threat Detection (ATD) refers to the use of technology, particularly artificial intelligence (AI) and machine learning (ML), to identify and assess potential threats without human intervention. This technology is increasingly important in various fields such as cybersecurity, public safety, and defense. 
  • AI in Incident Reporting and Analysis: This refers to the application of Artificial Intelligence technologies to improve the way incidents are reported, analyzed, and managed. This approach is increasingly being adopted across various sectors, including workplace safety, public health, law enforcement, and cybersecurity. AI in Incident Reporting and Analysis represents a significant advancement in how organizations handle safety and security challenges. It offers more efficient reporting, thorough analysis, predictive insights, and real-time decision-making support, contributing to overall risk reduction and improved safety outcomes. 

Supporters of these AI models claim that criminal justice decision-making models can produce fairer outcomes and eradicate flaws in the traditional judicial system. For instance, these models consider the age, sex, nationality, criminal history, type of crime, and probation status of the defendant. The model is then trained to predict recidivism: whether the defendant is likely to relapse and commit another crime.
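
To make this concrete, here is a minimal, hedged sketch of how such a model might score a case. The feature names, weights, and bias below are entirely invented for illustration and are not drawn from any real risk assessment instrument:

```python
import math

# Toy logistic model: invented features and coefficients, NOT a real RAI.
WEIGHTS = {
    "age": -0.04,               # older defendants tend to score lower
    "prior_convictions": 0.35,  # each prior conviction raises the score
    "violent_offense": 0.8,     # current charge involves violence (0 or 1)
}
BIAS = -1.0

def recidivism_probability(features: dict) -> float:
    """Map a feature dict to a probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

low = recidivism_probability({"age": 45, "prior_convictions": 0, "violent_offense": 0})
high = recidivism_probability({"age": 19, "prior_convictions": 4, "violent_offense": 1})
print(round(low, 3), round(high, 3))  # the second profile scores far higher
```

Real instruments are trained on large historical datasets rather than hand-set weights, but the principle is the same: case features go in, a risk estimate comes out.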

What Are the Benefits of AI in Law Enforcement?

The most prominent benefits of deploying AI in the justice system include the following:

  • Expediting Human Tasks and Increasing Accuracy. Machine learning algorithms can accelerate the processes performed by humans, helping to shortlist suspects. For instance, high-quality cameras can capture everything from drug trafficking to gunshots and report these crimes to the relevant authorities in real time. AI-powered tools are also deployed to detect online-related crimes, including financial fraud, phishing attacks, and involvement in the dark web.
  • Predicting Crime Patterns. Using vast amounts of data on past cases, ML models can predict a person’s tendency toward committing a future crime. A basic example is predicting the likelihood that a previously convicted felon will commit another crime. This information can then be used to set or deny bail and probation.
  • Protecting Critical Infrastructure. AI and machine learning algorithms are applied across critical infrastructures to reduce the risks of terrorist and hacker attacks on transportation, utility, or Internet systems. Edge computing, 5G, and the Internet of Things maximize efficiency and help prevent attacks that could damage the environment or threaten human lives.
  • Uncovering Criminal Networks. Finally, AI can help investigators analyze millions of bits of data to find patterns in images, language, and identities. This allows them to uncover networks of organized crime. AI tools can provide visual representations of organizational hierarchies and identify the nature of criminal networks in drug trafficking, human trafficking, counterfeiting, and weapons dealing.

Risk Assessment and Machine Learning in Justice Systems

The integration of Machine Learning (ML) and Artificial Intelligence (AI) in the justice system, particularly through Risk Assessment Instruments (RAIs), is transforming judicial decision-making. RAIs, such as the Public Safety Assessment (PSA), use algorithms to predict a defendant’s potential for future misconduct, influencing crucial decisions like pretrial incarceration. These tools assess factors like age and past misconduct to generate risk scores, which judges use to set release conditions. AI in criminal justice, including RAIs, promises enhanced consistency, accuracy, and transparency; studies have shown that statistical models can reduce defendant detention rates without increasing pretrial misconduct, thus promoting uniformity in judicial decisions.

However, the use of AI in criminal justice faces significant challenges, including potential biases, lack of individualization, and transparency issues. For instance, in Loomis v. Wisconsin, the defendant challenged the use of the COMPAS RAI in sentencing, raising concerns about opaque algorithms and data biases. Moreover, if historical data used in RAIs are biased, AI may perpetuate these biases, necessitating careful selection of outcomes for training AI models. To address these challenges, experts recommend maintaining human oversight in AI-assisted decisions, ensuring transparency, fairness, and continuous evaluation and refinement of AI algorithms, especially to predict the effectiveness of supportive interventions that could mitigate misconduct risks.

What Are Risk Assessment Tools?

Tools that assist decision-makers in the justice system have been in use since the early 20th century. Also called risk assessment instruments (RAIs), these are algorithms that predict which convicts pose the greatest threat to society and whether they will re-offend in the future. This data is later used in court hearings for sentencing, supervision, or release decisions.

Below are some examples of widely used risk assessment tools in the justice system:

COMPAS

Correctional Offender Management Profiling for Alternative Solutions (COMPAS) is a risk assessment tool that uses artificial intelligence to predict violent recidivism, general recidivism, and pretrial release risk. It has also been used for general sentencing in courts, such as determining the length of the sentence.

Several states use COMPAS in their jurisdictions, including New York, California, and Florida’s Broward County. According to the software’s official guide, COMPAS takes into account the following factors to determine an offender’s likelihood of committing another crime:

  • Pretrial Release – based on current and pending charges, prior arrest and pretrial history, employment status, community ties, and substance abuse.
  • General Recidivism – based on criminal history and associates, drug involvement, and juvenile delinquency.
  • Violent Recidivism – based on the history of violence and non-compliance, vocational/educational problems, the person’s age at intake, and age at first arrest.

Public Safety Assessment (PSA)

Public Safety Assessment is an RAI used to predict an individual’s future risk of misconduct. It helps courts decide whether to incarcerate the offender before trial. 

The algorithm uses several factors, including the individual’s age and history of misconduct, to produce three risk scores: the risk of committing a new crime, the risk of committing a new violent crime, and the risk of failing to appear for a court hearing. The system doesn’t interview the offender personally; instead, it answers nine questions by looking at previous court records.

These risks are later translated into release-condition recommendations, with higher risks corresponding to stricter release conditions. However, the judge is still empowered to make the final decision regardless of the algorithm’s outcome. 
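
A hedged sketch of that score-to-conditions translation: the 1-6 scales mirror the PSA’s, but the thresholds and condition labels below are invented for illustration.

```python
# Sketch of translating risk scores into release-condition recommendations.
# The 1-6 scale mirrors the PSA's; thresholds and labels here are invented.

def release_recommendation(new_crime: int, fail_to_appear: int) -> str:
    """Map two 1-6 risk scores to a recommendation; higher risk, stricter conditions."""
    combined = max(new_crime, fail_to_appear)  # take the worse of the two scores
    if combined <= 2:
        return "release on recognizance"
    if combined <= 4:
        return "release with supervision (check-ins, reminders)"
    return "release with intensive conditions or detention hearing"

print(release_recommendation(1, 2))
print(release_recommendation(3, 5))
```

However the mapping is tuned in practice, the judge remains free to depart from the recommendation.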

Level of Service/Case Management Inventory (LS/CMI)

The LS/CMI system helps organize and store all information about an offender’s case and supervision. The inventory involves 43 questions covering the individual’s criminal history and related risk factors. Justice system employees use this data to set levels of probation, parole supervision, and prison sentences.

Challenges of Criminal Justice Risk Assessment Tools

Current risk assessment tools have caused some controversy, with several experts claiming that they exacerbate the bias already prevalent in decision-making. The problem is that ML models are trained on historical data, such as criminal records and information from previous court cases. If this data contains biased decisions made by humans in the past, the algorithm will inherit that bias and produce biased results.

One of the most prominent examples of biased decision-making is the disproportionate conviction of African-Americans for marijuana possession. Research estimates that all racial groups consume marijuana at roughly equal rates, yet African-Americans have been charged at much higher rates than other demographics. Therefore, a model that relies on historical records will unfairly place African-Americans in a high-risk group.

The limitations of machine learning algorithms include two types of biases:

  • Bias and data – biased data produces biased outcomes. While the machine learning model may detect some patterns correlated with crime, these patterns may not accurately represent the actual cause of the crime. Most often, these patterns reflect the existing injustice in law enforcement. As long as machine learning algorithms rely only on past records, demographics that have historically been discriminated against may suffer from injustices.
  • Bias and humans – AI can reinforce human biases. Another problem emerges when looking at the same issue from the opposite direction: a risk assessment tool may produce a result that validates a judge’s preexisting assumptions. And since judges can base their decisions on the algorithm’s output, implicit biases end up reinforced rather than checked.
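
The “bias and data” problem can be made concrete with a toy simulation: suppose two groups offend at the same underlying rate, but one has historically been policed twice as heavily. A model that learns “risk” from arrest records will mirror the enforcement disparity, not actual behavior. All numbers below are invented:

```python
from collections import Counter

# Both groups offend at the same true rate, but group B was historically
# policed twice as heavily, so its records over-represent arrests.
records = (
    [("A", "arrest")] * 10 + [("A", "no_arrest")] * 90 +  # group A: 10% arrested
    [("B", "arrest")] * 20 + [("B", "no_arrest")] * 80    # group B: 20% arrested
)

arrests = Counter(group for group, outcome in records if outcome == "arrest")
totals = Counter(group for group, _ in records)

# "Risk" learned from these labels reflects enforcement intensity,
# not the identical underlying offense rate.
learned_risk = {g: arrests[g] / totals[g] for g in totals}
print(learned_risk)  # group B looks twice as "risky" despite equal offending
```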

The existence of bias leaves the true impact of AI-assisted tools somewhat ambiguous. Critics also raise other objections, specifically against COMPAS, calling it unfit for the task for the following reasons:

  • Lack of individualization. During the 2016 Loomis v. Wisconsin case, the petitioner claimed that the sentence was based on historical group tendencies of misconduct assessed by COMPAS. He further asserted that the court did not consider his personal circumstances but instead grouped him with criminals exhibiting similar behavior and issued a similar sentence. However, the court rejected this argument, holding that the final decision did not rely entirely on the risk assessment tool.
  • Lack of transparency. The developers of COMPAS refuse to disclose a detailed explanation of how the software works, claiming it is a trade secret. For example, during the same Wisconsin court case, gender was included in the assessment process without any details on how it was weighted in the equation, leading the petitioner to argue that its use was discriminatory.

AI in Legal Research and Judicial Decision-Making

Artificial Intelligence (AI) has significantly transformed the legal landscape, especially in legal research, evidence analysis, and digital forensics. The roots of AI in the legal field date back to the early 1960s, with the development of machine learning algorithms to aid in legal document review and information retrieval. Modern AI applications in this realm include Natural Language Processing (NLP) for efficient extraction of relevant information from legal databases, enhancing the speed and accuracy of legal research. The use of AI in criminal justice, particularly in crime prediction and law enforcement, represents a major shift towards more data-driven and efficient legal processes.

The concept of AI judges and legal assistants has evolved from a speculative idea to a practical reality, offering the potential to expedite legal proceedings and reduce human biases in decision-making. However, this integration raises critical issues such as accountability in AI-driven decisions and the potential displacement of human legal professionals. While AI can streamline legal processes, it is essential that it complements human expertise. Human judges continue to play a vital role, providing context, moral and ethical judgment, and interpreting nuances that AI lacks. This human touch is particularly crucial in intellectual property law, where the understanding of creators’ and inventors’ rights and societal values is key. Human judges are also essential in overseeing AI-driven decisions, ensuring fairness and maintaining public trust in the legal system.

AI Applications in Crime and Forensic Analysis

  • DNA Analysis. Forensic DNA testing has played a significant role in the criminal justice system, helping scientists perform deeper and more precise analyses of human DNA. A group of researchers from Syracuse University developed ML-based software that uses data mining techniques to streamline the DNA analysis process and improve the accuracy of DNA matching.
  • Gunshot Detection. Cadre Research Labs has developed an ML model that analyzes gunshot audio files recorded on smart devices to assist law enforcement agents in investigations. The software can perform many tasks, including differentiating muzzle blasts from shock waves, determining shot-to-shot timings, determining the number of firearms present, assigning shots to weapons, and estimating the probable class and caliber of each firearm.
  • AI in Digital Crime Scene Reconstruction. This feature refers to the use of Artificial Intelligence technologies to recreate and analyze crime scenes digitally. This innovative approach leverages AI algorithms and tools to interpret evidence and reconstruct events, providing a more comprehensive understanding of what transpired at a crime scene. 
  • Biometric Analysis in Forensics. AI-driven facial recognition can be used to identify individuals captured in surveillance footage or photographs from the crime scene. Similarly, biometric analysis can help in identifying unique physical characteristics of individuals involved.

Addressing Bias and Ethical Considerations

Addressing bias and ethical considerations in the application of Artificial Intelligence (AI) in criminal justice is a critical challenge. AI systems, including those used in crime prediction, public safety, law enforcement, and policing, are only as good as the data they’re trained on. Consequently, there’s a risk that these systems may perpetuate existing biases and discriminatory practices, leading to unfair outcomes, especially for marginalized groups. The lack of transparency in AI decision-making processes further complicates identifying and correcting errors or biases.

The ethical debate surrounding AI in criminal justice also involves the potential for these technologies to renew or exacerbate biases from the past. PredPol, a machine-learning algorithm used to classify areas as “high-risk” for crime, inadvertently focused on predominantly Latino and Black neighborhoods, highlighting the risk of reinforcing racial profiling in policing. Similarly, COMPAS has been scrutinized for producing biased risk assessments, disproportionately impacting Black defendants. Such issues underscore the need for careful consideration and reformation of AI tools in criminal justice to prevent the continuation of existing systemic biases.

Future Directions and Innovations in AI for Criminal Justice 

The future of machine learning algorithms in the justice system remains ambiguous. But if all stakeholders, from executive management to software developers, work to eliminate the current challenges, ML models could genuinely transform the decision-making process in courts. Here are four recommendations that could strengthen the role of AI in criminal justice:

  • Human oversight over model implementation. First, it’s important to maintain human oversight at every stage of the AI pipeline, from data preparation to deployment. No matter how advanced these tools become, humans should make the final decision. The judge should give a written explanation when either complying with or departing from an RAI’s result in a verdict. This encourages judges to reason through their decisions deliberately and reduces the risk of rulings driven arbitrarily by software.
  • Transparent algorithms. In such a high-stakes context, deep knowledge and mastery of these tools are crucial for providing legitimate results. Policymakers using these tools should know exactly how they work. By disclosing detailed information about how risk determination works, judges and law enforcement can more accurately and effectively apply them.
  • No discrimination against certain demographics. Interested parties in the justice system should carefully examine and scrub the data to remove bias before using it to build ML models. This can reduce the chance of some groups being treated more unjustly than others. Furthermore, model predictions should be tested to verify that groups with similar risk scores re-offend at similar rates. Only after repeated testing can agencies have reasonable confidence that a model is unbiased.
  • Continuous evaluation and monitoring. A machine learning model can only function effectively with constant monitoring and assessment of its results. By evaluating the outcomes of new machine learning algorithms, policymakers can verify whether the tools generate the intended impact.
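
The group-fairness check in the third recommendation can be sketched simply: bucket cases by risk band and group, then compare observed re-offense rates. The records below are invented placeholders for real held-out case outcomes:

```python
from collections import defaultdict

# Invented (group, risk_band, re_offended) records standing in for real outcomes.
records = [
    ("A", "high", True), ("A", "high", False), ("A", "low", False), ("A", "low", False),
    ("B", "high", True), ("B", "high", True),  ("B", "low", True),  ("B", "low", False),
]

counts = defaultdict(lambda: [0, 0])  # (group, band) -> [re-offenses, total]
for group, band, re_offended in records:
    counts[(group, band)][0] += re_offended
    counts[(group, band)][1] += 1

for (group, band), (hits, total) in sorted(counts.items()):
    print(f"group {group}, {band} risk: {hits}/{total} re-offended")
```

If re-offense rates within the same risk band diverge widely between groups, the scores mean different things for different groups, which is a miscalibration red flag.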

Sum Up

Artificial intelligence in criminal justice is used to prevent crimes and ensure public safety. For instance, facial recognition systems and other advanced technologies are now part of many police departments’ investigative toolkits. Additionally, courts use risk assessment tools that help judges determine sentencing terms, bail, and probation.

However, the quality of these tools is a big topic of debate. Critics of these algorithms claim that they are trained to make predictions on biased data, resulting in discriminatory outcomes. But if designed properly, these tools can help people make informed decisions and restore equity in the justice system.


Tigran Hovsepyan
Staff Writer

Tigran Hovsepyan is an experienced content writer with a background in Business and Economics. He focuses on IT management, finance, and e-commerce. He also enjoys writing about current trends in music and pop culture.

