Detain/Release: Teaching Law Students About AI Through Simulated Detention Hearings
– Jason R. Tashea*
Today, law students cannot graduate with just an understanding of law and procedure. They need to leave university able to understand and vet how science and technology will affect their practice and clients. This is because technology systems are increasingly affecting legal systems in democracies around the globe: data informs litigation, probability ratios are admitted as evidence, and artificial intelligence appears at pretrial detention hearings. With these changes, lawyers are increasingly required to have new understandings of science and technology to do their job ethically and competently.
To this end, it’s no surprise that the American Bar Association includes knowledge of technology in its Model Rule defining competency, a rule that has been adopted by 38 states. In a global document setting forth principles for the trustworthy adoption of AI in the legal system, the Institute of Electrical and Electronics Engineers lists competence as one of four principles, the only subject-matter area in the report to receive that treatment. This principle expects the users of AI, like lawyers, judges, and clerks, to know when a particular tool is appropriate for use, what its parameters and limits are, and how to understand the result created by the system.
However, those with legal training often lack the foundations to interrogate these issues. This is a shortcoming in legal education.
To help close the divide between law students and novel technologies affecting the legal system, a colleague and I built an online simulation that helps users grapple with these difficult questions. We found that this approach engages students, increases their understanding of complex technology subjects, and prepares them for the world they will graduate into. Through sharing the simulation, we also found that lawyers and other professionals benefited from this approach.
To explain this idea further, this Article first gives a brief overview of the challenges we had teaching technology topics. Second, it introduces the simulation we built for our course. Last, the Article reflects on lessons we learned from using the simulation and how we want to evolve this project going forward.
Teaching Technology in Law School
For the past three years, I co-taught Criminal Justice Technology, Policy, and Law, a course I created and taught with Keith Porcaro at Georgetown University Law Center in Washington, D.C. The course was a practicum, which means that we partnered our students with criminal justice system stakeholders to work on real-world data and technology projects. These included a partnership with the Washington, D.C. Office of the Attorney General to develop a system that used juvenile justice data in policymaking while protecting the privacy of data subjects, and another with the Philadelphia District Attorney’s Office to leverage its internal data to improve pretrial diversion outcomes.
Alongside teaching our students about project and data management best practices, we taught various controversies surrounding technology in the criminal justice system. Topics included predictive policing, data collection and manipulation, and novel search and seizure techniques. These lectures generally went well.
Then we tried to teach our students about actuarial risk assessment tools used in pretrial detention hearings, also called bail hearings. The problems began before class even started. While the readings were challenging, the real sin was that they included math equations, which distracted our students from understanding the content. The lecture compounded the problem by covering both the legal and the statistical foundations of risk assessments. The students grasped the legal issues, but our introductory attempts at likelihood ratios and regression analysis were met with near rebellion. We blamed ourselves for what turned out to be the worst lecture of the term.
Currently, the American criminal justice system is rife with novel technology issues, and we could easily have jettisoned risk assessments from future syllabi and covered a different topic. But we didn’t, because these tools keenly illustrate both a national debate about criminal justice reform and the international debate over AI accountability and transparency. We had no choice but to push through and make the lecture better.
For background, pretrial actuarial risk assessments are meant to provide justice system actors–like corrections officials, judges, prosecutors, and defense attorneys–with information about a defendant facing a bail decision–essentially, whether the person will be held in detention awaiting their trial. Built on various datasets, a risk assessment takes a series of factors about a defendant–like age, criminal history, and current criminal charge–and provides an output, such as the likelihood that the person will show up to their hearing, commit a new crime, or commit a new crime of violence.
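At its core, such a tool maps a handful of input factors to categorical outputs. The Python sketch below shows only that input/output shape: the factors mirror those named above, but the weights, cutoffs, and scoring scheme are invented for illustration and do not reflect any deployed tool.

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    age: int
    prior_convictions: int
    violent_charge: bool

def band(score: int) -> str:
    # Map a raw point total to the three labels described above
    return "low" if score <= 2 else "medium" if score <= 5 else "high"

def assess(d: Defendant) -> dict:
    """Toy point-based assessment; weights and cutoffs are invented."""
    score = 0
    score += 2 if d.age < 25 else 0        # youth as a risk factor
    score += min(d.prior_convictions, 4)   # criminal history, capped
    score += 3 if d.violent_charge else 0  # severity of current charge
    # One rating per outcome the tools claim to predict
    return {
        "failure_to_appear": band(score),
        "new_crime": band(score + min(d.prior_convictions, 4)),
        "new_violent_crime": band(score + (3 if d.violent_charge else 0)),
    }
```

The point of the sketch is how little the output reveals about the inputs: three labels, produced by arithmetic the defendant and the judge may never see.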
While actuarial risk assessments have been used for nearly a century in the U.S., it is within the last decade that these tools have proliferated across the country. Just about every state has at least one jurisdiction using risk assessments. These tools have also been adopted statewide in Kentucky and New Jersey, for example.
Proponents of risk assessments argue that the tools are better at determining risk than humans are. A better understanding of a defendant’s risk will lower detention rates while improving public safety, goes the argument. However, there is limited empirical evidence supporting these common claims.
On the other hand, critics point to evidence that the tools lack effectiveness, fairness across race and class, accountability, and transparency, and that the users and procurers of the tools are not competent to assess them.
Illustrating just one of these concerns, a 2017 report found that one popular risk assessment disproportionately labeled black defendants as higher risk compared to similarly situated white defendants, leading to unfair outcomes. Many argue that the biased outcomes of these tools are innate to the datasets and input factors they are built on. For example, an input factor that considers a defendant’s parents’ carceral history may seem like a fair question to ask a charged person seeking release into the community. However, in the U.S., about 1-in-17 white children has experienced an incarcerated parent, according to a 2015 study. By the same count, for black children, the number is 1-in-9. Disproportionate criminal justice system punishment like this creates data, and patterns within that data, that can lead risk assessments to reinforce those same disproportionate outcomes.
As the concerns about risk assessments have grown, more legal challenges have been brought. However, the state supreme courts of Indiana and Wisconsin–the two highest courts in the country to have heard challenges to risk assessments–have both sided with the continued use of the tools. Neither opinion engages the harder issues of accountability and transparency. Notably, in the Wisconsin case, the Justices determined that a largely opaque risk assessment was acceptable to use so long as the judge used the risk score as only one of many factors in his decision.
With the battle ongoing, this is the dynamic environment we want our students to navigate competently.
After assessing our first year’s failed lecture, we took a different approach in our second and third years. First, we made structural changes to the syllabus and split up technical topics into a series of lectures over different days, giving our students a chance to ease into these issues. Second, we dropped the readings with the math equations. Instead, we used documents from the Wisconsin case mentioned above. Third, we developed Detain/Release, an online simulation that puts the user in the seat of a judge at a bail hearing.
In Detain/Release, we create an interactive environment that leads the user to think about how technology affects their decision making and, in turn, the criminal justice system more broadly. The task is simple: the user is served defendants on digital cards and asked whether each person should be detained or released pretrial. (Figure 1)
Each defendant card includes a distorted picture of a defendant, biographical information, criminal charge, a prosecutor’s recommendation, a defendant’s statement, and a risk assessment. Using Bureau of Justice Statistics and U.S. Census data, we can generate millions of fake defendants that generally reflect national demographics. The pictures are distorted because we did not want real people to find themselves as a fictional arrestee without prior consent. The risk assessment rates as low, medium, or high the likelihood that a defendant will fail to appear for a future court date, commit a future crime, and commit a future crime of violence. The administrator of the simulation can also serve defendants without risk assessments.
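Generating a card amounts to sampling each field from a distribution. A minimal sketch follows, with made-up category names and weights standing in for the Bureau of Justice Statistics and Census figures the simulation actually draws on:

```python
import random

# Illustrative categories and weights only; the real simulation derives
# its distributions from BJS and U.S. Census data.
CHARGES = {"theft": 0.40, "drug possession": 0.35, "assault": 0.25}
OUTCOMES = ("failure_to_appear", "new_crime", "new_violent_crime")

def generate_defendant(rng: random.Random, with_risk: bool = True) -> dict:
    """Sample one synthetic defendant card."""
    card = {
        "age": rng.randint(18, 65),
        "charge": rng.choices(list(CHARGES), weights=CHARGES.values())[0],
        "prosecutor_recommendation": rng.choice(["detain", "release"]),
    }
    if with_risk:  # the administrator can also serve cards without one
        card["risk"] = {o: rng.choice(["low", "medium", "high"])
                        for o in OUTCOMES}
    return card
```

Because each field is sampled independently of a fixed seed, the simulation can serve effectively unlimited distinct defendants while the aggregate pool still tracks the source demographics.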
To build out the simulation’s environment, we include a couple of extra mechanics. Above the defendant cards are two meters: one represents the local jail’s capacity, and the other the level of public fear. Both are finite, and a run of the simulation ends if either is maxed out. Both meters increase and decrease over the duration of the simulation. The meters, while simplistic, are designed to simulate the outside pressures a judge may face when deciding whether to detain a person pretrial. Adding to the environment, a local newspaper article appears on screen when a previously released defendant either fails to appear for a hearing or commits a new crime. Depending on the severity of the violation, the fear meter increases proportionally.
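The meter mechanics reduce to a small piece of state. The sketch below illustrates the idea; the capacity, fear ceiling, and severity increments are invented values, not the simulation’s actual tuning:

```python
# Hypothetical meter mechanics; all numeric values are invented.
class Meters:
    def __init__(self, jail_capacity: int = 20, max_fear: int = 10):
        self.jail = 0
        self.jail_capacity = jail_capacity
        self.fear = 0
        self.max_fear = max_fear

    def detain(self) -> None:
        self.jail += 1  # every detention fills the jail a little more

    def violation(self, severity: int) -> None:
        # A released defendant's violation raises public fear in
        # proportion to its severity, e.g. 1 for a missed hearing,
        # 3 for a new violent crime.
        self.fear += severity

    @property
    def run_over(self) -> bool:
        # The run ends when either meter maxes out
        return self.jail >= self.jail_capacity or self.fear >= self.max_fear
```

The design choice is the trade-off: detaining everyone exhausts the jail meter, while releasing everyone risks violations that spike the fear meter, so neither blanket strategy survives a full run.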
For the administrator, Detain/Release has an active run tracker, which collects simulation data in real-time. (Figure 2) When using the simulation in class, we project the numbers on the wall so students can see the totality of their choices during a run. Broken down by user, the data includes the number of defendants processed, detained, released, the number of defendants that violated their release, and data from the jail and fear meters.
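The aggregation behind numbers like these is straightforward. A sketch follows, with field names that are assumptions for illustration rather than the simulation’s actual schema:

```python
from collections import Counter

def detention_rate_by_risk(decisions: list) -> dict:
    """decisions: one dict per processed defendant, e.g.
    {"risk": "high", "detained": True}."""
    tallies = {}
    for d in decisions:
        stats = tallies.setdefault(d["risk"], Counter())
        stats["seen"] += 1
        stats["detained"] += int(d["detained"])
    # Detention rate per risk label, suitable for projecting to a class
    return {risk: stats["detained"] / stats["seen"]
            for risk, stats in tallies.items()}
```

Grouping decisions by risk label in this way is what lets an instructor show, at a glance, how strongly one input drove the room’s detention choices.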
There is also a dashboard that aggregates the data collected by all of the users during a run or a series of runs (Figure 3). This information includes the group’s fidelity to the prosecutor’s request, the defendant’s story, and the risk assessment.
We also include a “line-up view,” which shows the faces of everyone detained or released during the session. A separate view shows the collateral consequences of a user’s decisions, like the number of people who pleaded guilty or lost their job on account of being detained pretrial. Both feed the larger goal of building an ecosystem that illustrates the knock-on effects of AI-assisted decision-making in the courts. The collateral consequences frame is the most impactful for students considering the effects of their choices.
Our class was a two-hour block, which allowed students to go through the simulation three times. Between each run of the simulation, we lectured on different topics, like the current practice of bail and pretrial detention, risk assessments, and the legal and political challenges to both.
What We Learned And Where We’re Going
All put together, this approach was a significant improvement over our first-year lecture.
Each simulation run brought to light new effects that risk assessments had on our students. One classroom instance exemplifies this: when a defendant had a high risk of a new violent crime, students detained the defendant 95 percent of the time. By contrast, there was little correlation between the detention rate and other factors, like a defendant’s statement or a prosecutor’s recommendation. We’ve seen the same correlation in other classes and presentations. This illustrates that risk assessments can have a dispositive effect on detention outcomes. The kicker at the end of the lesson is that there is no Detain/Release algorithm: it’s functionally a quasi-random number generator.
This reveal and the students’ data lead to fruitful in-class discussions about AI competency, transparency, accountability, and accuracy. Because we run the simulation later in the term, it also gives students an opportunity to contextualize other lectures on artificial intelligence. With simulated experience, students are led to think through when risk assessments–or similar tools–are appropriate for use, what their parameters and limits are, and how to understand the result created by the system. Covering these topics in a hands-on way leaves students better prepared to practice law in an AI-assisted world. Further, it opens the door to more philosophical questions about individual agency and judicial independence in the age of artificial intelligence. The simulation is currently hosted online and free for anyone to use.
While the simulation is an effective classroom tool, policymakers, lawyers, and other officials have also asked to use it. Detain/Release can help local prosecutors or defense attorneys who are learning about risk assessments for the first time think through the potential impact of algorithm-assisted decision making in the judicial process.
To this end, we are developing an online continuing education course based on this class. We also have a version in the works that would allow users to customize Detain/Release. In this version, a user can pick an existing actuarial risk assessment tool instead of relying on the simulation’s fake one. Second, the user can upload her jurisdiction’s pretrial detention hearing data. Together, these features will allow stakeholders to see how a tool would treat people in their jurisdiction, which could inform adoption and use. Given access to multiple risk assessment tools, we can skin them for the Detain/Release ecosystem and create a novel, easy-to-use, and much-needed platform for researchers and policymakers to compare various tools against each other.
While built for an American legal audience, simulations like Detain/Release need not be limited by jurisdiction. Because the underlying issues of accountability, transparency, and competency are universal, there is an opportunity to make versions that reflect the realities of other countries. Amnesty International has called undertrial detention “India’s most ignored problem,” and there may be increased interest in India in adopting risk assessments. Creating a Detain/Release simulation for the Indian justice system that reflects local demographic and crime trends, as well as procedural specifics, could help inform the debate over whether risk assessments should be adopted.
Regardless of country, simulations hold great potential to help law students learn complex and novel topics, while assisting policymakers, judges, and lawyers to better understand the risks, limits, and potential of new technologies affecting justice systems. To this end, simulations can be one tool for navigating our ever-changing reality.
* Jason R. Tashea is an adjunct professor of law at Georgetown University Law Center in Washington, D.C. and the product manager of Quest for Justice. Thank you to Surbhi Soni for her research assistance.
Disclosure: the author is a member of the IEEE Law Committee and contributed to this document.