ICE's New York Office Uses A Rigged Algorithm to Keep Arrestees in Detention

IN 2013, U.S. Immigration and Customs Enforcement quietly began using a software tool to recommend whether people arrested for immigration violations should be released after 48 hours or kept in detention. The software’s algorithm supposedly pored over a variety of risk factors before outputting a decision.

A new lawsuit filed by the New York Civil Liberties Union and the Bronx Defenders, however, alleges that the algorithm doesn’t really make a decision, at least not one that can result in a detainee being released. Instead, the groups say, it’s an unconstitutional cudgel, rigged to detain virtually everyone ICE’s New York Field Office brings in, even when the government itself believes they present a minimal threat to public safety.

The suit, which asks that ICE’s “Risk Classification Assessment” tool be ruled illegal and the affected detainees reassessed by humans, includes damning new data obtained by the NYCLU through a Freedom of Information Act lawsuit. The data illuminates the extent to which the so-called algorithm has been perverted. Between 2013 and 2017, the FOIA data shows, the algorithm recommended detention without bond for “low risk” individuals 53 percent of the time, according to an analysis by the NYCLU and the Bronx Defenders. But from June 2017 — shortly after President Donald Trump took office — to September 2019, that number exploded to 97 percent.

“This dramatic drop in the release rate comes at a time when exponentially more people are being arrested in the New York City area and immigration officials have expanded arrests of those not convicted of criminal offenses,” says the groups’ lawsuit. “The federal government’s sweeping detention dragnet means that people who pose no flight or safety risk are being jailed as a matter of course—in an unlawful trend that is getting worse.”

Individuals detained under what the lawsuit calls a “no-release policy” remain jailed until they can be seen by an immigration judge. People arrested by ICE have no access to information about how they were classified by the algorithm — that’s why the FOIAs were necessary — and most don’t have access to lawyers at the time of their detention, Thomas Scott-Railton, a fellow at the Bronx Defenders, told The Intercept. “The result,” he said, “is that people are detained for weeks, even months, without having been given the actual justification for their detention and without a real chance to challenge it.”

THE LAWSUIT ALLEGES that this algorithmic rubber stamp violates both the constitutional guarantee of due process and federal immigration law, which calls for “individualized determinations” about release rather than blanket denials with a computerized imprimatur. Reached by email, ICE New York spokesperson Rachael Yong Yow told The Intercept, “I am not familiar with the lawsuit you reference, but I am not inclined to comment on pending litigation.”

The risk assessment algorithm is supposed to provide a recommendation to ICE officers, who are then meant to make the final decision, but the agency’s New York Field Office has diverged from the algorithm’s recommendation less than 1 percent of the time since 2017. When detainees are finally seen by a human, non-algorithmic immigration judge, the lawsuit says, “approximately 40% of people detained by ICE are granted release on bond.”

The Trump administration’s stepped-up immigration arrests of people without criminal convictions lay bare the perversity of the rigged no-release policy. “If the New York Field Office were actually conducting individualized determinations pursuant to its stated criteria,” the lawsuit says, “the percentage of people released should have actually increased since 2017 because more people arrested qualified for release.”

The technical reasons for this drastic change are clear. Algorithms are essentially problem-solving formulas that can operate at superhuman speed. ICE’s risk assessment algorithm originally functioned by automatically reviewing an immigration detainee’s personal history, weighing factors like their flight risk and threat to public safety, then spitting out one of four options: detention without bond, detention with the possibility of a release on bond, outright release, or a referral to a human ICE supervisor.

In 2018, Reuters reported that Trump’s inauguration brought a critical change to the risk assessment tool: the software was edited to remove the possibility of a “release” output altogether. The NYCLU’s FOIA data also shows that the option for bond had already been removed in 2015. In other words, this ostensible problem-solving software was rigged to provide only one solution: detention.
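ICE’s software is not public, but the mechanics described in the reporting and the FOIA data can be illustrated with a rough, hypothetical sketch: once the “release” and bond outcomes are stripped from the decision table, every risk level funnels into the same recommendation. The risk levels, outcome names, and recommend function below are invented for illustration and are not drawn from ICE’s actual system.

```python
# Hypothetical sketch -- ICE's actual software is not public. This only
# illustrates how removing outcomes from a decision table rigs the result.

# The tool as originally described: four possible recommendations.
ORIGINAL_OUTCOMES = {
    "low": "release",
    "medium": "detain_with_bond",
    "high": "detain_without_bond",
    "unclear": "refer_to_supervisor",
}

# After the reported edits, the release and bond outcomes are gone,
# so every assessed risk level maps to the same recommendation.
CURRENT_OUTCOMES = {
    "low": "detain_without_bond",
    "medium": "detain_without_bond",
    "high": "detain_without_bond",
    "unclear": "refer_to_supervisor",
}

def recommend(risk_level: str, outcomes: dict) -> str:
    """Return the tool's recommendation for a given assessed risk level."""
    return outcomes.get(risk_level, "refer_to_supervisor")

# A "low risk" arrestee is recommended for release under the original
# table, but for detention without bond under the edited one.
print(recommend("low", ORIGINAL_OUTCOMES))  # release
print(recommend("low", CURRENT_OUTCOMES))   # detain_without_bond
```

However sophisticated the underlying risk scoring, in this kind of setup the output no longer depends on it: with release off the table, the assessment changes nothing about what happens to the person being assessed.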

BASED ON THE government’s own data, the decision-making tool functionally makes decisions about as well as a stopped clock tells time. Far from even attempting to aid human decision-making, the FOIA data shows, the “Risk Classification Assessment” tool serves as a funnel to fast-track action in line with the Trump administration’s brutal immigration agenda.