Fact-Checking the NYPD's Final Facial Recognition Policy
The NYPD has published its final policies for all 36 technology tools it uses for surveillance within the City. Eisenberg & Baum's AI Fairness and Data Privacy Practice Group and URBAN AI (founded by Renee Cummings, criminologist and data activist at the University of Virginia) examined the Facial Recognition Technology policy and found it riddled with problematic, false, and misleading information, some of which we outline here:
The final policy on facial recognition technology falls short of the New York City POST Act (Int 0487-2018), which mandates that the NYPD disclose each technology's description and capabilities, the rules, processes, and guidelines governing its use, and any safeguards and security measures designed to protect the information collected.
The New York City public seeks nothing less than accurate, comprehensive, and full disclosure of the use of controversial and untested technologies that impinge on individuals' constitutional rights to be free from unreasonable searches and seizures and to be free from discrimination.
A breach of those rights allows individuals to sue for violations under Section 1983 (42 U.S.C. § 1983) and under the New York City and State Human Rights Laws. Citizens have also taken action against Clearview AI under applicable laws, including privacy statutes (e.g., lawsuits under the Illinois Biometric Information Privacy Act) and claims for unlawful appropriation of likeness, unjust enrichment, and unfair and deceptive business practices.
The NYPD's relationship with Clearview AI was recently uncovered and documented in Tate Ryan-Mosley's April 9, 2021 article for MIT Technology Review, "The NYPD used a controversial facial recognition tool. Here's what you need to know."
Clearview AI has been controversial since Kashmir Hill's January 19, 2020 scoop for The New York Times, "The Secretive Company That Might End Privacy as We Know It," which exposed the untested technology's deep ties with police departments across the country. That story came on the heels of heightened scrutiny of facial recognition technology generally, following the finding that its accuracy decreases when applied to images of darker-skinned women, a result made public by Joy Buolamwini in her seminal "Gender Shades" project at the MIT Media Lab and recently popularized in the film Coded Bias.
The use of facial recognition technology by law enforcement and Immigration and Customs Enforcement puts minority and immigrant communities at risk of harm, as outlined in "ICE and DHS use Clearview AI, but won't say how. They're being sued for an answer," by Anna Kramer, published April 13, 2021, in Protocol.
We urge the NYPD Commissioner to strictly enforce the POST Act, to mandate accurate reporting across the policies as a whole, and to take measures to ensure that any use of surveillance technologies is made transparent and that clear oversight mechanisms are established and implemented.
Further Reading: Final Report: Public Comments on NYPD Surveillance Technology, submitted by Eisenberg & Baum, LLP on February 25, 2021