The New York Police Department used Clearview AI's facial recognition technology to identify and arrest a protester in New York City, sidestepping its own policies that place stringent restrictions on such searches.
According to a report by The City, [1] the NYPD outsourced the Clearview search to a fire marshal from the city's fire department rather than running it itself. The search led to the identification of a pro-Palestinian protester at Columbia University who was accused of throwing a rock at a pro-Israeli protester during an April demonstration. The protester, 21-year-old Zuhdi Ahmed, a pre-med student at [2] the City University of New York (CUNY), was subsequently arrested.
Manhattan District Attorney Alvin Bragg [3] initially charged Ahmed with a felony, assault in the third degree as a hate crime, which was later reduced to the misdemeanor of second-degree aggravated harassment. In June, a criminal court judge dismissed the case against Ahmed and, in a lengthy [4] ruling, raised red flags about government surveillance and practices that ran afoul of law enforcement's own policies. Law enforcement carries an outsized burden when it comes to public trust in biometric technologies. Much of the apprehension surrounding tools such as facial recognition centers on the possibility of overreach by police and other government entities, and regulation is designed to mitigate exactly that risk. When authorities violate those rules and engage in the very practices the public fears, they erode yet another fundamental element of confidence in biometric systems.
The New York Police Department [5] has used facial recognition technology since 2011. Amnesty International [6] reports that between 2017 and 2019, the NYPD deployed the technology in 22,000 cases. While the specifics of those cases have remained under wraps, a February 2025 appeals court ruling ordered the NYPD to release all documents pertaining to the management of its facial recognition database. Yet many police departments struggle with transparency, and restricting information fosters mistrust, a mistrust often justified by instances in which authorities appear to bypass the rules.
In another notable example of law enforcement's mishandling of facial recognition technology, Cleveland is preparing for a significant legal hearing. Next month, Ohio's Eighth District Court of Appeals will hear arguments in a murder case, [7] as reported by Cleveland news outlets. The case could set an important precedent for how facial recognition is applied in criminal investigations. Meanwhile, Clearview AI has reported [8] a sharp increase in law enforcement use of its facial recognition technology, with searches doubling to 2 million in a single year. The company's database of facial images for biometric analysis has grown to 50 billion, as confirmed by [9] CEO Hoan Ton-That.
The use of facial recognition technology by law enforcement is facing heightened scrutiny. Several police departments have prohibited it outright, yet investigations reveal that some have asked neighboring jurisdictions to conduct searches on their behalf. Critics remain concerned about the potential for misuse by police, a worry exemplified by a recent incident in Evansville, Indiana, where an officer [10] resigned under mounting pressure after it was discovered that he had used the technology for personal purposes, specifically to search social media accounts. Evansville Police Chief Philip Smith said, "At that point, we observed an anomaly of very high usage of the software by an officer whose work output was not indicative of the number of inquiry searches that they had."