Face Recognition

2020-06-24

Today, there was news of another victim of the (mis-)use of face recognition by law enforcement:

"In January, police pulled up to Williams' home and arrested him while he stood on his front lawn in front of his wife and two daughters, ages 2 and 5, who cried as they watched their father being placed in the patrol car."

The handling of the arrest was not exactly stellar either, based on the description provided by the victim's wife:

"They called me earlier that day, asking aggressively for Robert. On the phone, they referred to me as his “wife or baby mama.” "

Given that it is not unusual for US cities to spend between one fifth and two fifths of their budgets on police, the current standards leave a lot to be desired.

What is also troublesome in the report is the police's apparently shallow understanding and improper handling of facial recognition technology:

"Investigators pulled a security video that had recorded the incident. Detectives zoomed in on the grainy footage and ran the person who appeared to be the suspect through facial recognition software."

It is hard to imagine why anyone would think that facial recognition software would work on a grainy, upscaled image when it already struggles with full-resolution images. Over the years, such facial recognition software has reportedly failed on people who deviate from the average Silicon Valley engineer: a middle-aged white man. This is unsurprising; the bias in the workplace carries over into the software that is often marketed as "neutral" and "objective". In the book The End of Trust, there are numerous accounts of such face recognition failures, with the software often misclassifying black men and women 50% of the time. At that point, you might as well flip a coin; here is a quick implementation:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

int main(void) {
  /* Seed the PRNG with the current time mixed with the PID, so that
   * runs started within the same second do not all flip the same way. */
  srand((unsigned)(time(NULL) * getpid()));
  /* The low bit of rand() decides the coin. */
  if (rand() & 1) {
    printf("Heads\n");
  } else {
    printf("Tails\n");
  }
  return 0;
}
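(On a Unix-like system, the above builds and runs with something like cc flip.c -o flip && ./flip, where flip.c is whatever name the file was saved under.)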

I have not done any rigorous analysis of the statistical properties of the almost-canonical srand() initialization in my implementation, but the authors of facial recognition software do not appear to have done much analysis of their systems either, so my implementation is likely no worse.
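That said, a basic sanity check is cheap. Here is a small sketch of my own, using the same seeding as above, that flips the coin a million times and reports how far the heads frequency drifts from the expected 0.5:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

int main(void) {
  const long flips = 1000000;
  long heads = 0;

  /* Same seeding as the coin-flip program above. */
  srand((unsigned)(time(NULL) * getpid()));
  for (long i = 0; i < flips; i++) {
    heads += rand() & 1;
  }
  printf("Heads frequency over %ld flips: %f (expected ~0.5)\n",
         flips, (double)heads / flips);
  return 0;
}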

The top-level algorithm might look something like the following:

1. Get video evidence of the crime.
2. Run it through facial recognition.
3. Dispatch unit to perform arrest.
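To make the problem concrete, here is a sketch of that pipeline in C. Every name and value below is invented for illustration; the point is what is missing, namely any check of the match's confidence, or any human review between the match and the dispatch:

#include <stdio.h>

/* Hypothetical stand-ins for the real systems; the types, names,
 * and values are invented for illustration. */
struct frame { const char *source; };
struct match { int person_id; double confidence; };

static struct frame get_video_evidence(void) {
  return (struct frame){ "grainy security footage" };
}

static struct match run_face_recognition(struct frame f) {
  (void)f;
  /* Even a grainy, upscaled input yields *some* match,
   * however low the confidence. */
  return (struct match){ 42, 0.31 };
}

static void dispatch_unit(struct match m) {
  printf("Dispatching unit to arrest person %d\n", m.person_id);
}

int main(void) {
  struct frame evidence = get_video_evidence();
  struct match suspect = run_face_recognition(evidence);
  /* No confidence threshold, no human review: the raw match
   * goes straight to an arrest. */
  dispatch_unit(suspect);
  return 0;
}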

There seems to be little to no oversight between steps (2) and (3) above, and when there is, it appears that facial recognition, even when done by humans, is at best subjective and at worst a pseudo-science, as was already evidenced in the 2016 case of Steve Talley. Steve was incorrectly identified by a facial recognition system as the person who had robbed a bank and assaulted an officer. A unit was dispatched to his house:

"It was just after sundown when a man knocked on Steve Talley’s door in south Denver. The man claimed to have hit Talley’s silver Jeep Cherokee and asked him to assess the damage. So Talley, wearing boxers and a tank top, went outside to take a look. Seconds later, he was knocked to the pavement outside his house. Flash bang grenades detonated, temporarily blinding and deafening him. Three men dressed in black jackets, goggles, and helmets repeatedly hit him with batons and the butts of their guns. He remembers one of the men telling him, “So you like to fuck with my brothers in blue!” while another stood on his face and cracked two of his teeth. “You’ve got the wrong guy,” he remembers shouting. “You guys are crazy.”"

It is hard to square the concept of a democracy with police who beat you up without any real evidence. As explained later in the same article, facial recognition, even when done by humans, is subjective and unreliable:

"[...] and in 2009 a landmark paper by the National Academy of Sciences stated what many had long suspected: Apart from DNA testing, no other forensic method could reliably and consistently “demonstrate a connection between evidence and a specific individual or source."

Blind trust in facial recognition technology by law enforcement should therefore be disturbing to anybody.

What is also disturbing is that this software is proprietary and cannot be studied or audited by the public. Since facial recognition software is funded with taxpayer money, should it not be software that the public can study and has control over? Facial recognition these days is done by neural networks, and the secret sauce is usually not the model but the training data. In that case, the data should likely not be available to the public -- it is people's faces, after all -- and should instead enjoy stronger privacy protections. But the system as a whole should at least be auditable by an independent third party, training data included, so that the party can draw conclusions from its inputs and outputs, perform the relevant statistical analysis, and generally check whether the software works as intended. Why else should the public trust the software? Of course, the existence of such a facial recognition system is contingent on a database of faces, which is itself cause for suspicion when looked at with some perspective; then again, such databases have existed for years, and we do not seem to have a problem with them. In asking these questions, I am also assuming that the software actually works and solves a real problem, which is a long stretch based on what we have seen so far, and that it is software that we as a society actually want, which is an even longer stretch.
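As a taste of what the statistical analysis mentioned above could look like, here is a minimal sketch that computes a false match rate per demographic group; the groups and counts below are entirely made up, and a real audit would of course be far more involved:

#include <stdio.h>

/* Hypothetical audit record: for one demographic group, how many
 * non-matching probes the system was shown, and how many of those
 * it wrongly declared a match. All counts are invented. */
struct audit_group {
  const char *group;
  long impostor_trials;   /* probes that should NOT match */
  long false_matches;     /* but were declared matches anyway */
};

int main(void) {
  const struct audit_group groups[] = {
    { "group A", 10000, 80 },
    { "group B", 10000, 950 },
  };
  const int n = sizeof groups / sizeof groups[0];

  for (int i = 0; i < n; i++) {
    double fmr = (double)groups[i].false_matches / groups[i].impostor_trials;
    printf("%s: false match rate = %.4f\n", groups[i].group, fmr);
  }
  return 0;
}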

Police departments in the US, far from being transparent and open to public scrutiny, often operate in rather shady ways and with companies of questionable repute. In late March 2020, right in the middle of the first COVID-19 wave in the US, the Vallejo City Council approved a $766,018 purchase of a Stingray cell-site simulator in violation of state law regulating the acquisition of such technologies. A few months later, the California Commission on Peace Officer Standards and Training (POST) refused to publish training materials on face recognition and automated license plate readers on account of copyright restrictions, violating a California law that went into effect on January 1, 2020. Back in 2019, we learned how 400 police departments across the country had deals with Amazon Ring, and how municipalities were paying Amazon up to $100,000 to subsidize purchases of the cameras. The police were later "coached" by Amazon Ring employees on how to obtain access to video feeds without a warrant. And that same year, we learned how many police departments poured thousands into contracts with Clearview AI, a company that scraped people's photos off sites like Twitter and Facebook, in violation of their terms of service, to build a massive database with which to train face recognition algorithms and pull up people's identities on demand.

The grim reality should not deter us from resisting the discriminatory and unjust use of technology by law enforcement. Long before the George Floyd protests, the Electronic Frontier Foundation (EFF) had already made big strides in this direction. Bans on face recognition in San Francisco, Oakland, and Berkeley paved the way for A.B. 1215, a state-wide three-year moratorium on the use of the technology by law enforcement in California. Similar bans have emerged in other areas, showing how small, localized victories can quickly ripple throughout the country. And just this week, Santa Cruz, California, issued a ban on predictive policing, which covers unjust applications of technology beyond face recognition. With the help of the George Floyd protests and the talk of police reform, perhaps we can channel the concerns about these technologies into stricter and broader bans and restrictions.

If this is a topic you are interested in, please find references and links for further reading below, and consider supporting the work of organizations like the EFF.

References & Further Reading

ACLU / Congress: Stop Face Surveillance Technologies

CNN / This man says he's stockpiling billions of our photos

EFF / Amazon’s Ring Is a Perfect Storm of Privacy Threats

EFF / Amazon Ring Must End Its Dangerous Partnerships With Police

EFF / California Agency Blocks Release of Police Use of Force and Surveillance Training, Claiming Copyright

EFF / Face Recognition

EFF / Five Concerns about Amazon Ring’s Deals with Police

EFF / Vallejo Must Suspend Cell-Site Simulator Purchase

EFF / Victory! California Governor Signs A.B. 1215

Govtech / Santa Cruz, Calif., Bans Predictive Policing Technology

I Got My File From Clearview AI, and It Freaked Me Out

NPR / 'The Computer Got It Wrong': How Facial Recognition Led To False Arrest Of Black Man

Oakland Privacy / Oakland Privacy Sues Vallejo Over Stingray Purchase

Popular Democracy / Congress Must Divest the Billion Dollar Police Budget and Invest in Public Education

The Intercept / Losing Face - How a Facial Recognition Mismatch Can Ruin Your Life