Why face recognition technology is making some cities nervous

Facial recognition is becoming one of the 21st century’s biggest public space issues

Automation and artificial intelligence (AI) are often marketed as wondrous and futuristic technologies that will help us live more convenient lives. But beyond the idealistic marketing hype, the reality is far more malignant.

“AI” isn’t just asking Siri for directions or telling Alexa to turn on your lights; it’s already being used—and has been used for decades—to decide who receives medical care, who attends which schools, and who receives housing assistance.

It can also be used to identify people and make predictions about their behavior—a type of AI known as facial recognition.

Facial recognition software is already widespread. Customs and Border Protection uses it to screen non-U.S. residents on international flights, and the TSA plans to expand this to all international travelers.

The New Orleans police department, in partnership with Palantir, was using facial recognition in its predictive policing program for six years before the public knew about it.

It’s estimated that half of all American adults are in some sort of facial recognition database.

And at AI Now—a symposium held October 16 about the intersection of artificial intelligence, ethics, organizing and accountability, presented by an NYU research institute of the same name—panelists warned that facial-recognition technology has troubling implications for civil rights, especially amid current debates about who has access to public space.

“We should be deeply worried about the impact of AI face recognition on civil rights,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund. “So much of this implicates public space, contested public space, who can step into it, and what happens to us when we do.”

The rise of mass surveillance and biometric analysis in public space

Since 9/11, mass surveillance has become the norm for law enforcement, which uses a variety of tactics to monitor people. Body cameras have become more common, and security cameras are already ubiquitous. What’s changing today isn’t that we are being watched; it’s who’s doing the watching: computers.

“Autonomous facial recognition” is a system that uses computer vision to look at images. Snapchat’s filters and Apple’s Face ID use computer vision, for example. Then, an algorithm—designed by human programmers and data scientists—processes those images and uses machine learning to analyze them by cross-referencing the image with other data sources.
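To make that pipeline concrete, here is a minimal, hypothetical sketch of the matching step at the core of such systems: a detected face is reduced to a numeric embedding and compared against a database of previously enrolled embeddings. The embedding model, the `watchlist` contents, and the distance threshold are all stand-ins for whatever a real vendor actually uses.

```python
from typing import Dict, Optional
import numpy as np

def match_face(probe: np.ndarray,
               watchlist: Dict[str, np.ndarray],
               threshold: float = 0.6) -> Optional[str]:
    """Return the identity of the closest enrolled face, if it is close enough.

    `probe` is the embedding of a face detected in a camera frame, and
    `watchlist` maps identities to embeddings computed earlier (both would
    come from a proprietary neural network in a real system). A smaller
    Euclidean distance means a more similar face; the threshold decides
    when a distance counts as a "match" -- the knob behind the false
    positives and false negatives discussed in this article.
    """
    best_name, best_dist = None, float("inf")
    for name, enrolled in watchlist.items():
        dist = float(np.linalg.norm(probe - enrolled))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

Every civil liberties question in this piece lives around that comparison: who gets enrolled in the watchlist, from what footage, how the threshold is tuned, and what happens to a person after the system reports a match.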

The facial recognition software that technology companies are developing can identify a person in a crowd in real time, track someone’s movements, detect emotions, and predict behavior.

In a highly controversial study, one AI researcher claimed he could use facial recognition to predict someone’s sexuality.

IBM used NYPD footage to create software that lets police search for people by their skin color.

There are, of course, huge caveats with facial recognition technology: it’s notorious for being highly inaccurate.

Amazon’s face-recognition software—which analyzes facial attributes like gender and emotions—wrongly identified 28 members of Congress as people who have been charged with a crime.

Facial recognition software used by the Metropolitan Police, law enforcement in London, had an embarrassingly high rate of false-positive identifications: 98 percent.

Why facial recognition is a civil rights issue

During the AI Now panel, Nicole Ozer, the technology and civil liberties director of the ACLU’s California chapter, called facial recognition technology a “grave threat” that “feeds off and exacerbates bias in society.”

If facial recognition systems sound a lot like souped-up versions of phrenology’s quack science, it’s because they are.

Can someone look like a criminal? Look like they’re about to cause harm? Appear to be a risk? Who decides that these traits actually signify criminality or risk? Or what a neutral expression is? This isn’t much different from the stereotyping and profiling that happens today, which disproportionately affects people of color.

“It’s not just recognizing a face; it’s evaluating a person,” Ifill said of the technology.

Facial recognition technology accelerates such judgments, makes them at a larger scale, and does so with a greater level of opacity. Then “we deposit this tech into institutions that are unable to address inequality, and drop it into a period of racial profiling and Stop and Frisk,” Ifill said.

Take New York City’s Gang Database, which currently lists between 17,000 and 20,000 individuals. It’s only one percent white—and it’s not entirely clear how someone is added to the list or removed, which has potentially damaging effects if someone is labeled a gang member when they have no affiliations.

In December 2017 and February 2018, the NAACP Legal Defense and Educational Fund and the Center for Constitutional Rights filed Freedom of Information Law requests to understand how the NYPD builds, maintains, and audits the database, but didn’t receive the documents they requested. In August, the advocacy groups sued the NYPD for failure to disclose its practices.

Why public space is a civil rights issue

The civil-rights implications of facial recognition become particularly clear in public space.

What’s troubling about this tech—and data collection in general—entering the public realm at a rapid clip is that it erodes our constitutional rights to privacy, free speech, freedom of assembly, and due process.

Meanwhile, our current political climate is becoming more hostile to these rights in public space. It’s not too far of a stretch to see how facial recognition could make it worse.

The Trump Administration is actively trying to restrict access to public space that’s popular with protesters. Would fewer people attend demonstrations if they knew their image could end up in a database just by attending? Police departments are already scanning crowds and protests to find and arrest people with outstanding warrants by cross-referencing footage with social media profiles.

Facial recognition adds fuel to the fire by making it impossible to move through public space freely. As Ifill said during the panel: “The Civil Rights movement was about dignity in public space.” Facial recognition has the potential to strip dignity, all behind the scenes, and all surreptitiously.

Ifill draws parallels to the 1958 Supreme Court case NAACP v. Alabama. In an attempt to intimidate Civil Rights activists, Alabama subpoenaed the NAACP’s membership list, which the organization declined to provide. The Supreme Court ruled that identifying who was in the NAACP would infringe on the right to privacy and free association.

In Wales, the human rights organization Liberty sued South Wales police for using facial recognition technology and told the Guardian, “The police’s creeping rollout of facial recognition into our streets and public spaces is a poisonous cocktail—it shows a disregard for democratic scrutiny, an indifference to discrimination, and a rejection of the public’s fundamental rights to privacy and free expression.”

Public space, in its lowest common denominator, is space that’s open and accessible to all people regardless of race, age, income, sex/gender, or economic background.

With facial recognition software in widespread use, you essentially lose privacy as soon as you enter the public realm.

And public space ceases to be public if people can’t freely use it—including using it free of fear that they will be inaccurately profiled by humans or autonomous systems.

Powerful technology with virtually no oversight

This summer, a group of Amazon employees called on Jeff Bezos to stop selling the company’s face-recognition technology—Rekognition, offered through Amazon Web Services—to law enforcement, citing concerns about violating human rights, especially considering the targeting of black activists by law enforcement and ICE’s mistreatment of migrants and refugees. Just this week, Bezos essentially washed his hands of any culpability for civil or human rights violations related to Rekognition, saying that society’s “immune response” will eventually fix biased tech.

This week, an anonymous Amazon employee wrote a Medium post describing how law enforcement around the country—specifically Orlando, Florida, and an unnamed Oregon sheriff’s department—is testing facial recognition software with live video feeds and mugshot databases with virtually no public oversight.

It’s not just law enforcement that’s playing into mass surveillance. Cities are using machine learning and computer vision to “optimize” their operations and redesign their systems. Sidewalk Labs is building a data-driven “smart city” in Toronto. Consumer brands are latching onto facial recognition software to target advertising in public space.

“We only know the tip of the iceberg on how the government is ramping up face surveillance and how companies are using it,” Ozer said.

More public oversight of facial recognition may be coming. This summer, the ACLU asked Congress to institute a moratorium on the use of facial recognition by government agencies. Microsoft also urged Congress to regulate the technology.

New York City created an “Automated Decision Systems Task Force” to develop recommendations on how AI should be regulated and monitored. This summer, 20 experts in civil rights and artificial intelligence authored a letter to the task force with preliminary suggestions about what policy could look like.

In Toronto, Sidewalk Labs is proposing a “Civic Data Trust” to oversee the information it will eventually collect.

Here’s a chilling game: count how many security cameras you pass on your way to work. I tried it yesterday and passed 85 on a trip that included about six blocks of walking in Brooklyn and Manhattan and passing through two subway stations.

Some were private (the ones outside my apartment building and in its hallways, the cameras in front of shops and bodegas, and security cameras in and around my office building).

Thirty-seven were associated with New York City governance (the NYPD cameras on light poles and service vehicles, security cameras in LinkNYC kiosks, cameras in MTA stations).

What was troubling to me is that I don’t know, off the top of my head, what the privacy policies are for any of these companies, how long footage is stored, who has deals to share data, or how secure any of the data is.

Was I being profiled? By leaving my apartment, did my image enter some sort of database? I would have no idea, just like everyone else who ventures outside. (I looked into it and learned that some of the MTA’s cameras feed into the NYPD’s Domain Awareness System, which uses facial recognition.

According to a LinkNYC spokesperson, the company has a provision banning the use of facial recognition technology and does not send footage to the NYPD unless it is subpoenaed.)

In her opening statement to the symposium, AI Now co-founder Kate Crawford said a few words that stuck with me: “AI isn’t tech; it’s power, politics, and culture.” Right now, our political climate is hostile to our civil liberties, it’s using its power to restrict access to public space, and we live in a culture of fear. What good is a public space if people are too anxious to use it?

Source: https://archive.curbed.com/2018/10/19/17989368/facial-recognition-public-space-ai-now

San Francisco banned facial recognition tech. Here’s why other cities should too

An illustration of a face scan. Adapted from Getty Images

San Francisco has become the first US city to ban the use of facial recognition technology by the police and local government agencies.

This is a huge win for those who argue that the tech — which can identify an individual by analyzing their facial features in images, in videos, or in real time — carries risks so serious that they far outweigh any benefits.

The “Stop Secret Surveillance” ordinance, which passed 8-1 in a Tuesday vote by the city’s Board of Supervisors, will also prevent city agencies from adopting any other type of surveillance tech (say, automatic license plate readers) until the public has been given notice and the board has had a chance to vote on it.

The ban on facial recognition tech doesn’t apply to businesses, individuals, or federal agencies like the Transportation Security Administration at San Francisco International Airport. But the limits it places on police are important, especially for marginalized and overpoliced communities.

Although the tech is pretty good at identifying white male faces, because those are the sorts of faces it’s been trained on, it often misidentifies people of color and women. That bias could lead to them being disproportionately held for questioning when law enforcement agencies put the tech to use.

San Francisco’s new ban may inspire other cities to follow suit. Later this month, Oakland, California, will weigh whether to institute its own ban. Washington state and Massachusetts are considering similar measures.

But some argue that outlawing facial recognition tech is throwing the proverbial baby out with the bathwater.

They say the software can help with worthy aims, like finding missing children and elderly adults or catching criminals and terrorists.

Microsoft president Brad Smith has said it would be “cruel” to altogether stop selling the software to government agencies. This camp wants to see the tech regulated, not banned.

Yet there’s good reason to think regulation won’t be enough.

For one thing, the danger of this tech is not well understood by the general public — not least because it’s been marketed to us as convenient (Facebook will tag your friends’ faces for you in pictures), cute (phone apps will let you put funny filters on your face), and cool (the latest iPhone’s Face ID makes it the shiny new must-have gadget).

What’s more, the market for this tech is so lucrative that there are strong financial incentives to keep pushing it into more areas of our lives in the absence of a ban.

AI is also developing so fast that regulators would likely have to play whack-a-mole as they struggle to keep up with evolving forms of facial recognition.

The risks of this tech — including the risk that it will fuel racial discrimination — are so great that there’s a strong argument for implementing a ban like the one San Francisco has passed.

A ban is an extreme measure, yes. But a tool that enables a government to immediately identify us anytime we cross the street is so inherently dangerous that treating it with extreme caution makes sense.

Instead of starting from the assumption that facial recognition is permissible — which is the de facto reality we’ve unwittingly gotten used to as tech companies marketed the software to us unencumbered by legislation — we’d do better to start from the assumption that it’s banned, then carve out rare exceptions for specific cases when it might be warranted.

The case for banning facial recognition tech

Proponents of a ban have put forward a number of arguments for it. First, there’s the well-documented fact that human bias can creep into AI.

Often, this manifests as a problem with the training data that goes into AIs: If designers mostly feed the systems examples of white male faces, and don’t think to diversify their data, the systems won’t learn to properly recognize women and people of color.
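As a hedged illustration of why that matters, bias audits typically disaggregate a system’s error rate by demographic group rather than reporting one overall accuracy figure; a model trained mostly on one group can look fine in aggregate while failing far more often on everyone else. The records below are invented purely for the sketch.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic group, was the prediction correct?)
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

# Disaggregate: compute an error rate per group, not one overall number.
totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += int(ok)

overall = 1 - sum(correct.values()) / sum(totals.values())
print(f"overall error rate: {overall:.0%}")  # a single figure hides the disparity
for group, n in totals.items():
    print(f"{group}: error rate {1 - correct[group] / n:.0%}")  # reveals the gap
```

This per-group breakdown is roughly the shape of the disparities documented in the studies cited just below.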

In 2015, Google’s image recognition system labeled African Americans as “gorillas.” Three years later, Amazon’s Rekognition system matched 28 members of Congress to criminal mug shots. Another study found that three facial recognition systems — from IBM, Microsoft, and China’s Megvii — were more likely to misidentify the gender of dark-skinned people (especially women) than of light-skinned people.

Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a new report from the AI Now Institute explains.

Say the tech gets just as good at identifying black people as it is at identifying white people. That may not actually be a positive change.

Given that the black community is already overpoliced in the US, making black faces more legible to this tech and then giving the tech to police could just exacerbate discrimination.

As Zoé Samudzi wrote at the Daily Beast, “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us.”

Woodrow Hartzog and Evan Selinger, a law professor and a philosophy professor, respectively, argued last year in an important essay that facial recognition tech is inherently damaging to our social fabric.

“The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled,” they wrote.

The worry is that there’ll be a chilling effect on freedom of speech, assembly, and religion.

It’s not hard to imagine some people becoming too nervous to show up at a protest, say, or a mosque, especially given the way law enforcement has already used facial recognition tech. As Recode’s Shirin Ghaffary noted, Baltimore police used it to identify and arrest protesters of Freddie Gray’s death.

Hartzog and Selinger also note that our faces are something we can’t change (at least not without surgery), that they’re central to our identity, and that they’re all too easily captured from a distance (unlike fingerprints or iris scans). If we don’t ban facial recognition before it becomes more entrenched, they argue, “people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

Facial recognition: “the plutonium of AI”?

Luke Stark, a digital media scholar who works for Microsoft Research Montreal, made another argument for a ban in a recent article titled “Facial recognition is the plutonium of AI.”

Comparing software to a radioactive element may seem over-the-top, but Stark insists the analogy is apt.

Plutonium is the biologically toxic element used to make atomic bombs, and just as its toxicity comes from its chemical structure, the danger of facial recognition is ineradicably, structurally embedded within it.

“Facial recognition, simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes,” he writes.

Stark agrees with the pro-ban arguments listed above but says there’s another, even deeper issue with facial ID systems — that “they attach numerical values to the human face at all.” He explains:

Facial recognition technologies and other systems for visually classifying human bodies through data are inevitably and always means by which “race,” as a constructed category, is defined and made visible. Reducing humans into sets of legible, manipulable signs has been a hallmark of racializing scientific and administrative techniques going back several hundred years.

The mere fact of numerically classifying and schematizing human facial features is dangerous, he says, because it enables governments and companies to divide us into different races.

It’s a short leap from having that capability to “finding numerical reasons for construing some groups as subordinate, and then reifying that subordination by wielding the ‘charisma of numbers’ to claim subordination is a ‘natural’ fact.”

In other words, racial categorization too often feeds racial discrimination. This is not a far-off hypothetical but a current reality: China is already using facial recognition to track Uighur Muslims.

As the New York Times reported last month, “The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review.”

This “automated racism” makes it easier for China to round up Uighurs and detain them in internment camps.

Stark, who specifically mentions the case of the Uighurs, concludes that the risks of this tech vastly outweigh the benefits.

He does concede that there might be very rare use cases where the tech could be allowed under a strong regulatory scheme — for example, as an accessibility tool for the visually impaired.

But, he argues, we need to start with the assumption that the tech is banned and make exceptions to that rule, not proceed as if the tech is the rule and regulation is the exception.

“To avoid the social toxicity and racial discrimination it will bring,” he writes, “facial recognition technologies need to be understood for what they are: nuclear-level threats to be handled with extraordinary care.”

Just as several nations came together to create the Non-Proliferation Treaty in the 1960s to curb the spread of nuclear weapons, San Francisco may now serve as a beacon to other cities, showing that it’s possible to say no to the spread of a risky new technology that would make us identifiable and surveillable anywhere we go.

We may have been largely hypnotized by facial recognition’s seeming convenience, cuteness, and coolness when it was first introduced to us. But it’s not too late to wake up.

Source: https://www.vox.com/future-perfect/2019/5/16/18625137/ai-facial-recognition-ban-san-francisco-surveillance
