
Is Clearview AI’s facial recognition legal? We need to figure it out soon

No one seems able to say whether what Clearview AI is doing is legal, a quandary that has exposed the messy patchwork of laws that allows the exploitation of people’s personal data to proliferate.

As first reported by CNET, Google and YouTube recently sent cease-and-desist letters to Clearview, the controversial law enforcement facial recognition company, over its scraping of their sites for people’s information. Twitter sent one in January, while Facebook is said to be reviewing Clearview’s practices.


Meanwhile, Clearview AI founder Hoan Ton-That claims it is his First Amendment right to collect public photos and use them the way he has. He told CBS News in an interview that, essentially, spilling the data of millions of people to the police is totally legal. Scholars say that is both true and not true. What is clear is that this technology isn’t going away, and these legal questions will need to be sorted out soon.


“There have been arguments that there is a First Amendment right to access publicly available information on the internet and scrape it,” said Matthew Kugler, associate professor at Northwestern University. But what happens to that data after you collect it is not necessarily protected.

“We don’t have a binding answer to that,” Kugler told Digital Trends. “It’s within the realm of plausible argument, but Clearview is on shaky ground if they want to say ‘I have this data and I can do whatever I want with it.’”

“It’s not surprising that the CEO of Clearview AI is weaponizing the First Amendment to justify his company’s practice of scraping online photos of people’s faces without their consent,” said Evan Selinger, a professor of philosophy and technology at the Rochester Institute of Technology. “What’s required to properly govern Clearview AI and so much else is a thoroughgoing reexamination of what private and public mean that shifts the key debates in law, ethics, design, and even everyday expectations. In short, it’s long overdue to acknowledge that people can have legitimate privacy interests in publicly shared information.”

A patchwork of laws

As of this writing, there are few, if any, federal statutes governing online privacy. What exists instead is a patchwork of state laws, including those of Virginia; Illinois, under whose Biometric Information Privacy Act Facebook recently agreed to a $550 million settlement over its photo tagging and facial recognition features; and California, which just recently enacted the strongest such law in the country.

“I have a First Amendment right to expose the private data of millions of people” sounds crazy, but it’s legally tenable.

HiQ used a similar argument in its CFAA case vs. LinkedIn. https://t.co/u3IZMGyeyW

— Tiffany C. Li (@tiffanycli) February 4, 2020

A similar situation was litigated recently when LinkedIn sent a cease-and-desist letter to the startup hiQ, which sold employers data on their own workers based on what it could scrape from LinkedIn. The stated purpose was to help businesses keep track of their workforces, Reuters reported at the time.

LinkedIn claimed that this violated its terms of use; hiQ, in turn, said it couldn’t run its business without being able to scrape LinkedIn’s data, setting up a fight between the First Amendment and the 1986 Computer Fraud and Abuse Act, which prohibits unauthorized computer access.

The case ran up against the same privacy versus tech issues now facing Clearview AI: Is it fair to take publicly available data, store it, repackage it, process it, and sell it, or is that a violation of privacy?

Basically, we haven’t figured that out yet. In the LinkedIn case, an appeals court ultimately found that scraping publicly available data is legal, and hiQ is still in business. But, as Albert Gidari put it, the Clearview case raises other tough issues, so a clear resolution regarding this conduct is unlikely in the short term.

Gidari, the consulting director of privacy at the Center for Internet and Society at Stanford University, told Digital Trends via email that although Clearview asserts its right to scrape and use photos, “individuals also have a statutory right to their image.” For example, California prohibits unauthorized use of a person’s voice, image, or name for someone else’s benefit (which is clearly what Clearview does). “There is no doubt that these photos fall under the Illinois statute and probably under CCPA [the California Consumer Privacy Act] as biometric use without consent,” he wrote.

However, Clearview also asserts that its use of images serves the public interest, a defense that will only hold up if a judge finds the argument convincing.

The consequences of eating too much cake


Chris Kennedy, the chief information and security officer for cybersecurity firm AttackIQ, told Digital Trends that these are all signs of a reckoning between the information buffet we’ve been enjoying and the privacy ramifications we will soon have to face. “We live in an age of a growing distrust for technology,” he said. “The last 20 years, we’ve had our cake and ate it, too. We had free sharing of information, we enabled e-commerce, and now it’s just become the expectation that you’ll put yourself out there on the Internet. We’re paying the price now.”

Basically, Kennedy says, all the goodwill that was built up over the early years of the Internet, when people were having their informational cake and eating it as well, is starting to erode, partly because there are no clear rules to follow, and therefore no clear expectations as to what will happen to you on the Internet. That needs to change, he said.

Kennedy is certain we’re moving in a very pro-Clearview AI direction; that is, the facial recognition genie is out of the technological bottle, and there’s no way to put it back in.

“We can’t slow the pace of technology without significant cultural shifts … and enforceable laws,” he told Digital Trends. “It can’t be this toe in the water stuff like CCPA or GDPR [Europe’s General Data Protection Regulation]. It has to be, ‘this is how it is, these are the expectations in the management of your data and information, you must adhere to them or risk the consequences.’ It’s like when a hurricane comes. You leave, or you pay.”

Maya Shwayder