Clearview AI and You

Featuring Cato Institute

Is there a picture of you on social media?

If so, you may be among the trove of people whose images are part of a massive facial-recognition database. A company called Clearview AI collected billions of publicly available images from websites to build a facial image search engine. Clearview's clients span public and private organizations, including the FBI, local police departments, Walmart, Macy's, and even the NBA.

Too often, police across the country use surveillance technology without first informing the public. Although law enforcement agencies have used Clearview’s technology to investigate crimes, it could also allow them to identify protesters, journalists, and people simply engaged in legal activity.

A liberal society requires that citizens and residents have spheres of life that remain private. Facial recognition databases used by law enforcement should include only data on people with outstanding warrants for violent and other serious crimes. Local officials should halt the use of real-time identification and be transparent about the surveillance technology they plan to deploy.

Facial recognition can be useful, but without the right protections and restrictions it's a surveillance nightmare.


RELATED

PODCAST: Clearview and the Cops

A tech company promises to match photos of unknown people to their presence on the web, for private clients and police alike. What does that mean for privacy, and for how police do their jobs?


Facial Recognition Technology Is Getting out of Control

Technology may be moving faster than the law, but that’s not a reason for officials to resign themselves to an inevitable world where the abolition of privacy is the price of a social life.