
Image: mock-up of the printable eyeglasses

November 30, 2016

CyLab researchers spoof state-of-the-art facial recognition algorithms with printable eyeglasses

Want to fool facial recognition algorithms into thinking you’re Russell Crowe? Just ask a team of CyLab researchers to print you a pair of paper eyeglasses.

In their paper, titled “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition,” the researchers describe a method of spoofing the most advanced facial recognition systems using colorful eyeglasses that can be printed on a regular inkjet printer. The team presented the work last month at the ACM Conference on Computer and Communications Security (CCS) in Vienna, Austria.

“We had two goals,” says CyLab researcher Mahmood Sharif, a Ph.D. student in the Department of Electrical and Computer Engineering (ECE) and lead author on the study. “We wanted to tell if we were able to do impersonation, and we wanted to see if we could dodge recognition.”

Putting himself in the shoes of an adversary, Sharif explains that successful impersonation could give an attacker access to resources they are not authorized to use. The second goal, dodging recognition entirely, would be useful for someone seeking more privacy.

“This is a step towards building better algorithms, not just for facial recognition but for neural networks,” says Lujo Bauer, a professor of ECE, a faculty member in the Institute for Software Research, and a co-author of the study. “We’re trying to figure out how to tune and tweak neural networks so that they’re robust to these types of adversarial examples.”

Starting with an original image of, say, actress Reese Witherspoon’s face, the team’s system used a neural network to iteratively change an eyeglasses-shaped region around Witherspoon’s eyes until the facial recognition algorithm misclassified her as Russell Crowe.

“Suppose we start with gray eyeglasses,” Sharif explains. “The algorithm asks, how do I change the color of these glasses such that, in the next iteration, the face-recognition system will think it’s more likely that the image is of Russell Crowe?”
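The iterative recoloring Sharif describes is, in essence, gradient-based optimization restricted to an eyeglasses-shaped region of the image. The sketch below illustrates that idea in PyTorch; the classifier `model`, the binary `glasses_mask`, and the `target_class` index are hypothetical placeholders rather than the authors' actual code, and the printability and robustness constraints from the paper are omitted.

```python
import torch
import torch.nn.functional as F

def spoof_with_glasses(model, image, glasses_mask, target_class,
                       steps=300, step_size=1.0 / 255):
    """Iteratively recolor an eyeglasses-shaped region so the face
    classifier assigns higher probability to the target identity.

    image:        (1, 3, H, W) tensor with values in [0, 1]
    glasses_mask: (1, 1, H, W) tensor, 1 inside the glasses shape, 0 elsewhere
    target_class: integer index of the identity to impersonate
    """
    model.eval()  # fixed classifier; we only optimize the glasses pixels

    # Start from plain gray glasses, as in the example Sharif gives.
    glasses = torch.full_like(image, 0.5)
    glasses.requires_grad_(True)
    target = torch.tensor([target_class])

    for _ in range(steps):
        # Keep the original face outside the mask; use the current
        # glasses colors inside it.
        composite = image * (1 - glasses_mask) + glasses * glasses_mask

        # Lower loss means the classifier is more convinced the
        # composite image shows the target identity.
        loss = F.cross_entropy(model(composite), target)
        loss.backward()

        with torch.no_grad():
            # Step the glasses colors in the direction that reduces the
            # loss, touching only pixels inside the glasses region.
            glasses -= step_size * glasses.grad.sign() * glasses_mask
            glasses.clamp_(0.0, 1.0)
        glasses.grad.zero_()

    return (image * (1 - glasses_mask) + glasses * glasses_mask).detach()
```

The published attack additionally penalizes colors an inkjet printer cannot reproduce and favors smooth color transitions so the pattern survives printing and real-world wear; those terms are left out of this sketch for brevity.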

The team successfully spoofed state-of-the-art facial recognition algorithms into classifying images of the researchers themselves as celebrities such as Carson Daly and Milla Jovovich. In other instances, they were able to dodge recognition entirely.

Sharif says that these spoofs were all performed to help improve future facial recognition models.

“Turns out, in an adversarial setting, these facial recognition algorithms can be fooled in ways that humans cannot be fooled,” Sharif says. “So the question is, how do we close this gap? How do we make machine learning models or artificial intelligence models more similar to humans in the sense that they aren’t fooled by very minimal modifications to the input?”

Other authors of the study included Societal Computing Ph.D. student Sruti Bhagavatula and University of North Carolina professor Michael K. Reiter.
