Artificial Intelligence-Generated Faces Are Seen as More Trustworthy Than Real Faces

Ray Williams
Nov 30, 2022 · 3 min read

Researchers report that people cannot reliably tell the difference between a face produced by artificial intelligence (AI) using StyleGAN2 and a real face, and they are urging safeguards against “deep fakes.”

StyleGAN2 is a generative adversarial network (GAN) introduced by Nvidia researchers in December 2019.
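For a sense of how such faces are generated in practice, here is a minimal sketch of sampling a single synthetic face from a pretrained StyleGAN2 generator. It assumes NVIDIA’s publicly released stylegan2-ada-pytorch code is on the Python path (its classes are needed to unpickle the checkpoint), that ffhq.pkl is a locally downloaded FFHQ generator checkpoint (the filename is a placeholder), and that a CUDA GPU is available.

```python
# Minimal sketch: draw one synthetic face from a pretrained StyleGAN2 generator.
# Assumes NVIDIA's stylegan2-ada-pytorch repository is importable and that
# 'ffhq.pkl' is a downloaded FFHQ checkpoint (placeholder filename).
import pickle

import torch

with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()   # exponential-moving-average generator

z = torch.randn([1, G.z_dim]).cuda()      # random latent code
c = None                                  # class labels (unused for FFHQ faces)
img = G(z, c)                             # NCHW float tensor, values roughly in [-1, 1]

# Convert to an 8-bit RGB array for saving or display.
img = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
face = img[0].permute(1, 2, 0).cpu().numpy()  # HWC, uint8
```

Each random latent code z yields a different, entirely fictitious face.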

Artificially generated text, audio, images, and video have already been used for propaganda, fraud, and so-called “revenge porn.”

In experiments conducted by Dr. Sophie Nightingale of Lancaster University and Professor Hany Farid of the University of California, Berkeley, participants were asked to distinguish real faces from state-of-the-art StyleGAN2-synthesized faces and to rate the degree of trust the faces evoked.

Their research was published in the Proceedings of the National Academy of Sciences.

According to the findings, artificially created faces are not only highly photorealistic; they are practically indistinguishable from real faces and are even perceived as more trustworthy.

The purpose of the study was to determine whether people’s judgments of trustworthiness could help them recognize fake photographs.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” Nightingale said.

The ramifications of people being unable to recognize AI-generated visuals are discussed by the researchers: “Perhaps most pernicious is the consequence that in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question.”

· In the first experiment, 315 participants each classified 128 faces, drawn from a set of 800, as either real or synthetic. Their accuracy was 48%, close to the chance performance of 50%.

· In a second experiment, 219 new participants were given training and trial-by-trial feedback while classifying 128 faces drawn from the same set of 800. Even after training, accuracy rose only to 59% (a rough sanity check of these accuracy figures is sketched below).
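As a back-of-the-envelope illustration (not the authors’ actual statistical analysis), a simple binomial test shows why roughly 48% correct on a two-choice task is indistinguishable from guessing. The 128-face count comes from the article; treating it as a single participant’s trials is a simplifying assumption.

```python
# Hypothetical sanity check, not the paper's analysis: is 48% correct over
# 128 real-vs-synthetic decisions distinguishable from coin-flip guessing?
from scipy.stats import binomtest

n_trials = 128                              # faces classified (from the article)
observed_correct = round(0.48 * n_trials)   # about 61 correct answers

result = binomtest(observed_correct, n_trials, p=0.5)
print(f"{observed_correct}/{n_trials} correct, p = {result.pvalue:.3f}")
# A large p-value means the result is consistent with chance (50%);
# rerunning with 59% correct (after training) yields a noticeably smaller p-value.
```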

“Faces offer a rich source of information, with milliseconds of exposure sufficient to draw implicit conclusions about personal qualities such as trustworthiness. We wondered whether synthetic faces elicit the same judgments of trustworthiness. If not, then a person’s judgment of trustworthiness might be able to tell a real face from a fake one,” the researchers stated.

In a third study, 223 participants rated the trustworthiness of 128 faces, again drawn from the original set of 800, on a scale from 1 (very untrustworthy) to 7 (very trustworthy).

The average trustworthiness rating for the synthetic faces was 7.7% higher than for the real faces, a statistically significant difference.

The researchers reported: “Perhaps most intriguingly, we discover that fake faces are rated as more trustworthy than actual faces.”

· Black faces were rated as more trustworthy than South Asian faces, but otherwise there was no effect of race.

· Women were rated as significantly more trustworthy than men.

The researchers also reported: “A smiling face is more likely to be rated as trustworthy, but 65.5% of the real faces and 58.8% of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy.” They hypothesize that because synthetic faces resemble average faces, which themselves tend to be judged more trustworthy, they may inspire more trust.

To protect the public from “deep fakes”, they also proposed guidelines for creating and distributing synthesized images.

“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”
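To make the watermarking idea concrete, here is a toy sketch that hides and later recovers a short identifier in the least-significant bits of an image’s pixels. Everything in it is invented for illustration, and this naive scheme is nothing like the robust, tamper-resistant watermarks the researchers have in mind, which would need to survive compression, cropping, and editing; it only shows the general shape of a downstream check for synthetic content.

```python
# Toy illustration only: embed and recover a short identifier in the
# least-significant bits of an image. A robust watermark of the kind the
# researchers propose would survive compression and editing; this does not.
import numpy as np

TAG = b"SYNTH"  # hypothetical identifier a synthesis tool might embed

def embed(img: np.ndarray, tag: bytes = TAG) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = img.reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite lowest bits
    return flat.reshape(img.shape)

def extract(img: np.ndarray, n_bytes: int = len(TAG)) -> bytes:
    bits = img.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)  # stand-in image
print(extract(embed(image)))  # b'SYNTH' -> a downstream checker could flag this
```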

Written by Ray Williams

Author and Executive Coach, helping people live better lives and serve others.