Why AI-Generated Photos of Faces Can Spread Distrust

Ray Williams
6 min read · Feb 15, 2023

AI-generated images are spreading through our culture, and they can undermine our ability to trust others.

Researchers report that people cannot reliably tell the difference between a face produced by artificial intelligence (AI) using StyleGAN2 and a real face, and they are urging safeguards against “deep fakes.”

StyleGAN2 is a generative adversarial network (GAN) introduced by Nvidia researchers in December 2019.
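Setting StyleGAN2's architecture aside, the core GAN idea is a two-player game: a generator learns to produce samples that a discriminator cannot distinguish from real data. The sketch below is a deliberately toy, hypothetical example, 1-D numbers rather than face images, but it implements the same adversarial objective: training ends with the discriminator's accuracy near chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: 1-D samples from N(3, 1) stand in for real photos.
REAL_MEAN = 3.0

# Generator: shifts noise z ~ N(0, 1) by a learnable offset theta.
theta = 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + b).
w, b = 0.1, 0.0

lr, batch = 0.05, 512
for step in range(4000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = theta + rng.normal(0.0, 1.0, batch)

    # Discriminator update: minimize -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    gs_real = d_real - 1.0          # d(-log D)/d(pre-activation)
    gs_fake = d_fake                # d(-log(1 - D))/d(pre-activation)
    w -= lr * (np.mean(gs_real * real) + np.mean(gs_fake * fake))
    b -= lr * (np.mean(gs_real) + np.mean(gs_fake))

    # Generator update: minimize -log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean((d_fake - 1.0) * w)

# After training, fakes should be hard to tell from real samples.
real = rng.normal(REAL_MEAN, 1.0, 2000)
fake = theta + rng.normal(0.0, 1.0, 2000)
acc = 0.5 * (np.mean(sigmoid(w * real + b) > 0.5)
             + np.mean(sigmoid(w * fake + b) <= 0.5))
print(f"learned shift: {theta:.2f}  discriminator accuracy: {acc:.2f}")
```

Real systems like StyleGAN2 replace the scalar generator and logistic discriminator with deep convolutional networks, but the minimax training loop has this same shape, and "discriminator accuracy near chance" is the same criterion the experiments below probe with human judges.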

Text, audio, image, and video that have been artificially generated have already been used for propaganda, fraud, and so-called “revenge porn.”

In experiments conducted by Dr. Sophie Nightingale of Lancaster University and Professor Hany Farid of the University of California, Berkeley, participants were asked to differentiate between realistic faces and state-of-the-art StyleGAN2 synthesized faces, as well as the degree of trust the faces evoked.

Their research was published in the Proceedings of the National Academy of Sciences.

According to the findings, artificially created faces are not only extremely photorealistic but practically indistinguishable from actual faces, and they are even perceived as more trustworthy.

The purpose of the study was to determine whether people’s judgments of reliability could aid in the recognition of fake photographs. “Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” Nightingale said.

The ramifications of people being unable to recognize AI-generated visuals are discussed by the researchers: “Perhaps most pernicious is the consequence that in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question.”

In the first trial, 315 participants each classified 128 faces, drawn from a set of 800, as either real or synthetic; their accuracy was close to chance.

In a second experiment, a new group of participants classified another 128 faces drawn from the same set of 800, but even with training their accuracy rose only to 59%.

“Faces offer a rich amount of information, with milliseconds of exposure being sufficient to draw implicit conclusions about personal qualities like trustworthiness. We questioned if artificial faces elicit the same judgments of credibility. If not, then a person’s judgement of credibility may be able to tell a real face from a fake one,” the researchers stated.

In a third study, 223 participants rated the trustworthiness of 128 faces, drawn from the same collection of 800, on a scale from 1 (very untrustworthy) to 7 (very trustworthy).

The average rating for synthetic faces was 7.7% higher than the average rating for real faces, a small but statistically significant difference.
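For intuition, a 7.7% relative gap in mean ratings on a 7-point scale corresponds to a fairly small absolute difference. The values below are purely illustrative, chosen to produce a gap of roughly that size; they are not the figures reported in the paper.

```python
# Hypothetical mean trustworthiness ratings on a 1-7 scale,
# chosen only to illustrate what a ~7.7% relative gap looks like.
real_mean = 4.50
synthetic_mean = 4.85

# Relative gap: how much higher the synthetic mean is than the real mean.
relative_gap = (synthetic_mean - real_mean) / real_mean
print(f"relative gap: {relative_gap:.1%}")
```

On these illustrative numbers the absolute difference is only about a third of a point on the 7-point scale, which is why a large sample is needed for such an effect to reach statistical significance.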

The researchers reported: “Perhaps most intriguingly, we discover that fake faces are more reliable than actual faces.”

Black faces were rated as more trustworthy than South Asian faces, but otherwise there was no effect across races. Women were rated as significantly more trustworthy than men.

The researchers also reported that “A smiling face is more likely to be rated as trustworthy, but 65.5% of the real faces and 58.8% of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy.” The researchers hypothesize that because synthetic faces resemble average faces, which are also seen as more reliable, they may be more trusted.

To protect the public from “deep fakes”, they also proposed guidelines for creating and distributing synthesized images.

“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”
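The robust, synthesis-network-level watermarks the researchers have in mind are an open engineering problem; the toy sketch below only illustrates the general embed-and-verify workflow, using a naive least-significant-bit mark (trivially destroyed by compression or resizing, so not a real safeguard). All names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 8-bit grayscale "image" and a 64-bit watermark payload.
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
payload = rng.integers(0, 2, size=64, dtype=np.uint8)

def embed(img, bits):
    """Write bits into the least significant bit of the first len(bits) pixels."""
    out = img.copy().ravel()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out.reshape(img.shape)

def extract(img, n):
    """Read n watermark bits back out of the least significant bits."""
    return (img.ravel()[:n] & 1).astype(np.uint8)

marked = embed(image, payload)
recovered = extract(marked, len(payload))
print("payload intact:", np.array_equal(recovered, payload))
```

A production watermark would need to survive cropping, re-encoding, and deliberate removal attempts, which is exactly why the authors argue it should be built into the synthesis networks themselves rather than bolted on afterward.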

A Second Study on Our Trust of Faces

Manos Tsakiris, Professor of Psychology at Royal Holloway, University of London, conducted a study showing that faces created by AI can now look more real than photos of genuine faces.

He says that “even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.”

He reported on a story in which a fake LinkedIn profile with an AI profile picture successfully connected with US officials and other influential individuals on the networking platform. He says that “counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.”

While these deep fakes are not new, Tsakiris warns that they are becoming widespread in society and are being used extensively in business and on social media.

Also, he warns, the images are being used for malicious purposes, such as political propaganda, espionage and information warfare.

Tsakiris reports on his study published in the journal Science, which reports that our inability to distinguish these artificial faces from the real thing has implications for our online behaviour. He says, “Our research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.”

He goes on to assert that people perceived AI faces to be even more real-looking than genuine photos of actual people’s faces. While it’s not yet clear why this is, this finding does highlight recent advances in the technology used to generate artificial images.

Tsakiris and his co-researchers “also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real. Less attractive faces might be considered more typical and the typical face may be used as a reference against which all faces are evaluated.”

Tsakiris sounds a warning bell: “Perceiving these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people — a concept known as ‘social trust’. We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if they were artificially generated. We found that trust was eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust, overall — independently of whether the faces were real or not.”

Researcher Timothy R. Levine published research in the Journal of Language and Social Psychology showing that we have a default assumption that other people are truthful and trustworthy. The spread of fake profiles and other AI-generated content online therefore raises the question of how this can alter our “truth default” state, eventually eroding social trust.

Tsakiris argues that “People must be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or a large number of followers, and being aware of the potential for deep fake technology to be used for nefarious purposes.”

So while AI-generated images may offer entertainment and business applications, they also provide a platform for nefarious uses and can undermine social cohesion.
