Fake faces made by artificial intelligence appear more trustworthy to humans than the faces of genuine people.
Artificial intelligence and deep learning, a machine-learning technique in which algorithms learn from large sets of examples, are used to generate images of people who appear authentic, a technology known as a ‘deepfake.’
In one experiment, participants were asked to classify faces created by the StyleGAN2 algorithm as real or synthetic. Their accuracy was 48 per cent, slightly worse than flipping a coin.
In a second experiment, participants were trained to spot deepfakes using the same data set, but their accuracy improved only to 59 per cent.
Most people cannot tell whether they are watching a deepfake video, according to research from the University of Oxford, Brown University, and the Royal Society, even when they are warned that the content could have been digitally manipulated.
The researchers then looked at whether judgements of trustworthiness could help people distinguish real faces from synthetic ones.
“Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness. We wondered if synthetic faces activate the same judgements of trustworthiness,” Dr Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, wrote in Proceedings of the National Academy of Sciences.
“If not, then a perception of trustworthiness could help distinguish real from synthetic faces.”
Unfortunately, synthetic faces were rated as 7.7 per cent more trustworthy on average than real faces, with women rated as more trustworthy than men.
“A smiling face is more likely to be rated as trustworthy, but 65.5 per cent of the real faces and 58.8 per cent of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” the researchers wrote.
The researchers hypothesise that synthetic faces appear more trustworthy because they resemble average faces, which people generally perceive as more trustworthy.
The researchers suggested that criteria for the creation and distribution of deepfakes be established, including “incorporating robust watermarks” and reconsidering the “often-laissez-faire approach to the public and unfettered sharing of code for anybody to include into any application.”
A separate report from University College London suggested that deepfakes could be the most dangerous form of crime involving artificial intelligence.
“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity,” said Dr Matthew Caldwell.