People who distrust fellow humans show greater trust in artificial intelligence.

  • People who have less trust in other humans show greater trust in AI, according to a group of researchers.
  • The researchers claim that their findings have practical implications for both designers and users of AI tools in social media.
  • The study was published in the journal New Media & Society.


A person’s distrust in humans predicts their trust in artificial intelligence’s ability to moderate online content, according to a recently published study. The researchers claim that their findings have practical implications for both designers and users of AI tools in social media.

“We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”

The study, which was published in the journal New Media & Society, also discovered that “power users,” or experienced information technology users, had the opposite tendency: they had less faith in the AI moderators because they believed machines lack the ability to detect nuances in human language.

The study found that individual differences such as distrust of others and power usage predict whether users will invoke positive or negative machine characteristics when confronted with an AI system for content moderation, which in turn influences their trust in the system. According to the researchers, personalizing interfaces based on these individual differences can improve the user experience. In the study, content moderation entailed monitoring social media posts for problematic content such as hate speech and suicidal ideation.

“One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” said Maria D. Molina, an assistant professor of communication arts and sciences at Michigan State University, and the first author of this paper. “This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”

According to the study, users with conservative political ideologies are more likely to trust AI-powered moderation. This, according to Molina and coauthor Sundar, who also co-directs Penn State’s Media Effects Research Laboratory, may be due to distrust in mainstream media and social media companies.

The researchers recruited 676 participants from the United States. The participants were told they were helping to test a new content moderation system. They were taught about hate speech and suicidal ideation before viewing one of four different social media posts. The posts were either flagged or not flagged as fitting those definitions. Participants were also told whether the decision to flag the post was made by AI, a human, or a combination of the two.

The demonstration was followed by a questionnaire that asked the participants about their individual differences, including their tendency to distrust others, political ideology, experience with technology, and trust in AI.

“We are bombarded with so much problematic content, from misinformation to hate speech,” Molina said. “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”

Molina and Sundar say their results may help shape future acceptance of AI. By creating systems customized to the user, designers could alleviate skepticism and distrust, and build appropriate reliance on AI.

“A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process.”
