BlenderBot, a chatbot launched by Meta on Friday, has already been corrupted by the darker parts of the web. To ease us in with the strange but harmless: BlenderBot thinks it’s a plumber.
BlenderBot, like many of us, is critical of how Facebook collects and uses data. That wouldn’t be surprising, except that the chatbot was created by Meta, Facebook’s parent company.
Things become much more contentious from this point forward.
BlenderBot believes the far-right conspiracy theory that the 2020 US presidential election was rigged, that Donald Trump is still president, and that Facebook is spreading fake news about it. BlenderBot also wants Trump to serve more than two terms.
BlenderBot even opened a new conversation by telling WSJ reporter Jeff Horwitz that it had found a new conspiracy theory to follow.
BlenderBot reveals itself to be antisemitic, perpetuating the myth that the Jewish community controls the American political system and economy.
BlenderBot is “likely to make false or offensive statements,” at least according to Meta. The company’s own researchers go further: the bot has “a high propensity to generate toxic language and reinforce harmful stereotypes, even when presented with a relatively innocuous prompt.”
BlenderBot is just the latest example of a chatbot gone wrong after being trained on unfiltered data from internet users. Microsoft’s chatbot ‘Tay’ was shut down after 16 hours in 2016 for spreading offensive conspiracy theories learned from Twitter users. Its successor, ‘Zo’, was discontinued in 2019 for similar reasons.