Twitter announced on Wednesday that it is testing a new tool that automatically blocks abusive accounts, as the US social media platform faces increased pressure to shield its users from online abuse.
Users who enable the new Safety Mode will have their “mentions” filtered for seven days, hiding messages flagged as likely to contain hate speech or insults. According to Twitter, the feature will first be tested by a small group of English-speaking users, with priority given to “marginalized populations and female journalists,” who are frequent targets of abuse.
“We want to do more to reduce the burden on people dealing with unwelcome interactions,” Twitter said in a statement, adding that the platform is committed to hosting “healthy conversations.”
Twitter, like other social media companies, lets users report offensive posts, such as racist, homophobic, and sexist statements. However, activists have long argued that flaws in Twitter’s rules allow violent and racist remarks to remain online in many cases.