How AI helped WhatsApp ensure user safety

Meta-owned WhatsApp banned over 1.9 million accounts in May alone, according to its twelfth User Safety Monthly Report, which was published on Friday in compliance with the IT Rules 2021.

A WhatsApp spokesperson said, “Over the years, we have consistently invested in Artificial Intelligence and other state of the art technology, data scientists and experts, and in processes, in order to keep our users safe on our platform. In accordance with the IT Rules 2021, we’ve published our report for the month of May 2022. This user-safety report contains details of the user complaints received and the corresponding action taken by WhatsApp, as well as WhatsApp’s own preventive actions to combat abuse on our platform.”

According to the user safety report, WhatsApp received 149 user complaints in May on which no action was taken, 303 ban appeals (23 acted upon), 29 other support-related reports (one acted upon), 34 product-related reports (none acted upon), and 13 safety-related reports (none acted upon).

In total, 528 reports were received, and 24 of them were acted upon. Abuse detection occurs at three points in an account's life cycle: at registration, during messaging, and in response to negative feedback that WhatsApp receives in the form of user reports and blocks. A team of analysts augments these automated systems by examining edge cases and helping to improve their effectiveness over time.
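The report does not describe the underlying systems in detail, but the three-checkpoint model can be illustrated with a minimal sketch. The following Python example is purely hypothetical: the Account fields, thresholds, and the evaluate routing are assumptions used only to show how signals gathered at registration, during messaging, and from negative user feedback might be combined, with borderline cases routed to human review rather than banned automatically.

```python
# Illustrative sketch only: a simplified three-checkpoint abuse-detection flow.
# All names, signals, and thresholds are hypothetical, not WhatsApp's actual system.
from dataclasses import dataclass


@dataclass
class Account:
    phone_number: str
    messages_sent_last_hour: int = 0
    distinct_recipients_last_hour: int = 0
    user_reports: int = 0
    blocks_by_others: int = 0


def check_at_registration(account: Account, suspicious_prefixes: set[str]) -> bool:
    """Checkpoint 1: flag suspicious sign-ups (hypothetical heuristic)."""
    return any(account.phone_number.startswith(p) for p in suspicious_prefixes)


def check_during_messaging(account: Account) -> bool:
    """Checkpoint 2: flag bulk or automated messaging behaviour (hypothetical thresholds)."""
    return (account.messages_sent_last_hour > 500
            or account.distinct_recipients_last_hour > 200)


def check_negative_feedback(account: Account) -> bool:
    """Checkpoint 3: flag accounts that attract many user reports or blocks."""
    return account.user_reports >= 5 or account.blocks_by_others >= 10


def evaluate(account: Account, suspicious_prefixes: set[str]) -> str:
    """Combine the three checkpoints; a single weak signal goes to analysts
    for review instead of triggering an automatic ban."""
    signals = [
        check_at_registration(account, suspicious_prefixes),
        check_during_messaging(account),
        check_negative_feedback(account),
    ]
    if sum(signals) >= 2:
        return "ban"
    if any(signals):
        return "human_review"  # analysts examine edge cases
    return "allow"


if __name__ == "__main__":
    suspect = Account("+990001112222", messages_sent_last_hour=800, user_reports=7)
    print(evaluate(suspect, suspicious_prefixes={"+99000"}))  # -> "ban"
```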

In all, 1,910,000 accounts were banned.

Complaints can also be sent to the India Grievance Officer by post. Information on how to contact the Grievance Officer and WhatsApp in India is available on its website.
