Facebook’s army of malicious bots is being trained to research anti-spam methods

Facebook may have some well-meaning efforts in place, but some bad actors still manage to get through its safeguards and policies. The social media platform is now upping its guard and experimenting with a new way to strengthen its anti-spam walls and preempt bad behaviour that could breach its safeguards – an army of malicious bots.

The platform is developing a new system of bots that can simulate bad behaviour and stress-test the platform to unearth flaws and loopholes. These automated bots are trained to behave like real people, using the behavioural data Facebook has acquired from its two billion-plus users.

To make sure this experiment does not interfere with real-life usage, Facebook has built a parallel version of the platform itself where these bots are allowed to run loose. In this parallel Facebook-verse the bots can send messages to each other, comment on posts, send friend requests, visit pages and so on. Most importantly, they can simulate extreme scenarios, such as trying to sell drugs or guns, to test how well Facebook’s algorithms prevent them – an idea sketched below.
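To get an intuition for the idea, here is a minimal, purely illustrative Python sketch of bots attempting a mix of normal and prohibited actions against a stand-in detector. The class names, action types and detection rate are invented for illustration; Facebook’s actual system runs the bots against the platform’s real code and enforcement systems, which this toy does not reproduce.

```python
import random
from dataclasses import dataclass

# Hypothetical action types a simulated bot might attempt.
ALLOWED_ACTIONS = ["send_message", "comment", "friend_request", "visit_page"]
PROHIBITED_ACTIONS = ["offer_drugs", "offer_weapons"]

@dataclass
class Action:
    bot_id: int
    kind: str

class BadActorBot:
    """Toy bot that mostly behaves normally but sometimes attempts a violation."""
    def __init__(self, bot_id: int, violation_rate: float = 0.1):
        self.bot_id = bot_id
        self.violation_rate = violation_rate

    def act(self) -> Action:
        if random.random() < self.violation_rate:
            return Action(self.bot_id, random.choice(PROHIBITED_ACTIONS))
        return Action(self.bot_id, random.choice(ALLOWED_ACTIONS))

def naive_detector(action: Action) -> bool:
    """Stand-in for enforcement logic: flags prohibited actions, imperfectly."""
    if action.kind in PROHIBITED_ACTIONS:
        return random.random() < 0.8  # assumed 80% catch rate, for illustration only
    return False

def run_simulation(num_bots: int = 100, steps: int = 50) -> None:
    bots = [BadActorBot(i) for i in range(num_bots)]
    attempted = caught = 0
    for _ in range(steps):
        for bot in bots:
            action = bot.act()
            if action.kind in PROHIBITED_ACTIONS:
                attempted += 1
                if naive_detector(action):
                    caught += 1
    # Missed attempts point to the kind of loopholes the real simulation hunts for.
    print(f"Prohibited attempts: {attempted}, caught: {caught}, missed: {attempted - caught}")

if __name__ == "__main__":
    random.seed(42)
    run_simulation()
```

Running the sketch tallies how many prohibited actions slip past the detector; in the real system, those misses are the loopholes engineers would investigate and close.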

Facebook says this new system can host thousands or even millions of bots, and since it runs on the same code the platform’s users are actually using, the actions the bots take are faithful to the effects that would be seen in real life. Mark Harman, the person leading the project, wrote in a blog post that the project is currently in a research-only stage, and the hope is that it will eventually help Facebook improve its services and spot integrity issues before they affect people on the platform.