Facebook is the largest social media platform in the world, with over 2.7 billion users as of the second quarter of 2020. At this scale, artificial intelligence has become essential for sorting content and handling moderation, and Facebook is now planning to introduce machine learning to manage its huge moderation queue.
Content moderation on Facebook refers to the screening and monitoring of user-generated content against Facebook's rules and regulations to decide whether a particular piece of content should be published. Once a user uploads content to the platform, it goes through a moderation process that checks its appropriateness (covering everything from spam to hate speech and content that "glorifies violence") and makes sure it is legal, non-harassing, and compliant with the rules.
Facebook has a huge backlog of content awaiting moderation, so AI has become crucial to the process. As a result, Facebook has announced that moderation will be carried out with the aid of artificial intelligence: the company has started using machine learning to shrink the current moderation queue.
The following are the basic steps of how AI is used in moderating Facebook content.
- Posts and uploads on the Facebook platform may be offensive or may violate Facebook's rules and regulations.
- Such uploads are flagged either by users themselves or by machine filters.
- Flagged items are first sorted and either acted on automatically (responses could include removing a post or blocking an account, for example) or selected for later review by a human moderator. In the meantime, other, less damaging content is added to the queue.
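The steps above can be sketched as a simple triage function. Everything here is illustrative: the scoring function is a crude stand-in for a trained classifier, and the thresholds are made-up assumptions, not Facebook's actual values.

```python
def machine_score(post: str) -> float:
    """Stand-in for a trained classifier: returns a rough probability
    that the post violates policy, based on a toy banned-word list."""
    banned = {"spam", "hate"}
    words = post.lower().split()
    hits = sum(w in banned for w in words)
    return min(1.0, hits / max(len(words), 1) * 5)

def triage(post: str, review_queue: list) -> str:
    """Route a post: clear violations are actioned automatically,
    borderline cases are queued for a human moderator."""
    score = machine_score(post)
    if score >= 0.9:
        return "removed"            # automatic action for clear violations
    if score >= 0.3:
        review_queue.append(post)   # borderline: send to a human moderator
        return "queued"
    return "published"
```

For example, `triage("hello friends", queue)` would publish the post immediately, while a post dense with banned words would be removed without ever reaching a human.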
Generally, AI has significant potential to improve content moderation in three ways. First, AI can be used in the pre-moderation stage to flag content for review by humans, increasing moderation accuracy. Techniques used at this stage range from simple approaches such as hash matching and keyword filtering to natural language analysis (including detecting nuances of language such as sarcasm), sentiment analysis, object detection, and scene understanding in complex video and image content. An AI approach known as recurrent neural networks (RNNs) enables sophisticated video analysis.
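The two simplest techniques mentioned above, hash matching and keyword filtering, can be sketched in a few lines. The hash set and keyword list here are invented examples, not real moderation data.

```python
import hashlib

# A database of hashes of previously removed items (invented example).
KNOWN_HARMFUL_HASHES = {
    hashlib.sha256(b"previously-removed-image-bytes").hexdigest(),
}
# A toy keyword list; real systems use far richer signals.
BANNED_KEYWORDS = {"spam", "hate"}

def hash_match(content_bytes: bytes) -> bool:
    """Flag content whose hash matches a known harmful item."""
    return hashlib.sha256(content_bytes).hexdigest() in KNOWN_HARMFUL_HASHES

def keyword_flag(text: str) -> bool:
    """Flag text that contains any banned keyword."""
    return any(word in BANNED_KEYWORDS for word in text.lower().split())
```

Hash matching is cheap and exact but only catches content identical to something seen before; keyword filtering generalizes slightly but misses context, which is why the heavier techniques (sentiment analysis, scene understanding) are layered on top.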
Second, AI can be used to synthesize training data to improve pre-moderation performance. Generative AI techniques, such as generative adversarial networks (GANs), can create new and original video, images, text, and audio. Images of harmful content such as nudity or violence can be recreated with this approach, and these synthetic examples can supplement existing examples of harmful content when training an AI-based moderation system.
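The augmentation idea can be illustrated with a deliberately trivial generator: here a simple word shuffle stands in for a trained GAN, purely to show how synthetic variants get mixed into a training set. The examples are made up.

```python
import random

def synthesize(example: str, n: int, rng: random.Random) -> list:
    """Toy generator: produces n shuffled variants of a text example.
    A real system would use a trained generative model (e.g. a GAN)
    instead of this stand-in."""
    variants = []
    for _ in range(n):
        words = example.split()
        rng.shuffle(words)
        variants.append(" ".join(words))
    return variants

real_examples = ["known harmful phrase one", "known harmful phrase two"]
rng = random.Random(0)

# The training set mixes real examples with synthetic variants.
training_set = list(real_examples)
for ex in real_examples:
    training_set.extend(synthesize(ex, 2, rng))
```

The point is the pipeline shape, not the generator: scarce real examples of harmful content are expanded with synthetic ones before the moderation classifier is trained.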
Third, AI can assist human moderators by increasing their productivity and reducing the potentially harmful effects of content moderation on the moderators themselves. It can prioritize the content to be reviewed, based on the estimated harmfulness of the content or the level of uncertainty from the automated moderation stage, and it can reduce moderators' exposure to varying levels and types of harmful content.
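Prioritizing the review queue by harm and uncertainty can be sketched with a standard priority queue. The scores and item names are hypothetical, and combining the two signals by simple addition is an assumption for illustration.

```python
import heapq

def priority(harm: float, uncertainty: float) -> float:
    """Higher harm and higher model uncertainty both push an item
    toward the front. heapq pops the smallest value first, so negate."""
    return -(harm + uncertainty)

queue = []
heapq.heappush(queue, (priority(0.9, 0.2), "graphic violence clip"))
heapq.heappush(queue, (priority(0.3, 0.1), "possible spam post"))
heapq.heappush(queue, (priority(0.6, 0.8), "ambiguous threat"))

_, first = heapq.heappop(queue)
# "ambiguous threat" comes out first: moderately harmful, but the
# automated stage was very unsure, so a human should look at it soon.
```

Weighting uncertainty alongside harm is what keeps borderline content, where automation is least reliable, from languishing at the back of the queue.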
More than 15,000 employees around the world are assigned to moderate Facebook content. Facebook has been criticized in the past for not giving these workers enough support and for employing them in conditions that can lead to trauma. Their job is to categorize posts and uploads that violate Facebook's policies. Previously, reported content was sorted in the chronological order in which it was reported, but the company has changed this approach and now gives more attention to content that requires immediate notice. This whole process is aided by machine learning.
In the near future, Facebook plans to combine such machine learning algorithms to make the moderation process more efficient and reduce the queue. These algorithms will sort flagged content and prioritize it based on three major factors: virality, severity, and the likelihood that it violates the company's rules and regulations.
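A minimal sketch of ranking by those three factors might look like the following. The weights and scores are illustrative assumptions only; Facebook's actual formula is not public.

```python
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    post_id: str
    virality: float        # e.g. current share rate, scaled 0..1
    severity: float        # estimated harm if left up, 0..1
    violation_prob: float  # classifier confidence of a rule violation, 0..1

def review_priority(p: FlaggedPost) -> float:
    """Combine the three factors into one score (weights are assumptions)."""
    return 0.3 * p.virality + 0.4 * p.severity + 0.3 * p.violation_prob

posts = [
    FlaggedPost("a", virality=0.9, severity=0.2, violation_prob=0.5),
    FlaggedPost("b", virality=0.4, severity=0.9, violation_prob=0.8),
    FlaggedPost("c", virality=0.1, severity=0.1, violation_prob=0.9),
]
# Highest-priority content is reviewed first.
ordered = sorted(posts, key=review_priority, reverse=True)
```

Under these assumed weights, the severe and likely-violating post "b" outranks the merely viral post "a", which matches the article's point: virality alone no longer decides the queue order.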