Facebook took moderation action against almost 1.5bn accounts and posts which violated its community standards in the first three months of 2018, the company has revealed.
In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam and shut down a further 583m fake accounts over the three months. But Facebook also moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity.
“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.
The amount of content moderated by Facebook is influenced both by the company's ability to find and act on infringing material and by the sheer quantity of items posted by users. For instance, Alex Schultz, the company's vice-president of data analytics, said the amount of content moderated for graphic violence almost tripled quarter on quarter.
Facebook isn’t the only platform taking steps towards transparency. Last month, YouTube revealed it had removed 8.3m videos for breaching its community guidelines between October and December.
“I believe this is a direct response to the pressure they have been under for several years from different stakeholders [including civil society groups and academics],” said York.