Artificial intelligence can now sift out nearly all spam and content glorifying al-Qaeda and ISIS, and most violent and sexually explicit content, but it is not yet able to do the same for attacks on people based on personal attributes such as race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report.
In total, the social network took action on 3.4 million posts or parts of posts that contained such hate speech. "This increase is mostly due to improvements in our detection technology", the report notes.
Facebook has faced a storm of criticism over what critics called a failure to stop the spread of misleading or inflammatory information on its platform ahead of the 2016 US presidential election and Britain's vote to leave the European Union. During the first quarter, Facebook found and flagged just 38% of the hate speech it acted on before users reported it, by far the lowest rate among the six content types.
Facebook removed 21 million pieces of adult nudity and sexual activity in Q1 2018, 96% of which was found and flagged by its technology before it was reported. The company estimated that the prevalence of such content among views was up from 0.06-0.08% during the last three months of 2017.
By comparison, the company was first to spot more than 85 percent of the graphically violent content it took action on, and nearly 96 percent of the nudity and sexual content. "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do", the company said.
Tuesday's self-assessment - Facebook's first breakdown of how much material it removes - came three weeks after the company tried to give a clearer explanation of the kinds of posts it won't tolerate. Facebook has pledged in recent months to use facial recognition technology - which it also uses to suggest which friends to tag in photos - to catch fake accounts that may be using another person's photo as their profile picture.
Most of the content was found and flagged before users had a chance to spot it and alert the platform.
The last statistic Facebook highlighted was hate speech; the company admitted its technology is not yet good at detecting it, so such content is still reviewed by human review teams.
Among the most noteworthy numbers: Facebook said it disabled 583 million fake accounts in Q1 2018, down from 694 million in Q4 2017, "most of which were disabled within minutes of registration".
Responses to rule violations include removing content, adding warnings to content that may be disturbing to some users but does not violate Facebook's standards, and notifying law enforcement in the case of a "specific, imminent and credible threat to human life". "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important", the company said.
Spam: Facebook says it took action on 837 million pieces of spam content in Q1, up 15% from 727 million in Q4.