Facebook said it took down 583 million fake profiles in the first three months of the year, usually within minutes of their creation.
The company also scrubbed 837 million pieces of spam and acted on 2.5 million instances of hate speech, it said Tuesday in its first-ever report on how effectively it’s enforcing community standards.
Facebook came under intense scrutiny earlier this year over the use of private data and the impact of unregulated content on its community of 2.2 billion monthly users, with governments around the world questioning the Menlo Park, Calif.-based company’s policies. The report, which will now come out twice a year, also shows how well Facebook’s artificial intelligence systems are learning to flag items that violate the rules before anyone on the site sees them.
The conclusion from the first metrics: some problems are better suited to computerized solutions than others. Almost 100 percent of the spam and 96 percent of the adult nudity were flagged for takedown, with the help of technology, before any Facebook users complained. But only 38 percent of hate speech was caught by the AI. Hate speech is harder to police because computers often can’t grasp the meaning of a sentence, such as the difference between someone using a racial slur to attack somebody and someone telling a story about that slur.