
Facebook says it removed 583 million fake accounts

By Sarah Frier, Bloomberg
Published: May 15, 2018, 5:10pm

Facebook said it took down 583 million fake profiles in the first three months of the year, usually within minutes of their creation.

The company also scrubbed 837 million pieces of spam and acted on 2.5 million instances of hate speech, it said Tuesday in its first-ever report on how effectively it’s enforcing community standards.

Facebook came under intense scrutiny earlier this year over the use of private data and the impact of unregulated content on its community of 2.2 billion monthly users, with governments around the world questioning the Menlo Park, Calif.-based company's policies. The report, which Facebook plans to publish twice a year, also shows how well the company's artificial intelligence systems are learning to flag items that violate the rules before anyone on the site sees them.

The conclusion from the first metrics: some problems are better suited to computerized solutions than others. Almost 100 percent of the spam and 96 percent of the adult nudity were flagged for takedown, with the help of technology, before any Facebook users complained. But the AI caught only 38 percent of hate speech. Hate speech is harder to deal with because computers often can't parse the meaning of a sentence, such as the difference between someone using a racial slur to attack somebody and someone telling a story about that slur.

“It’s a work in progress always,” Guy Rosen, Facebook’s vice president of product management, said in a briefing. “These are the same metrics we’re using internally to guide the metrics of the teams. We’re sharing them here because we think we need to be accountable.”

Congressional hearing

Chief Executive Officer Mark Zuckerberg faced several questions about content removal during his April congressional testimony. Why, for example, was it possible for people to sell opiates on the site, even though Facebook says that content is banned? Why were certain people banned, even if they did nothing wrong? Zuckerberg explained that Facebook is hiring thousands of people who, over the course of millions of content decisions, can train a better artificial intelligence system. Facebook recently published, for the first time, its internal rules on what stays up and what comes down.

The enforcement of those rules has been spotty, especially in regions where Facebook hasn't hired enough people who speak local languages, or on subjects unfamiliar to its AI systems. The company has come under fire for failing to remove content that has incited ethnic violence in Myanmar, leading Facebook to hire more Burmese speakers. A Bloomberg report last week showed that while Facebook says it's become effective at taking down terrorist content from al-Qaida and the Islamic State, recruitment posts for other U.S.-designated terrorist groups are easily found on the site.

While AI is getting more effective at flagging content, Facebook’s human reviewers still have to finish the job. A photo with nudity may be porn, or it may be art, and human eyes can usually tell the difference. The company expects to have 20,000 people working on security and content moderation by the end of the year.

Facebook says it will measure the scale of each problem by the "prevalence" of violating content: the percentage of all content views on Facebook that land on material that breaks the rules.
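As a rough illustration of that metric, here is a minimal sketch in Python. The data layout and function are hypothetical, not Facebook's actual pipeline; in practice the company has said it estimates prevalence by sampling views rather than counting them exhaustively.

```python
# Minimal sketch of the "prevalence" idea described above: the share of all
# content views that landed on violating material. The (views, violates)
# pairs are hypothetical stand-ins for per-item view counts.

def prevalence(view_log):
    """view_log: iterable of (views, violates) pairs, one per content item."""
    total_views = sum(views for views, _ in view_log)
    violating_views = sum(views for views, violates in view_log if violates)
    return violating_views / total_views if total_views else 0.0

# Example: three posts, 2,000 total views; the 50-view post violates the rules.
sample = [(1_000, False), (50, True), (950, False)]
print(f"prevalence: {prevalence(sample):.2%}")  # -> prevalence: 2.50%
```

The point of a view-weighted measure is that a violating post seen by millions counts for far more than one taken down before anyone saw it, which matches how Facebook framed the metric.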
