Facebook Removed 3.2 Billion Fake Accounts in Q2, Q3 2019

Nov. 14, 2019



Social media giant Facebook has released the fourth edition of its Community Standards Enforcement Report detailing the steps it has taken in the previous two quarters of the year to ensure content that violates its community standards doesn’t remain on the site.

In the report, the company highlights that it removed a whopping 3.2 billion fake accounts in the last two quarters, i.e., from April to September this year. These accounts were caught before they were activated on Facebook, which is why they don't appear in the company's reported user figures. The company estimates that about 5% of its massive 2.45 billion user base consists of fake accounts.

The company has also, for the first time in its community standards report, included data from Instagram. The report mentions that the company removed over 4.5 million pieces of content relating to self-injury and suicide from Facebook. On Instagram, the company removed a total of 1.68 million pieces of content that encouraged self-injury or suicide.

Moreover, the company said that over the last two years it has invested in technologies that can proactively detect hate speech on its platform, allowing Facebook to remove such content without someone having to report it — and in many cases, says Facebook, before anyone even sees it. To do this, Facebook is "identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we've seen previously in content that violates our policies against hate."