Facebook and Instagram removed millions of posts for violating their rules

Facebook is battling nasty content on its various platforms.

Angela Lang/CNET

Facebook said Wednesday that from April to September it removed millions of posts for violating its rules against hate speech, child nudity and other offensive content. For the first time, the social network also released data about content taken down from Instagram, the photo app it owns. 

During the second and third quarters, Facebook removed 58 million posts for adult nudity and sexual activity, 5.7 million for harassment and bullying, and 11.4 million for hate speech, according to its biannual community standards enforcement report.

The data highlights how the world's largest social network is still taking action against millions of posts flowing through its site and Instagram, even as Facebook CEO Mark Zuckerberg pushes for free expression.

From July to September, Facebook removed 11.6 million pieces of content for violating its rules against child nudity and sexual exploitation, up from nearly 7 million in the previous quarter. On Instagram, more than 753,000 posts were taken down for child nudity and sexual exploitation in the third quarter.

Facebook attributed the rise in these takedowns to improvements in detecting and removing content, including changes to how it stores digital fingerprints, called "hashes," of content that runs afoul of its rules against child nudity and sexual exploitation.
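The report doesn't detail the mechanics, but hash matching of this kind is simple to sketch. The Python snippet below is a minimal illustration, not Facebook's production system: it uses a plain SHA-256 digest, which only catches byte-identical copies, whereas real systems such as Microsoft's PhotoDNA (which Facebook has said it uses for this category of content) rely on perceptual hashes that still match after resizing or re-encoding.

```python
import hashlib

# Hypothetical store of fingerprints for content already removed.
banned_hashes = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the content's "hash" fingerprint."""
    return hashlib.sha256(data).hexdigest()

def record_violation(data: bytes) -> None:
    """Store the fingerprint of content removed for a policy violation."""
    banned_hashes.add(fingerprint(data))

def is_known_violation(data: bytes) -> bool:
    """Check an upload against previously removed content before it is shown."""
    return fingerprint(data) in banned_hashes

# Once an image is taken down, any exact re-upload is caught immediately.
record_violation(b"...bytes of a removed image...")
print(is_known_violation(b"...bytes of a removed image..."))  # True
```

The advantage of storing fingerprints rather than the content itself is that known violations can be blocked at upload time without retaining or rescanning the offending material.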

Guy Rosen, Facebook's vice president of integrity, also said in a blog post that the company has improved how it detects hate speech, so posts are removed before people even see them. That includes identifying images and text the company has already pulled down for violating its policies.

“While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen,” Rosen said in the post. 

The company also included new data about suicide and self-injury content and terrorist propaganda.

From April to September, Facebook pulled down 4.5 million posts for depicting suicide and self-injury. On Instagram, it took down 1.7 million such posts.

Facebook’s report also included more details about how much content it removed in the wake of the Christchurch terrorist attack. 

In March, a gunman who killed 51 people at two mosques in Christchurch, New Zealand, used Facebook to livestream the attacks. From March 15 to Sept. 30, Facebook removed about 4.5 million posts related to the attack. The company said it identified about 97% of those posts before users reported them.
