
Spam Content the Most Common Violation of Facebook’s Rules


Facebook has revealed, in a new report covering Q4 2018 and Q1 2019, that spam content is currently the most common violation of its rules.

Amid the spread of fake news and increasing levels of inflammatory content circulating online, the social network has come under immense pressure to better regulate what’s happening on its watch.

The content that Facebook is actively trying to keep from its site can be broken down into eight categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, bullying and harassment, child nudity and sexual exploitation, regulated goods (drugs and firearms) and, last but definitely not least, spam.

Spam Content

Between January and March of this year, 1.8 billion posts categorized as spam were removed from Facebook, accounting for 96 percent of all content acted on (excluding fake accounts).


Facebook says that even though spammers routinely try new tactics to avoid detection, the social network has invested in enhanced detection technology, including improvements in machine learning, to more accurately detect and take action on spam violations.

Violence and Graphic Content

34 million posts containing violence and graphic content were also taken down or covered with a warning, 99 percent of which were found and flagged by Facebook’s technology before they were reported.

Facebook revealed that over the past two quarters, the prevalence of violence and graphic content has remained similar to previous quarters.

Tragic, real-world events, such as the Tlahuelilpan pipeline explosion in Mexico, were meaningful contributors to graphic content on Facebook as people shared images and videos of them.

Adult Nudity or Sexual Activity

Likewise, 97 percent of all posts taken down or flagged for containing adult nudity or sexual activity were identified automatically before they were reported; 19 million posts in total were given warning labels or deleted.

The prevalence of adult nudity and sexual activity violations on Facebook fluctuated during the last six months. It declined in Q4 2018 before rising again in Q1 2019.

During the same period, the amount of content actioned for adult nudity decreased. “The reason for this is that we prioritized our efforts in other, more harmful, content areas,” said the report.

Hate Speech

Unfortunately, Facebook’s technology has been significantly less successful at identifying posts containing hate speech.

Of the 4 million pieces of content the company acted on for hate speech, only 65 percent were flagged by Facebook’s systems before users reported a violation of the platform’s Community Standards.

When it comes to spam, the most frequently removed type of content, disabling fake accounts is critical.

During the first quarter of the year, more than 2 billion fake accounts were disabled, most of them within minutes of registration.

