Facebook, amid pressure from activists and regulators across the globe, has confirmed that it will launch a stand-alone body to handle abusive content, hate speech, and other inappropriate content.
Speaking on Thursday during the announcement, Mark Zuckerberg, chief executive of Facebook, said that the company has been ramping up security and that the new independent body will be constituted in 2019. He noted that it will serve as the basis for deciding what content will be removed from the platform.
Content reported by users or flagged by Facebook’s AI has been passing through the new internal security system the platform has been developing.
The constitution of the appeals body, as well as how Facebook will ensure it operates in alignment with the platform’s policies and principles, will also be determined next year. The company says it will also begin publishing quarterly summaries of content removals, on par with its earnings reports.
“We have made great strides in ensuring Facebook is free of terrorism, hate, and bullying. What the network is doing is to find the right balance between giving people a voice and ensuring they are safe,” Zuckerberg said.
Among the challenges Facebook encounters, Zuckerberg noted, is people’s tendency to engage with inappropriate and more sensational content, which undermines healthy, civil conversation.
He also noted that online bullying is especially hard for AI systems to combat, since it tends to be personal and subjective. Someone, for instance, might jokingly mock a friend in a post, and the system could interpret this as being mean.
That is why detection also requires a clear understanding of the full range of languages used on Facebook, along with their cultural contexts.
In its new transparency report, Facebook stated that the platform has gotten better at proactively identifying content that violates its policies before any user reports it, especially hate speech, graphic content, and violence.
“However, there are other areas that we still have a lot of work to do,” the statement read.
Meanwhile, the network reports that since its last transparency report, the number of violations it has detected proactively, before anyone reported them, has more than doubled.