Over 12m pieces of Covid-19 misinformation removed by Facebook

Facebook removed more than 12 million pieces of content containing misinformation about Covid-19 between March and October this year, the latest data from the social network shows.

The company’s new Community Standards Enforcement Report found that the posts were removed for containing misleading claims, such as fake preventive measures and exaggerated cures, which could lead to imminent physical harm.

During the same period, Facebook said it had added warning labels to approximately 167 million pieces of Covid-19-related content, with links to articles by third-party fact-checkers debunking the claims.

And while Facebook said the pandemic continued to disrupt its content review workforce, it said some enforcement statistics were returning to pre-coronavirus levels.

This was due to improvements in the artificial intelligence used to detect potentially harmful messages and the extension of detection technologies to more languages.

Guy Rosen, vice president of integrity at the social network, said, “As the Covid-19 pandemic continues to disrupt our content review workforce, we see some enforcement statistics returning to pre-pandemic levels.

“Our proactive detection rates for violating content are up from Q2 in most policies, thanks to improvements in AI and the expansion of our detection technologies to more languages.

“Even with reduced review capacity, we still prioritise the most sensitive content for people to review, including areas such as suicide and self-harm and child nudity.”

For the period between July and September, Facebook said it took action on 19.2 million pieces of violent and graphic content, more than four million more than in the previous quarter.

In addition, the site took action on 12.4 million pieces of content related to child nudity and sexual exploitation, up from about three million in the previous reporting period.

Some 3.5 million pieces of bullying or harassment content were also removed during this period, up from 2.4 million. On Instagram, action was taken on more than four million pieces of violent and graphic content, as well as one million pieces of content involving child nudity and sexual exploitation and 2.6 million posts related to bullying and harassment, an increase in every area.

The report added that Instagram had taken action on 1.3 million pieces of content linked to suicide and self-harm, up from 277,400 in the previous quarter.

It also found that Facebook took action on 22.1 million pieces of hate speech content, 95% of which was proactively identified by the company and its detection technologies.

Facebook and other social media companies face constant scrutiny over their monitoring and removal of misinformation and harmful content, particularly this year during the pandemic and in the run-up to the US presidential election.

In the UK, online safety groups, campaigners and politicians are urging the government to bring forward its Online Harms Bill, the introduction of which has been delayed until next year.

The bill proposes stricter regulation of social media platforms, with substantial financial penalties and possibly even criminal liability for executives if sites fail to protect users from harmful content.

Facebook has previously said it would welcome more regulation within the industry.

Mr. Rosen said Facebook would “continue to improve our technology and enforcement efforts to remove harmful content from our platform and keep people safe while using our apps”.
