Facebook is ‘prioritising profits’ over tackling hate speech

Facebook profits are surging. Is hate speech surging too?

Facebook has cut the number of its human content reviewers – even as the platform sees an apparent ‘explosion’ in hate speech, campaigners have warned.

The latest Facebook Transparency Report says the site proactively removed 22.5m pieces of hate speech worldwide in the second quarter of this year – up from 9.6m in the previous quarter.

But even as Facebook saw a surge in ad revenue and users during the pandemic, it shifted from human reviewers to secret algorithms to tackle hate speech – despite admitting that the change has hurt enforcement in other areas, such as the removal of self-harm and child exploitation images.

LFF and other media outlets found that the platform’s reliance on automation during the global lockdown led to legitimate political posts being removed.

Facebook said it could not estimate the prevalence of hate speech on its services – meaning it is difficult to know what proportion is removed by the platform’s AI moderation.

Imran Ahmed, CEO of the Center for Countering Digital Hate – which is calling for a Facebook boycott from advertisers in the UK – said: “These figures suggest that hate speech is exploding on Facebook. We have been warning for some time that a major pandemic event has the potential to inflame xenophobia and racism.

“Hidden in this report is the fact that Facebook has reduced its human reviewing capacity, weakening enforcement action on vile materials relating to suicide, self-injury and child sexual exploitation.

“Facebook always underinvest in human review, instead prioritising shareholder profits that have made Mark Zuckerberg the world’s newest centibillionaire. They will continue to do so until they are forced either by advertisers – who give them 98% of their revenue – or, as a backstop, legislators and regulators, who can impose fines and criminal charges for non-compliance with their statutory duty of care to users.”

Labour is calling on the government to prioritise the ‘Online Harms Bill’, which was first promised more than a year ago. Since then, high-profile examples have shown why the lack of regulation is letting down users – including Twitter’s ‘sluggish’ reaction to antisemitic tweets from grime artist Wiley, the unchecked spread of dangerous anti-vaxx and other conspiracy theories, and the failure to address racist abuse targeting MPs such as Diane Abbott.

A quick search shows there are hundreds of groups on Facebook still pushing 5G conspiracy theories and falsely linking the technology to coronavirus.

Last month, Facebook released the final report of its ‘civil rights audit’, following criticism of its ‘lax’ approach to removing hate speech on the platform. The site had come under fire for refusing to flag or remove hate speech from political leaders such as President Trump, though it has become more proactive amid a flurry of bad press.

Commenting on the civil rights audit, hate speech analyst Melissa Ryan wrote: “The report calls some of Facebook’s decisions on hate speech and voter suppression ‘vexing and heartbreaking’. It notes where Facebook has made improvements and the vast majority of problems that remain unaddressed. It also provides a path forward for Facebook to address their continued civil rights failures.”

She added: “What it doesn’t offer is any commitment from Facebook to enact any of the recommended policies. The authors of the report make suggestions and will continue to consult with Facebook on civil rights issues. But as of right now Facebook hasn’t committed to anything beyond that.

“Facebook continues to treat civil rights as a partisan political issue and a PR problem. Instead of focusing their civil rights work on protecting their users from hate, harassment, and harm, Facebook’s actions always center first on playing politics and attempting to minimize damage.”

Facebook says it is updating its policies to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people controlling the world.

But the platform acknowledged that its human capacity to review harmful content had dropped substantially during the coronavirus outbreak. In its transparency report, the platform admitted that this meant some dangerous content was slipping through the cracks: “With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.”

Ian Russell, whose daughter Molly took her own life aged 14 after viewing graphic self-harm images on Facebook-owned Instagram, now campaigns for a safer internet and is calling on the Tories to act fast to tackle online harms on social media.

Labour has launched a consultation for organisations, members and individuals to shape the party’s response to the challenges posed by the ‘evolving digital landscape’.

Josiah Mortimer is co-editor of Left Foot Forward.
