How Meta’s new policies will put marginalised people at risk 

Is the UK safe from Meta’s new platform policies? You can also watch our video on this topic.

In January, Meta announced updates to how content on Instagram, Threads and Facebook will be moderated. In Meta’s More Speech and Fewer Mistakes video, Mark Zuckerberg explains the reasoning behind Meta’s new content moderation policy, disguising the changes to their platforms as improvements to social media. In reality, Meta is implementing policies that will actively allow for more online abuse, misinformation, disinformation and election interference to proliferate. 

Mark Zuckerberg: ‘We're going to get rid of fact checkers and replace them with community notes.’

Meta’s previous third-party content moderation policies already needed updating. Too much misinformation and disinformation was slipping through. Online abuse was driven by algorithms designed to favour conflict, and harmful content was not removed quickly or consistently enough.

If we weren’t satisfied with moderation before, replacing trained moderators with social media users means content will require more stringent oversight, not less. Removing fact-checkers in favour of community notes will only allow divisive conversations online to increase and make tech-facilitated harm easier:

Mark Zuckerberg: ‘We're going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.’

In Meta’s case, ‘simple’ means allowing racist and gender-based hate, including misogynoir, to be published online because it’s not explicitly illegal. But legal content can still be harmful.

Meta’s Hateful Conduct policy defines hateful content as, ‘direct attacks against people – rather than concepts or institutions – on the basis of what we call protected characteristics (PCs): race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.’

However, this same policy allows people with these protected characteristics to be referred to as objects or property; for LGBTQ+ folks to be referred to as mentally ill; for women to be referred to as inferior to men. It allows for an entire group’s existence to be denied; intolerance on the grounds of racism and Islamophobia; sex- and gender-exclusive language when discussing access to spaces often limited by sex or gender, like bathrooms, teaching roles, health and support groups – and more.

Meta’s move may contravene the Online Safety Act 2023, the UK law that requires platforms like Meta to reduce the risks of illegal content on their services. Meta’s platform changes are likely to result in an increase in content that constitutes offences under:

  1. The Crime and Disorder Act 1998, for racially or religiously aggravated harassment;

  2. The Protection from Harassment Act 1997, for putting people in fear of violence;

  3. And the Public Order Act 1986, for causing harassment, alarm or distress.

When the Centre for Countering Digital Hate analysed the impact of Meta’s policy changes on users, it found the enforcement switch will allow an estimated 277 million pieces of content that would previously have been moderated to spread on the platform each year.

We need the Government and Ofcom to ensure that these rollbacks of platform policies do not set a precedent. If there is no accountability for Meta, other platforms are likely to follow suit. Ofcom, the regulator, must be granted the proper powers to stop Big Tech from rolling back protections and to require comprehensive hateful conduct policies that protect marginalised folks.

Mark Zuckerberg: ‘For a while the community asked to see less politics because it was making people stressed, so we stopped recommending these posts, but it feels like we're in a new era now, and we're starting to get feedback that people want to see this content again.’

Meta’s decision to allow more ‘political’ content will only amplify debate on topics such as immigration and gender, topics that are specifically no longer moderated by fact-checkers.

Given the times we live in and the changes to Meta’s policies, this is likely to favour right-wing, hateful and conspiracist discourse online. Interesting timing.

Mark Zuckerberg: ‘We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact checkers have just been too politically biased and have destroyed more trust than they've created.’

Meta is choosing to let social media users, who have no incentive to work against their biases, moderate instead. Meta is allowing the same people who may be contributing to digital misogynoir and other forms of online abuse to become the arbiters of what is considered violent or harmful.

Research from an organisation called Whose Knowledge? has already revealed the bias among editors of crowd-sourced websites like Wikipedia. The same issues occur on Twitter (now known as X), and will likely now occur on Meta platforms. Essentially, Meta is co-signing harmful rhetoric, allowing it to be posted unchecked so that it is continually normalised into mainstream discourse.

Mark Zuckerberg: ‘There's also a lot of illegal stuff that we still need to work very hard to remove. But the bottom line is that after years of having our content moderation work focused primarily on removing content, it is time to focus on reducing mistakes, simplifying our systems, getting back to our roots about giving people a voice.’

The question is, whose voice is Meta prioritising? Certainly not those who have been targeted by bigotry and will continue to be harmed by false narratives online. Remember the UK’s race riots in summer 2024, which originated from misinformation rooted in xenophobia, Islamophobia and racism? The Southport murders were inaccurately blamed on an ‘illegal migrant’ in a social media post that continued to spread as riots erupted in cities across the UK, making people of colour and their businesses targets for violence and vandalism. Or consider how Meta’s algorithm fed inflammatory anti-Rohingya content that incited discrimination based on disinformation. Despite warnings from activists, Myanmar authorities blocking Facebook because of the ethnic violence it triggered, and Meta’s own report outlining how its systems contributed to viral hate speech, Meta didn’t enforce its own policies to stop it.

Meta’s moderation changes are not about simplifying systems. They are about staying in political favour with the US Government and the Trump administration. The only things Meta is protecting are its annual profits and its seat at the table.

We know the harm these policy rollbacks to social media will cause, but Ofcom and the UK Government don’t have to accept or support them.

So, what are we doing? Alongside nine other organisations, we have already written to the Government requesting minimum standards and legal protections for vulnerable groups, so that companies can’t roll back protections in this way.

As for users, we’ve already seen swathes of people leaving platforms like Facebook, WhatsApp and X to join other social media platforms that hold different values, practices and algorithmic design. Earlier this year, we made a decision to stop posting on X. Currently, we’re still on Instagram, but it may reach a point where it goes against our mission and values to continue to be here. 
