While the inclusion of women and girls in the Online Safety Act is hugely significant, it is just one part of a law that will have a wide-ranging impact on us all. We have broken down six areas of the Act that we believe are significant specifically to the experiences of Black women online. We’ll look at promising changes to the online landscape, as well as our ongoing concerns with the law aimed at making “the UK the safest place in the world to be online”. The six areas are:
- media literacy
- ‘intersectionality’ clauses
- end-to-end encryption
- data access for researchers
- definition of harms
- categorisation of services
1. Media literacy
Media literacy is back in the Act! This is what Glitch refers to as digital citizenship: the essential role we all play in understanding the online space, how it can be used (and abused), and our impact on others when we step online. We were very concerned when media literacy was removed from the Bill during its passage, and are grateful for the amendments brought forward by the government in the Lords to add it back in.
Media literacy is an important part of online gender-based violence prevention work, including spotting and responding to abusive language and behaviour, and recognising dis/misinformation. This is significant for Black women’s experience online: digital misogynoir, an issue we explore in detail in our Digital Misogynoir Report, is prevalent, and will only be ended through a concerted effort from all digital citizens. If you don’t know what it is or how to spot it, you will struggle to be part of the solution. Media literacy also includes understanding our roles in the online space, such as how we interact with people in the political and public eye. We know that Black women in the public eye experience disproportionate levels of harm online, and we hope that better media literacy will help change this.
The amendments to the Act include raising the level of public awareness (and understanding) of the nature and impact of harmful content, especially content and activities that disproportionately affect particular groups, including women and girls.
Media literacy is crucial for the prevention of online abuse: in our “public health” approach to online abuse, we want to prioritise changing people’s behaviours, centring education over punishment.
2. ‘Intersectionality’ clauses
Peers in the Lords ensured that a response to intersecting discrimination was clearly named in the Act’s Introduction, amending it to include: ‘risks which particularly affect individuals with a certain characteristic[/s]’.
That ‘[/s]’ is doing a lot of heavy lifting!
It means that the Act acknowledges that more than one characteristic can be considered at once — so when online harms are addressed in the Act, we hope this means the intersections of racism and misogyny will be considered.
While we are yet to see what this means in practice, and what it means specifically for Black women, the inclusion is notable as an overarching understanding of what the legislation aims to do.
3. End-to-end encryption
End-to-end encryption (E2EE) is a safety mechanism to protect privacy and data. It is something you may have noticed in your messaging apps — for example, under your chats on WhatsApp it states ‘Your personal messages are end-to-end encrypted’. This means that your data — e.g. your WhatsApp message to a friend — is kept secret until it reaches your friend’s phone, and cannot be read on its journey between your phone and your friend’s. If the message were intercepted, it would be scrambled. Only your friend’s phone has the key to unscramble it and read it, protecting your privacy and data. Not even the company you’re using to send messages can read what you’ve written — it is “encrypted” for them, too.
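For readers curious about what “scrambled” means in practice, the core property can be sketched in a few lines of Python. This is a toy illustration only, with made-up names and a hash-derived XOR stream standing in for real cryptography; its point is simply that a server relaying the message, which never holds the shared key, sees only unreadable bytes.

```python
# Toy sketch of the core E2EE property, assuming Alice and Bob already
# share a secret key. NOT real cryptography: real messengers use vetted
# protocols (e.g. the Signal protocol), not a hash-derived XOR stream.
import hashlib

def keystream(shared_key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random byte stream from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(shared_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(shared_key: bytes, message: bytes) -> bytes:
    """XOR the message with the keystream; the result looks like noise."""
    return bytes(m ^ k for m, k in zip(message, keystream(shared_key, len(message))))

decrypt = encrypt  # XOR-ing twice with the same stream restores the message

key = b"alice-and-bob-only"             # known only to the two endpoints
msg = b"See you at 6pm"
ciphertext = encrypt(key, msg)          # all the relaying server ever sees
assert decrypt(key, ciphertext) == msg  # only a key-holder can read it
```

The hard part this sketch skips is how Alice and Bob agree on that shared key securely in the first place; real end-to-end encrypted apps handle this with public-key key exchange, so the key itself never travels through the server either.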
The Online Safety Act explores the possibility of breaking this encryption in certain situations. In a piece in the Financial Times on the issue, the government is quoted as saying: ‘The [Act] states that as a last resort, on a case-by-case basis and only when stringent privacy safeguards have been met, [the Act] will enable Ofcom to direct companies to either use, or make best efforts to develop or source, technology to identify and remove illegal child sexual abuse content — which we know can be developed.’ The piece, released ahead of the Commons debate on 12 September, covered the news that ministers would not immediately enforce the Act’s proposed powers relating to breaking end-to-end encryption. They stated that Ofcom would only do so when it was ‘technically feasible’ and when the technology had been accredited as meeting a minimum standard of accuracy in detecting only child sexual abuse and exploitation content. Experts believe this might take years, if it is ever possible.
As an organisation dedicated to ending gendered and racialised abuse online, we care deeply about the concerns raised on both sides: by the children’s rights groups supporting the government’s approach, and by digital privacy advocates. The issue has made headlines in recent months, as WhatsApp and Signal have threatened to withdraw their services from the UK if the Online Safety Act forced them to break the end-to-end encryption already in place on their platforms.
We believe it is crucial that the Online Safety Act holds tech corporations accountable for design decisions that enable and perpetuate abuse from perpetrators on their platforms. We are also concerned about overextended government powers that could be used to harm marginalised groups, including Black women and girls. Alongside other digital rights groups, we called on the government “to carefully consider amendments to the [Act] that protect encryption and ensure that privacy remains a fundamental right to be enjoyed by all citizens.” The letter highlights concerns that breaking end-to-end encryption poses a serious threat to privacy, a right which is crucial to ensuring that freedom of expression and freedom of association are also protected. Unfortunately, the amendments proposed in the Lords to protect encryption were not successful.
When Black women’s freedom of expression is already threatened by the deliberate silencing effect of online abuse, further impacts on expression and privacy are concerning. Breaking end-to-end encryption poses a serious privacy threat, and could expose Black women, particularly those in political and public life, to further abusive attack techniques.
Child sexual abuse is an incredibly important problem that we, as a society, need to do more to address, through prevention and safeguarding. However, we don’t think that bypassing encryption is the only or best mechanism for this. We call for greater funding for the children’s rights sector and for child support services (as a part of our tech tax campaign), as well as for better mechanisms for reporting and education for children, rather than a content-based surveillance approach.
4. Data access for researchers
Data access for researchers is an incredibly important issue that we raised in The Digital Misogynoir Report. It matters not just for the work of civil society organisations and UK-based academic researchers, who would otherwise be disadvantaged compared to researchers in the EU, where data access rights are stronger, but also for the regulator Ofcom, which itself needs to understand the harms happening on platforms in order to regulate them adequately. Data access helps us to evidence the harms Black women are subjected to online, so we can highlight specific lived experiences and call for changes to be made.
While the Online Safety Bill did not initially include guidance or requirements for platforms around data access for researchers, amendments made in the House of Lords mean that the issue is now included to some extent — in no small part thanks to the hard work of fellow campaigners like the Center for Countering Digital Hate, whose letter on the issue we signed in June. Ofcom is now required, through the Act, to research the state of data access. The regulator must publish this research within 18 months of the Act becoming law, including guidance for platforms and researchers based on its findings.
This isn’t everything we hoped for — it doesn’t create a “mandatory data access pathway” to protect the rights of researchers, for example — but it still has the potential to make a positive difference. We’ll need to keep a close eye on Ofcom’s initial guidance, just as we will on the guidance they produce on women and girls’ safety online (they’re going to be busy!).
There is also another law in the works that might change the story on this issue. The Data Protection and Digital Information Bill has been highlighted by the government as a possible route to better protect researchers’ access to information. The government has stated it will report back to the House of Commons on this issue — we know from the speeches that followed in the Commons that many MPs will be keeping a close eye on this.
We believe it will be incredibly difficult for Ofcom to oversee changes made by platforms without independent research — like that in the Digital Misogynoir Report — to monitor online harms on platforms. Though they may disagree, we believe that platforms themselves would benefit from the focused attention and solutions offered by independent research, especially research advocating for improvements in online safety.
5. Definition of harms
The wording of any law is often complex and can seem impenetrable — and the use of definitions in this Act is a good example of that. We’re here to help make sense of the legalese.
The Act talks about “harms”, and breaks this down into two camps — adults’ experiences online, and children’s. The safety measures included for adults in the Act are specifically defined by whether they are illegal or not — only illegal content counts as harm when the Act addresses adult safety. In contrast, children’s safety looks at illegal and “legal but harmful” content — so behaviours or functions that, while not explicitly illegal under UK law, are still accepted to be so harmful that they should be regulated against.
The definitions of harm have changed over the course of the Bill becoming law — Baroness Beeban Kidron successfully pushed a change that includes characteristics and functionality when defining harm to children, meaning that the companies designing the platforms have a responsibility for harm caused by how those platforms are made. This addition prompted the government to make their own change, too, when looking at harm to children. They added a focus on the extent to which the design and functions of a service affect the level of harm to children — for example, if an adult or older child can search for and/or contact children on a platform.
While these changes don’t explicitly link to Black women and girls’ experiences online, a broader definition of harm, one that holds platforms accountable for safe design as well as holding individual users accountable for their behaviour, can only strengthen the safety of the most marginalised online.
6. Categorisation of services
Glitch’s recent research, The Digital Misogynoir Report, highlights the importance of regulating small, high-harm platforms as well as larger mainstream platforms. The research shows that racist and sexist content is being posted both on mainstream social media platforms and on smaller, high-harm platforms. It also highlights that discourse from the smaller platforms migrates onto bigger platforms, shifting the norm of what is deemed socially acceptable in these spaces towards more harmful discourse against Black women.
Ofcom has not yet indicated which small, high-harm platforms will specifically be included in the grouping for the highest level of regulation — but thanks to a successful amendment by Baroness Nicky Morgan in the House of Lords, this will be decided with a risk-based approach alongside a size-based approach, rather than only regulating the biggest platforms. This has been welcomed by MPs, including the shadow minister Alex Davies-Jones, who stated in the Commons debate on 12 September: ‘I am particularly proud to see the Government adopt an amendment that represents a move towards a risk-based approach to service categorisation’. She went on to reference previous cross-party calls for amendments in the Commons in 2022, which aimed to ensure ‘the most harmful website and platforms, including 4chan and BitChute, which regularly host and promote far right, antisemitic content’ did not slip ‘through the cracks of legislation’.
This expanded threshold for ‘Category 1’ is significant, as these will be by far the most regulated services. Glitch answered Ofcom’s call for evidence around categorisation, which Ofcom is currently considering as it decides which services will fall into the various categories of regulation.
Phew! That was a lot of detail. We weren’t kidding when we said this Act covers a lot of ground!
We hope that this breakdown has given you some answers about what the new law might mean for you — we’ll keep you posted on what happens next, and how we at Glitch are keeping up the push to make the online space safer for all of us.