Glitch’s response to Ofcom’s illegal content codes
Our approach to analysis
Under the Online Safety Act (OSA), Ofcom has powers to regulate user-to-user and search services by imposing new duties to safeguard users from illegal content on digital platforms. When analysing Ofcom’s Illegal Content Codes of Practice, we have applied a design justice lens to guide us in understanding and evaluating how these measures will challenge or reinforce current safety practices, transparency and inequities, particularly in relation to how systemic inequalities such as racism and sexism shape experiences of online harm. We have applied a design justice framework for our analysis because both the government (Department for Science, Innovation and Technology) and Ofcom refer to ‘safety by design’ as a standard for categorised services and digital platforms in general.
Dr. Sasha Costanza-Chock’s work on design justice is a framework and set of principles that advocates for designing systems, tools and technologies in ways that explicitly challenge, rather than reproduce, structural inequalities. The application of a design justice framework reveals how the design of platforms impacts the distribution of affordances (1) and/or dis/affordances between groups of people, pushing us to question how design reproduces and/or challenges the matrix of domination.
In applying this lens, we ask how groups with certain identity characteristics (based on race, gender, disability, etc.) might be protected, or not, from exposure to illegal content by Ofcom’s regulatory approach.
Proportionate for who?
Section 10 of the OSA sets out Ofcom’s powers to impose ‘proportionate measures’ under the safety duties about illegal content – defining proportionality by reference to the level, nature and severity of risk identified in the most recent illegal content risk assessment, and the size and capacity of the provider (2). Given there is no previous illegal content risk assessment, the regulatory regime should overall treat the nature, severity and level of risk as guiding principles for its own definition and implementation of proportionate measures.
The effectiveness of Ofcom’s regulatory regime depends greatly on how it interprets and defines that proportionality, and under which incentives. In our response to the Science, Innovation and Technology Committee’s inquiry in December 2024, we expressed concern that if Ofcom is under-resourced, particularly in its ability to respond to legal challenge from globally powerful tech companies, it risks making the OSA ineffective in practice. Our concerns were confirmed in a Westminster Hall Debate on 25 February 2025, where the Parliamentary Under-Secretary of State (Department for Science, Innovation and Technology) confirmed that Ofcom “has had to spend a long time consulting on the codes to ensure that they are as proofed against judicial review as possible.” This is hugely concerning: Ofcom should be resourced with the proper funding, authority and expertise to push platforms to make necessary changes without fear of retribution from companies, particularly given users have so little power to challenge Ofcom compared with the vast resources of tech companies. Later in the debate it was (re)confirmed that “to avoid any challenge, [Ofcom] must ensure that it gets the codes right.” So we ask, right for who?
Ofcom’s fear of pushing things ‘too far’ for companies is certainly present in the codes – leading to measures that feel contradictory. For example, though Ofcom made the significant decision to apply risk assessments to providers’ everyday recommender system design adjustments, it has then diluted the measure by not imposing corrective action in response to risks. Ofcom has stopped short of requiring mitigations on “every single test or [to] opt for a design adjustment that appears to be the safest for users based on a single test result,” presumably to ensure the measure is ‘proportionate’ given the vast number of design adjustments made to recommender systems (3). But it does not clarify when mitigations are required. So while Ofcom establishes “several steps to secure that the test results will influence future design choices,” such as logging risks and explaining subsequent design decisions, there is a de facto get-out clause on stopping or mitigating those risks, creating a tick-box exercise rather than a strong implementation regime (4). Unfortunately, while this approach may improve the identification of harm, it fails to “effectively mitigate and manage the risk of the service [and] risks of harm to individuals” as per the OSA’s intention for ‘safety duties about illegal content’ (5).
Finally, Ofcom has made a series of exemption decisions that narrow the intention and scope of the OSA, and which are best understood as the product of the pressure of judicial review. These include the exemption of small or single-risk platforms from Category 1 requirements; the exemption of ‘significant changes’ to recommender systems from risk assessments; and the exemption of recommender systems that underpin search functionalities on user-to-user services, as well as network recommender systems that recommend other users to connect with or groups to join (6).
Delivering on the intention of the OSA
We recognise that Ofcom is implementing a new regulatory regime under resourcing and staffing pressures. However, while Ofcom’s implementation of the codes introduces some positive measures, it falls short of the OSA’s original ambition in relation to protections for marginalised social groups.
For example, the introduction of the Act specifies: ”To achieve that purpose, this Act (among other things) imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from— (i) illegal content and activity, and (ii) content and activity that is harmful to children and confers new functions and powers on the regulator, OFCOM” (7). Here it is clear that the remit of the OSA gives Ofcom the power to impose duties that address the differential experiences of users with specific characteristics who are particularly impacted by ‘harm’. The OSA also includes specific requirements for Ofcom to consult with groups with protected characteristics as per the Equality Act 2010, an approach that leans on participatory policy-making (8). However, as far as we know, Ofcom has not imposed any measures on companies to assess, or mitigate, risks impacting specific groups (other than the broad split between children and adults reflected in the codes and the Act).
Accordingly, Ofcom has declined to place any obligations on platforms to assess how design decisions affect different groups. For example, where Ofcom imposes safety metrics on providers in relation to recommender systems, it limits these to the number of illegal content items identified and their reach, without requiring reporting on the nature of the content or who is being exposed (8). This omission means the regulator will miss a significant opportunity to learn about the kinds of content flagged as ‘harmful’ – including content particularly targeted at those from marginalised backgrounds – and to protect users from it, via its duties under the OSA itself. This is just one example of a major flaw in its approach to assessing risk. It is especially concerning given the OSA details the responsibility of providers, under ‘user empowerment duties’, to assess “the likelihood of adult users with a certain characteristic or who are members of a certain group encountering relevant content which particularly affects them” (9). In addition, the OSA requires that “features which adult users may use or apply if they wish to increase their control over content” be available specifically for abusive content on the basis of race, religion, sex, sexual orientation, disability or gender reassignment (10). Thus, by neglecting to impose metrics that account for the differential experiences of users with specific characteristics in the codes, Ofcom has significantly weakened the regulatory regime.
Undermining safety-by-design
One critical gap is Ofcom’s lack of a safety-by-design framework. There are several such frameworks – design justice, privacy-by-design, fairness-by-design, security-by-design, etc. – that could be used to analyse how platform design impacts safety for different groups of users. As far as we know, Ofcom has chosen not to use a specific framework, nor has it encouraged providers to apply one over another. By advocating for safety-by-design in general, without articulating, or advocating for, the application of a consistent framework, Ofcom has missed the mark again in leading and guiding digital platforms towards the expectations set out in the OSA and the Secretary of State’s Statement of Strategic Priorities. This is partly due to the lack of clarity in Ofcom’s overarching decision-making process, and partly because of contradictions introduced by the codes that could have been avoided under a fully articulated safety-by-design framework. Moreover, the lack of a safety-by-design framework, combined with the lack of metrics about specific groups mentioned above, has resulted in a regulatory approach that invisibilises harms experienced disproportionately by groups who are already marginalised.
The OSA articulates that “the measures described in the code of practice must be proportionate and technically feasible: measures that are proportionate or technically feasible for providers of a certain size or capacity, or for services of a certain kind or size, may not be proportionate or technically feasible for providers of a different size or capacity or for services of a different kind or size” (11). Ofcom therefore has a requirement to consider technical feasibility – with a clear mandate specifically around differentiating between platforms of different size and capacity (12). However, the codes repeatedly apply measures “where it is technically feasible for a provider to implement them” to all providers (12). This not only allows platforms to delay or limit compliance by citing technical difficulties, but also contradicts the regulator’s commitment to pushing providers towards safety-by-design approaches. For example, in the case of Child Sexual Abuse Material (CSAM), hash-matching technology is only required where technically feasible, with no mention of platform size or capacity. This effectively prioritises feasibility (including, for example, cost) over user safety, presenting these regulatory measures as optional rather than mandatory.
Although Ofcom is set to investigate claims of technical infeasibility, overall this loophole allows platforms to avoid implementing any kind of ‘safety-by-design’ measure and even disincentivises it. For instance, in the case of content moderation, Ofcom not only creates a ‘get out’ clause but provides an incentive for platforms not to develop the technical feasibility to implement it. As it states in ICU C2.2 of the codes, “the provider should, as part of its content moderation function, have systems and processes designed to swiftly take down illegal content and/or illegal content proxy of which it is aware, unless it is currently not technically feasible for them to achieve this outcome [emphasis added]”.
Acting in line with its own objective for “online services to be designed and operated with safety in mind”, Ofcom had, and still has, the opportunity to be imaginative and forward-thinking about the boundaries and possibilities of ‘technical feasibility’ for different providers. A true safety-by-design regulatory approach would demand changes to the way a platform works, in order to shift the dial towards prioritising safety for users, as intended by the OSA.
Who is being protected?
In our response to Ofcom last year we laboured the point that harm is not distributed equally. Without recognising how harm is unevenly distributed, Ofcom has implemented “gender-blind” and “race-blind” policies, creating a regulatory regime that could further entrench bias and inequity between groups of users. For example, despite us directly engaging the Ofcom team on the legal basis for intersectionality, Ofcom has not applied this legal framework. Automated content moderation is the only chapter that explicitly responds to Glitch’s feedback, in the measure ensuring that elements such as hash databases and URLs for CSEA “do not plainly discriminate on the basis of protected characteristics (such as sex or race)” (13). The explicit use of “or” rather than “and/or” in the example given immediately demonstrates that Ofcom has failed to consider groups such as Black girls – “members of a certain group” who may be subject to discrimination on the basis of multiple protected characteristics (i.e. sex and race, and/or others) – as is the intention of the OSA.
Meanwhile, Ofcom has chosen to introduce some duties only in relation to harms to children. The reasoning behind this decision is completely unclear, given there are separate children’s codes and that both codes are given equal importance in the OSA. For example, in terms of automated content moderation tools, Ofcom has focused only on risks to children, limiting the scope of recommended hash-matching and URL detection to child sexual abuse material; content moderation measures impacting adult users have been postponed to a forthcoming consultation in spring 2025. The same issue exists in the U2U settings, where “restricting the automatic display of location information” applies only to providers at high risk of child grooming content. This restriction to ‘high-risk’ providers ignores the fact that women and girls fall victim to stalking and harassment, which the display of location information can facilitate. Further, our work on the disproportionate nature of violence against women and girls on the basis of one or more characteristics such as sex, gender, race, religion or disability illustrates the need to ensure platform affordances cannot be weaponised against this community (14). This is a missed opportunity to enforce targeted protections for at-risk communities, particularly adults. Glitch will continue to work to influence Ofcom to correct this regulatory void, in terms of safety for Black women specifically, moving forward.
Footnotes
(1) Affordances is a term describing the relationship between people, or ‘users’ of a platform, and add-on features on social media platforms. Affordances are “key to understanding and analyzing SNS interfaces and relations between technology and users”: lasade-anderson, temi, & Sobande, F. (2025). Ideology as/of Platform Affordance and Black Feminist Conceptualizations of “Canceling”: Reading Twitter. Television & New Media, 26(1), 119–131.
(2) Online Safety Act: Part 3, Ch 2, 10, (a-b)
(3) Ofcom’s Illegal Content Codes of Practice (7.52)
(4) Ofcom’s Illegal Content Codes of Practice (7.55)
(5) Online Safety Act: Part 3, Ch 2, 10 (2)
(6) Ofcom’s Illegal Content Codes of Practice (ICU A3-7, ICU C3-C10, ICU D8-9, ICU D14, ICU E1, ICU J1-3)
(7) Online Safety Act: Part 1, 2 (a)
(8) Online Safety Act: Part 4, Ch 5, 78, 2(f)
(9) Online Safety Act: Part 3, Ch 2, 14, 5, (c)
(10) Online Safety Act: Part 3, Ch 2, 15 (2) and 16 (4)
(11) Online Safety Act: Schedule 4, 2, (c)
(12) Ofcom’s Illegal Content Codes of Practice (2.40, 2.45, 2.109, 2.110, 3.57, 4.69, 4.236)
(13) Ofcom’s Illegal Content Codes of Practice (4.137)
(14) Ofcom’s Illegal Content Codes of Practice (8.73)