Glitch’s response to the Government’s inquiry on social media, misinformation and the role of algorithms

The wave of racist and anti-immigration riots during July and August 2024 was driven by misinformation shared on social media platforms. When false claims about the religion and citizenship status of the teenager who killed three young girls in Southport spread across the internet, disinformation and inflammatory, racist, and anti-immigrant rhetoric ensued, followed by incitement of violence against people of colour, Muslims, and businesses they owned.

Following these race riots, the Science, Innovation and Technology Committee (SITC) launched an inquiry in November 2024 to examine how social media, misinformation, and profit-driven algorithms that benefit from viral engagement contribute to spreading false and harmful content. Glitch’s evidence to the inquiry built on the work of Timnit Gebru, Brandeis Marshall, Dr Safiya Umoja Noble and other Black women scholars, activists and technologists who have been raising the alarm about the dangers of social media algorithms and the commercial incentives behind them.

To what extent do the business models of social media companies, search engines and others encourage the spread of harmful content, and contribute to wider social harms?

Social media platforms have become vast and powerful tools for finding community, sharing content, economic activity and the dissemination of news and information. It is important to note that while these services appear to be free to users, they are paid for indirectly through advertising revenues due to the volume, concentration and targeting of users, which can feed significant market dominance and power for platforms. In 2020, approximately 80% of all search and display advertising was accrued by Google or Facebook alone.

Platforms depend on one or more types of advertising to target users, which include but aren’t limited to: 

  • Contextual: for example, via user searches or location

  • Behavioural: for example via perceived characteristics based on tracked online behaviour

  • Programmatic: for example via real-time bidding for ad spaces

This infrastructure requires vast amounts of personal data collection by platforms and an ecosystem of players in the supply chain. This ecosystem includes platforms with software that allows marketers to buy ad impressions in real time and others that enable publishers to manage and fill the space they have available for ads. Social media platforms such as Facebook, TikTok or YouTube essentially combine the functions of ad exchanges, supply-side and demand-side platforms within their platforms’ own sizable ecosystem. The search engine Google, by contrast, runs “one of the biggest ad exchanges in the world as well as some of the biggest supply-side and demand-side platforms”. A concerning trend is companies such as Meta or Google circumventing data protection and competition laws by hosting business ads and users’ browsing activity within their large ecosystem of platforms, and using third-party cookies to track and surveil user activity across a range of sites and platforms.

In this context, users’ information rights and other rights, including the rights to privacy, non-discrimination, assembly and association, and economic, social and cultural rights, are at risk. Content moderation is deployed downstream to mitigate the impacts of automated amplification and of audiences being targeted with misinformation, disinformation or harmful content based on their assumed personal characteristics or political beliefs. This is accompanied by abstruse terms and conditions, opaque information on data use, and influential default settings within the choice architecture that shape how users’ data will be used.

In many cases this data is used to algorithmically drive user engagement and to sell behavioural advertising. These practices exploit users’ vulnerabilities, trigger psychological trauma, and deprive people of job opportunities while pushing disturbing content to others. Crucially, these practices are underpinned by what Ariadna Matamoros-Fernández calls “platformed racism,” which is “a new form of racism derived from the culture of social media platforms – their design, technical affordances, business models and policies – and the specific cultures of use associated with them.” This means that platforms’ business models can actively drive discriminatory practices.

Beyond the speed and volume of content, many other structural issues contribute to harmful content moderation practices, including: a distinct lack of resources dedicated to non-English content; under-resourced content moderation teams; and poor worker rights and wellbeing support for moderators. The lack of resources and capability in content moderation also leads to disproportionate censorship of users from minoritised or marginalised communities, who are more likely to engage in counter-speech or intra-community language that is poorly understood by moderators. Finally, because the primary motive of platforms’ business models is profitability and revenue, we have seen that when social media platforms are under financial pressure, the rights and safety of users are further deprioritised, with many disbanding or drastically reducing already under-resourced trust and safety work.

How do social media companies and search engines use algorithms to rank content, how does this reflect their business models, and how does it play into the spread of misinformation, disinformation and harmful content?

Recommender algorithms are the fundamental way platforms push (suggest or promote) ‘personalised’ content onto users to keep them engaged. In 2018, YouTube’s Chief Product Officer, Neal Mohan, stated that at least 70% of users’ watch time comes from AI-driven recommendations.

Search engines use recommender algorithms to prioritise and serve results that match users’ queries, whereas social media platforms primarily use a ‘news feed’ as the core area of engagement for users and advertisers. Content in the news feed is based on several data signals, as explained above. Algorithms on these platforms are able to prioritise content they predict will keep the user engaged, and to guide user engagement towards trending topics or content, with the ultimate goal of maximising profits from surveillance-based advertising.

It is essential that algorithms, and the values and groups they prioritise, are subjected to significant scrutiny because of their “near-ubiquitous” presence in our daily digital lives, shaping our experiences both visibly and invisibly. Notably, algorithms can actively shape our views, access to information, safety and mental health. Research by Safiya Umoja Noble as far back as 2018 uncovered deep and disturbing bias in Google’s search algorithms, which were found to “reinforce oppressive social relationships and enact new modes of racial profiling” at the cost of deeply harmful content for and about Black women and girls. Yet social media companies often cite intellectual property and competition concerns as barriers to sharing how their algorithms rank content. As a result, algorithmic transparency has been a long-standing issue, one now finally being tackled by legislation such as the Digital Services Act in the EU. Unfortunately the same cannot be said for the UK’s Online Safety Act, which falls short on this issue.

However, existing research clearly shows that the way recommender algorithms work to capture attention promotes and pushes extreme and harmful content onto users. Examples include:

  • After years of “inaction” from YouTube on recommender algorithm transparency, Mozilla Foundation created a browser extension and crowdsourcing project to research the harms inflicted by YouTube’s algorithm. The study found that 71% of reports came from videos recommended by the algorithm (including misinformation, violent or graphic content, hate speech, and spam/scams), with recommended videos being 40% more likely to be reported as harmful than videos the user searched for.

  • Laura Bates’ research shows how boys are radicalised into extreme misogyny via memes, jokes and forums dedicated to genuine topics for men and boys, including dating, health and mental health. Often these boys are not actively seeking such content but are being pushed it by algorithms, with harmful consequences for them and for girls.

Recommender algorithms also play a role in pushing videos that include hate, abuse and misinformation, particularly about public figures. This is especially harmful given the disproportionate level of abuse faced by Black women and other minoritised people in public life, at a time when gender representation remains a critical issue in UK politics, particularly at the local level. During the 2024 General Election, Glitch supported a number of Black women MPs who experienced the emotional and psychological toll of abuse and digital misogynoir, some of which was widely promoted by social media platforms’ algorithms.

What role do generative artificial intelligence (AI) and large language models (LLMs) play in the creation and spread of misinformation, disinformation and harmful content?

As there is no universally agreed definition of AI, the terms used to refer to different AI technologies can differ. For this consultation response, Glitch focussed on software-based AI systems as per the European Commission on AI’s definition: “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).” 

Generative AI and LLMs play a consequential role in the creation and dissemination of misinformation, disinformation, and harmful content. Generative AI produces audiovisual media, while LLMs are text-based. Generative AI enables the automated production of hyper-realistic synthetic media, including deepfakes, altered videos, fabricated audio, and false narratives. Glitch’s soon-to-be-published research shows the ways Black women are targeted by deepfake and AI-related abuses, including identity theft, harassment, misinformation, disinformation, privacy invasion and image-based sexual abuse.

Both generative AI and LLMs depend on machine learning, which involves feeding huge amounts of data into a ‘learning algorithm’ that derives patterns and rules from that data and uses them to make predictions. Researchers at Cornell University found that datasets often used as a source for LLMs included “troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content”, demonstrating how decisions built upon this data can be significantly discriminatory and unsafe. In addition, LLMs can pose significant risks of human rights abuses, given they can be harnessed to amplify the velocity, variety, and volume of disinformation and misinformation. For example, there is evidence of Google’s LLM advising on the health benefits of eating rocks, while OpenAI’s ChatGPT was found to cite court decisions that did not exist.

Research by the UN shows the human rights consequences of generative AI, with harms such as violent speech, bias and exclusion, and invasion of the privacy of often vulnerable and marginalised populations. By looking at the value chain, the UN identifies human rights risks across dataset preparation, modelling, and model deployment, such as the risks of erasure, online harassment and gender-based violence, with pornographic deepfakes being an important issue. Generative AI allows malicious actors to conduct personalised attacks by creating tailored, convincing fake media and harmful content, intensifying psychological harm and reinforcing systemic inequalities. By reducing the cost, time and technical skill required to produce such content, and through the mass availability of new tools, generative AI can facilitate large-scale disinformation operations and targeted campaigns of online harassment. This has implications both for the decisions that companies make to design, develop and incorporate these technologies into their products and for the need for improved content policies enforced against user-generated content.

What role did social media algorithms play in the riots that took place in the UK in summer 2024?

The rapid dissemination of disinformation during this period was facilitated by platform algorithms designed to prioritise engaging content, regardless of its veracity. This highlights a number of core issues with the algorithms and the wider design infrastructure. For example, accounts with large followings that shared these unfounded claims would have been granted priority in people’s newsfeeds, because the algorithm gives greater reach to more influential accounts. On X, for example, the disinformation was spread by ‘verified’ accounts, essentially building in a pay-for-reach design in which tiered services provide even greater prioritisation for those who want to spread their content wider and faster, conditions ripe for spreading disinformation quickly.

The scale and speed at which disinformation spread across different platforms highlights the inadequate coordination between platforms on removing content that is cross-posted, even when it poses a threat to national security. It is also notable that closed messaging apps such as Telegram, Signal or WhatsApp were used by far-right groups to share details of planned protests and to incite violence, so they need to be considered an important part of the ecosystem of disinformation, misinformation and harmful content.

How effective is the UK’s regulatory and legislative framework on tackling these issues?

Online Safety Act (OSA)

Glitch is supportive of the risk assessments imposed on tech companies in the OSA, given their potential to prevent harm. However these duties should be further strengthened by introducing fundamental rights risk assessments (in relation to the Human Rights Act 1998, for example) and algorithmic transparency requirements, reflective of the EU Digital Services Act.

The introduction of priority offences in the OSA is not a preventative harm-reduction measure. Moreover, the narrow remit of the legislative (and therefore regulatory) frameworks significantly limits their ability to address a wider spectrum of harmful online content, as they focus on explicit, and often inadequate, criminal definitions. This constrained focus often fails to account for the complex, subtle, and evolving nature of digital abuse that does not necessarily meet the legal criteria for criminal behaviour. In addition, given that groups such as Black women are less likely to seek criminal justice and more likely to experience poorer outcomes in the criminal justice system, the introduction of priority offences will not necessarily result in justice for online harms impacting Black women. Glitch therefore advocates for non-criminal avenues of redress to be introduced and for the ring-fencing of 10% of the Digital Services Tax to support prevention and redress for the most impacted communities. We also caution against an approach that criminalises a suite of new offences without investment in Media Literacy by Government and by companies themselves.

The Act’s transparency obligations could be impactful, but only if additional measures are taken including ensuring that transparency efforts are robust and data-driven. Currently, some tech companies release transparency reports, but they do not collect or release disaggregated data. Duties should be placed on tech companies to release transparency reports with disaggregated data available for independent research on content moderation: including race, gender, and other identifying characteristics of abusive posts, flagged posts, and content takedown. 

Ofcom

The effectiveness of the OSA in combating harmful social media content depends on how well its provisions are enforced, its ability to respond to emerging harms, and its capacity to hold tech companies accountable. Ofcom’s enforcement role is central to this, but it needs proper funding, authority, and expertise to keep up with the ever-changing tech landscape. The lack of funding seems particularly acute in Ofcom’s Media Literacy work, which is fundamental to making current legislation and regulation work. Ofcom and the Government should work together to ensure Ofcom’s Media Literacy strategy has the funding behind it to execute on public education campaigns and resources.

In our statement, we welcomed Secretary of State for SIT Peter Kyle’s draft Strategic Priorities for online safety, for stating in no uncertain terms that Ofcom’s expanded powers under the OSA give it scope to thoroughly and effectively regulate platforms in relation to online safety. It is clear, however, that if Ofcom is under-resourced, particularly in terms of its ability to legally challenge globally powerful tech companies, the OSA risks being ineffective in practice. If Ofcom is led by narrow legal definitions in the OSA and is under pressure from legal teams in large tech companies, we may find the regulator cannot hold companies to account on these duties.

What role do Ofcom and the National Security Online Information Team play in preventing the spread of harmful and false content online?

Ofcom

Ofcom plays a critical role in preventing the spread of harmful and false content online, primarily through its regulatory functions under the Online Safety Act. Ofcom’s responsibilities include:

  • enforcing compliance with content moderation standards

  • assessing the effectiveness of platforms’ safety measures

  • requiring transparency in how these companies manage and report on harmful content

In the wake of the riots, The Rt Hon Peter Kyle MP, Secretary of State for Science, Innovation, and Technology, encouraged Ofcom to adopt an ambitious approach to regulating online safety. However, Ofcom appears to be risk-averse, adhering to narrow interpretations of its regulatory powers under the current legislation. The government has decided against immediate amendments to the Online Safety Act (OSA), arguing that such changes would delay the implementation of existing provisions set to take effect in 2025, which will impose new obligations on social media platforms to tackle harmful content. Post-2025, the government plans to review the impact of the OSA and has not dismissed the possibility of further legislative amendments.

For Ofcom to effectively enforce online safety regulations and challenge influential tech companies, it must be adequately resourced and equipped with the legal resources necessary to follow through on challenges to companies. The legal resources of multinational tech firms are well documented, with substantial capital allocated to lobbying against regulatory measures in the EU and UK. High-profile legal confrontations, such as the Gonzalez v. Google case, illustrate the lengths to which these companies will go to contest regulatory actions. A recent legal commentary described Meta as “an aggressive defendant with limitless resources and an appetite for challenging every conceivable aspect of our case,” underscoring the intense legal battles Ofcom faces. Without sufficient legal and financial backing, Ofcom’s ability to be ambitious and assertive in its regulatory role could be severely compromised, limiting its effectiveness in enforcing new laws and safeguarding online safety.

National Security Online Information Team

The National Security Online Information Team (NSOIT) aims to address the spread of harmful and false content online. NSOIT operates by monitoring and analysing misinformation and disinformation threats in collaboration with various government and external partners. This team, as is the case with a range of national bodies, has trusted flagger status with the major social media platforms. However, the decision to take action on flagged content ultimately rests with the platforms themselves. NSOIT has faced criticism for its lack of transparency and limited engagement with civil society on its operational strategies and the definitions it uses when flagging disinformation. Efforts by Public Technology to uncover more about NSOIT’s activities through Freedom of Information requests were declined, citing the need to protect the government’s relationship with social media platforms and to preserve a ‘safe space’ around ministers and government officials.

NSOIT took an active role in tracking activity around the time of the summer riots, though it is unclear what the scale and impact of this work was. This secrecy has raised concerns about NSOIT’s operations during significant events such as the pandemic and periods of social unrest, highlighting the need for clearer disclosure of its policies and actions to foster greater public trust and accountability.
