Cover image: AI-generated image of a Black woman on a yellow background, her face gradually disintegrating to the left to show the dehumanisation of Black women — the cover of the Digital Misogynoir Report.


Our research aims to raise awareness and understanding of online harms experienced by Black women and other marginalised groups. Our methods show how to apply intersectionality as best practice. Here are a few examples of our work so far.

The Digital Misogynoir Report

‘The Digital Misogynoir Report: Ending the dehumanising of Black women on social media’ shows the widespread and alarming prevalence of digital misogynoir across five social media platforms. It calls into question efforts by tech companies and the UK government to make online spaces safer, as dehumanising abuse directed towards Black women is allowed (and often enabled) to proliferate online.

Our Digital Misogynoir Report analyses almost one million social media posts and is the first to examine the widespread prevalence of digital misogynoir on social media. Despite this unchecked abuse, current online safety research, policy efforts and the actions of major tech companies have largely ignored the combined racialised and gendered nature of online harm.

Glitch’s report provides a statistical analysis of digital misogynoir, and we call for immediate action to combat the violent dehumanisation of Black women online.

READ our full report

FUND this work by donating towards the report recommendations

JOIN our community to learn more about the findings, and how to get involved in campaigns to make a change

SUPPORT our call to political parties to include digital misogynoir in their manifesto pledges

(If the report does not open for you, try copying the hyperlink and pasting it into your web browser. If it still won’t download, wing us an email.)

Watch our report launch:

Hear from our expert Julia in their talk at the Oxford Internet Institute about our findings:

The Ripple Effect Report

1,800,000 people suffered threatening behaviour online in the past year. Glitch and EVAW’s Ripple Effect Report showed that this abuse is worse for women and significantly worse for women of colour, and that it sadly increased during the Covid-19 pandemic.

Roundtables on AI harms: deepfake abuse & non-criminal redress

We are currently organising roundtables where experts on racism and AI harms come together to exchange ideas on how to move research forward in this area, with a particular focus on non-carceral responses to deepfake abuse.

READ our briefing on roundtable #1

WATCH roundtable #1 here:

WATCH roundtable #2 on ‘User redress for AI harms’ here: