The Heart Part 5 is part of Kendrick’s tradition of getting his fans ready for a new album. The Heart Part 1, back in 2010, showed him as a 22-year-old, fresh-faced rapper grafting to get recognised; in The Heart Part 5, a seemingly anxious and thoughtful Kendrick morphs into a series of prominent Black men using deepfake technology.
So what is deepfake technology and how does he do it?
Deepfake technology is when an image or video of someone is digitally altered so that they appear to do or say something they didn’t, or when someone’s face is imposed onto an existing video or image.
In Kendrick’s case, he imposed images and videos of public figures — OJ Simpson, Kanye West, Jussie Smollett, Will Smith, Nipsey Hussle, and Kobe Bryant — onto a video of his own face, showing them rapping his lyrics and taking on his expressions. In line with his album, Kendrick uses deepfakes to explore themes of morality through these figures, many of whom have been at the heart of significant public controversy, including violence and gender-based violence in particular. The video is captivating, chilling, creative and sends a powerful message.
The observant among us probably also noticed that the parts of the video showing Jussie Smollett and Will Smith in particular were hyper-realistic. This is likely because, as actors, far more footage of them already exists, allowing the technology to draw on more data and create higher-quality deepfakes than for the others.
So what does this have to do with online gender-based violence?
Deepfake technology can be used in any context, and Kendrick has opened the floodgates by showing the artistic potential of this digital tool. As part of this, he explored themes of gender-based violence through his choice of figures like OJ Simpson.
Despite this artistic potential, research shows that deepfakes are used for harm more than anything else. Politicians are currently raising the alarm because the technology has been used against political leaders for disinformation, showing them saying things they never said. Director Jordan Peele, for example, created a deepfake video of Barack Obama precisely to raise awareness of this danger.
Despite the attention deepfakes have received for their potential to spread political disinformation, research actually shows that up to 90% of deepfakes target women with sexually explicit material made without their consent. This abusive content is known as deepfake image-based sexual abuse (or non-consensual intimate images).
This is when deepfake technologies are used to digitally impose someone’s face onto sexually explicit images or videos without their consent. These tools target women, and some only work on women, showing they are designed and used specifically for online gender-based violence. In one case, research into a Telegram bot found to be creating deepfake image-based sexual abuse identified at least 100,000 victims, including underage girls.
What’s the impact?
Much as so many of us were moved when we saw Kendrick transform into public figures like Kobe Bryant and Nipsey Hussle, who have passed away in recent years, deepfake videos and images can feel very real and can have a profound impact on those watching as well as on those depicted.
Image-based sexual abuse using deepfakes can also have profound impacts on victim-survivors, especially in contexts of domestic abuse, or when they cannot get help from their communities for fear of being blamed or judged. The impact of these videos can include profound mental and emotional distress, breakdown of relationships and even loss of employment.
In an era where uploading images and photos of ourselves is an important way to connect, share and enjoy online spaces, women should be able to do so without fear of being targeted.
What needs to change?
The UK is behind on regulating deepfakes, as the legislation currently being developed (the Online Safety Bill) falls short of regulating specific technologies or considering online gender-based violence (see our campaign to include women and girls in the Bill here).
The EU Artificial Intelligence Act does cover deepfakes and is currently going through the European Parliament. However, it classifies deepfakes as “limited risk” AI, meaning the only obligation is for deepfakes to be labelled as such. There is nothing to limit or stop their use for image-based sexual abuse, and no clear approach to preventing labels from being removed, which risks making it even more difficult for victim-survivors to prove the content is not real.
In China, the government appears to be acting more decisively, banning deepfakes with some exceptions that require labelling, although it is again unclear how it will track these technologies and prevent them from proliferating via open-source platforms.
In any case, it is clear that without gender-informed legislation that stops the development and use of these tools for harmful and abusive purposes, we will continue to see more and more victims.
So as we appreciate and feel the impact of deepfake technologies used for artistic expression, let’s also bring attention to the dark side of these technologies and the need for change.
To keep up to date with the latest, sign up to our newsletter.