
Down the deepfake hole



It has been a few months since word got out that the famous Twitch streamer Atrioc had been caught viewing a site hosting deepfake porn of some of his colleagues, friends and acquaintances. The incident should not have come as a shock: the numbers show the phenomenon is far more widespread than a single scandal suggests. When deepfake technology meets pornography, the result is extremely dangerous, because fabricating convincing fake media becomes easy. Deepfake porn is now readily available, usually through open-source algorithms or dedicated apps, and its user base has grown rapidly, especially after the Atrioc scandal. This brings us to the questions we will try to answer in the following paragraphs: how can the people affected safeguard their reputation, job, family and mental wellbeing? How can anyone tell real from AI-generated porn? And will it ever be possible to remove non-consensual material from a place where nothing is allowed to be forgotten?

First, let’s explain what revenge porn is and how deepfake technology works.

Revenge porn is the release and circulation of private pictures or pornographic videos of a former partner, without consent and sometimes without the victim’s knowledge, with the intent of damaging the image and reputation of the target, who is generally a woman. Deepfakes have legitimate uses in film and education, but today they are mainly used to generate sexually explicit media for cyber exploitation: according to a study by Sensity (formerly Deeptrace), such material accounts for 96% of the deepfake videos available on the Internet.

Deepfake images are typically created with a generative adversarial network (GAN). A GAN consists of two neural networks locked in competition: a generator, which learns to produce data resembling a given data set, and a discriminator, which must classify each sample it receives as synthetic or original. When the discriminator can no longer tell original from synthetic data, the generator’s output will appear authentic to the untrained human eye.
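The adversarial loop described above can be sketched in miniature. The toy below is purely illustrative (a one-dimensional "GAN" with a linear generator and a logistic discriminator, nothing like a production deepfake pipeline): the generator only ever sees the discriminator's feedback, yet that is enough to pull its output toward the real distribution. A small weight penalty on the discriminator, a common stabilizing trick, keeps the back-and-forth from oscillating.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
REAL_MU, REAL_SIGMA = 4.0, 1.25

# Generator G(z) = a*z + b maps noise z ~ N(0, 1) to fake samples;
# discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch, decay = 0.02, 64, 0.05  # decay: weight penalty for stability

for step in range(5000):
    # --- Discriminator step: push D(real) up and D(fake) down ---
    x_real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    x_fake = a * rng.normal(size=batch) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c
    gw = -np.mean((1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    gc = -np.mean(1 - d_real) + np.mean(d_fake)
    w -= lr * (gw + decay * w)
    c -= lr * (gc + decay * c)

    # --- Generator step: adjust a, b so D scores the fakes as real ---
    z = rng.normal(size=batch)
    d_fake = sigmoid(w * (a * z + b) + c)
    gx = -(1 - d_fake) * w          # d(-log D(fake)) / d x_fake
    a -= lr * np.mean(gx * z)       # chain rule through x_fake = a*z + b
    b -= lr * np.mean(gx)

fakes = a * rng.normal(size=10_000) + b
print(f"real mean {REAL_MU:.2f} vs fake mean {fakes.mean():.2f}")
```

Real deepfake systems use deep convolutional networks on image data, but the training loop has exactly this shape: alternate discriminator and generator updates until the fakes are indistinguishable from the originals.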

Since deepfake porn is a very recent phenomenon, amplified during the coronavirus pandemic, comprehensive data is scarce; but the figures that do exist, particularly on revenge porn, are shocking.

  • According to the report “The State of Deepfakes”, nearly 15,000 artificially created videos were counted between 2018 and 2019, almost double the previous year’s figure and still growing. 96% of these videos were pornographic; the remaining 4% were miscellaneous.

  • According to the same report, around 680,000 women have been digitally stripped and posted in various Telegram groups and bots that facilitate the circulation of such images. The source pictures are taken from public social media profiles or private collections, and are then commented on, shared, and sometimes even rated, often with private information such as the victim’s home address, telephone number or username attached.

  • An Amnesty International study found that 911 of the 4,000 women surveyed had been victims of sexual harassment or threats online. In Italy, 59% of the victims suffered this abuse from complete strangers.

  • The Cyber Civil Rights Initiative found that 93% of people affected by revenge porn reported serious harm to their mental health, and 82% reported repercussions on their social and working lives.

Across communities on Reddit and Twitter, though, commenters have accused the women involved of exaggerating the impact of the deepfakes, dismissing the fakes as a harmless Photoshop job or even justifying them because some of the victims were active on OnlyFans.

What are the ethical and legal issues that can derive from their exploitation?

From an ethical point of view, the distribution of deepfake porn, like revenge porn, harms people’s reputation, dignity and personal freedom. It is important to note that the people targeted by deepfakes have few avenues of legal recourse. Legislation has not kept pace with this recent trend: in many countries it is non-existent, or only partially applicable where it covers revenge porn involving a “real” person or child sexual abuse material. Change is hopefully on the way: some American states now have laws punishing non-consensual deepfakes, the UK has proposed legislation against such abusive behaviour, and the EU is trying to reach a common position. The main issues remain removing content from the internet and the proliferation of deepfake porn, which contributes to normalizing non-consensual sexual activity, perpetuating harmful gender stereotypes and promoting sexual violence.

In conclusion, it is necessary to:

  • Highlight the importance of consent when it comes to sex, through awareness campaigns, especially in schools, where harmful stereotypes circulate and men such as the infamous Andrew Tate are idolized.

  • Inform and raise awareness about these phenomena, especially in family settings, since most people cannot distinguish a real photo from a fake one.

  • Create ad hoc legislation and support systems for victims, so that they can limit the impact of this type of abuse on their careers and families.

We can see that countries are starting to move in the right direction, but it is far from enough as long as abusers go unpunished for their crimes and victims keep being targeted and ostracized, most of the time for doing something entirely human.
