By Maria Makurat - Human Rights and Cyber Security Team

Introducing the issue

On February 9th, 2025, French President Emmanuel Macron posted on X a montage of AI-generated deepfakes of himself in various scenarios, including a TV series, to promote the "Artificial Intelligence Action Summit". Deepfakes are more ubiquitous than ever, easy for anyone to access and use. Governments are investing heavily in AI, and a race seems to be under way between major players such as the EU, China, and the USA. In this context, one topic warrants priority in the discussion: AI and women's rights. Whether in cyber security, modern warfare, or the arts, AI offers as many opportunities as challenges, and both should be analysed to highlight the current benefits and potential damage, what remains to be done, and where we are headed on the matter.

The Paris AI Action Summit brought together many specialists and governments to tackle the multiple aspects of AI and "deepfakes". The recent "International AI Safety Report" defines the term as: "A type of AI-generated fake content, consisting of audio or visual content, that misrepresents real people as doing or saying something that they did not actually do or say." AI is affecting the cyber domain by spreading disinformation about women, and violence against women online, such as "revenge porn", is not a new phenomenon.

What about fast-evolving AI technology? What about the "dark web", widely discussed on TikTok, where some individuals warn women not to search for themselves? And what are the benefits of AI in all of this? Just recently, the TAKE IT DOWN Act was pushed in the USA to protect women against deepfake-generated content, showing the growing relevance of the issue.

This article aims to explore new aspects of the current debate by looking at the recent Paris AI Action Summit and at what research groups and personal investigations have found so far, and by comparing the pros and cons of AI regarding women's rights in the cyber domain. Lastly, recommendations will be made for further research and for the questions that still need to be tackled.

Photo by Markus Spiske on Unsplash

The negatives of AI and women’s rights

One issue that comes to mind when thinking of women and violence in the cyber domain is non-consensual intimate image distribution (NCIID), related to revenge porn, an issue that goes back as early as 2007. Now, however, the ever-evolving technological landscape complicates this crime. Celebrities have already been victims of deepfake technology, resulting in wide scandals and outrage from fans towards the creators of such images and videos. Recent developments show that not only celebrities and public figures are falling victim, but also private individuals: women from all socio-economic occupations and backgrounds.

While each victim is affected, a pattern does emerge: women who are active in public life are being silenced through such actions. Recent surveys and research have shown that between 2018 and 2023 the number of women falling victim to "digital violence" doubled. This development shows that further research remains necessary, alongside debate and exchange of information about this pressing matter.

Most deepfakes and AI-generated videos involve porn and violence towards women, and certain AI programs are even specifically designed to "undress" women in photos. Such developments are concerning, since these technologies have become easy to operate and widely accessible, making every woman a possible target for anyone in the world, whether they know her or not. The potential consequences of such actions are known but bear listing: a ruined reputation, problems at work, seriously affected personal relationships, and mental health issues, to name a few. Some women have had to go to the lengths of moving to another state or country, taking a break from their careers, or deleting all their profiles from the internet. Such consequences are dire and should be tackled with stronger sanctions, both now and in the future.

Furthermore, in Germany the debate has surrounded not only deepfakes used against women for revenge porn but also child sexual abuse material created with deepfakes. Initiatives such as HateAid are calling for stronger action, like establishing specific laws in Germany and the EU against the creation and dissemination of AI-generated videos that are violent and cause severe harm. In Germany, for example, while no law yet bans the creation or distribution of deepfakes, policymakers are currently working on laws to regulate them strictly, as can be read in the draft law of September 2024. The EU has issued broad legislation on AI systems in an attempt to protect fundamental human rights from AI-produced harm.

The impact of deepfakes on women is complex, and due to AI the identification of said fake photos and videos becomes more challenging. “It becomes evident that the consequences of digital gender-based violence can extend beyond the cyberspace sphere.”

What about dating apps that use AI to match potential partners? Will there be complications in using this technology in relation to deepfakes? "Dating apps are constantly collecting personal data to improve their matchmaking and interactions. This ongoing data collection raises significant concerns about data privacy and security. Many users may not fully understand (especially young users) the extent of the data being collected or how it is being used, which puts them at risk of data misuse or hacking." Adding this to the earlier analysis of the number of deepfakes and the misuse of personal information, the safety of women could take a new turn, and not for the better.

Does AI have a positive influence thus far? What is the sentiment?

Having seen that many issues prevail when discussing deepfakes and revenge porn in relation to women's rights, it becomes relevant to ask: are there any benefits of AI in combating deepfakes in the cyber domain? What can be done? What other laws do we need? What is the current debate on this matter?

The U.S. Government Accountability Office published an article last year about technologies developed to detect AI-altered videos and deepfakes; these applications send a pop-up message urging viewers to exercise discretion. "Disinformation can still spread even after deepfakes are identified. And, deepfake creators are finding sophisticated ways to evade detection, so combating them remains a challenge."

Other programs use AI to tackle the identification of deepfakes. Revealense, for instance, [assembles] "a team of experts in psychology, neuropsychology, nonverbal and cultural-patterns specialists, mathematics, AI, computer vision, and neural networks. Together they have developed an AI platform that can analyse voice-based- and video-based communications to assist decision-makers that encounter AI-generated content." Another platform that aims to fight deepfakes is Weverify, which "aims to address the advanced content verification challenges through a participatory approach consisting of open source algorithms, low-overhead human-in-the-loop machine learning, and intuitive visualisations." In other words, deepfakes of women and revenge porn are being treated as part of disinformation campaigns, and should be included in that definition. If AI technologies can reliably determine whether a given video is fake or real, that is a big step in battling deepfake campaigns.

There seems to be a consensus that laws and regulations must be put into place. Some activists (such as the Ban Deepfakes campaign) suggest that deepfakes should be banned outright, an opinion likely shared by many members of the public, and politicians such as Charlotte Owen in the UK are pushing to ban or punish the use of deepfake technology to create sexually explicit images without consent. Academics (Bart van der Sloot et al.), however, argue that banning technologies such as deepfakes might be wrong, as it would inhibit their development; common ground is clearly yet to be found.

Photo by Sara Kurfeß on Unsplash

What remains evident is that offenders have relatively easy access and means to create, share, and spread deepfakes on social media platforms. Expert Bernard Marr asserts that these companies are heading in the right direction and know how to use AI to combat revenge porn. Concerning long-term solutions, Meta has launched initiatives such as a special team to combat sextortion, while StopNCII generates hash values ("digital fingerprints") of intimate images so that platforms can detect and block copies, further stopping the spread of compromising photos and videos.
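The idea behind such hash-based matching is that a platform stores only a compact "fingerprint" of an image, never the image itself, and compares fingerprints of new uploads against it. As a rough illustration only, and not StopNCII's actual algorithm, a minimal difference hash (dHash) can be sketched in Python, here operating on a hypothetical image already downscaled to a 9x8 grid of grayscale values:

```python
def dhash(pixels):
    # Difference hash: one bit per adjacent pixel pair, set when
    # brightness increases left-to-right. A 9x8 grid of grayscale
    # values yields 8 bits per row, i.e. a 64-bit fingerprint.
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    # Number of differing bits; a small distance suggests the same image.
    return bin(a ^ b).count("1")

# Hypothetical 9x8 grayscale grid standing in for a downscaled photo
img_a = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]

# A near-identical copy, e.g. the same photo after re-compression
img_b = [row[:] for row in img_a]
img_b[0][3] += 40  # one slightly brightened pixel

print(hamming(dhash(img_a), dhash(img_b)))  # -> 1 (out of 64 bits)
```

Because the hash captures only coarse brightness gradients, a re-encoded or slightly edited copy of the same image yields a nearly identical fingerprint, so a small Hamming distance flags a probable match without either party ever exchanging the image itself.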

Meta has also been developing policies and strategies to tackle sextortion on its platforms, for instance a team that uses an automated system to detect and remove perpetrators' accounts and report them to the authorities. This team works with the NCMEC (National Center for Missing and Exploited Children).

This already shows a step towards getting law enforcement involved. However, research suggests that it remains challenging to find the individuals behind the deepfakes. Even after deepfakes are taken down, victims need reassurance that some sort of official statement will be made, clarifying that the material was fake and supporting the victim's innocence. They also need a sense of safety in their present and future environment, to mend the damage to their public image and help them overcome, or at least manage, the long-lasting trauma that can result.

Meta's and other platforms' initiatives help to determine whether videos are deepfakes, but the big question remains: finding them in the first place. What if women are not aware that deepfakes of them are circulating on the dark net or on porn sites? Should women, or individuals in general, regularly search their names on the internet to find out whether AI-manipulated images and videos of themselves are being spread, adding to already strained mental health? Do we continue relying on luck to find such videos, or wait until one is confronted with these deepfakes by friends, family, or worse, co-workers? Policymakers should consider embedding this issue more strongly into university curricula, to train specialists to tackle the spread of deepfakes.

What certainly needs further advancement is research into these technologies and the enforcement of laws, or at least guidelines ensuring that companies sign up with tech organisations mandated to preserve a safe workspace. Another possible future development is requiring organisations to meet certain standards of cooperation with such technological institutions and to inform their employees regularly about how to act and report in these situations. Technological know-how and resources play a big role here, however, so large companies should have the means and support to engage in tackling deepfakes and disinformation campaigns, since such 'attacks' can have long-lasting effects on both the victims and the organisations. This leads one to wonder whether we will see some sort of 'automated deepfake cyber war', in which deepfakes of women are put out there and AI algorithms are used to automatically track the videos down and delete them. It could become an endless back-and-forth of uploading deepfakes and taking them down.

Additionally, companies in general should recognise that this is a new and very present threat that can affect anyone at any time. Victims should be able to turn to law enforcement as well as confidential contacts to sort out such matters before they lead to bigger legal conflicts; once the damage is done, it is often irreparable. This is also the message of lawyers and activists, such as Noelle Martin, who are strongly pushing for action against deepfakes. Asking others their opinion on the matter (anonymous interviews held via e-mail) makes it apparent that there is still much to be done, and much uncertainty about what should be done: "It needs more debate and discussion … It's easy to dismiss something if you don't see a personal value in it. I'm not too sure what else could be done other than to know that there's something and being wary and attentive." (anonymous interview held by the author via e-mail on 20/2/25). Several interviewees are of the opinion that deepfakes should be regulated or banned, since they cross a line when they involve someone's identity without consent. On a positive note, many of the interviewees had already heard the term deepfake and agree that it needs to be discussed.

Photo by Ilnur Kalimullin on Unsplash

The Paris AI Action Summit 2025

The Paris AI Action Summit took place on the 10th and 11th of February 2025. The main themes discussed were public-interest AI, the future of work, innovation and culture, trust in AI, and global AI governance. Among these, "Trust in AI" in particular addresses malicious uses of AI and how to tackle the many challenges that come with the fast-developing technology. The Summit also marks a big step towards an interdisciplinary approach, involving a broad range of stakeholders: "The question we all face – as the world's citizens and users, start-ups and major corporations, researchers and decision-makers, artists and media outlets."

A further takeaway from the Summit is that the EU plans to establish "AI Gigafactories" to "develop the most advanced very large models needed to make Europe an AI continent." Contrasting these gigafactories and their goals with the International AI Safety Report, it becomes apparent that much work is still needed. With so many corporate initiatives in place to tackle the abuse of AI, a potential risk is losing track of what nations are doing in terms of policy. Clear and concise strategies need to be put in place to unify institutions, research groups, and governments in regulating AI and tackling the issue of deepfakes against women.

It can be argued that while countries are developing strategies and setting up numerous think tanks, research organisations, and summits, AI and technology are developing so fast that those strategies need to come into force just as quickly. A good model is Australia, whose government already took the big step of criminalising the spread of non-consensual images in 2019, an initiative also pushed by activists such as Martin.

The above resonates with the words of one of our interviewees about deepfakes in relation to women’s rights: “(…) there need to be stricter laws around how you can make use of deepfakes and what could be considered unlawful use of it.”(Anonymous interview held via e-mail 25/2/25).

Conclusion and further thoughts

The purpose of this article was to present the current debate surrounding deepfakes and women's rights; clearly a vigorous one, held by organisations, research groups, private investigators, and activists, though policymakers and governments are still largely missing from it. That several articles and discussions are already devoted to the issue is nonetheless a good sign in terms of tackling it.

The introduction of AI has brought many benefits to companies in terms of automation and faster workflows. At the recent Paris AI Summit, the EU made clear its determination to invest more in the development of AI for the benefit of everyone, particularly in sectors such as research, the environment, and healthcare. Many challenges, however, are yet to be addressed to manage and mitigate AI's negative impact. It also remains to be seen what the Summit will bring and how the increased investment will shape the AI landscape.

If AI programs end up battling AI-generated deepfakes, are we looking at a "cyber deepfake war"? Whether cyber war is actually taking place has been extensively debated by scholars such as Thomas Rid and Joseph S. Nye, and the use of deepfakes to harm individuals and cause havoc could be seen as one form of it. Perhaps in the future we will have an automatic back-and-forth of deepfakes being created, identified, and deleted from cyberspace, in an endless loop.

If the use of deepfakes against women continues, schools should educate children and young adults on these topics. There is, of course, the risk of making people aware of these technologies in the first place; not addressing the issue, however, would likely cause greater harm. One interviewee suggested putting "a societal shame on people who use deepfakes against women" (anonymous interview held via e-mail 24/2/25) to ensure that these actions are not repeated.

Another major step is the TAKE IT DOWN Act, a bill under discussion in the US that would ban deepfakes and protect women against revenge porn. It remains to be seen how the bill develops and whether such a ban can come into force. How will authorities track the deepfakes, and how strongly will perpetrators be punished? These questions remain open for discussion thus far.

Conversely, what about the impact, if any, of deepfakes on men? Few studies address whether men also experience deepfake attacks, and thus far not enough data is available to draw conclusions. It has been suggested that men do not feel as affected as women do, though this must be taken with a grain of salt, as some reports show that men experience such attacks too. This is an interesting matter for further research.

If these technologies make it so easy to create and spread false videos of women, then it should be equally easy to access an AI program that finds and removes them. Perhaps women (and anyone else affected) could sign up to a platform where they can easily track whether degrading deepfakes of them are being spread, and consequently report them to the authorities.

It can also be argued that all of this has been said and done before, while the technology keeps evolving.

Governments such as those of the USA and Germany are trying to produce laws to tackle deepfakes. The debate therefore needs to be kept alive, to find new angles, keep questions and inquiries going, and see what the next major AI Summit brings. What will the planned AI Gigafactories bring to the table? Will a ban in the USA, for example, have an impact on the rest of the world? Is it possible to ban deepfakes that violate women's rights worldwide, as a unified strategy? Perhaps the main decision most countries will face is this: ban deepfakes as a whole, or continue developing AI programmes to try to regulate the deepfake landscape? For now, rolling back on AI seems very difficult, since so many governments and countries are invested; developing AI to protect women, however, could become a powerful tool and should be considered. We need to adapt our strategy and find the 'chink in the armour' of the 'enemy', in this case the harassers. Here, drawing on Sun Tzu could help, as Chin-Ning Chu did in applying the Art of War for women: "Yet Sun is saying that victory is not in your control but rather the gift of your enemy – in other words, victory is assured when your enemy makes a mistake. Of course, it's up to you to pinpoint your enemy's weakness and exploit it." Time will tell how well we adapt and find the loopholes.