February 17, 2026

Germany's cyber-strategy through an international relations lens

By Maria Makurat - Human Rights & Cyber Security Desks

Germany's critical infrastructures

Germany has faced a growing number of cyber-attacks since the Russian-Ukrainian conflict and must also contend with potential hybrid warfare scenarios following the drone sightings of 2025 (see the drone article and the interview with the president of the BKI). In 2025 alone, Germany's rail system and the Berlin airport suffered cyber-attacks that caused major disruptions and delays. The Global Cybersecurity Outlook 2026 report by the World Economic Forum (WEF) highlights the growing role of Artificial Intelligence (AI) in cyber-attacks, as well as how countries such as Germany and Denmark are shifting their cloud solutions away from foreign dependency towards regionally managed alternatives. This reflects a continuously changing world order in which countries and states are redefining their priorities and security agendas after years of globalisation and complex interdependence.

Will states continue to retreat to their own methods, or will we see new cooperations and treaties? Complex interdependence is a prominent theory in international relations and a major driver for states to keep the risk of conflict low; however, the decisive factors are no longer purely financial and trade-related, since security has become increasingly important in the light of emerging conflicts, which makes applying the theory difficult. Consequently, it is essential to continuously analyse and reassess the cyber strategies of states such as Germany and to examine how the evolving international landscape both shapes and is shaped by these strategies.

Analysing Germany's cyber strategy through the international relations lens

Germany has gradually expanded its cyber strategy through institutional development, including the strengthening of the BSI and increased cooperation between civilian and military cyber units, under the heading "Integrated Security for Germany". These efforts reflect a recognition that cyber security is no longer a niche issue but a central component of national security policy.

Many scholars, such as Nye, Thomas Rid, Valeriano and Jagoda, have been (re)examining the security of the cyber domain and which theories can be applied to it. Can we apply realism and liberalism? Is Clausewitz appropriate in the realm of cybersecurity? An interesting note is the comparison P. W. Singer and Allan Friedman draw between cyber-attacks and the human body, arguing that such attacks bypass our defences the way a viral infection bypasses the skin.[1] Connecting the human body to international affairs theory can also be traced back to sociological theories, such as Herbert Spencer's "functionalism", with analysts comparing cyber-attacks directly to viruses attacking the body. This is also an interesting lens for Germany's cyber strategy. As the Integrated Security report states, Germany's cyber strategy adopts a whole-of-society approach, encompassing civilians as well as the public and private sectors. Much like the human body, all parts must function together; if one fails, effective defence against external threats becomes impossible. However, with AI increasingly becoming part of our lives, how can we ensure this?

Valeriano and Maness discuss Kello's view: "New theories and new ways of thinking are required, and Kello (2013: 8) asserts that the social science field is ill-equipped to offer anything of value now."[2] They further ask what is 'easier' in the cyber domain: offence or defence? Nye pointed out (writing in 2010) that the offence has the advantage in the cyber domain due to the unpredictable nature of cyber-attacks.[3] Now, in 2026, after many more cyber-attacks and their consequences, it remains difficult to make a definitive statement; still, in Germany's case the defence clearly needs more attention, since cyber-attacks on Germany have caused damages of roughly 300 billion euros. Furthermore, the lines between cyber espionage and cyber-attacks from states such as China and Russia are blurring, making defence increasingly difficult, especially once 'information warfare' enters the picture.

Valeriano and Maness suggest a 'Just War' approach: "a system of justice for the use of cyber technologies where states are incentivised to maintain continued restraint."[4] Weber has made a similar suggestion, urging Germany and other states to deepen norms so that critical infrastructure is not attacked. Given recent developments, this may prove complicated, since hybrid warfare and changing interdependence between states are influencing how states perceive the world order. That changing order causes much unrest, which could make it difficult to ensure that states practise cyber restraint. One can also draw on theoretical viewpoints from sociology, such as the question of how states and societies can act 'morally'. Sociologists such as Sigmund already asked in 2001 how far automation and the disappearance of institutionally prescribed traditions and values lead to the disappearance of social order.[5] With questions now arising over whether young people under the age of 16 should even have access to social media, given the dangers to mental health and of cyber bullying, and with AI taking over more tasks, states such as Germany find themselves asking how a morally and ethically sound cyber strategy with other states can take shape when states such as Russia have a "different understanding of warfare" (Oscar Jonsson).

The developments of recent years suggest that a multidimensional strategy, drawing on different methodologies and schools of thought to analyse these incidents and derive suggestions for further research, remains the best way forward. Realism, liberalism and constructivism all have their place in international relations when analysing the cyber domain.

Photo by Nk Ni on Unsplash

Whilst the blackout in Germany was not a direct cyber-attack but the result of physical damage to cables, the consequences were still severe. When developing a 'cyber' strategy in the light of the Russia-Ukraine war and other conflicts, Germany must not only invest in cyber security but also protect its critical infrastructures more strongly in the physical domain.

Other scholars argue that the solution to a secure cyber domain is not more privacy but more openness.[6] Jagoda, for instance, argues that a lack of knowledge plays a large role in cyber security and threats. Schneier makes a point also discussed by Jagoda, namely that "only bad security relies on secrecy; good security works even if all the details of it are public".[7] In other words, security through obscurity is bad security, though this view remains contested. Germany is now also considering banning social media apps for individuals under the age of 16 to ensure safety and protection. Closing off access seems, for now, to be the strategy for minimising danger.

Further thoughts?

This article does not claim to offer the ultimate solution for a cyber strategy. When considering the many theories and methodologies, and adopting not only an international relations lens but an interdisciplinary one, it becomes clear that many questions remain about how to analyse a cyber strategy and how states such as Germany can move forward.

It is a difficult task for Germany and other states, since for a cyber strategy in particular the macro level (states and international relations) is just as important as the micro level (civilians and the private sector). A cyber strategy pushes states to focus increasingly on the private sector, which remains vulnerable if it is not continuously 'updated' and educated about the cyber domain.

Recent discussions at the WEF reveal that states are increasingly reassessing their strategies and international partnerships, which will have significant implications for security and the cyber domain. Germany must continue to invest in both cyber and physical defences whilst also involving private actors and citizens in resilience-building efforts. Only through a comprehensive and adaptive approach can Germany effectively respond to the evolving challenges of cyber and hybrid warfare. Future developments, such as Germany's plan to build a kind of 'cyber dome' against cyber-attacks, will show whether such attacks can be prevented; the related issue of 'information warfare' could not be discussed thoroughly in this article. Perhaps states must accept that cyber-attacks cannot be prevented entirely and that a certain level of cyber conflict will, for now, remain part of our society.


[1] P. W. Singer and Allan Friedman, Cybersecurity and Cyberwar (Oxford: Oxford University Press, 2014), 34–39 (from the discussion 'Resilience and is cyber resilience more attainable than cybersecurity?').

[2] Valeriano, Brandon, and Ryan C. Maness, 'International Relations Theory and Cyber Security: Threats, Conflicts, and Ethics in an Emergent Domain', in Chris Brown and Robyn Eckersley (eds), The Oxford Handbook of International Political Theory (Oxford: Oxford University Press, 2018), https://doi.org/10.1093/oxfordhb/9780198746928.013.19, accessed 23 Jan. 2026, p. 264.

[3] Valeriano and Maness, 'International Relations Theory and Cyber Security', p. 266.

[4] Valeriano and Maness, 'International Relations Theory and Cyber Security', p. 268.

[5] Sigmund, Steffen (2001): Zwischen Altruismus und symbolischer Anerkennung. Überlegungen zum stifterischen Handeln in modernen Gesellschaften. In: Jansen, A. et al. (Hg.): Eigeninteresse und Gemeinwohlbindung. Frankfurt/M., S. 213.

[6] Patrick Jagoda, "Speculative Security", in Cyber Space and National Security: Threats, Opportunities, and Power in a Virtual World, ed. Derek S. Reveron (Washington, DC: Georgetown University Press, 2012), 21–36.

[7] Patrick Jagoda, "Speculative Security", in Cyber Space and National Security: Threats, Opportunities, and Power in a Virtual World, ed. Derek S. Reveron (Washington, DC: Georgetown University Press, 2012), 31.

December 10, 2025

The growing issue of violence against women in the cyber domain

By Maria Makurat - Human Rights Desk

Introduction

This year, the 16 Days of Activism against Gender-Based Violence run from the 25th of November to the 10th of December 2025. This year in particular, the call to end violence against women in the cyber domain is a strong focus. The cyber domain poses many challenges and hurdles for women's rights, human trafficking, child pornography, international conflicts and more. Australia, for instance, has taken the major step of banning social media for youth under the age of 16. The UN and associated organisations have raised the alarm that online violence against women, which includes deepfakes, sextortion, stalking and bullying, to name a few, is increasing. Furthermore, the World Health Organisation (WHO) has released a landmark report stating: "Nearly 1 in 3 women – estimated 840 million globally – have experienced partner or sexual violence during their lifetime, a figure that has barely changed since 2000." It is essential to keep collecting such data in order to track trends and identify exactly where to tackle the issues; however, as is also pointed out, the danger lies in cutting funds and support for such research "(…) just as when humanitarian emergencies, technological shifts, and rising socio-economic inequality are further increasing risks for millions of women and girls."

These data are also essential when considering which methods and schools of thought from international relations can be applied to analyse these trends. Many factors play a role in tackling this issue, such as international conflicts, socio-economic factors and political aspects. Where do schools of thought such as realism, constructivism, feminism and liberalism stand? Is there a connection? This article aims to analyse this and suggest possible future questions that need to be asked.

Schools of thought surrounding gender and international relations

International relations and international affairs rest on three main theoretical pillars: realism, liberalism and constructivism. These have shaped the debates, dialogue and analytical methods of scholars in the field. Feminist IR scholarship emerged around the 1980s and 1990s through key figures such as Judith Ann Tickner, Cynthia Enloe, Cynthia Cockburn and V. Spike Peterson. Since then, much debate has taken place, and there seems to be room for more to come.

As early as 2009, scholars were addressing the intersection of cyber and women's rights: Gurumurthy and Menon, publishing in Economic and Political Weekly, already addressed "Violence against Women via Cyberspace".[1] Their analysis focuses on the impact of the economy and the IT sector on women's safety, using a neoliberal view that shifts the focus strongly towards institutions and interdependence. From this perspective, violence conducted through the cyber domain should decline, since we all depend on the underlying technology and sabotaging it by using it as a medium of violence would be self-defeating; as one can see, however, the violence continues to develop despite these mutual benefits. Perhaps by applying Keohane's theory of monitoring, as we see with the banning of social media for youth and the criminalisation of deepfakes as a medium of violence, one could analyse violence in the cyber domain from this point of view, though that of course leaves room for debate.

Traditional scholars such as Kenneth Waltz are of course foundational for international relations and international affairs and remain fundamental when conducting analysis. In recent years, however, a shift has brought new debates on which theories and schools of thought are appropriate for using gender in international relations, debates that should also be applied to the issue of violence against women in the cyber domain. Prügl, for instance, critically assesses the landscape of international relations theories, pointing out that "Waltz suggests that treating gender as a cause of war would be reductionist because it pertains to the individual level of analysis".[2] On the one hand this makes sense when connecting violence against women in the cyber domain to current international conflicts, since one wants to focus on states; on the other, the individual also matters, since the violence is linked to gender and, as the UN reports mentioned above show, continues to be so. Prügl also points out that sociological theories combined with constructivism provide a bridge to feminist theorising. This too speaks for an interdisciplinary approach to violence against women, especially in the cyber domain, since one must combine several theories to address the interlocking roles of states, organisations, individuals, social structures and international conflicts.

Photo by SCARECROW artworks on Unsplash

Sjoberg, for instance, points out that one cannot simply sort scholars into positivists and non-positivists when tackling gender in international relations scholarship.[3] Furthermore, Sjoberg suggests that gender and IR research "is at its best when it is multimethod, epistemologically pluralist, multisited, and carefully navigates the differences between feminist analyses and large-n statistical studies."[4] As the landscape of war and violence changes at a rapid pace, scholars of international relations are almost driven to adapt in how they analyse and interpret the data. When thinking about future questions, it is beneficial to use multiple methods and to keep the different schools of thought in mind when tackling the fast-changing landscape of violence in the cyber domain; however, one should also strive for comprehensive data collection and analysis in order to draw thorough conclusions. Definitions, for example of violence, should be clear across disciplines so that scholars can draw overlapping conclusions when taking different schools of thought into account.

Conclusion and possible future questions

Physical violence against women and cyber-related forms of violence should not be treated as separate incidents. It may be too early to say, but there seems to be a correlation between cyber-related incidents and international conflict when one sees, for instance, South Africa facing the "highest levels of gender-based violence" and about 60% of women in Arab states (such as Yemen, Lebanon and Jordan) facing online violence.

An interdisciplinary approach using quantitative as well as qualitative methods (which would speak for a mix of positivist and non-positivist schools of thought) allows one to combine several avenues for tackling issues such as cyber violence and extortion against women (see, for instance, the "Image-Based Sexual Abuse" research by Nicola Henry et al.).

Therefore, applying international relations theories such as constructivism and feminist theories, whilst of course taking the traditional thinking of Waltz and Clausewitz into consideration, may be of use. The multi-method approach is beneficial and should be used, though with a certain caution. "Inspired by 'gender lenses', feminist research knows security differently both in terms of where knowledge is to be found (particularly at global politics' margins, and with/in people), what counts as knowledge (including emotion, experience, and pain), and where knowledge can be found (particularly in nontraditional formats and sources)."[5] Scholars of international human law are also advocating a feminist approach and speak of "gendering cyberwarfare".

As international conflicts change rapidly, namely through hybrid warfare and drone incidents, states are forced to adapt constantly. The same goes for dealing with cyber-related conflicts and cyber violence. In the field of women, peace and security, it is well documented that rape and violence against women are used as weapons of war. Perhaps one should be wary that the same does not develop in the cyber domain. One already sees more women stepping back for fear of being targeted (especially those working in the public sector). There is therefore great benefit in continuing the debate among scholars on how schools of thought can be applied and which methodologies fit.

By continuing to ask questions and finding new ways to analyse the growing issue of violence against women (in the cyber domain), one can also find possible solutions for ending it. As previous debates show, international relations scholars seem somewhat divided on how to integrate feminist theories into the present-day field. Looking at the developing issues of cyber-sex trafficking, cyber violence, grooming in the cyber domain, deepfakes and cyber espionage, to name a few, combining several methodologies and theoretical ways of thinking seems, for now, 'the way to go'.


[1] Gurumurthy, Anita, and Niveditha Menon. "Violence against Women via Cyberspace." Economic and Political Weekly, vol. 44, no. 40, 2009, pp. 19–21. JSTOR, http://www.jstor.org/stable/25663650. Accessed 27 Nov. 2025.

[2] Prügl, Elisabeth, "Gender as a cause of conflict," International Affairs 99:5 (2023), 1885–1902; doi: 10.1093/ia/iiad184.

[3] Sjoberg, Laura, Kelly Kadera and Cameron G. Thies, "Reevaluating Gender and IR Scholarship," The Journal of Conflict Resolution, vol. 62, no. 4 (April 2018), pp. 848–870.

[4] Sjoberg, Laura, Kelly Kadera and Cameron G. Thies, "Reevaluating Gender and IR Scholarship," The Journal of Conflict Resolution, vol. 62, no. 4 (April 2018), pp. 848–870.

[5] Sjoberg, Laura (2024), "Feminist Theories and Thinking Security Otherwise," Security Studies, 33:5, 860–884, DOI: 10.1080/09636412.2024.2449334, p. 874.

April 30, 2025

The current debate surrounding AI and its impact on women's rights

By Maria Makurat - Human Rights and Cyber Security Team

Introducing the issue

On February 9th, 2025, French President Emmanuel Macron posted on X an AI deepfake montage of videos of himself in various scenarios, including a TV series, to promote the "Artificial Intelligence Action Summit". Deepfakes are more ubiquitous than ever, and the technology is very easy for anyone to access and use. Governments are investing more in AI, in what almost seems a race between major players such as the EU, China, and the USA. In this context, one main topic warrants priority in the discussion: AI and women's rights. Whether in cyber security, modern warfare, or even the arts, AI provides as many opportunities as challenges, which should be equally analysed to highlight the current benefits and potential damage, what remains to be done, and where we are headed on the matter.

The AI Paris Summit brought together many specialists and governments to tackle the multiple aspects of AI and "deepfakes". The term is defined in the recent "International AI Safety Report": "A type of AI-generated fake content, consisting of audio or visual content, that misrepresents real people as doing or saying something that they did not actually do or say." AI is affecting our cyber domain by spreading disinformation about women. Violence against women in the cyber domain, such as "revenge porn", is not a new phenomenon.

What about the fast-evolving AI technology? What about the "dark net", widely discussed on TikTok, with some individuals warning women not to google themselves on the dark web? And what are some of the benefits of AI in all of this? Just recently in the USA, the "TAKE IT DOWN" Act was pushed forward to protect women against deepfake-generated content, showing the growing relevance of the issue.

This article aims to explore new aspects of the current debate by looking at the recent AI Paris Summit and at what research groups and personal investigations have found so far, and by comparing the pros and cons of AI regarding women's rights in the cyber domain. Lastly, recommendations and suggestions for further research will be made, and the questions that need to be tackled will be discussed.

Photo by Markus Spiske on Unsplash

The negatives of AI and women's rights

One issue that comes to mind when thinking of women and violence in the cyber domain is non-consensual intimate image distribution (NCIID), related to revenge porn, an issue that goes back as early as 2007. Now, however, it is the ever-evolving technological landscape that complicates this crime. Celebrities have already fallen victim to deepfake technology, resulting in wide scandals, with many fans reacting in outrage towards the creators of such images and videos. Recent developments show that victims are no longer only celebrities or public figures but also private individuals: women from all socio-economic occupations and backgrounds, famous or not.

Each victim is affected equally; however, a pattern does emerge of such actions being used to silence women who are active in public life. Recent surveys and research show that between 2018 and 2023 the number of women falling victim to "digital violence" doubled. This development shows that further research remains necessary, alongside debate and exchange of information about this pressing matter.

Most deepfakes and AI-generated videos relate to porn and violence towards women. Certain AI programs are specifically designed to undress women in photos. Such developments are concerning because these technologies have become easy to operate and very accessible, making every woman a possible target for anyone in the world, whether they know her or not. The potential consequences are known but bear listing: ruined reputations, work problems, seriously affected personal relations, and mental health problems, to name a few. Some women have had to go to the lengths of moving to another state or country, taking a break from their career, or deleting all their profiles from the internet. Such consequences are dire and should be met with stronger sanctions, now and in the future.

Furthermore, the debate in Germany has surrounded not only deepfakes used for revenge porn but also child pornography created with deepfakes. Initiatives such as HateAid are calling for stronger action, such as specific laws in Germany and the EU against the creation and dissemination of violent, severely harmful AI-generated videos. In Germany, for example, while no law has banned the creation or distribution of deepfakes thus far, policymakers are currently working on laws to regulate this strongly, as can be read in the draft law of September 2024. The EU has issued broad laws on AI programs in an attempt to protect fundamental human rights from AI-produced harm.

The impact of deepfakes on women is complex, and AI makes identifying such fake photos and videos ever more challenging. "It becomes evident that the consequences of digital gender-based violence can extend beyond the cyberspace sphere."

What about dating apps that use AI to match potential partners? Will this technology create complications in relation to deepfakes? "Dating apps are constantly collecting personal data to improve their matchmaking and interactions. This ongoing data collection raises significant concerns about data privacy and security. Many users may not fully understand (especially young users) the extent of the data being collected or how it is being used, which puts them at risk of data misuse or hacking." Added to our earlier analysis of the number of deepfakes and the misuse of personal information, women's safety could take a new turn, and not for the good.

Does AI have a positive influence thus far? What is the sentiment?

Having seen the many issues surrounding deepfakes and revenge porn in relation to women's rights, it becomes relevant to ask: are there any benefits of AI in the cyber domain in combating deepfakes? What can be done? What other laws do we need? What is the current debate on this matter?

The U.S. Government Accountability Office published an article last year about technologies developed to detect AI-altered videos and deepfakes; these applications send viewers a pop-up message urging discretion. "Disinformation can still spread even after deepfakes are identified. And, deepfake creators are finding sophisticated ways to evade detection, so combating them remains a challenge."

Other programs use AI to identify deepfakes. Revealense, for instance, has assembled "a team of experts in psychology, neuropsychology, nonverbal and cultural-patterns specialists, mathematics, AI, computer vision, and neural networks. Together they have developed an AI platform that can analyse voice-based- and video-based communications to assist decision-makers that encounter AI-generated content." Another platform that aims to fight deepfakes is Weverify, which "aims to address the advanced content verification challenges through a participatory approach consisting of open source algorithms, low-overhead human-in-the-loop machine learning, and intuitive visualisations." Deepfakes of women and revenge porn are thus being treated as part of disinformation campaigns, and they should be included in that definition. If AI technologies can easily determine whether such videos are fake or real, that is a big step in battling deepfake campaigns. There seems to be a consensus that laws and regulations must be put in place. Some activists (such as the campaign Ban Deepfakes) suggest that deepfakes should be banned outright, an opinion likely shared by many civilians. Politicians such as Charlotte Owen in the UK are also pushing for a ban, or for punishment of those using deepfake technology to create sexually explicit images without consent. Academics (Bart van der Sloot et al.) argue that banning technologies such as deepfakes might wrongly inhibit their development; common ground is clearly yet to be found.

Photo by Sara Kurfeß on Unsplash

What remains evident is that offenders have relatively easy means to create, share, and spread deepfakes on social media platforms. Expert Bernard Marr asserts that these companies are heading in the right direction and know how to use AI to combat revenge porn. Concerning long-term solutions, Meta has launched initiatives such as a special team to combat sextortion; similarly, StopNCII uses hash codes (digital fingerprints of images) to further stop the spread of compromising photos and videos.
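The hash-matching approach used by services like StopNCII can be sketched in a few lines. The following is a toy illustration only, assuming a simplified "average hash"; real systems use robust perceptual hashing algorithms (such as PDQ or PhotoDNA), not this one. The principle, however, is the same: an image is reduced to a short fingerprint on the victim's own device, and platforms compare fingerprints rather than the images themselves, so near-identical re-uploads can be flagged without the compromising material ever being shared.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance indicates near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

# A 4x4 grayscale "image" and a lightly edited copy (one pixel brightened),
# standing in for an original photo and a slightly altered re-upload.
original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 28, 225],
            [11, 215, 26, 235]]
edited = [row[:] for row in original]
edited[0][0] = 40  # small alteration

h_orig = average_hash(original)
h_edit = average_hash(edited)
print(hamming_distance(h_orig, h_edit))  # → 0: the small edit leaves the fingerprint unchanged
```

A platform would store only `h_orig` and flag any upload whose hash falls within a small Hamming distance of it; hashing on-device is the design choice that keeps the original image private.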

Meta has also been developing policies and strategies to tackle sextortion on its platforms, for instance establishing a team that uses an automated system to detect and remove perpetrators' accounts and report them to the authorities. This team works with the NCMEC (National Center for Missing and Exploited Children).

This already shows a step towards involving law enforcement. However, research suggests that finding the individuals behind deepfakes remains challenging. Even after deepfakes are taken down, victims need reassurance that some official statement will be made clarifying that the material was fake and supporting their innocence, along with a sense of safety in their present and future environment, to mend the damage to their public image and help them overcome, or at least manage, the long-lasting trauma that can result.

Meta's and other platforms' initiatives help determine whether videos are deepfakes, but the big question remains: finding them in the first place. What if women are unaware that deepfakes of them are circulating on the dark net or porn sites? Should women, or individuals in general, regularly search their names on the internet to find out whether falsified AI-generated images and videos of themselves are being spread, adding to women's already strained mental health? Do we continue to rely on luck to find such videos, or wait until we are confronted with these deepfakes by friends, family, or worse, co-workers? Policymakers should consider embedding this issue more strongly into university studies in order to train specialists to tackle the spread of deepfakes.

What certainly needs further advancement is research into these technologies and into enforcement laws, or at least proposals to ensure that corporations sign up with tech organisations mandated to preserve a safe workspace. Another possible future development is for organisations to meet certain standards of cooperation with such technological institutions and to inform their employees regularly about how to act and report in certain situations. Technological know-how and resources play a big role here; large companies in particular should have the means and support to engage in tackling deepfakes and disinformation campaigns, since such ‘attacks’ can have long-lasting effects on victims and organisations alike. This idea leads us to wonder whether we will see some sort of ‘automated deepfake cyber war’, in which deepfakes of women are published and AI algorithms automatically track the videos down and delete them. It could become an endless back-and-forth of uploading deepfakes and taking them down.

Additionally, companies in general should recognise that this is a new and very present threat that can affect anyone at any time. Victims should be able to contact law enforcement as well as confidential contacts in order to resolve such matters before they lead to bigger legal conflicts; once the damage is done, it is often irreparable. This is also the message of lawyers and activists, such as Noelle Martin, who are strongly pushing for action against deepfakes. Asking others for their opinion on the matter (in anonymous interviews held via e-mail), it becomes apparent that there is still much to be done and much uncertainty about what should be done: “It needs more debate and discussion … It’s easy to dismiss something if you don’t see a personal value in it. I’m not too sure what else could be done other to know that there’s something and being weary and attentive.” (anonymous interview held by the author via e-mail on 20/2/25). Several interviewees believe that deepfakes should be regulated or banned, since they cross a line when they involve someone’s identity without consent. A positive sign is that many of the interviewees had already heard the term deepfake and agree that it needs to be discussed.

Photo by Ilnur Kalimullin on Unsplash

The Paris AI Action Summit 2025

The Paris AI Action Summit took place on the 10th and 11th of February 2025. The main themes discussed were Public Interest in AI, the Future of Work, Innovation and Culture, Trust in AI, and Global AI Governance. Of these, “Trust in AI” in particular addresses malicious uses of AI and how to tackle the many challenges that come with the fast-developing technology. The summit also marks a big step towards an interdisciplinary approach, given the breadth of representatives involved: “The question we all face – as the world’s citizens and users, start-ups and major corporations, researchers and decision-makers, artists and media outlets.”

A further takeaway from the summit is that the EU plans to establish “AI Gigafactories” to “develop the most advanced very large models needed to make Europe an AI continent.” Contrasting these gigafactories and their goals with the International AI Safety Report, it becomes apparent that a great deal of work is still needed. With so many corporate initiatives in place to tackle the abuse of AI, a potential risk is losing the overview of what nations are doing in terms of policy. Clear and concise strategies need to be put in place to unify institutions, research groups, and governments when it comes to regulating AI and tackling the issue of deepfakes against women.

It can also be argued that while countries are developing strategies and setting up numerous think tanks, research organisations, and summits, AI and related technology are developing so fast that those strategies need to come into force just as quickly. A good model is Australia, whose government took the major step of criminalising the non-consensual sharing of intimate images in 2019, an initiative also pushed by activists such as Martin.

The above resonates with the words of one of our interviewees about deepfakes in relation to women’s rights: “(…) there need to be stricter laws around how you can make use of deepfakes and what could be considered unlawful use of it.” (anonymous interview held via e-mail, 25/2/25).

Conclusion and further thoughts

The purpose of this article was to present the current debate surrounding deepfakes and women’s rights, clearly a vigorous one among organisations, research groups, private investigators, and activists. Policymakers and governments, however, are largely missing from this debate. Still, the number of articles and discussions being published is a great sign in terms of tackling the issue.

The introduction of AI has brought many benefits to companies in terms of automation and faster workflows. At the recent Paris AI Summit, the EU made clear its determination to invest more in the development of AI for the benefit of everyone, particularly in sectors such as research, the environment, and healthcare. Many challenges, however, are yet to be addressed to manage and mitigate AI’s negative impact. It also remains to be seen what the summit will bring and how the increased investment will shape the AI landscape.

If AI programs end up battling AI-generated deepfakes, is the matter a “cyber deepfake war”? A form of the cyber war extensively discussed by scholars Thomas Rid and Joseph S. Nye is arguably taking place when deepfakes are used against individuals with the aim of harming them and causing havoc. Perhaps in the future we will have an automatic back-and-forth of deepfakes being created, identified, and then deleted from cyberspace, in an endless loop.

If the use of deepfakes against women continues, schools should educate children and young adults on these topics. There is, of course, the risk of making people aware of these technologies in the first place; however, not addressing the issue would likely cause greater harm. One interviewee suggested putting “a societal shame on people who use deepfakes against women” (anonymous interview held via e-mail, 24/2/25) to ensure that these actions are not repeated.

Another major step has started in the US with a bill under discussion called “TAKE IT DOWN,” intended to ban deepfakes and protect women against revenge porn. It remains to be seen how this bill will develop and whether such a ban can be enforced. How will authorities track the deepfakes, and how strongly will perpetrators be punished? These questions remain open for discussion thus far.

Conversely, the question should also be asked the other way around: what about the impact that deepfakes have, if any, on men? Few studies address whether men also experience deepfake attacks, and thus far not enough data is available to draw conclusions. It has been suggested that men do not feel as affected as women do, but this must be taken with a grain of salt, as some reports have shown that men experience such attacks as well. This is an interesting matter for further research.

If these technologies make it so easy to access and spread false videos of women, then it should be equally easy to access an AI program that finds them and has them removed. Perhaps women, and anyone else affected, could sign up for a platform where they can easily track whether degrading deepfakes of them are being spread and consequently report them to the authorities.

It can also be argued that all of this has been said and done before, while the technology keeps evolving.

Governments such as those of the USA and Germany are trying to produce laws to tackle deepfakes. The debate therefore needs to be kept alive in order to find new niches, keep important questions and inquiries going, and see what the next major AI summit will bring. What will the planned AI Gigafactories bring to the table? Will a ban in the USA, for example, have an impact on the rest of the world? Is a worldwide, unified ban on deepfakes that violate women’s rights possible? Perhaps the main decision most countries will face is this: will we ban deepfakes altogether, or continue developing AI programmes to try to regulate the deepfake landscape?

For now, rowing back on AI seems very difficult, since so many governments and countries are invested; developing AI to protect women, however, could become a powerful tool and should be considered. We need to adapt our strategy and find the ‘chink in the chain’ of the ‘enemy’, in this case the harassers. Here, drawing on Sun Tzu could help, as Chin-Ning Chu did in applying the Art of War for women: “Yet Sun is saying that victory is not in your control but rather the gift of your enemy – in other words, victory is assured when your enemy makes a mistake. Of course, it’s up to you to pinpoint your enemy’s weakness and exploit it.” Time will tell how well we adapt and find the loopholes.

October 30, 2023

Violence against women in the cyber domain – the impact of Covid and what still needs to be done?

Author: Maria Makurat (Human Rights Team), with a contribution from Julia Hodgins (Culture, Society & Security Team)

Introduction

Physical violence against women is a topic addressed by several institutions and organizations, but what about the cyber domain? Cyber violence is not a new concept, but the coronavirus pandemic has brought new challenges and even a surge in the issue, as discussed in a UN Women report stating that the Covid pandemic had an impact on online harassment. This drew attention to women experiencing online harassment, which can have lasting detrimental effects. This article explores the developing issue of violence against women in the cyber domain by first considering various definitions and then examining case studies drawn from reports and the literature, in order to outline the questions that remain.

Defining violence against women in the cyber domain

Firstly, one needs to define what violence against women in the cyber domain entails. Over the past years, there have been several definitions by scholars, institutions and organizations. This is challenging: what exactly do “aggression” and “violence”, including “hate speech”, mean in the cyber domain? The discourse around defining online gender-based violence shows that a strong debate exists; however, as technology evolves, wider definitions are needed to include all forms of online violence.

When considering violence against women in the cyber domain, one automatically wonders what counts as “violent” in this case. Traditional international relations theories surrounding violence have been around for a while. Finlay, for instance, points out that one should not only consider “violence” by itself but extend it to “violent agency” with the following components: “defined first by a double intention (1) to inflict harm using a technique chosen (2) to eliminate or evade the target’s means of escaping it or defending against it. Second, the harms it aims at are destructive (as opposed to appropriative).”

Looking further at “aggression” and “violence” in relation to cyber, defining these terms has its challenges. “Defining ‘aggression’ is a complex, in and of itself controversial endeavour, as it relates to a tense exchange between at least two actors. Complexity grows as, increasingly often, aggressions become invisible – or blurry at the very least. Complications keep growing when the subject is situated in the scope of gender relations. Still now worldwide, at varying degrees, physical violence against women remains officialised; e.g., state violence exerted by the Iranian Moral Police to ‘rein in’ female transgressors is legal and inconsequential. Complications exponentially increase when translating gender relations into cyberspace, due to both inherent challenges of cyberspace (obscureness, non-territoriality/territoriality, low threshold for entry and exit, easy concealment) and the assumption of cyber being at least gender-neutral, if not male-dominated by default. Nevertheless, constructivism suggests that security is not neutral, as social factors (ethnicity, gender, age, nationality, class, etc.) allocate power, and power between actors underpins exchanges, particularly aggressions. To define aggression, exchanges are often de-constructed and contrasted to a threshold set under the influence of power stances, perceived vulnerabilities, and mindsets about the actors in question.” (contribution by Julia Hodgins)

There seems to be growing concern about online violence against female journalists and a need for guidelines on how to monitor and evaluate the issue. This can be highlighted by the guidelines and report published by the OSCE in 2023, which define “gender-based online violence” against female journalists as: “sexist and misogynistic involving frequently threats of physical and/or sexual violence; sexualized abuse and harassment; digital privacy and security breaches that can expose identifying information and exacerbate offline safety threats facing the target; and networked or mob harassment (…) often bound with gendered disinformation.” Furthermore, the OSCE identifies eight features of gender-based online violence: it is misogynistic, frequently networked, it radiates, it is intimate, it can be extreme, it behaves like ‘networked gaslighting’, it is extreme in intersectional discrimination, and it contains disinformation.

Looking at definitions discussed by scholars, Lewis, Rowe and Wiper examined the issue from a criminology point of view, stating that there are gaps in the literature and a “failure to develop a robust gendered analysis, a lack of comparative analysis of online and offline VAWG and a lack of victimological examination of online abuse experienced by women and girls.” A recommendation published by the Council of Europe in November 2021 stated, again, that clearer definitions of online gender-based violence are needed in order to put more concise laws in place; it proposes framing the issue as “the digital dimension of violence against women.”

The latest definition by UN Women frames online violence against women as follows: “Technology-facilitated gender-based violence (TF GBV) is any act that is committed, assisted, aggravated or amplified by the use of information communication technologies or other digital tools which results in or is likely to result in physical, sexual, psychological, social, political or economic harm or other infringements of rights and freedoms.” Notably, this definition extends the scope to include any act related to online violence.

As one can see, definitions are still being worked out, an essential process when it comes to putting stronger laws in place. States need international definitions in order to take joint measures against online violence. In the following, case studies of online violence will be highlighted to discuss this still-pressing issue.

Case studies of online violence and future concerns

The issue of violence against women in the cyber domain emerged very early and continues to be a growing threat and pressing issue today. Gurumurthy and Menon highlighted it as far back as 2009. They point out that women, in India for example, have been filmed during rape, with the footage then posted on social media platforms in order to maintain the cycle of violence. Another issue they discuss is that women in Kerala have committed suicide as a result of online harassment, which caused a stir in public discussion.

UN Women released a report in 2015 stating that urgent action needs to be taken to combat violence against women in the cyber domain. The report calls out the failure to implement sustainable goals and achievements in reducing online violence against women, and proposes better sanctions, sensitization through training and campaigns to change social attitudes, and a more responsible internet infrastructure. Despite these reports, the coronavirus pandemic has had a significant impact on online violence against women. Reports show that women experience an increasing amount of online violence: “Cyber harassment and cyberbullying have increased by 50% during quarantine in Australia. Simultaneously, the United Kingdom data shows that the number of complaints about visual sexual harassment doubled in March 2020.”

Source: Photo by Joshua Gandara on Unsplash

Online violence against women is very complex and many factors play a role, which means tackling the issue requires sustainable goals that address several factors at once. There is growing concern about violence against women working in politics and other public sectors. Women who express their opinions online very often receive violent threats and are coerced into retreating from public life and keeping a low profile; articles and reports even note concern about women withdrawing from the political sector. Moreover, there seems to be a relation between crises, gender-based violence, and the consumption of online pornography. The Government Equalities Office has released research on the relation between pornography use and harmful sexual attitudes and behaviours, concluding that pornography is one of the factors that “contribute to a permissive and conducive context that allows harmful sexual attitudes and behaviours to exist against women and girls.”

If better definitions have been developed and debates initiated, why does the issue continue to grow? These concerns and trends show that stronger initiatives, sanctions, and focused debates are needed to tackle the issue at hand. The following briefly highlights projects that have been launched to tackle online violence.

What are some initiatives?

In 2020, UN Women launched a project called the “Fireflies Campaign against Gender-Based Cyber Violence.” The campaign specifically addressed online gender-based violence during the coronavirus pandemic and aimed to use social media to draw attention to the issue and engage the public in the discourse. One of its key findings was that more women (81%) than men (70%) reported online harassment cases.

One major step is the UK’s reform of its online-abuse legislation. A press release by the UK government from the 23rd of June 2023 states: “Abusers who share intimate images without consent to face up to 6 months in prison.” Deepfakes were also criminalized for the first time, which matters for future debate: “For the first time, sharing of ‘deep fake’ intimate images – explicit images or videos which have been digitally manipulated to look like someone else – will also be criminalized.” The reform aims to facilitate the prosecution of individuals who publish intimate images without consent. The question now is whether other states will follow suit in introducing stricter laws against cyber violence. Germany, for instance, does not yet have a specific law against cyber violence; such offences are covered by the general law on insult or threats.

In countries such as Rwanda and Tanzania, women increasingly use, and have to use, the internet for work. This has also increased violence against women in the cyber domain and calls for better laws and safer online spaces. An initiative called Women@web helps “journalists, politicians, and human rights activists, among others, who have been confronted with various forms of gender-based online violence.” Here, too, it is reported that online violence has increased since the corona pandemic. Furthermore, studies conducted by Women@web found that women often censor their own comments online to avoid cyberbullying. To tackle this, Women@web offers modules on “digital rights, digital citizenship, digital platforms, digital security, digital storytelling and digital resilience. Focusing on these topics, regular training sessions are held for women in the four countries. The aim is to increase the overall digital literacy among women and empower them to remain in online spaces.”

Conclusion

These initiatives already draw a lot of attention to the issue at hand; however, many questions remain, such as whether the UK reform will prompt other states to follow suit. Social media platforms such as Instagram and TikTok have also begun tightening their policies on what users may post and comment, and platforms track whether someone posts explicit language or sends explicit images. These are steps in the right direction, but the question remains: when a new crisis comes, as the corona pandemic did, will it contribute to another surge in online violence? Online violence against women is not a recent topic but a steadily emerging issue. With evolving technology, women on the one hand have more access to online helplines and initiatives, but on the other hand face new threats such as AI-driven deepfakes. This calls for stronger sanctions and perhaps more focused campaigns aimed at a young audience to educate them on the issue and its repercussions.

June 26, 2023

Cultural Question and Cyber Quandary: Making Sense Of TikTok Bans Worldwide

Authors: Maria Makurat (Cyber Security and AI Team) and Anurag Mishra (USA Team)

TikTok and the โ€œBan Hammerโ€

The debate over whether apps such as TikTok are a security threat to individuals as well as countries has been going on for a while, and several articles and studies are still being released on this hot topic. One of the main concerns is that TikTok collects user data without consent, even while the app is not in use. Since TikTok is owned by ByteDance, a Chinese company, many Western countries, and especially the US, are highly sceptical, and states such as Montana have even taken the initiative to ban the app altogether. What does this mean for cyber as well as cultural security issues? Many factors and international events surround this debate: TikTok is already banned in India, the Chinese state is suspected of espionage, and the question arises whether TikTok can be seen as a surveillance weapon. Cyber security and cultural issues both tie into the debate, from theories of whether a “cyber war” is unfolding on social media platforms to the cultural question of whether TikTok is having a negative impact on countries. This article explores these issues and opens up the questions that still need to be asked.

Montana Mounts a “Blackout Challenge” to TikTok

Senate Bill 419 of the 68th legislature of Montana, introduced by state senator Shelley Vance, makes offering the app in any application store illegal and prescribes a fine of $10,000 per day each time someone accesses TikTok, “is offered the ability” to access it, or downloads it. Governor Greg Gianforte, a Republican, approved the law while anticipating potential legal challenges. Although the law is not set to be enforced until January 1, 2024, there are doubts about the state’s ability to implement it effectively. The impact of this new legislation in Montana is expected to be more significant than the existing TikTok bans already implemented on government devices in approximately half of the US states and at the federal level.

From the outside, the one-of-a-kind ban looks like an assault on ByteDance’s data-gathering exercise, but it also has a deeper purpose: extinguishing the app’s ability to influence the impressionable youth of America.

One of the major reasons TikTok became a conservative eyesore and a major cause of worry for parents was the “Blackout Challenge.” Also known as the “choking challenge” or the “pass-out challenge,” it involved urging individuals to hold their breath until they lost consciousness from insufficient oxygen. While the Blackout Challenge was the most troubling online challenge, reportedly costing as many as 20 children their lives, a slew of similar trends made TikTok infamous: “dry-scooping,” climbing on tall stacks of milk crates, removing your own IUD, eating massive amounts of frozen honey and corn syrup, and the list goes on.

The problem with TikTok does not end there. When juxtaposed with the wider scheme of things, TikTok appears to be one of many arrows in the Chinese quiver. The issue of Chinese police stations appearing on United States territory has landed many in trouble and made the American government restive. Taking a leaf out of the books of India and some European countries, several US states decided to ban TikTok on office and government-issued phones and devices; as of April 2023, 34 American states had banned TikTok on government-issued devices. The idea behind banning the app has largely been to prevent data leaks. India was the first country to ban TikTok and several other Chinese mobile applications nationwide, citing national security concerns, as early as June 2020. At first, the ban was seen as a mild yet conclusive response to the PLA’s misadventure across the Sino-Indian border, but as more countries put restraints on the Chinese app, the Indian government’s official position on the ban seems to have been vindicated.

Reservations and concerns about TikTok abound and have only grown in the past three years. Not just China’s adversaries and rivals but even its allies, such as Pakistan, and even North Korea, have blocked TikTok. The question nevertheless remains whether TikTok is just an online pastime or a data-harvesting tool.

Source: Unsplash https://unsplash.com/photos/PuNW11NRjI4

Weapon of Mass Surveillance: TikTok and its Cyber Security Issues?

The debate about TikTok as a security issue has been around for a while. Several individuals as well as companies had their doubts, but since around April 2023 there has been a surge in states and countries getting serious about banning the popular app. The major concern is that TikTok is owned by a Chinese company, and several discrepancies have arisen concerning the security of the app. It is repeatedly “expressed that TikTok and its parent company, ByteDance, may put sensitive user data, like location information, into the hands of the Chinese government.” Political tensions between Russia, China and the West over the Ukraine war add to the debate, with companies concerned that data is being stolen; international relations have recently become far more complicated.

One can link this to traditional international affairs debates, such as whether there will even be a “cyber war” (as discussed by Thomas Rid) and how social media is being “weaponized” (discussed by P.W. Singer, Emerson T. Brooking and Dr Andreas Krieg): “In so doing, social media has evolved from a mere distraction machine into a tool of sociopolitical power, galvanising public awareness and civil-societal activism.” Ever since Russian interference in the 2016 US elections, it has been discussed that social media platforms, TikTok possibly among them, can be used to spread false information and not merely to collect data. Countries such as Germany also increasingly see the twin problems of social media platforms spreading disinformation and collecting data (BSI: Bundesamt für Sicherheit in der Informationstechnik). The so-called “Digitalbarometer 2020” released by the BSI stated that in Germany in 2020, every fourth individual was affected by some type of cyber-attack and every third was affected financially. While Germany has not passed a law forbidding the use of TikTok, the Federal Minister of the Interior and Community has stressed the need to stay alert and be aware of the possible consequences.

This issue is also being discussed intensely by scholars such as Dr Andreas Krieg (in his recent work Subversion: The Strategic Weaponization of Narratives) and Dr Kathleen Hall Jamieson (Cyberwar: How Russian Hackers and Trolls Helped Elect a President). Both in international affairs and in the communications and cultural disciplines, a debate is under way on how social media platforms are being weaponized (see also this blog article on hate speech on social media). Now perhaps more than ever, interdisciplinary communication between different academic strands is needed to address the issue. So the problem is not only TikTok’s Chinese ownership and the possible spread of false information but also the physical issue of data collection: we face both cultural-ethical and cyber security issues.

To Ban, or Not to Ban?

To mitigate the loss of goodwill and business that TikTok has encountered, it would be wise for the company to make itself more transparent and even sell stakes, as urged by the Committee on Foreign Investment in the US. The company will also need to come clean on the accusations of data theft and spying. The root of it all remains the involvement of the Chinese state in its corporate entities, and in the long run such involvement will not go unnoticed by the countries hosting Chinese businesses. Considering all these factors, open questions remain: will other countries follow suit in banning TikTok, and how likely is it that more organisations will take action? Are we seeing a certain cyber war taking place in the realm of social media, or is it more an issue of moral and ethical values? Younger generations still use TikTok in their daily lives, especially since it is also linked to businesses (influencers as well as big companies), which could prove problematic in the future. Perhaps stronger rules are required to regulate TikTok’s use and data collection if the app is to continue operating. It remains to be seen how this develops and whether individuals will be concerned by continued use of the app.

February 7, 2023

Space Warfare: How are offensive military operations conducted in Space?

This is a transcript of an in-depth interview with Paul S. Szymanski, who has 49 years of experience conducting military operations research analyses for the United States Air Force and Space Force, Navy, Army and Marines. These include outer space program analysis and management, and the development of space warfare theory, policy, doctrine, strategies, tactics and techniques. He has worked with the Air Staff at the Pentagon (Secretary of the Air Force), the Space and Missile Systems Center (now SSC) in Los Angeles, and the Air Force Research Laboratory (AFRL) in Albuquerque, New Mexico, along with experience in operational field testing of missile systems at China Lake, California. He is the author of several publications. This transcript is the second in a three-part series of content extracted from the interview.

Interviewed and Edited by: Danilo delle Fave.

Image Source: pexels.com

What are some of the potential threats in Space?

Cyber Attacks: The most common means of attack against space systems; the entire spectrum of space systems is vulnerable to these attacks, and every country is exposed to such hacking. A smaller or poorer country can purchase a surplus satellite (or critical parts thereof) and then run hacker contests against these test satellites, with significant cash prizes for the teams that achieve the best effects against them. The same techniques can be employed to train individuals to penetrate space ground systems. All of these space systems, including space jammers, are readily available on the commercial market for installation in your country of choice.

Terrestrial Attacks: Any country on Earth has special forces that can penetrate adversary ground systems (satellite control stations, RADARS, Optical Space Tracking Telescopes, etc.). These special forces can insert cyber codes into critical systems, install mines, etc. In addition, spy networks can be employed to โ€œturnโ€ the loyalties of adversary space technicians to influence them to insert threat cyber codes and sabotage critical adversary space systems on the ground.

Space Surveillance Systems: To achieve effective space control, a country needs a good understanding of the orbits, status and capabilities of adversary space systems. This can readily be achieved through ground-based radars and optical imaging/tracking systems. Optical tracking systems can be assembled from amateur astronomy telescopes, and many amateurs employ them around the world in this role. They do not cost much, are fully automated, and can be placed on the rooftops of a country's embassies around the world, particularly in countries that have good weather conditions and visibility to space. With such situational knowledge, a poorer country can attack an adversary satellite at the same time a third country is "visiting" the targeted space system with an inspector/rendezvous satellite, in order to place the blame for the attack on that third country.

Laser Attacks: Currently, consumers can openly purchase hand-held 7.5-watt laser systems. Attach one of these to the above-mentioned astronomical telescope, and with the proper alignment techniques a poorer country can blind an adversary imaging satellite, or perhaps even spoof its Earth-limb sensors. With much more powerful and better-collimated industrial lasers, a second-tier country may even be able to permanently damage these sensors. At the very least, an attacker should be able to trigger satellite self-defense mechanisms (closing sensor shutters, rolling the satellite) that take the attacked satellite offline for hours if not days, rendering it ineffective during some critical phase of a terrestrial battle.
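The plausibility of such dazzling attacks can be gauged with a rough back-of-the-envelope estimate. The figures below are illustrative assumptions, not values from the interview: a 7.5 W green laser (wavelength $\lambda = 532$ nm) sent through a telescope of aperture $D = 0.2$ m toward a satellite at low-Earth-orbit range $R = 500$ km, assuming a diffraction-limited beam:

```latex
% Diffraction-limited beam divergence (half-angle):
\theta \;\approx\; 1.22\,\frac{\lambda}{D}
  \;=\; 1.22 \times \frac{532\times10^{-9}\,\mathrm{m}}{0.2\,\mathrm{m}}
  \;\approx\; 3.2\times10^{-6}\,\mathrm{rad}

% Spot radius and area at range R = 500 km:
r \;\approx\; \theta R \;\approx\; 1.6\,\mathrm{m},
\qquad
A \;=\; \pi r^{2} \;\approx\; 8\,\mathrm{m^{2}}

% Irradiance at the satellite:
E \;=\; \frac{P}{A} \;\approx\; \frac{7.5\,\mathrm{W}}{8\,\mathrm{m^{2}}}
  \;\approx\; 0.9\,\mathrm{W/m^{2}}
```

Atmospheric turbulence and pointing jitter spread the beam well beyond the diffraction limit, so this is an upper bound. Roughly 1 W/m² is far below solar irradiance (about 1360 W/m²), but because the laser appears as a point source, an imaging satellite's optics focus that flux onto a handful of pixels, which can saturate them. This is consistent with the interviewee's claim that dazzling, rather than permanent damage, is what commercial-grade hardware can achieve.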

December 17, 2021

ITSS Verona 2021/22 Webinar Series: “Cyber Security in Italy” featuring Andrea Rigoni

For its third event of the 2021/22 Webinar Series, ITSS Verona members Ludovica Brambilla, Chiara Aquilino, Sarah Toubman, and Julia Hogdings discuss with world-leading cyber security expert Andrea Rigoni the question of cyber security in Italy, with particular reference to the creation of the new Cyber Security Agency and its current and future implications.