
Regulating AI as a Cybersecurity Defense: Fighting the Misuse of Generative AI for Cyber Attacks and Cybercrime

Authors

DOI:

https://doi.org/10.71265/23nqtq40

Keywords:

AI, Generative AI, Cybersecurity, Cybercrime, Crime-as-a-Service, Criminal law, Criminology, Actor-Network Theory

Abstract

Looking back at the progress of cybersecurity regulation in the EU, significant accomplishments have been achieved. Looking to the future, this paper argues that further milestones in cybersecurity regulation could be attained through comprehensive integration with the diverse legal frameworks governing ICT, and in particular by combating the proliferation and misuse of tools that can facilitate cyber-attacks and cybercrime. Within this framework, the present research focuses on Generative AI and the need to prevent its malicious use for cyber-attacks and cybercrime: alongside the criminal prosecution of cybercrime and the established “protective” legal framework of cybersecurity regulation, a forward-looking perspective should also encompass a complementary strategy for mitigating the cyber risks and threats related to Generative AI in the evolving cybersecurity landscape.


Author Biographies

  • Maria Vittoria Zucca, Sant'Anna School of Advanced Studies

    Ph.D. Student in Cybersecurity, affiliated with Sant'Anna School of Advanced Studies in Pisa (Dirpolis Institute - Institute of Law, Politics and Development) and the IMT School for Advanced Studies in Lucca

  • Gaia Fiorinelli, Sant'Anna School of Advanced Studies

    Research Fellow (RTD-A) in Criminal Law at Sant'Anna School of Advanced Studies in Pisa (Dirpolis Institute - Institute of Law, Politics and Development).

References

Agarwal A and Ratha N, “Manipulating Faces for Identity Theft via Morphing and Deepfake: Digital Privacy” in Arni S. R. Srinivasa Rao, Venu Govindaraju and C. R. Rao (eds), Handbook of Statistics, vol 48: Deep Learning (Elsevier 2023) 223–41.

Alotaibi L, Seher S and Nazeeruddin M, “Cyberattacks Using ChatGPT: Exploring Malicious Content Generation through Prompt Engineering” in Proceedings of the 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS) (IEEE 2024) 1304–1311.

Basile E, Consiglio tecnico e responsabilità penale. Il concorso del professionista tramite azioni «neutrali» (Giappichelli 2018) 83–85.

Bradley P, “Risk management standards and the active management of malicious intent in artificial superintelligence” (2019) 35(2) AI & SOCIETY 319.

Brunnstein K and Fischer‐Huebner S, “How far can the criminal law help to control IT misuse?” (1995) 9(1) International Review of Law, Computers & Technology 111.

Burchard C, “Das Pro und Contra für Chatbots in Rechtspraxis und Rechtsdogmatik” (2023) 2 Computer und Recht 132.

Busch E and Ware J, “The Weaponisation of Deepfakes” (2023) ICCT Policy Brief.

Caldwell M and others, “AI-enabled future crime” (2020) 9 Crime Science 14.

Creese S, “The Threat from AI” in Woodrow Barfield and Ugo Pagallo (eds), Artificial Intelligence and the Law (Routledge 2020) 151–167.

de Rancourt-Raymond A and Smaili N, “The Unethical Use of Deepfakes” (2023) 30(4) Journal of Financial Crime 1066–77.

Di Nicola A, “Towards Digital Organized Crime and Digital Sociology of Organized Crime” (2022) Trends in Organized Crime 1–20.

Ferrara E, “GenAI against humanity: nefarious applications of generative artificial intelligence and large language models” (2024) 7 Journal of Computational Social Science 549.

Fiorinelli G, “Il concorrente virtuale: la prevenzione dell'uso di ChatGPT per finalità criminali tra etero- e auto-regolazione” (2023) 2 Rivista Italiana di Diritto e Medicina Legale 361-378.

Feuerriegel S and others, “Generative AI” (2024) 66(1) Business & Information Systems Engineering 111–126.

Giray L, Jacob J and Gumalin DL, “Strengths, Weaknesses, Opportunities, and Threats of Using ChatGPT in Scientific Research” (2024) 7(1) International Journal of Technology in Education 40–58.

Sebastian G, “Do ChatGPT and Other AI Chatbots Pose a Cybersecurity Risk?” (2023) 15(1) International Journal of Security and Privacy in Pervasive Computing 1.

Goldstein JA and others, “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” (arXiv pre-print, 10 January 2023).

Gupta M and others, “From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy” (2023) 11 IEEE Access 80218–80245.

Hacker P and others, “Regulating ChatGPT and other Large Generative AI Models” (2023).

Hadi MU and others, “Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects” (TechRxiv preprint, 10 July 2023).

Hyslip TS, “Cybercrime-as-a-Service Operations,” in Thomas J. Holt and Adam M. Bossler (eds), The Palgrave Handbook of International Cybercrime and Cyberdeviance (Springer 2020) 815-846.

King TC and others, “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions” (2019) 26(1) Science and Engineering Ethics 89.

Kong A and others, “Better Zero-Shot Reasoning with Role-Play Prompting” (arXiv pre-print, 14 March 2024).

Krishnamurthy O, “Enhancing Cyber Security Enhancement Through Generative AI” (2023) 9(1) International Journal of Universal Science and Engineering 35-50.

Lagioia F and Sartor G, “AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective” (2019) 33(3) Philosophy & Technology 433.

Langford T and Payne BR, “Phishing Faster: Implementing ChatGPT into Phishing Campaigns” in Kohei Arai (ed), Proceedings of the Future Technologies Conference (FTC 2023) (Springer 2024) 174–187.

Latour B, “On actor-network theory: A few clarifications” (1996) 47(4) Soziale Welt 369–381.

Latour B, “Reassembling the Social: An Introduction to Actor-Network-Theory” (Oxford University Press 2007).

Mania K, “Legal Protection of Revenge and Deepfake Porn Victims in the European Union: Findings from a Comparative Legal Study” (2024) 25(1) Trauma, Violence & Abuse 117–129.

Manky D, “Cybercrime as a Service: A Very Modern Business” (2013) 2013(6) Computer Fraud & Security 9-13.

Maras MH and Alexandrou A, “Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos” (2019) 23(3) The International Journal of Evidence & Proof 255–262.

Moreno FR, “Generative AI and deepfakes: a human rights approach to tackling harmful content” (2024) 38(3) International Review of Law, Computers & Technology 1–30.

Nay JJ, “Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans” (2023) 20(3) Northwestern Journal of Technology and Intellectual Property 309–392.

Petratos PN, “Misinformation, disinformation, and fake news: Cyber risks to business” (2021) 64(6) Business Horizons 763–774.

Porcedda MG, “Patching the patchwork: appraising the EU regulatory framework on cyber security breaches” (2018) 34(5) Computer Law & Security Review 1077.

Sætra HS, “Generative AI: Here to stay, but for good?” (2023) 75 Technology in Society 102372.

Shoaib MR and others, “Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models” in Proceedings of the 2023 IEEE International Conference on Computer and Applications (ICCA 2023) (IEEE 2023) 1–7.

Sison AJG and others, “ChatGPT: More than a ‘weapon of mass deception’: ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective” (advance online publication, 2023) International Journal of Human–Computer Interaction 1.

Teichmann F, “Ransomware attacks in the context of generative artificial intelligence—an experimental study” (2023) 4 International Cybersecurity Law Review 399-414.

Van der Wagen W and Pieters W, “From cybercrime to cyborg crime: Botnets as hybrid criminal actor-networks” (2015) 55(3) British Journal of Criminology 578–595.

Van der Wagen W, “The Significance of ‘Things’ in Cybercrime: How to Apply Actor-network Theory in (Cyber)criminological Research and Why it Matters” (2019) 3(1) Journal of Extreme Anthropology 152–168.

Wall DS, “Cybercrime: The Transformation of Crime in the Information Age” (1st edn, Polity Press 2007).

Westerlund M, “The Emergence of Deepfake Technology: A Review” (2019) 9(11) Technology Innovation Management Review 40–53.

Whyte C, “Deepfake News: AI-Enabled Disinformation as a Multi-Level Public Policy Challenge” (2020) 5(2) Journal of Cyber Policy 199–217.

Published

27-06-2025

Section

Special Issue: TILTing 2024

How to Cite

Zucca, M. V., & Fiorinelli, G. (2025). Regulating AI as a Cybersecurity Defense: Fighting the Misuse of Generative AI for Cyber Attacks and Cybercrime. Technology and Regulation, 2025, 247-262. https://doi.org/10.71265/23nqtq40
