Vol. 2025 (2025)

Published: 17-03-2025

Articles

  • Rethinking Safety-by-Design and Techno-Solutionism for the Regulation of Child Sexual Abuse Material

    Andrew Murray, Mark Leiser
    137-171

    This article explores the implications of increased reliance on technological solutions to digital regulatory challenges, particularly in relation to Child Sexual Abuse Material (CSAM). It focuses on the contemporary trend of imposing obligations on private actors, such as platforms and service providers, to mitigate risks associated with their services while ensuring the protection of fundamental rights. This trend has produced new regulatory designs such as “safety-by-design”, favoured by European regulators for their cost-effectiveness and efficiency in assigning responsibilities to online gatekeepers. We examine the European Union’s CSAM Proposal and the United Kingdom’s Online Safety Act, ambitious initiatives to employ technology to combat the dissemination of CSAM. The EU proposal requires platforms to perform risk assessments and implement mitigation measures against the hosting or dissemination of CSAM. Where these measures fail, a detection order can be issued, requiring platforms to deploy technical measures, including AI, to scan all communications. This approach, while well-intentioned, is scrutinised for its potential over-reliance on technology and possible infringement of fundamental rights. The article examines the theoretical underpinnings of “safety-by-design” and “techno-solutionism”, tracing their historical development and evaluating their application in current digital regulation, particularly in online child safety policy. The rise of safety-by-design and techno-solutionism is contextualised within the broader framework of cyber regulation, examining the benefits and potential pitfalls of these approaches.

    We argue for a balanced approach that considers technological solutions alongside other regulatory modalities, emphasising the need for comprehensive strategies that address the complex and multifaceted nature of CSAM and online child safety. We highlight the importance of engaging with diverse theoretical perspectives to develop effective, holistic responses to the challenges posed by CSAM in the digital environment.

  • Through thick and thin: data commons, community and the struggle for collective data governance

    Tommaso Fia, Gijs van Maanen
    114-136

    Collective data governance mechanisms such as data commons have recently gained traction in both theoretical and policy-oriented discussions as promising responses to the shortcomings of individualistic data protection and data markets. Many of these approaches centre on the idea of community as the key social institution for overcoming these limitations. Yet far less attention has been paid to the meaning, features and implications that the language of community can have for data commons.

    This paper investigates the relationship between data commons and the community involved therein, with a focus on the kinds and features of such a community. It argues that analysing the community’s key characteristics and moral-political affordances yields important implications for devising and implementing policies on collective data governance.

  • High-risk AI transparency? On qualified transparency mandates for oversight bodies under the EU AI Act

    Kasia Söderlund
    97-113

    The legal opacity of AI technologies has long posed challenges in addressing algorithmic harms, as secrecy enables companies to retain competitive advantages while limiting public scrutiny. In response, ideas such as qualified transparency have been proposed to provide AI accountability within confidentiality constraints. With the introduction of the EU AI Act, the foundations for human-centric and trustworthy AI have been established. The framework sets regulatory requirements for certain AI technologies and grants oversight bodies broad transparency mandates to enforce the new rules. This paper examines these transparency mandates under the AI Act and argues that the Act effectively implements qualified transparency, which may mitigate the problem of AI opacity. Nevertheless, several challenges remain in achieving the Act’s policy objectives.

  • Reforming Copyright Law for AI-Generated Content: Copyright Protection, Authorship and Ownership

    Yiheng Lu
    81-95

    With the emergence of disputes over the copyright of AI-generated content (AIGC), academia has extensively discussed relevant issues, including copyright protectability and ownership. However, the copyright law community has not reached an international consensus. Adopting a doctrinal methodology, this paper investigates these issues and proposes reforms, arguing that copyright law should clarify the de facto authorship of AI and determine the originality of AIGC based on minimum creativity at the expression level. It also recommends attributing copyright of AIGC to the AI owner via statutory provision, allowing contractual allocation between parties. The proposed framework would resolve significant academic controversies on fundamental issues surrounding AIGC copyright and provide a reference model for future research.

  • Health Data Access Bodies under the European Health Data Space – A technocratic colossus or rubber-stamp forum?

    Paul Quinn
    60-80

    The proposal for a European Health Data Space (EHDS) has sparked extensive discourse, weighing the potential benefits for healthcare and innovation against concerns over privacy and societal impacts. At the heart of this discussion are the Health Data Access Bodies (HDABs), tasked with managing the reuse of secondary health data within the EHDS framework. This article delves into the formidable challenges facing HDABs, suggesting that the complexity and volume of data access requests may overwhelm their capacity. Ensuring compliance with EHDS regulations, GDPR provisions, and ethical standards presents a multifaceted challenge. The author argues that the expertise and efficiency required to navigate these complexities could strain HDAB resources and capabilities. Furthermore, the anticipated surge in data access requests may exacerbate these challenges, potentially compromising HDAB effectiveness. Consequently, there is a pressing need for a pragmatic approach to delineating HDAB responsibilities to ensure their ability to fulfil their role competently. By addressing these concerns, the EHDS can uphold individual rights, promote societal welfare, and foster trust in its overarching objectives.

  • The Inscrutable Code? The Deficient Scrutiny Problem of Automated Government

    Richard Mackenzie-Gray Scott, Lilian Edwards
    37-59

    Public administration in the United Kingdom increasingly features automated decision-making. From predictive policing and prisoner categorisation to asylum applications and tenancy relationships, automated government exists across various domains. This article examines an underlying issue concerning government automated decision-making systems: the lack of public scrutiny they receive from pre- to post-deployment. The branches of the state tasked with scrutinising government, namely Parliament and the courts, appear too outmoded to address this problem. These circumstances raise the question of where the public can expect safeguards against government overreach manifested through computer software. Two regulatory solutions are proposed. First, mandating pre-deployment impact assessments of automated decision-making systems intended for use by government, either during their design or before procurement. Second, incorporating algorithmic auditing as part of reinforcing the duty of candour in judicial review, so as to better inform courts about specific systems and the data underpinning them.

  • Towards Planet Proof Computing: Law and Policy of Data Centre Sustainability in the European Union

    Jessica Commins, Kristina Irion
    1-36

    Our society’s growing reliance on digital technologies such as AI incurs an ever-growing ecological footprint. EU regulation of the data centre sector aims to achieve climate-neutral, energy-efficient and sustainable data centres by no later than 2030. This article unpacks the EU law and policy aimed at improving energy efficiency, recycling equipment and increasing reporting and transparency obligations. In 2025, the Commission will present a report based on information reported by data centre operators and, in light of the new evidence, review its policy. Further regulation should aim to translate reporting requirements into binding sustainability targets that contain the rebound effects of the data centre industry while strengthening the industry’s public value orientation.

Special Issue: TILTing 2024

  • The EU AI Act: Law of Unintended Consequences?

    Sabrina Kutscher
    316-334

    After long deliberations, the highly anticipated AI Act is almost there. Although its effects are yet to be seen, it is important to prepare for what can be expected and to identify possible dynamics regarding competencies, implementation, and enforcement. Hence, this article provides a novel perspective on the AI Act by asking what regulatory dynamics can be expected as the AI Act enters the “regulatory space” of AI. The assumption here is that the AI Act is entering a “regulatory space” that is already somewhat occupied by various public and private actors which hold different regulatory resources, both legal competencies and extra-legal capacities. The aim of this article is therefore a more expansive mapping of actors and their resources, in which power and influence are contingent upon both legal competencies and extra-legal capacities, by combining the regulation-of-technology literature with the regulatory space framework.

  • The Use of Facial Recognition Technologies by Law Enforcement Authorities in the US and the EU: Towards a Convergence on Regulation?

    Xavier Tracol
    289-315

    Law enforcement authorities have been using facial recognition technologies for many years in both the US and the EU. Some US legislators adopted bans and/or moratoria, whilst others adopted nuanced regulations of such use. The EU legislature considered adopting a ban on the use of live or real-time facial recognition technologies by law enforcement authorities in publicly accessible spaces. It ended up, however, adopting a partial ban which provides for many broad exceptions. In this context, the US and the EU share a common interest in exchanging experience about regulating the use of facial recognition technologies by law enforcement authorities.

  • Beyond the prompt: The Role of User Behavior in Shaping AI-misalignment and Societal Knowledge

    Morraya Benhammou
    263-288

    This research paper explores the concept of AI-misalignment, differentiating between the perspectives of AI model designers and users. It delves into the notion of user-centric AI misalignment, breaking it down into three categories: user responsibility, user intent, and user influence. The focus is on how users influence misalignment in both large AI models and the outputs they generate. The research examines how users may inadvertently contribute to the spread of disinformation through AI-generated hallucinations, or deliberately use AI to propagate misinformation for propaganda purposes. Additionally, it discusses the concept of user accountability as part of this behavior, highlighting that from a user’s perspective, the only controllable aspect is their acceptance through ignorance. Furthermore, it explores how user behavior can shape AI models through reinforcement learning from human feedback (RLHF) and how users can influence the models through model collapse. This kind of misalignment can significantly affect knowledge integrity, possibly resulting in knowledge erosion.

    The research incorporates evidence from a survey designed to assess user awareness in the context of user behavior and knowledge generation. The survey gathered insights to better understand current perceptions and knowledge about generative AI technologies, and the role users play in them.

  • Regulating AI to Combat Tech-Crimes: Fighting the Misuse of Generative AI for Cyber Attacks and Digital Offenses

    Maria Vittoria Zucca, Gaia Fiorinelli
    247-262

    Looking back at the progress made in cybersecurity regulation in the EU, one can see that significant accomplishments have been achieved. Looking to the future, this paper argues that further milestones in cybersecurity regulation could be attained through comprehensive integration with diverse legal frameworks related to ICT technologies, and in particular by combating the proliferation and misuse of tools that have the potential to facilitate cyberattacks and cybercrime. Within this framework, the present research focuses primarily on Generative AI and the need to prevent its malicious use for cyber-attacks and cybercrime: alongside the criminal prosecution of cybercrime and the established “protective” legal framework of cybersecurity regulation, a forward-looking perspective should also encompass a complementary strategy for mitigating cyber risks and threats related to Generative AI in the evolving landscape of cybersecurity.

  • The Commodification of Attention, Distrust & Resentment: a Threat to (Rawlsian) Justice

    Paige Benton
    232-246

    There is a growing body of scholarship on how AI technology can undermine democratic institutions. I present a novel contribution to this literature by accounting for how and why recommendation algorithms optimised for engagement undermine the necessary conditions for Rawlsian justice. In Rawls’s political theory, the ability to form bonds of trust with fellow citizens is a necessary condition for citizens to develop their sense of justice, and their sense of justice is in turn a necessary condition for the attainment of justice. I argue that recommendation algorithms amplify the space given to hateful, violent, extremist, false, and discriminatory content. This content undermines the development of the mutual trust between citizens that a sense of justice requires. If citizens can trust only their like-minded members and distrust their fellow citizens, then Rawlsian reciprocity in liberal society cannot be realised. Without reciprocity, liberal political systems will be inherently unstable, as citizens will not have formed the ties of affection needed for mutual cooperation, which is a precondition for a just society.

  • Mitigating Digital Discrimination in Dating Apps – The Dutch Breeze case

    Tim de Jonge, Frederik Zuiderveen Borgesius
    214-231

    In September 2023, the Netherlands Institute for Human Rights, the Dutch non-discrimination authority, decided that Breeze, a Dutch dating app, was justified in suspecting that its algorithm discriminated against non-white users. Consequently, the Institute decided that Breeze must prevent this discrimination based on ethnicity. This paper explores two questions. (i) Is the discrimination based on ethnicity in Breeze’s matching algorithm illegal? (ii) How can dating apps mitigate or stop discrimination in their matching algorithms? We illustrate the legal and technical difficulties dating apps face in tackling discrimination and highlight promising solutions. We analyse the Breeze decision in depth, combining insights from computer science and law, and discuss the implications of this decision for scholarship and practice in the field of fair and non-discriminatory machine learning.

  • Mitigating Generative AI’s negative impact on indigenous knowledge from international and Vietnamese law perspectives

    Duong Thuy Pham, Tronel Joubert
    194-213

    Indigenous knowledge, which has been developed over generations and embodies a unique understanding of local environments, offers valuable responses to sustainable development challenges such as climate change, biodiversity loss and pollution. Despite its pivotal role, the indigenous knowledge of various ethnic minorities and indigenous peoples is in danger of disappearing due to centuries of colonization, discrimination and racism. The emergence of GenAI complicates this preservation effort, as the content created by GenAI models threatens to perpetuate and even amplify inaccurate information about indigenous knowledge. This paper discusses solutions to alleviate GenAI’s adverse impact on indigenous knowledge from international and Vietnamese law perspectives, with the ultimate goal of proposing a feasible answer for protecting the indigenous knowledge of the 53 ethnic minorities in Vietnam from GenAI’s threats. To arrive at this outcome, the paper identifies the significance of indigenous knowledge for sustainable development and its vulnerability to GenAI’s drawbacks, and applies international experience in addressing this issue to the context of Vietnam. In doing so, the paper also raises the need for research that provides solutions to preserve and promote indigenous knowledge suited to the socio-economic conditions of each country, as there is no one-size-fits-all answer for ethnic minorities and indigenous peoples on a global scale.

  • ‘Slow libraries’ and ‘Cultural AI’: Reassessing technology regulation in the context of digitalised cultural heritage data

    Vicky Breemen, Kelly Breemen
    175-193

    Cultural heritage institutions (galleries, libraries, archives, and museums; CHIs or GLAM) increasingly experiment with the use of artificial intelligence (AI) in epistemological tools for unlocking their collections. The use of AI poses both opportunities and risks, a notable risk being bias and the silencing of non-dominant perspectives. It is therefore time to rethink the design and regulation of AI. Drawing on histories of, and developments in, collecting and unlocking cultural heritage, as well as theories on cultural AI, regulation by design, and value alignment, this paper applies a law & humanities perspective to examine ‘cultural AI’ and ‘slow archives’ approaches in view of our envisaged output: the contours of a conceptual framework for the value-based regulation by design of culturally sensitive, fair and insightful AI in GLAM practice.

  • TILTing 2024 Special Issue introduction

    dr. Sunimal Mendis, dr. Marco Bassini, dr. Friso Bostoen, dr. Max Baumgart, Shweta Degalahal, dr. Brenda Espinosa Apráez, dr. Aviva de Groot
    172-174

    The 8th edition of the TILTing Perspectives Conference took place over three days in July 2024, with the theme “Looking back, moving forward: Re-assessing technology regulation in digitalized worlds”. The conference was organized by a team of academics (TILTies) at the Tilburg Institute for Law, Technology, and Society (TILT) comprising Sunimal Mendis as academic lead, Friso Bostoen as co-academic lead and six Track Leaders. Aviva de Groot led Track A (AI as a Knowledge-making Power in the Majority Worlds) and the Deep-Dive Panel on Teaching about AI and Society. Gijs van Maanen led Track B (Problematizing ‘Data Governance’) and Brenda Espinosa Apráez led Track C (Regulation and Innovation in Digital Markets). Track D (Regulating Sectors in Transition: Energy, Finance & Health) was led by Max Baumgart and Track E (AI and Data Protection) by Marco Bassini. Shweta Degalahal was the leader of Track F (The Evolving Cybersecurity Landscape and Regulatory Approaches in Cybersecurity).

    As the conference coincided with the 30th anniversary of TILT, we considered this a fitting moment to take stock of decades of technology regulation and how it impacts our lives and the digitalized worlds around us. We specifically aimed to explore the following questions as part of our mission to “look back, move forward”:

    What has been accomplished? By whom? And where? What is missing? Who is missing? What can we say about the relations between technology-focused regulation and other regulatory foci and modes of standard-setting?