Announcements

New editor

18-05-2025

We are happy to announce that Dr. Cristiana Teixeira Santos will strengthen our editorial team. She has written extensively on the ePrivacy Directive, GDPR, DSA, and DMA, and she will help us navigate the increasing number of contributions we receive on EU digital agenda topics.


Current Issue

Vol. 2025 (2025)
Published: 17-03-2025

Articles

  • Rethinking Safety-by-Design and Techno-Solutionism for the Regulation of Child Sexual Abuse Material

    Andrew Murray, Mark Leiser
    137-171

    This article explores the implications of increased reliance on technological solutions to digital regulatory challenges, particularly in relation to Child Sexual Abuse Material (CSAM). It focuses on the contemporary trend of imposing obligations on private actors, such as platforms and service providers, to mitigate risks associated with their services while ensuring the protection of fundamental rights. This leads to new regulatory designs like "safety-by-design," favoured by European regulators due to their cost-effectiveness and efficiency in assigning responsibilities to online gatekeepers. We examine the European Union’s CSAM Proposal and the United Kingdom’s Online Safety Act, ambitious initiatives to employ technology to combat the dissemination of CSAM. The EU proposal requires platforms to perform risk assessments and implement mitigation measures against the hosting or dissemination of CSAM. In cases where these measures fail, a detection order can be issued, requiring platforms to deploy technical measures, including AI, to scan all communications. This approach, while well-intentioned, is scrutinised for its potential over-reliance on technology and possible infringement of fundamental rights. The article examines the theoretical underpinnings of “safety-by-design” and “techno-solutionism,” tracing their historical development and evaluating their application in current digital regulation, particularly in online child safety policy. The rise of safety-by-design and techno-solutionism is contextualised within the broader framework of cyber regulation, and the benefits and potential pitfalls of these approaches are examined.

    We argue for a balanced approach that considers technological solutions alongside other regulatory modalities, emphasising the need for comprehensive strategies that address the complex and multifaceted nature of CSAM and online child safety. The article highlights the importance of engaging with diverse theoretical perspectives to develop effective, holistic responses to the challenges posed by CSAM in the digital environment.

  • Through thick and thin: data commons, community and the struggle for collective data governance

    Tommaso Fia, Gijs van Maanen
    114-136

    Collective data governance mechanisms such as data commons have recently gained traction in both theoretical and policy-oriented discussions as promising alternatives to the shortcomings of individualistic data protection and data markets. Many of these approaches centre around the idea of community as the key social institution overcoming these limitations. Yet, far less attention has been paid to the meaning, features and implications that the language of community can have for data commons.

    This paper investigates the relationship between data commons and the community involved therein, with a focus on the kinds and features of such a community. It argues that analysing the community's key characteristics and moral-political affordances yields important implications for devising and implementing policies on collective data governance.

  • High-risk AI transparency? On qualified transparency mandates for oversight bodies under the EU AI Act

    Kasia Söderlund
    97-113

    The legal opacity of AI technologies has long posed challenges in addressing algorithmic harms, as secrecy enables companies to retain competitive advantages while limiting public scrutiny. In response, ideas such as qualified transparency have been proposed to provide AI accountability within confidentiality constraints. With the introduction of the EU AI Act, the foundations for human-centric and trustworthy AI have been established. The framework sets regulatory requirements for certain AI technologies and grants oversight bodies broad transparency mandates to enforce the new rules. This paper examines these transparency mandates under the AI Act and argues that the Act effectively implements qualified transparency, which may help mitigate the problem of AI opacity. Nevertheless, several challenges remain in achieving the Act’s policy objectives.

  • Reforming Copyright Law for AI-Generated Content: Copyright Protection, Authorship and Ownership

    Yiheng Lu
    81-95

    With the emergence of disputes over the copyright of AI-generated content (AIGC), academia has extensively discussed relevant issues, including copyright protectability and ownership. However, the copyright law community has not reached an international consensus. Adopting a doctrinal methodology, this paper investigates these issues and proposes reforms, arguing that copyright law should clarify the de facto authorship of AI and determine the originality of AIGC based on minimum creativity at the expression level. It also recommends attributing copyright of AIGC to the AI owner via statutory provision, allowing contractual allocation between parties. The proposed framework would resolve significant academic controversies on fundamental issues surrounding AIGC copyright and provide a reference model for future research.

  • Health Data Access Bodies under the European Health Data Space – A technocratic colossus or rubber stamp forum?

    Paul Quinn
    60-80

    The proposal for a European Health Data Space (EHDS) has sparked extensive discourse, weighing the potential benefits for healthcare and innovation against concerns over privacy and societal impacts. At the heart of this discussion are the Health Data Access Bodies (HDABs), tasked with managing the reuse of secondary health data within the EHDS framework. This article delves into the formidable challenges facing HDABs, suggesting that the complexity and volume of data access requests may overwhelm their capacity. Ensuring compliance with EHDS regulations, GDPR provisions, and ethical standards presents a multifaceted challenge. The author argues that the expertise and efficiency required to navigate these complexities could strain HDAB resources and capabilities. Furthermore, the anticipated surge in data access requests may exacerbate these challenges, potentially compromising HDAB effectiveness. Consequently, there is a pressing need for a pragmatic approach to delineating HDAB responsibilities to ensure their ability to fulfill their role competently. By addressing these concerns, the EHDS can uphold individual rights, promote societal welfare, and foster trust in its overarching objectives.

  • The Inscrutable Code? The Deficient Scrutiny Problem of Automated Government

    Richard Mackenzie-Gray Scott, Lilian Edwards
    37-59

    Public administration in the United Kingdom increasingly features automated decision-making. From predictive policing and prisoner categorisation to asylum applications and tenancy relationships, automated government exists across various domains. This article examines an underlying issue concerning government automated decision-making systems: the lack of public scrutiny they receive from pre- to post-deployment. The branches of the state tasked with scrutinising government, namely Parliament and the courts, appear outmoded when it comes to addressing this problem. These circumstances raise the question of where the public can expect safeguards against government overreach manifested through computer software. Two regulatory solutions are proposed. First, mandating pre-deployment impact assessments of automated decision-making systems intended for use by government, either during their design or before procurement. Second, incorporating algorithmic auditing as part of reinforcing the duty of candour in judicial review, so as to better inform courts about specific systems and the data underpinning them.

  • Towards Planet Proof Computing: Law and Policy of Data Centre Sustainability in the European Union

    Jessica Commins, Kristina Irion
    1-36

    Our society’s growing reliance on digital technologies such as AI incurs an ever-growing ecological footprint. EU regulation of the data centre sector aims to achieve climate-neutral, energy-efficient and sustainable data centres by no later than 2030. This article unpacks the EU law and policy aimed at improving energy efficiency, recycling equipment, and increasing reporting and transparency obligations. In 2025, the Commission will present a report based on information reported by data centre operators and, in light of this new evidence, review its policy. Further regulation should aim to translate reporting requirements into binding sustainability targets to contain rebound effects in the data centre industry while strengthening its public value orientation.

View All Issues

Technology and Regulation (TechReg) is a new interdisciplinary journal of law, technology and society. TechReg provides an open-access platform for disseminating original research on the legal and regulatory challenges posed by existing and emerging technologies.

The Editor-in-Chief is Professor Ronald Leenes of the Tilburg Law School. Our Editorial Board Committee comprises a distinguished panel of international experts in law, regulation, technology and society across different disciplines and domains.

TechReg aspires to become the leading outlet for scholarly research on technology and regulation topics, and has been conceived to be as accessible as possible for both authors and readers.