We are happy to welcome Professor Sofia Ranchordas to the editorial board. She works at the intersection of public law and technology.
Article 25(1) of the General Data Protection Regulation (“GDPR”) is the first provision that comes to mind when discussing data protection by design. Yet the origins of that concept can be traced back to an idea that was already solidly established in the software engineering community before its adoption. Moreover, the GDPR is not the first binding piece of legislation to incorporate such an obligation. This paper unravels the history of data protection by design by delving into its technical roots and outlining the national and EU initiatives that preceded the GDPR. Such a retrospective provides the necessary background to understand the implications and scope of its current manifestation in the text of the Regulation.
The paper critically assesses the regulation of social media recommendations in the EU’s 2022 Digital Services Act (DSA), drawing on Sarah Banet-Weiser’s economies of visibility theory. Banet-Weiser calls attention not only to injustices in the distribution of visibility between users, but also to the political implications of organising online media as an economy, in which individuals compete for visibility in a market structured by corporate platforms. DSA provisions on recommendations focus on enhancing user choice, protecting creators’ market access, and encouraging technocratic responses to particular negative externalities, such as promotion of disinformation. Ultimately, then, the DSA aims to enhance the functioning of existing economies of visibility, rather than more fundamentally reforming a social media market in which visibility is allocated based on commercial value.
The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what is meant for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what that might look like.
There is a rich literature on the challenges that AI poses to the legal order. But to what extent might such systems also offer part of the solution? China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.
This contribution is followed by comments from Lyria Bennett Moses and Ugo Pagallo.
Data law and policy assume that harms to individuals can result only from personal data processing. Conversely, generation and use of non-personal data supposedly create new value while presenting no risk to individual interests or fundamental rights. Consequently, the law treats these two categories differently, constraining generation, use, and sharing of the former while incentivizing the latter. This article challenges this assumption. It proposes to divide data-related harms into two high-level categories: unwanted disclosure and detrimental use. It demonstrates how the personal/non-personal data distinction prevents unwanted disclosure but fails to capture, and unintentionally enables, detrimental use of data. As a remedy, the article proposes a new concept – data about humans – and illustrates how it could advance data law and policy.
Ample past research has highlighted that privacy problems are widespread in mobile apps and can have disproportionate impacts on individuals. However, doing such research, especially through automated methods, remains hard and has become an arms race with those who engage in invasive data practices. This paper analyses how decisions by Apple and Google, the makers of the two primary app ecosystems (iOS and Android), currently hold back (automated) app privacy research and thereby create systemic risks that have not previously been systematically documented. Such an analysis is timely and pertinent since the newly enacted EU Digital Services Act (DSA) obliges Very Large Online Platforms to enable ‘vetted researchers’ to study systemic risks (Article 40) and to put in place reasonable, proportionate and effective mitigation measures against systemic risks (Article 35).
An emerging tool in the movement for platform workers’ rights is the right not to be subject to automated decision-making. In its most advanced formulation to date in art 22 of the EU General Data Protection Regulation 2016, this right includes ‘the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision’. Among other things, art 22 forms part of the groundwork of the December 2021 European Commission Proposal for a Directive on Improving Working Conditions in Platform Work, with its mantra of promotion of ‘social dialogue on algorithmic management’. In this article, we argue that art 22 and now the Directive offer an important tool for responding to the mechanistic working conditions of platform work. More broadly, we suggest that a right of social dialogue regarding automated decision-making, which art 22 represents, has the potential to serve as a signal achievement in the history of data rights developing to allow democratic involvement in decisions that affect people’s lives under modern industrial conditions.
Technology and Regulation (TechReg) is a new interdisciplinary journal of law, technology and society. TechReg provides an open-access platform for disseminating original research on the legal and regulatory challenges posed by existing and emerging technologies.
The Editor-in-Chief is Professor Ronald Leenes of the Tilburg Law School. Our Editorial Board Committee comprises a distinguished panel of international experts in law, regulation, technology and society across different disciplines and domains.
TechReg aspires to become the leading outlet for scholarly research on technology and regulation topics, and has been conceived to be as accessible as possible for both authors and readers.