Vol. 2023 (2023)

Published: 23-06-2023


  • A Brief History of Data Protection by Design: From Multilateral Security to Article 25(1) GDPR

    Pierre Dewitte

    Article 25(1) of the General Data Protection Regulation (“GDPR”) is the first provision that comes to mind when discussing data protection by design. Yet, the origins of that concept can be traced back to an idea that was already solidly established in the software engineering community before its adoption. Besides, the GDPR is not the first binding piece of legislation that incorporates such an obligation. This paper unravels the history of data protection by design by delving into its technical roots and outlining the national and EU initiatives that have preceded the GDPR. Such a retrospective provides the necessary background to understand the implications and scope of its current manifestation in the text of the Regulation.

  • The Law and Political Economy of Online Visibility: Market Justice in the Digital Services Act

    Rachel Griffin

    The paper critically assesses the regulation of social media recommendations in the EU’s 2022 Digital Services Act (DSA), drawing on Sarah Banet-Weiser’s economies of visibility theory. Banet-Weiser calls attention not only to injustices in the distribution of visibility between users, but also to the political implications of organising online media as an economy, in which individuals compete for visibility in a market structured by corporate platforms. DSA provisions on recommendations focus on enhancing user choice, protecting creators’ market access, and encouraging technocratic responses to particular negative externalities, such as promotion of disinformation. Ultimately, then, the DSA aims to enhance the functioning of existing economies of visibility, rather than more fundamentally reforming a social media market in which visibility is allocated based on commercial value.

  • Trustworthy AI: A Cooperative Approach

    Jacob Livingston Slosser, Birgit Aasa, Henrik Palmer Olsen

    The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what is meant for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what that might look like.

  • All Rise for the Honourable Robot Judge? Using Artificial Intelligence to Regulate AI: a debate

    Simon Chesterman, Lyria Bennett Moses, Ugo Pagallo

    There is a rich literature on the challenges that AI poses to the legal order. But to what extent might such systems also offer part of the solution? China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.

    This contribution is followed by comments by Lyria Bennett Moses and Ugo Pagallo.

  • Cookies and EU Law: History, Future Regulation and Critique

    Jan Tomisek

    Cookies and similar technologies can be used to track the online behaviour of internet users and can pose risks to their privacy and other fundamental rights. The use of cookies and similar technologies is therefore regulated by EU law. The article describes the history of EU law regulating cookies, analyses its current form and application to different technologies, and describes the proposals for the ePrivacy Regulation. Based on the analysis, it provides a critique of both the current law and the proposals and suggests ways forward in the regulation of cookies and similar technologies.

  • Harmed While Anonymous: Beyond the Personal/Non-Personal Distinction in Data Governance

    Przemysław Pałka

    Data law and policy assume that harms to individuals can result only from personal data processing. Conversely, generation and use of non-personal data supposedly create new value while presenting no risk to individual interests or fundamental rights. Consequently, the law treats these two categories differently, constraining generation, use, and sharing of the former while incentivizing the latter. This article challenges this assumption. It proposes to divide data-related harms into two high-level categories: unwanted disclosure and detrimental use. It demonstrates how the personal/non-personal data distinction prevents unwanted disclosure but fails to capture, and unintentionally enables, detrimental use of data. As a remedy, the article proposes a new concept – data about humans – and illustrates how it could advance data law and policy.

  • How Decisions by Apple and Google Obstruct App Privacy

    Konrad Kollnig, Nigel Shadbolt

    Ample past research has highlighted that privacy problems are widespread in mobile apps and can have disproportionate impacts on individuals. However, conducting such research, especially through automated methods, remains hard and has become an arms race with those who engage in invasive data practices. This paper analyses how decisions by Apple and Google, the makers of the two primary app ecosystems (iOS and Android), currently hold back (automated) app privacy research and thereby create systemic risks that have previously not been systematically documented. Such an analysis is timely and pertinent since the newly enacted EU Digital Services Act (DSA) obliges Very Large Online Platforms to enable ‘vetted researchers’ to study systemic risks (Article 40) and to put in place reasonable, proportionate and effective mitigation measures against systemic risks (Article 35).

  • A Right of Social Dialogue on Automated Decision-Making: From Workers’ Right to Autonomous Right

    Damian Clifford, Jake Goldenfein, Aitor Jimenez, Megan Richardson

    An emerging tool in the movement for platform workers’ rights is the right not to be subject to automated decision-making. In its most advanced formulation to date in art 22 of the EU General Data Protection Regulation 2016, this right includes ‘the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision’. Among other things, art 22 forms part of the groundwork of the December 2021 European Commission Proposal for a Directive on Improving Working Conditions in Platform Work, with its mantra of promotion of ‘social dialogue on algorithmic management’. In this article, we argue that art 22 and now the Directive offer an important tool for responding to the mechanistic working conditions of platform work. More broadly, we suggest that a right of social dialogue regarding automated decision-making, which art 22 represents, has the potential to serve as a signal achievement in the history of data rights developing to allow democratic involvement in decisions that affect people’s lives under modern industrial conditions.