Technology and Regulation <p>An interdisciplinary journal of law, technology, and society</p> Department of Law, Technology, Markets, and Society (LTMS) | Tilburg Law School | Tilburg University en-US Technology and Regulation 2666-139X <p>Technology and Regulation (TechReg) is an open access journal, which means that all content is freely available without charge to the user or his or her institution. Users are permitted to read, download, copy, distribute, print, search, or link to the full texts of the articles, or to use them for any other lawful purpose, without asking prior permission from the publisher or the author.&nbsp;Submissions are published under a <a href="">Creative Commons BY-NC-ND license</a>.</p> Keeping up with cryptocurrencies <p><span lang="EN-GB">Invented in 2008 with Bitcoin, cryptocurrencies represent a radical technological innovation in finance and banking, one which threatened to disrupt the existing regulatory regimes governing those sectors. This article examines, from a reputation management perspective, how regulatory agencies framed their response. Through a content analysis, we compare communications from financial conduct regulators in the UK, US, and Australia. Despite the risks, challenges, and uncertainties involved in cryptocurrency supervision, we find regulators treat the technology as an opportunity to bolster their reputation in the immediate wake of the Global Financial Crisis. Regulators frame their response to cryptocurrencies in ways which reinforce the agency’s ingenuity and societal importance. We discuss differences in framing between agencies, illustrating how historical, political, and legal differences between regulators can shape their responses to radical innovations. 
</span></p> Lauren Fahy Scott Douglas Judith van Erp Copyright (c) 2021 Lauren Fahy, Scott Douglas, Judith van Erp 2021-03-31 2021-03-31 2021 1 16 Not Hardcoding but Softcoding Data Protection <p><span lang="EN-US">The delegation of decisions to machines has revived the debate on whether and how technology should and can embed fundamental legal values within its design. While these debates have predominantly been occurring within the philosophical and legal communities, the computer science community has been eager to provide tools to overcome some challenges that arise from ‘hardwiring’ law into code. What emerged is the formation of different approaches to code that adapts to legal parameters. Within this article, we discuss the translational, system-related, and moral issues raised by implementing legal principles in software. While our findings focus on data protection law, they apply to the interlinking of code and law across legal domains. These issues point towards the need to rethink our current approach to design-oriented regulation and to prefer ‘soft’ implementations, where decision parameters are decoupled from program code and can be inspected and modified by users, over ‘hard’ approaches, where decisions are taken by opaque pieces of program code. 
</span></p> Aurelia Tamò-Larrieux Simon Mayer Zaïra Zihlmann Copyright (c) 2021 Aurelia Tamò-Larrieux, Simon Mayer, Zaïra Zihlmann 2021-05-06 2021-05-06 2021 17 34 On the legal responsibility of artificially intelligent agents <p>This paper tackles three misconceptions regarding discussions of the legal responsibility of artificially intelligent entities: these are that they</p> <p>(a) <em>cannot </em>be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.</p> <p>(b)<em> should not</em> be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.</p> <p>(c)<em> should not</em> be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility that way, while they are the ones who should be held responsible.</p> <p>(a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency. The latter is also the reason why (b) is misconceived. The arguments against misconceptions a and b imply that legal responsibility can be constructed in different ways, including those that hold <em>both</em> artificially intelligent and other (human or corporate) agents responsible (misconception c). Accordingly, this paper concludes that there is more flexibility in the construction of responsibility of artificially intelligent entities than is at times assumed. 
This offers more freedom to law- and policymakers, but also requires openness, creativity, and a clear normative vision of the aims they want to achieve.</p> Antonia Waltermann Copyright (c) 2021 Antonia Waltermann 2021-07-12 2021-07-12 2021 35 43 Reviving Purpose Limitation and Data Minimisation in Data-Driven Systems <p>This paper determines whether the two core data protection principles of data minimisation and purpose limitation can be meaningfully implemented in data-driven systems. While contemporary data processing practices appear to stand at odds with these principles, we demonstrate that systems could technically use much less data than they currently do. This observation is a starting point for our detailed techno-legal analysis uncovering obstacles that stand in the way of meaningful implementation and compliance, as well as exemplifying unexpected trade-offs which emerge where data protection law is applied in practice. Our analysis seeks to inform debates about the impact of data protection on the development of artificial intelligence in the European Union, offering practical action points for data controllers, regulators, and researchers.</p> Michele Finck Asia J. Biega Copyright (c) 2021 Michele Finck, Asia J. Biega 2021-08-18 2021-08-18 2021 44 61 The right of access to personal data: A genealogy <p>In this paper, I analyze several traditions of data protection to uncover the theoretical justification they provide for the right of access to personal data. Contrary to what is argued in most recent literature, I do not find support for the claim that the right follows from the German tradition of “informational self-determination” or Westin’s idea of “privacy as control”. 
Instead, there are two other lesser-known theories of data protection which do offer a direct justification for the right of access. First, American scholars Westin and Baker developed the “due process” view, according to which access helps to expose error and bias in decision-making, thereby contributing to correct decisions and allowing the people who are affected to be involved in the decision-making. Second, in what I call the “power reversal” view of access, Italian legal scholar Rodotà argues that, in particular when seen from a collective point of view, the right enables social control over the processing of personal data and serves as a counterbalance to the centers of power by placing them under the control of democratic accountability.</p> René Mahieu Copyright (c) 2021 René Mahieu 2021-08-20 2021-08-20 2021 62 75