Technology and Regulation 2021-07-12T04:59:14-07:00 Ronald Leenes Open Journal Systems <p>An interdisciplinary journal of law, technology, and society</p> Keeping up with cryptocurrencies 2021-05-05T03:44:36-07:00 Lauren Fahy Scott Douglas Judith van Erp <p><span lang="EN-GB">Invented in 2008 with Bitcoin, cryptocurrencies represent a radical technological innovation in finance and banking, one that threatened to disrupt the existing regulatory regimes governing those sectors. This article examines, from a reputation-management perspective, how regulatory agencies framed their response. Through a content analysis, we compare communications from financial conduct regulators in the UK, US, and Australia. Despite the risks, challenges, and uncertainties involved in cryptocurrency supervision, we find that regulators treat the technology as an opportunity to bolster their reputations in the immediate wake of the Global Financial Crisis. Regulators frame their responses to cryptocurrencies in ways that reinforce their agencies’ ingenuity and societal importance. We discuss differences in framing between agencies, illustrating how historical, political, and legal differences between regulators can shape their responses to radical innovations.</span></p> 2021-03-31T01:05:35-07:00 Copyright (c) 2021 Lauren Fahy, Scott Douglas, Judith van Erp Not Hardcoding but Softcoding Data Protection 2021-05-15T08:09:40-07:00 Aurelia Tamò-Larrieux Simon Mayer Zaïra Zihlmann <p><span lang="EN-US">The delegation of decisions to machines has revived the debate on whether and how technology should and can embed fundamental legal values within its design. While these debates have predominantly taken place within the philosophical and legal communities, the computer science community has been eager to provide tools to overcome some of the challenges that arise from ‘hardwiring’ law into code.
What has emerged are different approaches to code that adapts to legal parameters. In this article, we discuss the translational, system-related, and moral issues raised by implementing legal principles in software. While our findings focus on data protection law, they apply to the interlinking of code and law across legal domains. These issues point towards the need to rethink our current approach to design-oriented regulation and to prefer ‘soft’ implementations, where decision parameters are decoupled from program code and can be inspected and modified by users, over ‘hard’ approaches, where decisions are taken by opaque pieces of program code.</span></p> 2021-05-06T02:34:12-07:00 Copyright (c) 2021 Aurelia Tamò-Larrieux, Simon Mayer, Zaïra Zihlmann On the legal responsibility of artificially intelligent agents 2021-07-12T04:59:14-07:00 Antonia Waltermann <p>This paper tackles three misconceptions in discussions of the legal responsibility of artificially intelligent entities, namely that such entities</p> <p>(a) <em>cannot</em> be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act;</p> <p>(b) <em>should not</em> be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act; and</p> <p>(c) <em>should not</em> be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility, even though they are the ones who should be held responsible.</p> <p>(a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency. The latter is also the reason why (b) is misconceived.
The arguments against misconceptions (a) and (b) imply that legal responsibility can be constructed in different ways, including ways that hold <em>both</em> artificially intelligent and other (human or corporate) agents responsible (misconception (c)). Accordingly, this paper concludes that there is more flexibility in the construction of responsibility for artificially intelligent entities than is at times assumed. This offers more freedom to law- and policymakers, but also requires openness, creativity, and a clear normative vision of the aims they want to achieve.</p> 2021-07-12T04:58:44-07:00 Copyright (c) 2021 Antonia Waltermann