Vol 2019 (2019)
Technology and Regulation (TechReg), Volume 2019 contains 4 papers:
- Ronald Leenes, Of Horses and Other Animals of Cyberspace, an Editorial, Technology and Regulation, 2019, pp. 1-9 PDF
In this introductory article to the new journal Technology and Regulation, I give a somewhat personal account of the history of cyberlaw and technology law and the ‘struggles’ some scholars have had in finding their place in the more general legal realm. It recounts some of the classic discussions in the field, such as whether cyberlaw is merely a form of the ‘Law of the Horse’. It also outlines the contours of the field of technology regulation, some of the open questions in defining this field, and some of its constituent elements. Finally, it poses questions that I hope will be addressed in future articles in the journal.
- Roger Brownsword, Law Disrupted, Law Re-Imagined, Law Re-Invented, Technology and Regulation, 2019, pp. 10-30 PDF
This article describes the technological disruption of law and legal reasoning, suggests how law might be re-imagined, and then proposes four key elements in the re-invention of law. Two waves of disruption are identified. One wave impacts on the content of legal rules and the way in which we perceive the deficiency of those rules. A second wave impacts on our appreciation of technological instruments as tools to be used for regulatory purposes in support of, or even in place of, legal rules. The suggested re-imagination of law centres on the idea of the regulatory environment, broadly conceived to include both normative and non-normative signals. The proposed re-invention of law has four strands. It starts with (i) a fresh understanding of the range of regulatory responsibilities. This understanding then shapes (ii) the articulation of the Rule of Law and it informs both (iii) a renewal of traditional coherentist thinking and (iv) a reshaping of legal and regulatory institutions.
- Mark Coeckelbergh, Artificial Intelligence: Some ethical issues and regulatory challenges, Technology and Regulation, 2019, pp. 31-34 PDF This article offers a brief overview of some of the ethical challenges raised by artificial intelligence (AI), in particular machine learning and data science, and summarizes and discusses a number of challenges for near-future regulation in this area. These include the difficulty of moving from principles to more concrete measures, and problems with implementing ethics by design and responsible innovation.
- Markus Naarttijärvi, Legality and Democratic Deliberation in Black Box Policing, Technology and Regulation, 2019, pp. 35-48 PDF The injection of emerging technologies into policing implies that policing mandates in law may become mediated and applied through opaque machine learning algorithms, artificial intelligence, or surveillance tools – contributing to a form of ‘black box policing’ challenging foreseeability and clarity and expanding discretionary legal spaces. In this paper, this issue is explored from a constitutional and rule of law perspective, using the requirements of qualitative legality elaborated by the European Court of Human Rights and the implicit democratic values that they serve. Placing this concept of legality into a wider theoretical framework allows legality to be translated into a context of emerging technology to maintain the connections between rule of law, democracy, and individual autonomy.
Vol 2020 (2020)
Technology and Regulation (TechReg), Volume 2020 contains 10 papers:
- Linnet Taylor, Lina Dencik, Constructing Commercial Data Ethics, Technology and Regulation, 2020, pp. 1-10 PDF
The ethics of big data and AI have become the object of much public debate. Technology firms around the world have set up ethics committees and review processes, which differ widely in their organisation and practice. In this paper we interrogate these processes and the rhetoric of firm-level data ethics. Using interviews with industry, activists and scholars and observation of public discussions, we ask how firms conceptualise the purposes and functions of data ethics, and how this relates to core business priorities. We find considerable variation between firms in the way they use ethics. We compare strategies and rhetoric to understand how commercial data ethics is constructed, its political and strategic dimensions, and its relationship to data ethics more broadly.
- Róisín Áine Costello, The Impacts of AdTech on Privacy Rights and the Rule of Law, Technology and Regulation, 2020, pp. 11-23 PDF
This article argues that the AdTech market has undermined the fundamental right to privacy in the European Union and that current legislative and fundamental rights protections in the EU have been unsuccessful in restraining these privacy harms. The article further argues that these privacy consequences have imported additional reductions in individual autonomy and have the capacity to harm the Rule of Law.
- Paving the Way Forward for Data Governance: a Story of Checks and Balances, Editorial, Technology and Regulation, 2020, pp. 24-28 PDF
Data governance is a phenomenon that brings many interests and considerations together. This editorial argues that active involvement of various stakeholders is vital to advance discussions about how to create value from data as a means to stimulate societal progress. Without adequate checks and balances, each stakeholder group on its own will not have sufficient incentives to do its utmost to achieve this common goal. Policymakers and regulators need to be stimulated to look beyond short-term results to ensure that the design of their initiatives is fit for purpose. Industry players have to be transparent about their practices to prevent strategic behaviour that may harm society. And researchers must inform their findings with real-world evidence and proper terminology.
- Michael Madison, Tools for Data Governance, Technology and Regulation, 2020, pp. 29-43 PDF This article describes the challenges of data governance in terms of the broader framework of knowledge commons governance, an institutional approach to governing shared knowledge, information, and data resources. Knowledge commons governance highlights the potential for effective community- and collective-based governance of knowledge resources. The article focuses on key concepts within the knowledge commons framework rather than on specific law and public policy questions, directing the attention of researchers and policymakers to critical inquiry regarding relevant social groups and relevant data “things.” Both concepts are key tools for effective data governance.
- Teresa Scassa, Designing Data Governance for Data Sharing: Lessons from Sidewalk Toronto, Technology and Regulation, 2020, pp. 44-56 PDF Data governance for data sharing is becoming an important issue in the rapidly evolving data economy and society. In the smart city context, data sharing may be particularly important, but it is also complicated by a diverse array of interests in the data collected, as well as significant privacy and public interest considerations. This paper examines the data governance body proposed by Sidewalk Labs as part of its Master Innovation Development Plan for a smart city development on port lands in Toronto, Canada. Using Sidewalk Labs’ Urban Data Trust as a use case, this paper identifies some of the challenges in designing an effective and appropriate data governance structure for data sharing, and analyzes the normative issues underlying these challenges. In this example, issues of data ownership and control are contested from the outset. The proposed model also raises interesting issues about the role and relevance of the public sector in managing the public interest, and the need to design data governance from the ground up. While the paper focuses on a particular use case, the goal is to distil useful knowledge about the design and implementation of data governance structures.
- Charlotte Ducuing, Beyond the Data Flow Paradigm: Governing Data Requires to Look Beyond Data, Technology and Regulation, 2020, pp. 57-64 PDF The paper aims to contribute to the discussion on how to regulate and govern data as an economic asset. It critically discusses the ‘data flow paradigm’, defined here as the regulatory focus on data (transactions) with the purpose of enhancing data exchange by establishing data markets. Based on the examples of the electricity and automotive sectors with respect to data governance, the paper finds that the data flow paradigm alone is too narrow. This paradigm seems to rest on the idea that there should be well-operating data markets, possibly by operation of the law, and that such markets alone would deliver on grand policy expectations, such as ‘AI’ or ‘data-driven innovations’. Yet fostering data exchange is not an end in itself and should be regarded with respect to sectoral objectives and constraints. As the study of the examples shows, the quest for appropriate mechanisms to govern data often leads to rediscovering old concepts, such as (data) commons or (data) platforms. Finally, the paper discusses possible future regulatory intervention.
- Alina Wernick, Christopher Olk, Max von Grafenstein, Defining Data Intermediaries: A Clearer View Through the Lens of Intellectual Property Governance, Technology and Regulation, 2020, pp. 65-77 PDF
Data intermediaries may foster data reuse, thus facilitating efficiency and innovation. However, research on the subject suffers from terminological inconsistency and vagueness, making it difficult to convey to policymakers when data governance succeeds and when data sharing requires regulatory intervention. The paper describes what distinguishes data intermediaries from other data governance models. Building on research on intellectual property governance, we identify two distinct types of data intermediaries, data clearinghouses and data pools. We also discover several governance models that are specific to data and not present in the context of intellectual property. We conclude that the use of more refined terminology to describe data intermediaries will facilitate more accurate research and informed policy-making on data reuse.
- Mark Leiser, Edina Harbinja, CONTENT NOT AVAILABLE – Why The United Kingdom's Proposal For A “Package Of Platform Safety Measures” Will Harm Free Speech, Technology and Regulation, 2020, pp. 78-90 PDF
This article critiques key proposals of the United Kingdom’s “Online Harms” White Paper, in particular the proposal for a new digital regulator and the imposition of a “duty of care” on platforms. While acknowledging that a duty of care, backed up by sanctions, works well in some environments, we argue it is not appropriate for policing the White Paper’s identified harms, as it could result in the blocking of legal but subjectively harmful content. Furthermore, the proposed regulator lacks the necessary independence and could be subjected to political interference. We conclude that the imposition of a duty of care will result in an unacceptable chilling effect on free expression, producing a draconian regulatory environment for platforms, with users’ digital rights adversely affected.
- Cristiana Santos, Nataliia Bielova, Célestin Matte, Are cookie banners indeed compliant with the law? Deciphering EU legal requirements on consent and technical means to verify compliance of cookie banners, Technology and Regulation, 2020, pp. 91-135 PDF
In this paper, we describe how cookie banners, as a consent mechanism in web applications, should be designed and implemented to be compliant with the ePrivacy Directive and the GDPR, defining 22 legal requirements. While some are derived from legal sources, others result from the domain expertise of computer scientists. For each requirement, we assess whether verification requires technical means (computer science tools), manual review (a human operator), or user studies. We show that it is not possible to assess legal compliance for the majority of requirements because of the current architecture of the web. With this approach, we aim to support policy makers in assessing the compliance of cookie banners, especially under the current revision of the EU ePrivacy framework.
- Jef Ausloos, Michael Veale, Researching with Data Rights, Technology and Regulation, 2020, pp. 136-157 PDF
The concentration and privatization of data infrastructures have a deep impact on independent research. This article positions data rights as a useful tool in researchers’ toolbox for obtaining access to enclosed datasets. It does so by providing an overview of relevant data rights in the EU’s General Data Protection Regulation and describing different use cases in which they might be particularly valuable. While we believe in their potential, researching with data rights is still very much in its infancy, and a number of legal, ethical and methodological issues are identified and explored. Overall, this article aims both to explain the potential utility of data rights to researchers and to provide appropriate initial conceptual scaffolding for the important discussions around this approach.
Vol 2021 (2021)
Technology and Regulation (TechReg), Volume 2021 so far contains 3 papers:
- Lauren Fahy, Scott Douglas, Judith van Erp, Keeping up with cryptocurrencies: How financial regulators used radical innovation to bolster agency reputation, Technology and Regulation, 2021, pp. 1-16 PDF Invented in 2008 with Bitcoin, cryptocurrencies represent a radical technological innovation in finance and banking, one which threatened to disrupt the existing regulatory regimes governing those sectors. This article examines, from a reputation management perspective, how regulatory agencies framed their response. Through a content analysis, we compare communications from financial conduct regulators in the UK, US, and Australia. Despite the risks, challenges, and uncertainties involved in cryptocurrency supervision, we find regulators treated the technology as an opportunity to bolster their reputation in the immediate wake of the Global Financial Crisis. Regulators frame their response to cryptocurrencies in ways which reinforce the agency’s ingenuity and societal importance. We discuss differences in framing between agencies, illustrating how historical, political, and legal differences between regulators can shape their responses to radical innovations.
- Aurelia Tamò-Larrieux, Simon Mayer, Zaïra Zihlmann, Not Hardcoding but Softcoding Data Protection, Technology and Regulation, 2021, pp. 17-34 PDF The delegation of decisions to machines has revived the debate on whether and how technology should and can embed fundamental legal values within its design. While these debates have predominantly been occurring within the philosophical and legal communities, the computer science community has been eager to provide tools to overcome some challenges that arise from ‘hardwiring’ law into code. What emerged is the formation of different approaches to code that adapts to legal parameters. Within this article, we discuss the translational, system-related, and moral issues raised by implementing legal principles in software. While our findings focus on data protection law, they apply to the interlinking of code and law across legal domains. These issues point towards the need to rethink our current approach to design-oriented regulation and to prefer ‘soft’ implementations, where decision parameters are decoupled from program code and can be inspected and modified by users, over ‘hard’ approaches, where decisions are taken by opaque pieces of program code.
- Antonia Waltermann, On the legal responsibility of artificially intelligent agents – Addressing three misconceptions, Technology and Regulation, 2021, pp. 35-43 PDF This paper tackles three misconceptions regarding discussions of the legal responsibility of artificially intelligent entities: namely, that they (a) cannot be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act; (b) should not be held legally responsible for their actions, for the same reason; and (c) should not be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility, while they are the ones who should be held responsible. (a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency; the latter is also why (b) is misconceived. The arguments against misconceptions (a) and (b) imply that legal responsibility can be constructed in different ways, including ways that hold both artificially intelligent and other (human or corporate) agents responsible (misconception (c)). Accordingly, this paper concludes that there is more flexibility in the construction of responsibility of artificially intelligent entities than is at times assumed.