Vol. 2021 (2021)
Keeping up with cryptocurrencies: How financial regulators used radical innovation to bolster agency reputation
Invented in 2008 with Bitcoin, cryptocurrencies represent a radical technological innovation in finance and banking; one which threatened to disrupt the existing regulatory regimes governing those sectors. This article examines, from a reputation management perspective, how regulatory agencies framed their response. Through a content analysis, we compare communications from financial conduct regulators in the UK, US, and Australia. Despite the risks, challenges, and uncertainties involved in cryptocurrency supervision, we find regulators treat the technology as an opportunity to bolster their reputation in the immediate wake of the Global Financial Crisis. Regulators frame their response to cryptocurrencies in ways which reinforce the agency’s ingenuity and societal importance. We discuss differences in framing between agencies, illustrating how historical, political, and legal differences between regulators can shape their responses to radical innovations.
The delegation of decisions to machines has revived the debate on whether and how technology should and can embed fundamental legal values within its design. While these debates have predominantly been occurring within the philosophical and legal communities, the computer science community has been eager to provide tools to overcome some challenges that arise from ‘hardwiring’ law into code. What emerged is the formation of different approaches to code that adapts to legal parameters. Within this article, we discuss the translational, system-related, and moral issues raised by implementing legal principles in software. While our findings focus on data protection law, they apply to the interlinking of code and law across legal domains. These issues point towards the need to rethink our current approach to design-oriented regulation and to prefer ‘soft’ implementations, where decision parameters are decoupled from program code and can be inspected and modified by users, over ‘hard’ approaches, where decisions are taken by opaque pieces of program code.
This paper tackles three misconceptions regarding discussions of the legal responsibility of artificially intelligent entities: these are that they
(a) cannot be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.
(b) should not be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.
(c) should not be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility that way, while they are the ones who should be held responsible.
(a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency. The latter is also the reason why (b) is misconceived. The arguments against misconceptions (a) and (b) imply that legal responsibility can be constructed in different ways, including ways that hold both artificially intelligent and other (human or corporate) agents responsible (misconception (c)). Accordingly, this paper concludes that there is more flexibility in the construction of responsibility of artificially intelligent entities than is at times assumed. This offers more freedom to law- and policymakers, but also requires openness, creativity, and a clear normative vision of the aims they want to achieve.
This paper determines whether the two core data protection principles of data minimisation and purpose limitation can be meaningfully implemented in data-driven systems. While contemporary data processing practices appear to stand at odds with these principles, we demonstrate that systems could technically use much less data than they currently do. This observation is a starting point for our detailed techno-legal analysis uncovering obstacles that stand in the way of meaningful implementation and compliance as well as exemplifying unexpected trade-offs which emerge where data protection law is applied in practice. Our analysis seeks to inform debates about the impact of data protection on the development of artificial intelligence in the European Union, offering practical action points for data controllers, regulators, and researchers.
In this paper, I analyze several traditions of data protection to uncover the theoretical justification they provide for the right of access to personal data. Contrary to what is argued in most recent literature, I do not find support for the claim that the right follows from the German tradition of “informational self-determination” or Westin’s idea of “privacy as control”. Instead, two other, lesser-known theories of data protection do offer a direct justification for the right of access. First, American scholars Westin and Baker developed the “due process” view, according to which access helps to expose error and bias in decision-making, thereby contributing to correct decisions and allowing the people who are affected to be involved in the decision-making. Second, in what I call the “power reversal” view of access, Italian legal scholar Rodotà argues that, particularly when seen from a collective point of view, the right enables social control over the processing of personal data and serves as a counterbalance to centers of power by subjecting them to democratic accountability.
Special Issue: Should Data Drive Private Law?
People differ with respect to their preferences, personalities, cognitive abilities, and attitudes. Yet the way in which private law has evolved over the past centuries sacrifices this heterogeneity for the sake of the legal certainty that flows from generalizations and typifications. Law distinguishes between different groups of individuals, such as consumers and professionals, or even between average and vulnerable consumers. These groups are, however, based on conspicuous features that are taken to justify differential treatment. For instance, determining a profile of the average consumer requires a context, such as a given industry or age group, and reflects specific considerations such as the consumer’s skills in retrieving information about a transaction.
Decades of research in psychology and behavioral economics have generated a tremendous amount of knowledge about people’s behavior, creating typologies of their personality traits, intertemporal and social preferences, and cognitive skills. Big Data analysis has since shown that these types can be predictive of people’s behavior, as well as of their informational needs and other specific characteristics. Recently, legal scholars have proposed that the insights generated by this research could be embedded in private law through granular legal rules, for instance by introducing different default rules or privacy disclosures depending on people’s personality traits or preferences.
This special issue tackles the question of whether and how data shapes private law. The development of new technologies enables the collection and processing of both personal and non-personal data at an unprecedented scale. The implications of this phenomenon for private law are twofold. On the one hand, the use of data in interactions between individuals may require adjustments to, or a reconceptualization of, private law rules and principles. On the other hand, data might also be used by legislators to help create new private law rules, as well as to develop consumer empowerment tools that rebalance consumers’ position when transacting with businesses.
Taking these different perspectives, the papers included in this special issue explore the implications of data for private law. The first article by Antonio Davola addresses the question of how the law deals with the use of data by businesses in their interactions with consumers. Davola analyzes existing private law rules on defective consent and argues that these rules could offer potential protection to consumers when they are targeted by businesses’ personalized commercial practices. He juxtaposes this solution with those provided by consumer law and data protection regulation, as well as competition law.
Exploring the second perspective, how data can be used in the development of private law, Fabiana di Porto, Tatjana Grote, Gabriele Volpi, and Riccardo Invernizzi demonstrate how data can be relied on in the legislative process. Di Porto et al. propose an automated text analysis method to extract information from contributions submitted by stakeholders during public consultations. Specifically, the authors compare the use and understanding of core terms by the various stakeholder groups consulted when developing the proposals for the Digital Markets Act and the Digital Services Act.
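The kind of group-by-group comparison Di Porto et al. describe can be illustrated with a minimal sketch: counting how often tracked terms appear in the submissions of different stakeholder groups. The group names, submissions, and tracked terms below are hypothetical examples, not data from the actual consultations.

```python
import re
from collections import Counter

def term_frequencies(documents, terms):
    """Relative frequency of each tracked term across one group's submissions."""
    tokens = [t for doc in documents for t in re.findall(r"[a-z]+", doc.lower())]
    total = len(tokens)
    counts = Counter(tokens)
    # Guard against an empty submission set.
    return {term: (counts[term] / total if total else 0.0) for term in terms}

# Hypothetical consultation submissions from two stakeholder groups.
platforms = ["Gatekeepers need precise and simple rules.",
             "Precise obligations help gatekeepers comply."]
civil_society = ["Transparency must be simple for users.",
                 "Users deserve simple, clear information."]

tracked = ["gatekeepers", "simple", "precise"]
print(term_frequencies(platforms, tracked))
print(term_frequencies(civil_society, tracked))
```

Comparing the resulting frequency profiles across groups exposes which terms dominate each group's vocabulary, which is the kind of signal an automated consultation analysis would aggregate at scale.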
Further papers in the special issue will be announced soon.
Fostering Consumer Protection in the Granular Market: the Role of Rules on Consent, Misrepresentation and Fraud in Regulating Personalized Practices
In e-commerce, companies increasingly employ data-driven technologies to allocate and display offers and advertising. Tailored and targeted commercial strategies combine data mining, artificial intelligence, self-tuning algorithms, and social network and neuroscience analyses to achieve different degrees of personalization. These innovations give companies new ways to gain market advantage, as it becomes possible to study consumers broadly and personalize every aspect of the consumption experience.
Consumers exposed to such practices could fail to recognize the manipulation of their set of choices, as they may be unaware of the way in which product offers and advertisements use their habits, mental models, and biases to influence their behaviours. The result of these and related trends is not only that firms may take advantage of consumers’ lack of understanding due to cognitive limitations, but also that consumer frailty at an individual level could be revealed and triggered.
Against this backdrop, private law rules could provide meaningful normative guidance in regulating personalized commercial practices. The article examines the role and characteristics of provisions regulating defective consent and misrepresentation to evaluate whether these rules could incorporate emerging findings on personalized practices and operate as viable instruments for the modernization of consumer protection.
Talking at Cross Purposes? A Computational Analysis of the Debate on Informational Duties in the Digital Services and Digital Markets Acts
Because the opacity of the algorithms used for rankings, recommender systems, personalized advertisements, and content moderation on online platforms opens the door to discriminatory and anti-competitive behavior, increasing transparency has become a key objective for EU lawmakers.
In the latest Commission proposals, the Digital Markets Act and the Digital Services Act, transparency obligations for online intermediaries, platforms, and ‘gatekeepers’ figure prominently. This paper investigates whether different stakeholders use key concepts of competition law and transparency on digital markets in the same way. Leveraging computational text analysis, we find significant differences in how terms like ‘gatekeepers’, ‘simple’, and ‘precise’ are employed in the position papers that informed the drafting of the two proposals. This finding is not only informative for the Commission and legal scholars; it may also undermine the effectiveness of transparency duties, which often simply assume that phrases like ‘precise information’ are understood in the same way by those implementing the obligations, and it may explain why such duties so often fail to reach their goal. We conclude by sketching how different computational text analysis tools, such as topic modeling, sentiment analysis, and text similarity, could be combined to provide helpful insights for both rulemakers and legal scholars.
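Of the tools listed, text similarity is the simplest to sketch. The snippet below shows a standard bag-of-words cosine similarity, one common way to score how alike two documents' vocabularies are; the tokenization rule and the example strings are illustrative assumptions, not the paper's actual method or data.

```python
import math
import re
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the bag-of-words vectors of two texts."""
    vec_a = Counter(re.findall(r"[a-z]+", text_a.lower()))
    vec_b = Counter(re.findall(r"[a-z]+", text_b.lower()))
    dot = sum(vec_a[word] * vec_b[word] for word in vec_a)
    norm = (math.sqrt(sum(c * c for c in vec_a.values()))
            * math.sqrt(sum(c * c for c in vec_b.values())))
    # Two texts with no overlapping vocabulary (or an empty text) score 0.0.
    return dot / norm if norm else 0.0

# Hypothetical excerpts standing in for two stakeholders' position papers.
print(cosine_similarity("Gatekeepers must provide precise information.",
                        "Precise information empowers simple user choices."))
```

Applied pairwise to position papers, such scores reveal which stakeholder groups talk about transparency in similar terms and which, as the paper's title suggests, talk at cross purposes.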