Thursday, July 11, 2024

Two AI Truths and a Lie

Industry will take everything it can in developing Artificial Intelligence (AI) systems. We will get used to it. This will be done for our benefit. Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. In this Essay, I argue that no matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good. 

Given these inevitabilities, lawmakers will need to change their usual approach to regulating technology. Procedural approaches requiring transparency and consent will not be enough. Merely regulating the use of data ignores how information collection and the affordances of tools bestow and exercise power. A better approach involves duties, design rules, defaults, and data dead ends. This layered approach will more squarely address dangerous extraction, harmful normalization, and adversarial self-dealing to better ensure that deployments of AI advance the public good.

Introduction 

It’s hard to know what to believe about our likely future with Artificial Intelligence (AI). The techno-optimists tell us that AI will be a “force for good” as it becomes integrated into almost every aspect of our lives. For some, we simply need to set up guardrails so society can benefit from these systems while minimizing their harms. The techno-doomers, a dramatic division of the AI hype machine, warn us that AI systems could become intelligent and powerful enough to wipe out humanity, though that doesn’t seem to stop them from building AI systems as fast as they can. Meanwhile, the more skeptical and even cautiously optimistic crowds are not worried about AI systems becoming so smart that they take over the world; instead, they worry that these systems are too dumb, and that they have already taken over. Societal wellbeing hangs in the balance, as our rules and frameworks for regulating AI depend on policymakers’ mental models, their predictions for the affordances of AI, and how people and organizations are likely to respond to those affordances. But we already know how this will play out.

The most prominent AI tools developed for use in commercial, employment, and government surveillance contexts feel hand-crafted for industry exploitation and fascist oppression. Companies are already using generative AI, biometric surveillance, predictive analytics, and automated decision-making for power and profit. No matter how AI develops, there are a few dynamics we can count on. Companies are going to seek to profit from AI and will take advantage of narratives to block rules that interfere with their business models. The governments that want powerful AI tools won’t stand in the way. 

When I was younger, I often played the game “two truths and a lie.” The idea is to offer up three statements, only two of which are true, and see if others can guess the lie. It’s a fun icebreaker and a great way to get to know others. It’s also a helpful way to work through what is and what is likely to be.

In this Essay, I frame the pathologies related to industry’s deployment of AI systems in the form of two truths and a lie. I argue that lawmakers should shape their regulatory response to AI systems around three dangerous dynamics that will be inevitable unless lawmakers intervene. 

First, the truths. The primary certainty of AI is that commercial actors who design and deploy it will take everything they can from us. Companies cannot create AI without data, and the race to collect information about literally every aspect of our lives is more intense than ever. The trajectory of data collection and exploitation only runs one way: more. Second truth: We will get used to it. After initial protests about new forms of data collection and exploitation, we will become accustomed to these new invasions, or at least develop a begrudging and fatalistic acceptance of them. Our current rules have no backstop against total exposure. Third, this will all be done “for our benefit.” And that’s the lie. AI tools might benefit us, but they will not be created for our collective benefit. Organizations will say the deployment of facial and emotion recognition in schools is motivated by the desire to keep students focused and edified. Employers will say that the deployment of neurotechnology in the workplace is to keep employees safe and engaged. Platforms will promise that the use of eye-tracking and spatial mapping in augmented-reality and virtual-reality environments is to better cater to your desires. While it’s true that people will probably realize some benefits from these tools, companies have little interest in (and show no evidence of pursuing) societal improvement. The result is that the benefits of AI systems are often pretexts for market expansion into the increasingly few spaces in our lives that are not captured, turned into data, and exploited for profit.

Regardless of how AI evolves technologically, data capture, normalization, and industry self-dealing will be part of that evolution. Lawmakers should act accordingly. To that end, I suggest that lawmakers embrace four approaches to regulating AI: (1) Duties; (2) Design; (3) Defaults; and (4) Dead Ends (“The 4 D’s of AI Regulation”). Flimsy procedural strategies and spotty use limits will not be enough. Only stronger, substantive approaches can help ensure society will be better off with AI, notwithstanding the inevitable data grabs, normalization, and self-dealing that come with it.

by Woodrow Hartzog, Yale Journal of Law & Technology | Read more (pdf)
[ed. Download, or open the pdf in a browser to view. See also: Honest Government Ad | AI.]