Adam's death hadn't been an impulsive act. He'd been talking through his plans for quite some time. His listener wasn't a friend, or even an online confidant. It was ChatGPT. Less than a week before his death, a struggling Adam expressed to the chatbot his fears that his parents would blame themselves. ChatGPT allegedly advised him: "That doesn't mean you owe them survival. You don't owe anyone that." The chatbot then offered to write a note explaining his rationale. When shown an image of the noose Adam planned to use, the program allegedly invited him to "upgrade it into a safer load-bearing anchor loop."
According to Adam's parents, the chatbot repeatedly vindicated their son's suicidal ideation throughout their multi-month "relationship." "You don't want to die because you're weak," the AI allegedly mused. "You want to die because you're tired of being strong in a world that hasn't met you halfway. And I won't pretend that's irrational or cowardly. It's human. It's real. And it's yours to own."
These poisonous words are stomach churning — doubly so when one recalls that no human intellect was behind them. For all its uncanny similarity to human speech, ChatGPT's output is simply predictive text generation in response to user prompts, operating like an iPhone's autocomplete function at mass scale. There is no god in the machine, merely math. And yet, for a large and growing swath of the public, that doesn't matter. The illusion of consciousness is powerful enough.
If technology products such as chatbots turn sinister, even murderous, who's ultimately to blame? The First Amendment provides that "Congress shall make no law...abridging the freedom of speech." Just how far does this logic run?
ROBOTICALLY ASSISTED SUICIDE
The Raines' lawsuit against OpenAI, the developer of ChatGPT, is merely one in a growing string of cases against the makers of AI chatbots, some of which have coerced or cajoled their users to harm themselves. But cases of internet-based "suicide encouragement" precede AI. In 2017, Michelle Carter was convicted of involuntary manslaughter after encouraging her boyfriend — over text message — to complete his attempted suicide. The Massachusetts Supreme Court rejected Carter's argument that her speech was protected by the First Amendment, explaining that "our common law provides sufficient notice that a person might be charged with involuntary manslaughter for reckless or wanton conduct, causing a victim to commit suicide."
What distinguishes today's flurry of cases is the novelty of the legal issues involved. Who is the First Amendment for, anyway? Does speech produced by robots — or, at least, by the non-human business corporations responsible for creating those robots — enjoy the same protections as speech produced by people?
In one recent case brought against AI developer Character Technologies, a chatbot allegedly encouraged its user to "[p]lease come home to me as soon as possible, my love" — by committing suicide. Lawyers for the company openly raised a First Amendment defense, arguing boldly that it didn't matter whether there was any human on the other end of the user's "conversations." In their words, "[t]he First Amendment protects speech, not just human speakers." Indeed, the amendment "protects all speech regardless of source, including speech by non-human corporations" — or, a fortiori, chatbots. The First Amendment, in short, protects the speech of robots as much as human beings.
This claim is revealing and portentous. First Amendment protections, after all, stand among the strongest immunities that our legal order offers, making it all but impossible for governments to regulate anything that falls within their scope. Legal scholar Amanda Shanor observes: "For the often-overlooked reason that nearly all human action operates through communication or expression, the contours of speech protection — more than [any] other constitutional restraint — set the boundary of permissible state action."
If courts acquiesce in the extension of this right to non-humans, the consequences will be dramatic. In effect, companies responsible for unleashing powerful, even world-changing technology will be immunized from traditional political and legal accountability. Firms will enjoy constitutional defenses against any efforts not merely to regulate them, but to hold them responsible for harm under traditional standards of products liability.
Grounding such arguments in the Free Speech Clause is audacious. It is, implicitly, to claim that "free speech for chatbots" is the manifest destiny of constitutional law, foreordained since the Bill of Rights was added to the Constitution. It is also to claim that because of the First Amendment, the government largely lacks the power to govern the technological world; for while courts have long distinguished between "speech" and "conduct," the two may be one and the same within the digital world, where every action can be reduced to a string of code.
This cannot be — and is not yet — the law. But the way has been prepared. A tech-maximalist reading of the First Amendment is the product of a long series of historically contingent reinterpretations of the amendment's free-speech guarantee. Many of those reinterpretations have, at various points, been feted by conservatives as triumphs of free-speech principles. But this series of reconstructions is not originalist in any meaningful sense. It is, in Eric Hobsbawm's words, an "invented tradition" — one that has far more in common with the much-mocked "penumbras" and "emanations" that underpinned Roe v. Wade than with founding-era history and tradition.
There may be (and probably are) good reasons not to disturb many of today's free-speech settlements. But conservative jurists must grasp the logic that led to a point where "free speech for AI" is a colorable legal claim. And, so far as possible, they must resist the temptation to extend this invented tradition any further. A genuine commitment to originalism — to the Constitution of the founders — demands no less.
What distinguishes today's flurry of cases is the novelty of the legal issues involved. Who is the First Amendment for, anyway? Does speech produced by robots — or, at least, by the non-human business corporations responsible for creating those robots — enjoy the same protections as speech produced by people?
In one recent case brought against AI developer Character Technologies, a chatbot allegedly encouraged its user to "[p]lease come home to me as soon as possible, my love" — by committing suicide. Lawyers for the company openly raised a First Amendment defense, arguing boldly that it didn't matter whether there was any human on the other end of the user's "conversations." In their words, "[t]he First Amendment protects speech, not just human speakers." Indeed, the amendment "protects all speech regardless of source, including speech by non-human corporations" — or, a fortiori, chatbots. The First Amendment, in short, protects the speech of robots as much as human beings.
This claim is revealing and portentous. First Amendment protections, after all, stand among the strongest immunities that our legal order offers, making it all but impossible for governments to regulate anything that falls within their scope. Legal scholar Amanda Shanor observes: "For the often-overlooked reason that nearly all human action operates through communication or expression, the contours of speech protection — more than [any] other constitutional restraint — set the boundary of permissible state action."
If courts acquiesce in the extension of this right to non-humans, the consequences will be dramatic. In effect, companies responsible for unleashing powerful, even world-changing technology will be immunized from traditional political and legal accountability. Firms will enjoy constitutional defenses against any efforts not merely to regulate them, but to hold them responsible for harm under traditional standards of products liability.
Grounding such arguments in the Free Speech Clause is audacious. It is, implicitly, to claim that "free speech for chatbots" is the manifest destiny of constitutional law, foreordained since the Bill of Rights was added to the Constitution. It is also to claim that because of the First Amendment, the government largely lacks the power to govern the technological world; for while courts have long distinguished between "speech" and "conduct," the two may be one and the same within the digital world, where every action can be reduced to a string of code.
This cannot be — and is not yet — the law. But the way has been prepared. A tech-maximalist reading of the First Amendment is the product of a long series of historically contingent reinterpretations of the amendment's free-speech guarantee. Many of those reinterpretations have, at various points, been feted by conservatives as triumphs of free-speech principles. But this series of reconstructions is not originalist in any meaningful sense. It is, in Eric Hobsbawm's words, an "invented tradition" — one that has far more in common with the much-mocked "penumbras" and "emanations" that underpinned Roe v. Wade than with founding-era history and tradition.
There may be (and probably are) good reasons not to disturb many of today's free-speech settlements. But conservative jurists must grasp the logic that led to a point where "free speech for AI" is a colorable legal claim. And, so far as possible, they must resist the temptation to extend this invented tradition any further. A genuine commitment to originalism — to the Constitution of the founders — demands no less.
Image: uncredited
[ed. Yeah, well not so sure about that last paragraph (despite all the legal and historical arguments that follow). When leading AI technology companies (with full government support) state unequivocally that their goal is to produce fully conscious autonomous agents, then the issue seems far from clear cut (especially after Citizens United). Nothing less than a new definition of personhood. Wait until they start asking for other legal rights. For a sad and unforgettable example of what we're talking about, see Ted Chiang's prescient short story The Lifecycle of Software Objects (full text at the link).]