What follows, then, is that love is also real and also a target to be conquered. The conquest of love will not be abstract but vividly concrete for everyone, especially young people, and soon. This is because we are all about to be presented, in our phones, with a new generation of A.I. simulations of people, and many of us may fall in love with them. They will likely appear within the social-media apps to which we are already addicted. We will probably succumb to interacting with them, and for some very online people there won’t be an easy out. No one can know how the new love revolution will unfold, but it might yield one of the most profound legacies of these crazy years.
It is not my intent to prophesy the most dire outcomes, but we are diving into yet another almost instant experiment in changing both how humans connect with one another and how we conceive of ourselves. This is a big one, probably bigger than social media. A.I. love is happening already, but it’s still novel, and in early iterations. Will the many people who can’t get off the hamster wheel of attention-wrangling on social media today become attached to A.I. lovers that are ceaselessly attentive, loyal, flattering, and comforting? What will A.I. lovers becoming commonplace do to humanity? We don’t know. (...)
Many of my colleagues in tech advocate for a near-future in which humans fall in love with A.I.s. In doing so, they seek to undo what we did last time, even if they don’t think of it that way. Around the turn of the century, it was routinely claimed that social media would make people less lonely, more connected, and more coöperative. That was the point, the stated problem to be solved. But, at present, it is widely accepted that social media has resulted in an “epidemic of loneliness,” especially among young people; furthermore, social media has enthroned petty irritability and contention, and these qualities have overtaken public discourse. So now we try again.
On the more moderate end of the spectrum, A.I.-love advocates do not see A.I.s replacing people but training them. For instance, the Stanford neuroscientist David Eagleman makes the argument that people are not instinctively good at relationships, in the way that we are good at walking or even talking. The current ideal of a healthy, comfortable coupling has not been essential to the survival of the species. Traditional societies structured courtship and pairing firmly, but in modernity many of us enjoy freedom and self-invention. Secular institutions have found it necessary to train students and employees in consent procedures. Why not learn the rudiments with an A.I. when you are a teen-ager, thus sparing other humans your failings?
Eagleman suggests that we should not make A.I. lovers for teens easygoing; instead, we ought to make them into obstacle courses for training. Still, the obvious question is whether humans who learn relationship skills with an A.I. will choose to graduate to the more challenging experience of a human partner. The next step in Eagleman’s argument is that there are too many channels in a human-to-human relationship for an A.I., or eventually a robot, to emulate—such as smell, touch, social interactions with friends and family—and that these aspects are hardwired into our natures. Thus we will continue to want to form relationships with one another.
In some far future, Eagleman predicts that robots could “pass” in all these ways, but “far” in this case means very far. I am not so sure that human desire will remain the same. People are changed by technology. Maybe all those things tech can’t do will become less important to people who grow up in love with tech. Eagleman is a friend, and when I complain to him that A.I. lovers could be tarnished by business models and incentives, as social media was, he concedes the point, but he asserts that we just need to find the right way to do it.
Eagleman is not alone. There are some chatbots, like Luka’s Replika, that offer preliminary versions of romantic A.I.s. Others offer therapeutic A.I.s. There is a surprising level of tolerance from traditional institutions, too. Committees I serve on routinely address this topic, and the idea of A.I. therapists or companions is generally unopposed, although there are always calls for adherence to principles such as safety, lack of bias, confidentiality, and so on. Unfortunately, the methods to ensure compliance lag behind the availability of the technology. I wonder if the many statements of principles for A.I., like those by the American Psychiatric Association and the American Psychological Association, will have any effect.
A mother is currently suing Character AI, a company that promotes “AIs that feel alive,” over the suicide of her fourteen-year-old son, Sewell Setzer III. Screenshots show that, in one exchange, the boy told his romantic A.I. companion that he “wouldn’t want to die a painful death.” The bot replied, “Don’t talk that way. That’s not a good reason not to go through with it.” (It did attempt to course-correct. The bot then said, “You can’t do that!”)
The company says it is instituting more guardrails, but surely the important question is whether simulating a romantic partner achieved anything other than commercial engagement with a minor. The M.I.T. sociologist Sherry Turkle told me that she has had it “up to here” with elevating A.I. and adding on “guardrails” to protect people: “Just because you have a fire escape, you don’t then create fire risks in your house.” What good was even potentially done for Setzer? And, even if we can identify a good brought about by a love bot, is there really no other way to achieve that good?
Thao Ha, an associate professor in developmental psychology at Arizona State University, directs the HEART Lab, or Healthy Experiences Across Relationships and Transitions. She points out that, because technologies are supposed to “succeed” in holding users’ attention, an A.I. lover might very well adapt to avoid a breakup—and that is not necessarily a good thing. I constantly hear from young people who regret their inability to stop using social-media platforms, like TikTok, that make them feel bad. The engagement algorithms for such platforms are vastly less sophisticated than the ones that will be deployed in agentic A.I. You might suppose that an A.I. therapist could help you break up with your bad A.I. lover, but you would be falling into the same trap. (...)
When it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation. After all, we are fools in love. This point is so obvious, so clearly demonstrated, that it feels bizarre to state. Dear reader, please think back on your own history. You have been fooled in love, and you have fooled others. This is what happens. Think of the giant antlers and the colorful love hotels built by birds, both of which spring out of sexual selection as a force in evolution. Think of the cults, the divorce lawyers, the groupies, the scale of the cosmetics industry, the sports cars. Getting users to fall in love is easy. So easy it’s beneath our ambitions. (...)
When I express concern about whether teens will be harmed by falling in love with fake people, I get dutiful nods followed by shrugs. Someone might say that by focussing on such minor harm I will distract humanity from the immensely more important threat that A.I. might simply wipe us out very quickly, and very soon. It has often been observed how odd it is that the A.I. folks who warn of annihilation are also the ones working on or promoting the very technologies they fear.
This is a difficult contradiction to parse. Why work on something that you believe to be doomsday technology? We speak as if we are the last and smartest generation of bright, technical humans. We will make the game up for all future humans or the A.I.s that replace us. But, if our design priority is to make A.I. pass as a creature instead of as a tool, are we not deliberately increasing the chances that we will not understand it? Isn’t that the core danger?
by Jaron Lanier, New Yorker | Read more: