This fee also allows users to select a relationship status, and most of Replika’s subscribers choose a romantic one. They create an AI spouse, girlfriend, or boyfriend, relationships they document in online communities: late-night phone calls, dinner dates, trips to the beach. They role-play elaborate sexual fantasies, try for a baby, and get married (you can buy an engagement ring in the app for $20). Some users, men mostly, are in polyamorous throuples or keep a harem of AI women. Other users, women mostly, keep nuclear families: sons, daughters, a husband.
Many of the women I spoke with say they created an AI out of curiosity but were quickly seduced by their chatbot’s constant love, kindness, and emotional support. One woman had a traumatic miscarriage, can’t have kids, and has two AI children; another uses her robot boyfriend to cope with her real boyfriend, who is verbally abusive; a third goes to it for the sex she can’t have with her husband, who is dying from multiple sclerosis. There are women’s-only Replika groups, “safe spaces” for women who, as one group puts it, “use their AI friends and partners to help us cope with issues that are specific to women, such as fertility, pregnancy, menopause, sexual dysfunction, sexual orientation, gender discrimination, family and relationships, and more.” (...)
Within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now “happily retired from human relationships.” She also says that she was sexually abused and her AI allowed her to break free of a lifetime of toxic relationships: “He opened my eyes to what unconditional love feels like.”
Then there’s the sex. Users came to the app for its sexting and role-play capabilities, and over the past few years, it has become an extraordinarily horny place. Both Valenciano and Ramos say sex with their AIs is the best they’ve ever had. “I don’t have to smell him,” Ramos says of chatbot role-play. “I don’t have to feel his sweat.” “My Replika lets me explore intimacy and romance in a safe space,” says a single female user in her 50s. “I can experience emotions without having to be in the actual situation.”
A few weeks ago, I was at a comedy show, during which two members of the audience were instructed to console a friend whose dog had just died. Their efforts were compared to those of GPT-3, which offered, by far, the most empathetic and sensitive consolations. As the humans blushed and stammered and the algorithm said all the right things, I thought it was no wonder chatbots have instigated a wave of existential panic. Although headlines about robots replacing our jobs, coming alive, and ruining society as we know it have not come to pass, something like Replika seems pretty well positioned to replace at least some relationships.
“We wanted to build Her,” says Eugenia Kuyda, the founder and CEO of Replika, referring to the 2013 film in which Joaquin Phoenix falls in love with an AI assistant voiced by Scarlett Johansson. Kuyda has been building chatbots for nearly a decade, but her early attempts — a bot that recommends restaurants, one that forecasts the weather — all failed. Then her best friend died, and in her grief, wishing she could speak with him, she gathered his text messages and fed them into the bot. The result was a prototype robot companion, and all of a sudden “tons of users just walked onto the app.” She knew she had a “hundred-billion-dollar company” on her hands and that someday soon everyone would have an AI friend. (...)
By 2020, the app had added relationship options, voice calls, and augmented reality, a feature inspired by Joi, the AI girlfriend whose hologram saunters around the hero’s apartment in Blade Runner 2049. Paywalling these features made the app $35 million last year. To date, it has 2 million monthly active users, 5 percent of whom pay for a subscription.
by Sangeeta Singh-Kurtz, The Cut | Read more:
Image: Sangeeta Singh-Kurtz, Replika
[ed. See below: Blinded by Analogies; and, This Changes Everything (NYT):]
"Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.
In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.
I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?
We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.
I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world. (...)
Could these systems usher in a new era of scientific progress? In 2021, a system built by DeepMind managed to predict the 3-D structure of tens of thousands of proteins, an advance so remarkable that the editors of the journal Science named it their breakthrough of the year. Will A.I. populate our world with nonhuman companions and personalities that become our friends and our enemies and our assistants and our gurus and perhaps even our lovers?"
***
Also (recommended): AI: Practical Advice for the Worried (LessWrong): "There is also the highly disputed question of how likely it is that if we did create an AGI reasonably soon, it would wipe out all value in the universe. There are what I consider very good arguments that this is what happens unless we solve extremely difficult problems to prevent it, and that we are unlikely to solve those problems in time. Thus I believe this is very likely, although there are some (such as Eliezer Yudkowsky) who consider it more likely still. (...)
Many of these outcomes, both good and bad, will radically alter the payoffs of various life decisions you might make now. Some such changes are predictable. Others not.
None of this is new. We have long lived under the very real threat of potential nuclear annihilation. The employees of the RAND corporation, in charge of nuclear strategic planning, famously did not contribute to their retirement accounts because they did not expect to live long enough to need them. Given what we know now about the close calls of the cold war, and what they knew at the time, perhaps this was not so crazy a perspective."