Wednesday, January 28, 2026

On the Falsehoods of a Frictionless Relationship


To love is to be human. Or is it? As human-chatbot relationships become more common, the Times Opinion culture editor Nadja Spiegelman talks to the psychotherapist Esther Perel about what really defines human connection, and what we’re seeking when we look to satisfy our emotional needs on our phones.

Spiegelman: ...I’m curious about how you feel, in general, about people building relationships with A.I. Are these relationships potentially healthy? Is there a possibility for a relationship with an A.I. to be healthy?

Perel: Maybe before we answer it in this yes-or-no, healthy-or-unhealthy way: I’ve been trying to think to myself that how you define relationships will color your answer about what it means when it’s between a human and A.I.

But first, we need to define what goes on in relationships or what goes on in love. The majority of the time when we talk about love in A.I. or intimacy in A.I., we talk about it as feelings. But love is more than feelings.

Love is an encounter. It is an encounter that involves ethical demands, responsibility, and that is embodied. That embodiment means that there is physical contact, gestures, rhythms, gaze, frottement (friction). There’s a whole range of physical experiences that are part of this relationship.

Can we fall in love with ideas? Yes. Do we fall in love with pets? Absolutely. Do children fall in love with teddy bears? Of course. We can fall in love and we can have feelings for all kinds of things.

That doesn’t mean that it is a relationship that we can call love. Love is an encounter with uncertainty, and A.I. takes care of that. Just about all the major pieces that enter relationships, the algorithm is trying to eliminate — otherness, uncertainty, suffering, the potential for breakup, ambiguity. The things that demand effort.

Whereas the love model that people idealize with A.I. is a model that is pliant: agreements and effortless pleasure and easy feelings.

Spiegelman: I think that’s so interesting — and exactly also where I was hoping this conversation would go — that in thinking about whether or not we can love A.I., we have to think about what it means to love. In the same way we ask ourselves if A.I. is conscious, we have to ask ourselves what it means to be conscious.

These questions bring up so much about what is fundamentally human about us, not just the question of what can or cannot be replicated.

Perel: For example, I heard this very interesting conversation about A.I. as a spiritual mediator of faith. We turn to A.I. with existential questions: Shall I try to prolong the life of my mother? Shall I stop the machines? What is the purpose of my life? How do I feel about death?

This is extraordinary. We are no longer turning to faith healers; we are turning to these machines for answers. But they have no moral culpability. They have no responsibility for their answer.

If I’m a teacher and you ask me a question, I have a responsibility in what you do with the answer to your question. I’m implicated.

A.I. is not implicated. And from that moment on, it eliminates the ethical dimension of a relationship. When people talk about relationships these days, they emphasize empathy, courage, vulnerability, probably more than anything else. They rarely use the words accountability and responsibility and ethics. That adds a whole other dimension to relationships that is a lot more mature than the more regressive states of “What do you offer me?”

Spiegelman: I don’t disagree with you, but I’m going to play devil’s advocate. I would say that the people who create these chatbots very intentionally try to build in ethics — at least insofar as they have guardrails around trying to make sure that the people who are becoming intimately reliant on this technology aren’t harmed by it.

That’s a sense of ethics that comes not from the A.I. itself, but from its programmers — that guides people away from conversations that might be racist or homophobic, that tries to guide people toward healthy solutions in their lives. Does that not count if it’s programmed in?

Perel: I think the “programming in” is the last thing to be programmed.

I think that if you make this machine speak with people in other parts of the world, you will begin to see how biased they are. That’s one thing we should really remember: this is a business product.

When you say you have fallen in love with A.I., you have fallen in love with a business product. That business product is not here to just teach you how to fall in love and how to develop deeper feelings of love and then how to transmit them and transport them onto other people as a mediator, a transitional object.

Children play with their little stuffed animal and then they bring their learning from that relationship onto humans. The business model is meant to keep you there. Not to have you go elsewhere. It’s not meant to create an encounter with other people.

So, you can tell me about guardrails around the darkest corners of this. But fundamentally, you are in love with a business product whose intentions and incentives are to keep you interacting only with them — except they forget everything and you have to reset them.

Then you suddenly realize that they don’t have a shared memory with you, that the shared experience is programmed. Then, of course, you can buy the next subscription and then the memory will be longer. But you are having an intimate relationship with a business product.

We have to remember that. It helps.

Spiegelman: That’s so interesting.

Perel: That’s the guardrail...

Spiegelman: Yeah. This is so crucial, the fact that A.I. is a business product. They’re being marketed as something that’s going to replace the labor force, but what they’re incredibly good at isn’t necessarily problem-solving in a way that can replace someone’s job yet.

Instead, they’re forming these very intense, deep human connections with people, which doesn’t even necessarily seem like what they were first designed to do — but just happens to be something that they’re incredibly good at. Given all these people who say they’re falling in love with them, do you think that these companions highlight our human yearning? Are we learning something about our desires for validation, for presence, for being understood? Or are they reshaping those yearnings for us in ways that we don’t understand yet?

Perel: Both. You asked me if I use A.I. — it’s a phenomenal tool. I think people begin to have a discussion when they ask: How does A.I. help us think more deeply about what is essentially human? In that way, I look at the relationship between people and the bot, but also how the bot is changing our expectations of relationships between people.

That is the most important piece, because the frictionless relationship that you have with the bot is fundamentally changing something in what we can tolerate in terms of experimentation, experience with the unknown, tolerance of uncertainty, conflict management — stuff that is part of relationships.

There is a clear sense that people are turning to A.I. with questions of love — or quests of love, more importantly — longings for love and intimacy, either because it’s an alternative to what they actually would want with a human being or because they bring to it a false vision of an idealized relationship — an idealized intimacy that is frictionless, that is effortless, that is kind, loving and reparative for many people...

Then you go and you meet a human being, and that person is not nearly as unconditional. That person has their own needs, their own longings, their own yearnings, their own objections, and you have zero preparation for that.

So, does A.I. inform us about what we are seeking? Yes. Does A.I. amplify the lack of what we are seeking? Yes. And does A.I. sometimes actually meet the need? All of it.

But it is a subjective experience, the fact that you feel certain things. That’s the next question: Because you feel it, does that make it real and true?

We have always understood phenomenology as, “It is my subjective experience, and that’s what makes it true.” But that doesn’t mean it is true.

We are so quick to want to say, because I feel close and loved and intimate, that it is love. And that is a question. (...)

Spiegelman: This is one of your fundamental ideas that has been so meaningful for me in my own life: That desire is a function of not knowing, of tolerating mystery in the other, that there has to be separation between yourself and the other to really feel eros and love. And it seems like what you’re saying is that with an A.I., there just simply isn’t the otherness.

Perel: Well, it’s also that mystery is often perceived as a bug, rather than as a feature.

by Esther Perel and Nadja Spiegelman, NY Times | Read more:
Video: Cartoontopia/Futurama