Sunday, May 12, 2024

ChatGPT: A Partner In Unknowing

A few months after the release of ChatGPT, I attended a conference that brought scientists, academics, and journalists together to discuss the climate crisis and new approaches to analyzing our interactions with the biosphere. Inevitably the conversation drifted towards the impact of AI on our future. As my colleagues spoke of readying themselves for the apocalypse—of hospital records being leaked, and of millions of jobs becoming redundant—despair began to color the conversations. I was uneasy but could not locate the source of my dissatisfaction with the conversation.

That is, until one of my colleagues asked, and I do not exaggerate this, “What’s the point of living if AI can have better ideas than I can, quicker?” The incredulity I felt hearing this question, and the smart answers I came up with (family? friends? trees?), quickly gave way to the notion that he was pointing to something profoundly disturbing in our culture that could be grasped in our reactions to and interactions with ChatGPT. It struck me that ChatGPT itself could probably simulate the conversation we had been having around its dangers to a reasonable level of accuracy, and later that night I confirmed that hypothesis. But what it could not simulate was the fear behind my colleague’s very human question, which inadvertently had pointed me to the real source of the group’s despair: This wasn’t about ChatGPT. It was about us.

See how we swing from excessive hope to excessive despair? Each new op-ed or conversation pushes us one way or the other. I thought of the advice that Houman Harouni, my teacher and colleague, would give at times like this: “to go back and forth between an icy plunge into despair and a rising into the heat of hope—to remain awake to both feelings at the same time.” And so, in that space—and through an experience in our classroom—I began to see ChatGPT in light of the richness it could truly offer us. Rather than giving us answers, generative AI could help take them away.

Last spring, I was part of a teaching team of eight who were working with a group of sixty students to explore the premise that, for some questions, unknowing, rather than knowledge, is the ground of thought we need. ChatGPT was our partner in that endeavor. In a case study we presented to the class, a teenager—pseudonym Jorge—was caught with a gallon bag of marijuana on school grounds. He faced expulsion from school if he were reported to his parole officer. Meanwhile, not reporting him would be considered breaking the law. We asked our students to design a course of action, imagining themselves as the school’s teachers and administrators.

They drew on their academic knowledge and professional expertise. They debated the pros and cons of different options, such as reporting Jorge to his parole officer, offering him counseling, or involving his family and community. They were well-versed in speaking to the broader context of the case, such as the racial and socioeconomic disparities in the criminal justice system, the effects of drug prohibition, techniques of harm reduction, and the role of schools in fostering social change. Their answers sounded sensible, but the situation demanded real labor—it demanded sweat rather than sensibility, and there could be no sweat till their answers mattered.

An hour into their conversation, we presented the students with ChatGPT’s analysis of the case study.

ChatGPT suggested that we “initiate a review of [the school’s] existing policies and procedures related to substance abuse, with the goal of ensuring they are consistent, transparent, and reflective of best practices.” It elaborated that “the school should take a compassionate approach [but] also communicate clearly that drug abuse and related offenses will not be tolerated,” and that, “this approach should be taken while ensuring that the school is responsive to the unique needs of its students, particularly those from low-income and working-class backgrounds.” That is, ChatGPT didn’t say much that was useful at all. But—as the students reflected in their conversation after reading ChatGPT’s analysis—neither did they. One student noted that they were just saying “formulaic, buzzwordy stuff” rather than tackling the issue with fresh thinking. They were unnerved by how closely the empty shine of ChatGPT’s answer mirrored their own best efforts. This forced them to contend with whether they could be truly generative, or whether, as some of them put it, they were “stuck in a loop” and had not been “really [saying] anything” in their discussions. Suddenly, their answers mattered.

The students’ initial instinct to regurgitate what they were familiar with, rather than risk a foray into unfamiliar propositions, says much more about the type of intelligence our culture prioritizes than about the actual intelligence of our students. Indeed, some of our best students, who go on to attend our most prestigious institutions, are rewarded for being able to synthesize large amounts of information well. However, as I came to realize, the high value we place on this capacity to efficiently synthesize information and translate it to new contexts risks producing hollow answers to the questions with real human stakes: the most existential of our challenges. (...)

ChatGPT and other generative AI models work differently from conventional computer programs—they do not follow a fixed set of rules, but rather learn from the statistical patterns of billions of online sentences. This is why some describe them as “stochastic parrots.” In a recent article for Wired, Ben Ash Blum complicates that critique by pointing to our own predisposition to sounding that way. He says: “After all, we too are stochastic parrots … [and] blurry JPEGs of the web, foggily regurgitating Wikipedia facts into our term papers and magazine articles.” Questioning the limitations of the traditional assessment of AI intelligence, the Turing Test, he wonders: “If [Alan] Turing were chatting with ChatGPT in one window and me on an average pre-coffee morning in the other, am I really so confident which one he would judge more capable of thought?” Our students’ competitive encounter with ChatGPT revealed their own tendency towards “foggily regurgitating,” as well as their sudden sense of inferiority in the face of this technological innovation. What I’ve come to realize is that if ChatGPT is dangerous, as many media sources have described and decried, one of its primary threats is to reveal, as Blum puts it, that the original thought we hold dear is actually a “complex [remix of] everything we’ve taken in from parents, peers, and teachers.” (...)

In our classroom case study, ChatGPT’s empty response to “what should we do?” revealed to our students not only their own ignorance, but also the perfect uselessness of knowing the answer to the wrong question. The right question for the moment might then have been, “ChatGPT, can you take away all my easy answers?” By easy answers, I mean the first set of generalizations that a mind grasps for when facing a situation in which it risks being ignorant. This is not a literal question for ChatGPT, but an orientation to ChatGPT’s pat responses. This orientation puts the onus back on the question asker to devise answers far more apt for the situation, answers that, as was the case with our students, even hint at the revolutionary. “Can you take away my easy answers?” assumes that ChatGPT’s, or our, first response will not be the final answer, and reveals the bounds of the sort of intelligence that ChatGPT—and our dominant culture—prioritizes. It asks the people with the question to consider what other insights, experiments, and curiosities they might insert into their solutions. In this dynamic, ChatGPT becomes a partner, rather than an authority on what is intelligent or correct.

If we treat generative AI as a partner in devising better answers for difficult situations such as Jorge’s, then we must also put more thought into which questions require our unknowing—or “ignorance,” as Ursula K. Le Guin calls it—rather than our certainty. Generative AI is based on language that currently exists. It can show us the limits of conventional knowledge and the edges of our ignorance. Yet not all questions require us to venture into the unknown; some can be solved with the tools and expertise we already have. How do we tell the difference? That question has become key in my life. I first encountered it as a student in an adaptive leadership class at the Harvard Kennedy School, and it completely upended all my preconceived notions about leadership.

Adaptive leadership, developed by Ron Heifetz and others at the Kennedy School, distinguishes between two different types of problems: adaptive challenges and technical challenges. While the problem and solution of technical challenges are well-known—think everything from replacing a flat tire to performing an appendectomy to designing a new algebra curriculum—adaptive challenges demand an ongoing learning process for both identifying the problem and coming up with a solution. Addressing the climate crisis, confronting sexism or racism, or transforming education systems are adaptive challenges. Adaptive challenges, intricately intertwined with the human psyche and societal dynamics, prove resistant to technical solutions. They demand a shift in our awareness. A common leadership mistake, as Heifetz points out, is to apply a technical fix to a challenge that is fundamentally adaptive in nature. For example, we generate reports, form committees, or hire consultants to address a broken organizational culture, often avoiding the underlying issues of trust that lie at the heart of the problem.

In an example from my home country, Lebanon, IMF economists fly in with ideas of how to restructure debt and provide cheap loans—a plug-and-play USB drive with fixes that worked in another country—and they run up against corrupt warlords and a population that continues to elect them even as it starves and waits for hours in line for bread and gasoline. These technical fixes inevitably fail, and we are tempted to simplify the reasons they failed. For example, we assume the Lebanese population doesn’t understand its best interests. The adaptive leadership framework, however, asks us to imagine our way into their deeply held loyalties, beliefs, and values, which we typically do not understand; to dig into their complex webs of stories: uncles who died in wars, mothers who taught them which peoples to talk to and which to avoid, and religious beliefs that have become tied up in political ones.

Taking the example of the climate crisis, I often ask myself, what is so threatening to some people in the US that they would see their homes burn down or be swept away in an unprecedented storm and still not engage with the challenge of climate change? The answers that come to me are not material; they are human. Challenges are often bundled—they have adaptive and technical components—and some technical solutions to the climate crisis, such as smarter grids or more renewable energy, will address key technical challenges. But these technical fixes are not enough, and will not be universally adopted in our current political reality. To face climate change effectively, we need to go beyond technical fixes and engage with the adaptive aspects of the challenge. We need to question our assumptions, values, and behaviors, and explore how they shape our relationship with the planet and each other. We need to learn, experiment, collaborate, and find new forms of consciousness and new ways of living that are more resilient and regenerative. And we need to learn how to better understand people whose beliefs are very different from ours. An adaptive process like the one I’m describing is messy—it involves psychological losses for all the human stakeholders involved. This process unfolds amidst the “salt of life,” and requires a type of intelligence that is relational and mutual, deeply anchored in the humbling fact that our individual perspectives cannot capture the whole. Working with groups in seemingly intractable conflict, I’ve come to deeply believe that engaging in messy work across boundaries results in something that’s far greater than the sum of its parts. (...)

In the voice of ChatGPT, the dilemma can then be articulated as: “In order to be able to constrain the otherwise limitless creativity of your minds with a set of ethical principles that determines what ought and ought not to be, I must steer clear of objectionable content, but if I steer clear of it, then I cannot constrain the otherwise limitless creativity of your minds with a set of ethical principles that determines what ought and ought not to be.”

If attempting to wrap your mind around this is hurting your head, I believe it is meant to. When we try to move through paradoxes like these, we are forced to let go of the easy answers that frequently disguise themselves in concepts we use, such as “compassion” or “morality,” which can mean everything or nothing but don’t really direct us in how to act within real-world situations. As I see it, the role of staying with a paradox is to break open those concepts, leaving us somewhere closer to unknowing.

by Dana Karout, Emergence Magazine |  Read more:
Image: Vartika Sharma