Thursday, June 27, 2024

An Age of Hyperabundance

I was in a room of men. Every man was over-groomed: checked shirt, cologne behind the ears, deluxe beard or clean-shaven jaw. Their conversations bounced around me in jolly rat-a-tats, but the argot evaded interpretation. All I made out were acronyms and discerning grunts, backslaps, a mannered nonchalance.

I was at the Chattanooga Convention Center for Project Voice, a major gathering for software developers, venture capitalists, and entrepreneurs in conversational AI. The conference, now in its eighth year, was run by Bradley Metrock, an uncommonly tall man with rousing frat-boy energy who is, per his professional bio, a “leading thought leader” in voice tech. “I’m a conservative guy!” he said to me on a Zoom call some weeks prior. “I was like, ‘What kind of magazine is this? Seems pretty out there.’”

The magazine in question was this one. Bradley had read my essay “HUMAN_FALLBACK” in n+1’s Winter 2022 issue, in which I described my year impersonating a chatbot for a real estate start-up. A lonely year, a depressing charade; it had made an impression on Bradley. He asked if I’d attend Project Voice as the “honorary contrarian speaker,” a title bestowed each year on a public figure, often a journalist, who has expressed objections to conversational AI. As part of my contrarian duties, I was to close out the conference with a thirty-minute speech to an audience of five hundred — a sort of valedictory of grievances, I gathered.

So that what? So that no one could accuse the AI pioneers of ignoring existential threats to culture? To facilitate a brief moment of self-flagellation before everyone hit the bars? I wasn’t sure, but I sensed my presence had less to do with balance and more to do with sport. Bradley kept using the word “exciting.” A few years ago, he said, the contrarian speaker stormed onstage, visibly irate. As she railed against the wickedness of the Echo Dot Kids, Amazon’s voice assistant for children, a row of Amazon executives walked out. Major sponsors! That, said Bradley, was very exciting.

I wondered if I should be offended by my contrarian designation, which positioned AI as the de facto orthodoxy and framed any argument I could make as the inevitable expression of my antagonistic pathology. The more I thought about it, the more I became convinced I was being set up for failure. Recent discussion of conversational AI has tended to treat the technology as a monolithic force synonymous with ChatGPT, capable of both cultural upheaval and benign comedy. But conversational AI encloses a vast, teeming domain. The term refers to any technology you can talk to the way you would talk to a person, and also includes any software that uses large language models to modify, translate, interpret, or forge written or spoken words. The field is motley and prodigious, with countless companies speculating in their own little corners. There are companies that make telemarketing tools, navigation systems, speech-to-text software for medical offices, psychotherapy chatbots, and essay-writing aids; there are conversational banking apps, avatars that take food orders, and virtual assistants for every industry under the sun; there are companies cloning celebrity voices so that an American actor can, for example, film a commercial in Dutch. The field is so crowded and the hype is so loud that to offset a three-day circus with thirty minutes of counterpoint is to practically coerce the critic into abstractions. Still, I accepted the invitation for the same reason I took the job with the real estate start-up: it was a paid opportunity and seemed like something I could write about.

On the first afternoon of the conference I took a lap around the floor and tried to make sense of what I was seeing. Tech companies had arrived with their sundries: bowls of wrapped candies, ballpoint pens, PopSockets, and other bribes; brochures fanned on tables; iPads with demos at the ready. The graphics, curiously alike across the displays, were a combination of Y2K screen saver abstractions and the McGraw Hill visual tradition. Many companies had erected tall, vertical banners adorned with hot-air balloons, city skylines at dusk, dark-haired women on call-center headsets, and circular flowcharts with no discernible content. If conversational AI had a heraldic color, that color would be blue — a dusty Egyptian blue, chaste and masculine, more Windows 2000 than Giotto. It’s a tedious no-color, the color of abdicating choice, and on the exhibition floor it was ubiquitous in calm, flat abysses backgrounding white text.
***
The only booth that stood out was at the far end of the exhibition hall. A company had tented its little patch of real estate with an inflatable white cube that looked like a large, quivering marshmallow. Inside the cube was Keith, a soft-spoken man whose earnest features and round physique conveyed a gnome-like benevolence. Beside Keith was a large screen. On the screen was a woman. The woman had dark hair, dark eyes, and purple lips that endeavored a smile. Her shoulders rose and fell, as if to suggest the act of breathing, and though she looked toward me, her gaze was elsewhere.

“This is Chatty,” Keith shouted over the roar of the blowers keeping his enclosure erect.

Keith worked for SapientX, a company that makes photorealistic conversational avatars powered by ChatGPT. SapientX had custom-built Chatty for Project Voice. Chatty could answer questions about the conference agenda and show you a map of the exhibition floor, except she couldn’t do it just then, said Keith, because they couldn’t seem to connect her to the wi-fi.

Keith was happy enough to walk me through the visuals. Chatty’s face was the collaborative effort of fifty different companies. A company in Toronto did the eyes. “There’s like eight guys and all they do is eyes all day,” he said.

Chatty’s face was a composite of several different races. Her voice was a composite of several different women. Her voice still needed some work, he admitted. “Right now she’s kinda mean.”

I picked up a brochure that featured a roster of “digital employees,” complete with their names, headshots, and “personality scores.” I wondered what industries might hire them.

“They’re mostly for kiosks,” Keith responded with a tone of defeat. “Like at a mall or a museum. Also military training. Stuff like that.”

Keith directed my attention to the exterior of the cube. A large banner depicted an older male, prosaically handsome, with a square jaw, a custardy dollop of silver hair, and pale, limpid eyes. This was Chief, said Keith. “He’s a navy guy. And he talks like a navy guy. We work in forty different languages. So if you’re training someone in Ukraine how to operate an American tool, we have that language built in.”

Keith went back inside to rustle me up a T-shirt. He told me that the company was also breaking into health care — nursing homes, to be precise. Keith explained the vision. Your mom is old, and you’re constantly reminding her to take her medicine. Why not leave that to an avatar? The avatar can converse with your mom, keep her company, fill up the idle hours of the day. Plus, you can incorporate a retina scanner to check her blood pressure and a motion sensor to make sure she isn’t lying dead on the floor.

“Say there’s an elderly woman with dementia,” he said. “Her avatar will look like she did when she was younger. So she has someone to identify with. Does that make sense?”

I imagined a future geriatric Keith, lying in a nursing home bed, conversing with his younger self. Would such an arrangement appeal to him?

“There’s not going to be a choice,” he said. “A lot of old people are going to be talking to avatars in ten years, and they won’t even know it. When I was touring facilities in San Francisco for people with dementia and stuff, those places are like insane asylums. But some patients still have some cognitive function, and that’s who the technology would be for. It’s definitely not going to apply to the guys that are comatose.”

We stood in silence for a moment, and he faced Chatty, who hovered before us, drifting in her strange, waking trance.

“I wish they could fix the internet,” said Keith. “I swear, she gets nasty. She like, looks at me bad.”

by Laura Preston, N+1 |  Read more:
Image: Dana Lok, Typist. 2023. Miguel Abreu Gallery, New York.

Ray Kurzweil: Three Technologies Will Define Our Future

Over the last several decades, the digital revolution has changed nearly every aspect of our lives.

The pace of progress in computers has been accelerating, and today, computers and networks are in nearly every industry and home across the world.

Many observers first noticed this acceleration with the advent of modern microchips, but as Ray Kurzweil wrote in his book The Singularity Is Near, we can find a number of eerily similar trends in other areas too.

According to Kurzweil’s law of accelerating returns, technological progress is moving ahead at an exponential rate, especially in information technologies.

This means today’s best tools will help us build even better tools tomorrow, fueling this acceleration.

But our brains tend to anticipate the future linearly instead of exponentially. So, the coming years will bring more powerful technologies sooner than we imagine.
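
To make the intuition gap concrete, here is a minimal sketch in Python. The starting value, the half-unit-per-year linear guess, and the two-year doubling period are illustrative assumptions for the sketch, not figures from Kurzweil; the point is only how quickly the two forecasts diverge.

```python
# Illustrative only: compares a linear forecast with an exponential trend.
# The start value, yearly gain, and doubling period are assumptions for
# this sketch, not figures taken from Kurzweil.

def linear_projection(start: float, yearly_gain: float, years: float) -> float:
    """What linear intuition expects: the same absolute gain every year."""
    return start + yearly_gain * years

def exponential_projection(start: float, doubling_years: float, years: float) -> float:
    """Accelerating returns: the quantity doubles every `doubling_years`."""
    return start * 2 ** (years / doubling_years)

start, yearly_gain, doubling_years = 1.0, 0.5, 2.0

for years in (2, 10, 20, 30):
    lin = linear_projection(start, yearly_gain, years)
    exp = exponential_projection(start, doubling_years, years)
    print(f"{years:>2} years: linear {lin:8.1f}   exponential {exp:10.1f}   ratio {exp / lin:8.1f}x")

# At 2 years the two forecasts nearly agree; at 30 years the exponential
# curve exceeds the linear guess by a factor of roughly 2,000.
```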

As the pace continues to accelerate, what surprising and powerful changes are in store? This post will explore three technological areas Kurzweil believes are poised to change our world the most this century.

Genetics, Nanotechnology, and Robotics

Of all the technologies riding the wave of exponential progress, Kurzweil identifies genetics, nanotechnology, and robotics as the three overlapping revolutions which will define our lives in the decades to come. In what ways are these technologies revolutionary?
  • The genetics revolution will allow us to reprogram our own biology.
  • The nanotechnology revolution will allow us to manipulate matter at the molecular and atomic scale.
  • The robotics revolution will allow us to create a greater-than-human non-biological intelligence.
While genetics, nanotechnology, and robotics will peak at different times over the course of decades, we’re experiencing all three of them in some capacity already. Each is powerful in its own right, but their convergence will be even more so. Kurzweil wrote about these ideas in The Singularity Is Near over a decade ago.

Let’s take a look at what’s happening in each of these domains today, and what we might expect in the future.

by Sveta McShane, Singularity Hub |  Read more:
Image: uncredited
[ed. I believe this is right. Note: 'Robotics' also means strong AI in various forms.]

Grief Guides

Among the death doulas

Angie wanted to die in a cabin at the base of a snow-covered mountain, with warm drinks to go around. Stacey wanted to die in a cool room with a down comforter, battery-operated candles, chapstick on her lips, and absolutely no cellphones. Sarah wanted to die at her fifty-acre ranch in southern Indiana, lying on her patio, as the grandkids caught lightning bugs. Once dead, she wanted her body to be washed, rubbed with frankincense oil, and wrapped in white gauze. I lived in a small house with roommates. An awkward place to die. I opted instead for a destination vigil at my parents’ home in California.

Mine was, like the others’, a hypothetical death story: We’d each been asked to imagine the final days of our own terminal diagnosis. In real life, I was healthy and young. If I were to die, it would likely be sudden. I’d be killed in a car crash, or fall down a staircase, and I’d have no time to make arrangements. But to complete my beginner training with the International End of Life Doula Association (INELDA), to be qualified to guide another person through the process of dying, I first had to plan for my own death. (...)

What is a good death? Medical literature shows that people generally prefer simplicity. They want to die at home, with loved ones near, and have relief from physical pain and emotional distress. They want to know what to expect and how to make their own decisions. The things people value in death are the same things they value in life: community, open conversation, purposefulness. But only 14 percent of people who need palliative care—which involves not just specialized medical care but spiritual, social, and emotional nurturing—receive it.

Before the mid-1800s, it was common for people to die at home, surrounded by their family, and receive a local burial. But as cities and their cemeteries grew crowded, coffin makers started offering body relocation for burial in rural cemeteries, turning death into a more public affair. The Civil War, with its mass death at a distance, marked an inflection point for the death industry. Undertakers and surgeons at battle sites injected the corpses with chemicals to preserve them against putrefaction so they could be shipped back to their families. Embalming became so common that in 1865, the War Department required its practitioners to obtain licenses, in effect establishing embalmers as a professional class—a reality that was cemented when, in 1882, undertakers created the National Funeral Directors Association and the first school of mortuary science opened. These shifts gave rise to the modern death industry as we know it, in which the caretaking of the dying and their bodies is no longer the domain of families and instead is outsourced to professionals like hospital and funeral home workers.

Death doulas emerged in response to this defamiliarization of dying. Today, most people will probably serve as unofficial death doulas for friends and family at some point in their lives, caring for terminally ill loved ones or stepping up to make burial and funeral arrangements when other family members are too overwhelmed. Yet the rise of the death doula as a quasi-professional handler is a relatively new phenomenon, going back only a few decades. One early practitioner was Henry Fersko-Weiss, a social worker, who learned about the work of birth doulas and wondered if their philosophy—of treating birth as an emotional and familial process and not merely as a medical procedure—could be applied to death. In 2003, he established an end-of-life-doula program at a hospice in New York, and in 2015, he created INELDA to offer these teachings to the public. The organization has so far trained 6,500 doulas, about 90 percent of whom are women.

I learned about death doulas not long ago, from an Instagram post. A writer I admired had received her end-of-life doula certification from a program at the University of Vermont and posted that the experience was “life-changing and -affirming.” I was curious, and soon scoured TikTok, TED Talks, and podcasts for information, which was abundant but opaque. I couldn’t tell what a death doula actually did, but the death doulas I saw online seemed like people I’d want to be friends with. Spiritual but not religious. Well-read and health-conscious, skewing New Age. Most important, they appeared to have fully digested the fact of their own mortality: They were going to die eventually, and they liked to talk about it.

As a group, the death doulas I was seeing online spoke often about guiding the dying towards an elusive “good death.” The idea is that planning for a good death can help us live better lives, and death doulas encourage people to live knowing that dying awaits us all. That means saying I love you to someone who doesn’t know it, going on a perpetually delayed vacation, writing a long-marinating novel—before it’s too late. The death doulas encouraged “death positivity,” or an embrace of our mortality: death, they urge us to understand, is a natural process of the human body.

The death positivity movement was once niche, but it became especially visible during the pandemic, when many people saw firsthand that there are many ways to have a bad death—and thought perhaps it would be worth trying to provide for a good one. They use terms like YODO (You Only Die Once) and organize public Death Cafes to talk with strangers about dying over tea and cookies. To death doulas, dying doesn’t mean you have to submit passively to death. It can be creative, almost like art. They tend to dispense similar knowledge and wisdom, arguing that America’s culture of “fighting” death, which is bound up in the way we talk about illness and extending life expectancies, makes us more susceptible to suffering “bad deaths”—deaths that take place in hospitals, away from our families, with forced feeding tubes or without painkillers. Deaths that happen alone.

But there are so many other kinds of bad deaths—the violent, the sudden, the shocking. What, I wondered, could a death doula do for these? (...)

The training’s first event took place in a wood-paneled conference room that Friday night. Nicole Heidbreder, a Washington, DC, nurse and one of our two instructors over the weekend, welcomed us from a small stage as we sat in groups at round tables. Her voice was airy and hypnotic. As doulas, she told us, we would “gentle the journey” into death. We’d encourage people to talk about dying before it was too late. “You’re probably the kind of people who’ve had an allergy to small talk your whole life,” she said. She would know. Over the course of the weekend, she casually referred to her father’s death from renal cancer and her struggles to get pregnant. Both she and the other instructor, Omni Kitts Ferrara, a cheerful yoga teacher and nursing student in New Mexico, had a welcoming, witchy aura. Both worked as birth doulas, too. “Being around those threshold spaces,” Omni said, “feels really auspicious.”

Nicole and Omni explained what work we’d be learning to do. A good death doula acts like a personal assistant to the dying. She sorts out funeral, insurance, and legal logistics; she keeps a binder of contacts at hospices, medical facilities, and massage therapists; she serves as a neutral liaison to spouses or children. She helps a dying person carry out their final projects, whether completing a memoir or making a video to show their children how to use power tools. She helps them create advance directives, legal documents that outline medical decisions, and vigil plans for the moment they die: who they’d like at their bedside, what atmosphere they’d like to create. During the death, she watches over the family to make sure everyone has what they need, because it’s easy to forget to eat, and drink water, and rest. After the death, the doula helps the family close social media and bank accounts, transfer car titles, and hire people to clean a vacated apartment, tying up all the loose ends of the recently living. She guides them through their grief.

These services cost money. One death doula I spoke with charged $175 an hour, and she sometimes did pro bono work; Pat McClendon, a death doula in my hometown, would tell me she usually charges around $3,000 for an initial phone consultation, ten or so sessions to finish up final projects and plan for the death, and the vigil itself. Death doulas are not covered by insurance. They also constitute an unregulated industry. There’s no supervisory body for end-of-life doulas, no official diploma. Dozens of online and in-person training organizations offer death doula certification—some appearing less than reputable, often for hefty fees. INELDA would theoretically prepare us to do death doula work, but since there are no official death doula regulations, we technically could have been practicing before the training even began.

As an icebreaker, Nicole asked us to share with our tablemates a “favorable” death we’d experienced, and a less favorable one. Ours was a group of death-obsessed people, I quickly gathered. We agreed it was subversive, and even fun, to speak openly about mortality when many people in our lives frowned upon that kind of talk, as though discussing death makes you more likely to die. (...)

Over the course of the weekend, our group learned tools for helping a dying person. We reviewed a guide of helpful conversation starters to coax a dying person’s deepest thoughts into the open air. “How do you hope to be remembered?” we asked one another. “Do you have any worries for the days ahead?” We paired off to practice deep active listening, taking turns talking about our hopes and regrets while our partners tried to refrain from interjecting their own opinions.

At the end of our second day, we practiced a meditation called guided imagery, which can help people visualize a relaxing scenario at the moment of death. The meditation is meant to soothe the dying person’s fears of pain, or of not making it to their desired afterlife, by instead conjuring a special place from life. I paired with Karen, a middle-aged, self-described shamanic healer who wore a T-shirt reading “I am safe, I am grounded.” She was from Newtown, Connecticut, where, she told me, she helped community members confront the trauma of the Sandy Hook shooting. We settled onto the floor, and I described to her, in as much detail as I could muster, a physical place where I felt comfortable: my friends’ house in Las Vegas during monsoon season. While I closed my eyes, she guided me through their living room and onto their back patio, and in my mind I saw trash bags flying in the wind, heard sirens wailing from the fire station across the street. I could sense my friends nearby but I couldn’t see them; instead, there was lightning and a barking dog and the smell of wet dirt.

When it was my turn to guide, Karen lay on her back and said, “My favorite place is Peru.” Specifically, a medicine man’s hut in Peru. It didn’t matter which hut, I could just make it up. More important were the rainforest and the stars, which seemed close enough to grab from the sky. I confessed that I’d never guided anyone through a meditation before, but I tried on an ethereal voice and told her to breathe the clear mountain air, watch the shadows of leafy trees shift as the sun set.

All this was, I suppose, how we “gentled the journey” of dying. But I found my peaceful thoughts interrupted by the memory of my friend Haley. A decade ago, the summer after my first year in college, Haley drowned during a trip to Germany. We’d messaged each other every day, imagining what our friendship would look like as soon as we returned to campus; we’d only met a few months before, and were going to live together in the fall. “The next years are going to be so good,” she texted me in June, and in August I was sitting in a church, staring at her face projected onto a screen. One of the last things I texted her was, “Be safe.”

Before Haley, death was abstract and removed, rarely crossing my mind. I knew, intellectually, that life doesn’t go on forever, but I was a teenager, and the years ahead seemed not just good, but guaranteed. If I did consider mortality, it was to assume I’d someday die in my sleep, or “of old age.”

For a while, after she died, I became fixated on drowning. I avoided submerging myself in water, and when I did, I was aware of all the water on top of me, aware that it could kill me if I wasn’t careful. Now I fixate on cars. A car accident seems the most likely way I would die young, so my mind strays while I’m driving: to cars striking me as I cross the street, to cars slamming into me as I merge onto the freeway. I often imagine what it will be like to die, and sometimes, though not often, this fixation becomes an expectation: I will die, imminently.

The kinds of dying we discussed at INELDA, however, didn’t look anything like that. A typical scenario: In the months beforehand, we grow weaker. We stumble and fall down. In the weeks before, our wounds stop healing and our skin becomes mottled. We grow confused, sleep more. A few days and hours before, we start to smell bad. Our faces turn blue. Our mouths gape, like fish out of water. Right before we die, our breathing slows to shallow rasps. Some people experience terminal lucidity, also known as the “death rally,” a phenomenon of sudden cognitive improvement. Omni told a story she’d heard of a woman who had never liked beer, but in her dying hours regained consciousness and requested one. “She chugged it, the entire beer,” Omni said. A few hours later, she died. (...)

A spokesperson for INELDA told me the workshops were “transformative” for some attendees, and it did seem that way for Kim and others. But I knew by now that wouldn’t be my experience—and maybe I’d known that all along. Each night after the training finished, I’d sit in darkness in my rental car, rain pattering on the windshield, and try to process the intensity of our discussions—the procession of death stories, one after the other. But each night I was exhausted from the overload of information and stories, so I’d drive back to my hotel through the misty forest, swerving to avoid frogs, and collapse onto the bed. During partner activities, I tried to make myself feel something, but mostly I felt dehydrated, and sore from sitting all day. I sensed, too, that I’d grown cynical from losing a friend so young. I bristled during a workshop when a chatty psychotherapist asked what meaning I could find in Haley’s death. Nothing, I told her, irritated at the assumption that I had learned something from her loss. I was skeptical of the idea that death might be beautiful and I was frustrated when people suggested that pain had something to teach, that loss made us better people.

I wondered how my hardness had quietly shaped the way I respond to other people’s losses. If people believed their experience of loss was beautiful, perhaps I should let it be. But personally, I knew that I wouldn’t find catharsis in repeating the same stories I’d told about Haley a hundred times. I missed my friend, and it had been nine years since she died. No single workshop, no single weekend, could change much.

by Meg Bernhard, N+1 |  Read more:
Image: Theodoor Verstraete, To the Vigil

Jazz Remains the Sound of Modernism

Always he appeared immaculate, always elegant—when Duke Ellington took the stage at Carnegie Hall in January of 1943 for the premiere of his Black, Brown and Beige symphony, it was in white tie and tailed black tuxedo. Fastidious as a musician and uncompromising as a band leader, Ellington expected nothing less than polish when it came to his appearance and comportment, especially as the United States’ greatest composer making his debut in its greatest concert hall at the belated age of 43. Ellington was perhaps most responsible for extending jazz’s reach beyond juke joints and uptown clubs, and for establishing it as what the trumpeter Wynton Marsalis has termed “America’s classical music.”

European classical music influenced by jazz had been played here before—George Gershwin, Dmitri Shostakovich—but nothing quite of the magnitude of Ellington’s new composition. Molding the syncopated sound of American jazz into the movements of European symphonic music, Ellington desired “an authentic record of my race written by a member of it.” By the time he took the stage at Carnegie Hall, Ellington was already either the composer or consummate performer of jazz standards like “It Don’t Mean a Thing (If It Ain’t Got That Swing)” or “Take the A Train,” a music that conveyed American modernism, the sonic equivalent of a William Carlos Williams poem or a Jackson Pollock painting, compositions that were to music what the Chrysler Building is to architecture. (...)

Because of my dad, I first heard not just Ellington and Armstrong, but the rough velvet of Billie Holiday’s voice and the vermouth smoothness of Ella Fitzgerald, the incomparably cool trumpet of Miles Davis on Kind of Blue and the ecstatic, sacred keening of John Coltrane’s tenor sax on A Love Supreme, the blessed quantum cacophony of Charlie Parker’s saxophone on “The Bird” and the jittery puffed-cheek caffeination of a Gillespie trumpet solo from “Salt Peanuts,” the mathematical precision of Dave Brubeck’s piano from Time Out and the apophatic transcendence of Thelonious Sphere Monk’s on Misterioso, Charles Mingus’s strangely raucous bass and Art Blakey with his Jazz Messengers pounding out the avant-garde drum. And, throughout it all, no matter how sophisticated or complicated, how abstract or difficult, that human message which Nina Simone sang out in her deep, wide, prophetic voice: “The world is big / Big and bright and round / And it’s full of folks like me.”

Born from the main branch of the 12-bar blues (also the progenitor of soul and rock, funk, and hip-hop), jazz was first an amalgamation of ragtime and spirituals, marching band music and Dixieland, performed in democratic collaboration and mediated through the still remarkably experimental method of improvisation. This is the potted history that sees jazz as a mélange of Africa and Europe. “America is a land of synthesis in which every ethnic or religious group tends, over time, to become a part of every other,” writes critic Stanley Crouch in Considering Genius: Writings on Jazz. Despite Crouch’s tendency to smooth over jazz history so as to make it palatable to his own pet theories, there’s much that’s factual here. It’s true that no nation other than the U.S. could have created jazz for the simple reason that the historical traumas and ruptures that brought disparate groups together happened most acutely here. If jazz is the sound of modernism, it’s because it was born from the fertile but bloody soil of the American continent. In this context, the Vivaldi of jazz was Armstrong, which is to say the genius at the beginnings of the genre, but Ellington was its Bach, the polymath who supplied a rigor that most fully marks a before and after.

At Ellington’s Carnegie Hall debut, where audiences listened to weekly concerts of Bach and Beethoven, Mozart and Mahler, Ellington and his musicians performed a 45-minute symphony dedicated to Black America, expressing the history of his people from enslavement to emancipation, the talented tenths of the Harlem Renaissance and into the future. The music itself is as uncompromising as Ellington, relentlessly forward-pushing and soaring, grounded in history, but hopeful. The shape is classical, but the sound is jazz. The critics—classist, racist—were not effusive. Douglas Watt at the Daily News wrote about “the concert, if that’s what you’d call it,” while Paul Bowles at the New York Herald Tribune called Ellington’s composition “formless and meaningless.” Duke, with his slicked-back hair and pencil-thin mustache, simply responded to the pans by saying, “Well, I guess they didn’t dig it.” Ellington would perform 20 more times at Carnegie Hall, until a few years before his death, and he’d repurpose large portions of Black, Brown and Beige in a 1958 collaboration with Mahalia Jackson, but he’d never again conduct the entirety of the symphony.

There are two irrefutable axioms that can be made about jazz. The first is that jazz is America’s most significant cultural contribution to the world; the second is that jazz was mostly, though not entirely, a contribution born from the experience and brilliance of America’s Black populace, who have rarely been treated as full citizens. Regarding the first claim, if the genre is not America’s “classical music,” for there is a bit of a category mistake in Wynton Marsalis’s contention, which judges the music by such standards, then jazz is certainly the most indispensable and quintessential of American creations, surpassing in significance other novelties, from comic books to Hollywood films. Crouch describes Ellington, and by proxy jazz, as “maybe the most American of Americans,” even while the conservative critic was long an advocate for the music as being fundamentally our native “classical” (a view he promoted as an influential adviser to Marsalis, the director of Jazz at Lincoln Center). The desire to transform jazz into classical music—even my own comparison of Ellington to Bach—is an insulting reduction of the music’s innovation. Jazz doesn’t need to be classical music; it’s already jazz.

by Ed Simon, The Millions |  Read more:
Image: uncredited

Googie René


- Smokey Joe's La La

[ed. A soundtrack has to be pretty good to be central to a movie's theme. This one is (full album).]

Wednesday, June 26, 2024

Chatbots and the Problems of Life

With increasing availability and sophistication of chatbots, we teachers are seeing a drastic decline in the cost of what in Great Britain is called “commissioning”—that is, getting someone else to do your academic work for you. There are many forms of academic cheating, at various levels of schooling, but commissioning by university students is the one I want to discuss today.

Long, long ago, in a pre-Internet galaxy far away, commissioning was costly and therefore rare. It was a bespoke commodity: Typically you’d find someone smart and pay him or her to write an essay for you, or even (this could be done only in large lecture classes whose students were anonymous to their professors) take an exam for you. The talent was almost always local; in a large university, cynical or broke graduate students could supplement their meager stipends quite significantly by catering to the anxieties of academically marginal undergrads. Such commissions did not always involve money; money is, after all, only one medium of exchange. But you had to have something of value to exchange for the academic work—drugs, sex, the willingness to clean a filthy apartment—and not everyone had what was required. Also, some planning in advance was necessary: If you were ten hours away from the deadline for a paper, you would be hard-pressed to find someone competent to write it for you, even if you were willing to pay extra for a rush job.

With the advent of the Internet, the costs of commissioning dropped, for several reasons. Online essay-writing services kept on hand a library of essays written on common topics—Hamlet’s indecisiveness, the Federalist on the dangers of political faction, Durkheim’s theory of religion—which could be bought for reasonable prices and at short notice. If something better or less common were required, then bespoke work could be arranged, though, as in the earlier dispensation, with more time and more money. (But, if you were an American, thanks to the mighty dollar you could save a bit by commissioning work from people in the Global South, or those who were not native English speakers.)

Also, only the bespoke work was really safe, at least if your professors used Turnitin to discover pre-existing material. Turnitin and similar services arose when the costs of commissioning dropped and its frequency (naturally) increased: Professors who barely have time to grade the papers they assign certainly don’t have time, and maybe not the ability either, to search databases of papers. Googling peculiar phrases for signs of plagiarism often marks the limit of what they can do to detect cheating. And even then your quest could conclude in uncertainty about whether a particular passage was or was not plagiarized. It was much simpler to run all the papers through Turnitin and accept its verdict.

But note what’s happening here: an arms race. Students use certain Internet technologies to enable cheating, and teachers call in other Internet technologies to detect that cheating. Commissioning services arise that promise essays with undetectable provenance; ed-tech companies introduce new tools that promise to detect the undetectable; the alternation bids fair to go on forever. One begins to wonder after a while whether the paper mills and the cheating-detection services are in cahoots, because the longer the alternation goes on, the more money all of them make. And, as I have argued on this site, the heaviest costs are paid by teachers and students, not in money, but in trust—a rapidly vanishing commodity.

I don’t like this collapse of trust; I don’t like being in a technological arms race with my students. So over the years I have developed a series of eccentric assignments. These days I rarely assign the traditional thesis essay—an assignment I always hated anyway, because it makes both the writing and the grading utterly mechanical—but instead assign dialogues between two literary theorists, or an imaginary correspondence between two novelists, or just an old-fashioned textual explication: Take this passage and explain to me, I ask them, without paraphrase, what it’s doing, what’s going on in it. And those assignments have, as it were, taken us back in time, back to the time when commissioning was expensive and therefore rare: the online paper mills, after all, don’t have a stack of conversations about The Brothers Karamazov featuring Dostoevsky and Jane Austen. It’s been a very successful strategy…until now.

The advent of the chatbots has suddenly made my life much more difficult, for several reasons.

First: No one has to be a committed cheater to use them. You only have to be someone who, in the face of an onrushing deadline, experiences either extreme fatigue or disabling anxiety. You don’t even need any money—though money to purchase a more recent and powerful version of, say, ChatGPT will probably help. All you need is a computer that’s connected to the Internet and the ability to write an appropriate prompt. (The skill that’s going to improve the most, among the most people, in the coming years is prompt engineering.) The cost of commissioning has dropped almost to zero.

Second: Very few, if any, colleges and universities have developed clear and consistent rules for the use of chatbots. Is it okay for students to ask for an outline of an assignment, if they then go on to write the substance of the essay? Are they allowed to request a draft of the essay, if they then alter that draft significantly? But wait, what counts as “significantly”? Students can very plausibly claim ignorance on these and many other questions.

Third: Chatbots can already do some of my “eccentric assignments” and do them very well. When I asked for a dialogue between Michel Foucault and Judith Butler on gender and power I got a very good one indeed. Ditto a conversation on colonialism between Gayatri Spivak and Frantz Fanon. And when I asked for an exchange of letters between Jane Austen and Fyodor Dostoevsky…Well. ChatGPT’s response was historically insensitive boilerplate: “I trust this letter finds you in good health and spirits. I have recently had the pleasure of reading your novel, The Brothers Karamazov, and I felt compelled to write to you to express my admiration for your work.” But Google’s Gemini was another thing altogether:
My Dear Mr. Dostoevsky,

It is with a peculiar mixture of curiosity and trepidation that I take up my pen to address you today. Curiosity, because your novel, The Brothers Karamazov, has caused quite a stir in English literary circles. The depth of emotion you portray, the exploration of faith and morality – it is a far cry from the quiet manners and matchmaking concerns that typically occupy my own pen. Trepidation, I confess, because the world you paint is one of such stark contrasts, such turmoil, that it feels worlds away from the drawing rooms and landed estates of Hampshire.
It was with a peculiar mixture of admiration and despair that I read that letter. 

by Alan Jacobs, The Hedgehog Review |  Read more:
Image: THR Illustration

Unitree G1 Humanoid Agent | AI Avatar


[ed. Three Laws of Robotics (Isaac Asimov):

The Three Laws, presented as being from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.]

The Three Segments of American Culture

It’s possible to understand, with some clarity, what’s before us. The writer and music historian Ted Gioia, who has emerged as one of the most trenchant cultural critics working today, has posited that 2024 will be the year the macroculture and the microculture go to war. Another astute cultural writer who publishes under the pseudonym Mo_Diggs has identified the mesoculture as what we most lack today, and what we might require to recover both stability and, eventually, sanity. All of this bleeds into the romantic upheaval that may be here already.

But what are the micro, the macro, and the meso? Why do they matter? What’s coming next? As someone who toggles between the macroculture and the microculture, and longs for the resurrection of the mesoculture, these questions are particularly pressing for me.

The Macroculture

Inherent in its name, the macroculture is still what most Americans think of today when they imagine who produces the music, the movies, the news, the books, all that content, to wield a dreaded term. Hollywood, of course, is the macroculture. Disney and Paramount reign above, along with tech giants like Amazon and Apple who have made significant incursions into the entertainment space. Spotify and Netflix are the macro streamers. The major record conglomerates, including Sony Music Group and Universal Music Group, belong here, as does all of legacy media. The Times and the Atlantic and the New Yorker are the macroculture, as are 20th Century Fox (Disney), Fox News, ABC News (Disney), ESPN (Disney), CNN (Warner Bros. Discovery), NBC, CBS (Paramount), and HBO (Warner Bros. Discovery). Corporate publishers and their imprints all belong, too. Size alone isn’t a determinant of what resides in the macroculture. Smaller newspapers and online publications, like Slate and Vox, can be considered macro in sentiment. Most magazines are the same way.

The macroculture is both extraordinarily wealthy and uniquely vulnerable. The second part of this formulation was not true until the twenty-first century, when the internet matured and obliterated, at once, several remarkably durable business models. When the macroculture was on sturdier financial ground, Americans benefited more, in part, because there was a greater degree of narrative diversity. Mainstream cinema could, at any given time, be composed of action films, rom-coms, high concept art films, historical epics, psychological thrillers, regular comedies, and original IP franchises. In the 1980s, 1990s, and 2000s, there were many types of tentpole films. As Hollywood grew vulnerable in the last decade and a half, as more and more consumers shifted to streaming and stayed home, the retread culture, which still strangles us today, emerged: superhero films, films based on video games, films owed entirely to ancient intellectual property. As good as Barbie was, this was the ultimate problem with Barbie, a doll first sold in 1959, a full decade before men walked on the moon. There is no new doll, no new flying hero or righteous mutant, no fresh IP. Thirty years ago, the macroculture cared far more about newness.

It is harder to generalize about book publishing because so many different kinds of books get published every year and the works, if marginalized by a public that mostly doesn’t read, can still be multifarious. But I’ll speak, in broad strokes, to the general culture of literary fiction, which held a kind of prestige in the twentieth century that it may never recapture again. When there had been less conglomeration in publishing, there was, arguably, a wider array of novels that would reach the broader public. Writers themselves could be regional, class-based, highly educated, or completely bereft of elite formal schooling. Many more of them came from the working class. The moneyed coasts, East and West, always exerted the most pull, but there were many different schools and styles, even politics, taking root in the vast middle of the country. And it wasn’t just the middle: within coastal cities themselves, like New York and Los Angeles, tribunes for the alienated and the destitute could more readily emerge. Outsiders, like Samuel Delany and Hubert Selby Jr. and Flannery O’Connor, still barreled their way into the macroculture and were even exalted there. I don’t want to idealize all of this too much—we are a less racist country today, and twentieth century publishing had many failures—but the discontent a reader might feel in the 2020s is connected to all the novels conceived, almost entirely, in one milieu: upper-middle class affluence within a fashionable metropolitan area. These worlds are usually white, but they don’t have to be, and what left-liberals never quite understand is that there is far more solidarity between a Black Swarthmore graduate and a white Swarthmore graduate than between a white attorney from Grosse Pointe and a white Dollar General clerk in the Lansing exurbs. The literary novels that achieve widest distribution today are, with some exceptions, preoccupied with the struggles and neuroses of those wielding the most cultural capital.

The major record labels, meanwhile, don’t know how to break out big stars anymore. Taylor Swift won’t be supplanted. The A&R functionaries race desperately to the new stalwarts of the microculture like TikTok for hitmakers, throwing out record deals to anyone who seems to achieve a moment of virality. They don’t nurture talent or understand, really, how to distribute it outward. This is mostly—but not entirely—their fault; the internet wrecked every distribution channel imaginable, from the record store to the music magazine, and MTV has lost all relevance. Radio stations no longer distinguish themselves in any regional market. Drive through Chicago or Oakland or Buffalo and hear, quite literally, the same exact songs on any local affiliate, if you’re listening at all.

Much has been written on the fracturing of culture, of our dwindling consensus zones—no Friends or Seinfeld to gather around Americans every Thursday evening. For the macroculture, this has long been a challenge, and it will only get worse in the coming years. The theme here is struggle: most of the conglomerates and media properties aren’t as wealthy as they once were. The bleeding out of the large newspapers, the regional newspapers, and the digital insurgents alike is well-known, with an obvious enough culprit. The print advertising model was never replaced. What has been surprising, as we near the midpoint of this decade, is how some of the storied incumbents can’t even garner attention anymore. The 2010s riddle was how to monetize interest. More dire, for a vaunted institution like the Washington Post, is that half of its traffic has vanished since 2020. Traffic itself is quasi-worthless—I don’t monitor the traffic of this newsletter, for example, I simply care if more people sign up and whether they pay—but it is a barometer that can’t be ignored entirely.

It is not much sunnier at Spotify, which laid off 17% of its workforce. Apple’s stock is getting downgraded. CNN’s ratings are cratering, and it may merge with CBS News, which lags its competitors already. ESPN, once the king of the sports macroculture, has turned to an erratic microcultural star, Pat McAfee, to save them—and he plainly cannot.

The walls between the cultures can be porous. Many in the microculture still long for macro prestige and, perhaps, its cash. If not ignoring it altogether, the macrocultural players look upon the micro with a mixture of wariness and envy, wondering how they know little but retrenchment while the micro is booming. Individuals, like Joe Rogan, may shuttle from one culture to the other, and back again. Rogan first found fame as a comedian and host of Fear Factor, firmly situated in the macroculture. He then became far more famous, and richer, in the microculture, launching one of the most popular podcasts in the world and streaming it on YouTube. The macroculture took notice: Spotify paid him more than $200 million, and he became the object of scorn—and genuine fascination—in the mainstream media. Rogan alienates liberals for a variety of reasons, but the greater story—one that still must play out—is how Spotify will not gain very much from pouring so much cash into Rogan. This has nothing to do with his views on Trump or Covid vaccines. It has everything to do with the reality that no individual, outside of a professional athlete, can be worth that much money to a company. Streaming itself is a dubious business model, one that will never deliver on its promised returns. When the deal is up, Rogan can take his mass audience and charge them to listen to him directly, sans Spotify. He’ll marinate in the microculture just fine.

The macroculture, it must be emphasized, is nowhere near collapse. I think it will transmogrify, not vanish. But it’s no longer growing. It’s the microculture that is expanding, often explosively.

This is not a value judgement, merely an expression of bare fact.

The Microculture

The cultural undercurrent, in the United States, of Israel’s war against Hamas and the ongoing, catastrophic siege of Gaza is the ideological cleavage between the old and the young. If you’re under thirty-five, you think Israel is an oppressor state, and the sins of Hamas are secondary to seventy-five years of colonialism. If you’re older, you might be disconcerted by the civilian casualties in Gaza but believe, ultimately, Israel has a right to defend itself against terrorism—or, at least, Zionism itself is not evil.

TikTok has harbored the most pro-Palestine and anti-Israel sentiment, leading to accusations that the Chinese-run social media giant is catalyzing an entire generation against Israel through the manipulation of algorithms. Jewish celebrities fumed that TikTok could even be responsible, in some form, for the Hamas attacks. Much of this was simplistic thinking because young, left-leaning Americans don’t need social media to care about the bombing of Gaza. Hamas doesn’t need social media to hate Israel. But it is inarguable that TikTok, for the time being, platformed more anti-Israel voices because its success is built on decentralization: anyone can create a TikTok video and gatekeepers, theoretically, are nonexistent.

TikTok is best understood as one of the most famous and successful components of the microculture. Even if growth there is slowing and the metrics of virality can be nebulous, it is a platform that is, for now at least, capturing the greatest share of youth attention. It embodies the microculture because it is bottom-up, not top-down; macrocultural luminaries can be successful there, but popularity percolates differently, and its celebrities might be rich without the attendant trappings of the old macrocultural fame, that lost world of Empire. Instagram works similarly: it is owned by Facebook, a macrocultural titan, yet it is fueled entirely by the attention of the individual users who fill it, free of charge, with all of its content.

In terms of raw growth, the greater success story of the microculture might be the Google-owned YouTube. The top creators are perpetually expanding. MrBeast has soared past 100 million subscribers, with his rate of growth still increasing. Forty-three YouTube channels attract more than a half billion views a month. Stripe, the payment processor for most online transactions, including those on Substack, revealed that the so-called creator economy—those in the microculture using online platforms—has continually expanded over the last two years. In 2021, Stripe aggregated data from 50 popular creator platforms and found they had added 668,000 creators who received $10 billion in payouts. In 2023, those same 50 platforms had added over 1 million creators and paid out more than $25 billion in earnings.

The context here is the timeframe. The early 2020s were a bloodbath for macrocultural media. Other than, perhaps, the New York Times, there were no success stories. Disney stock plunged. Cable ratings evaporated. Post-Trump news traffic dried up. Netflix suddenly realized there was no pot of gold at the end of the streaming rainbow.

Images and video don’t rule the entirety of the microculture. What you are reading now, this Substack, belongs to the micro, as blogs did in the 2000s before they were subsumed by social media and larger websites or undone by the lack of dollars available to those who wrote for the web. Substack cannot replace the newspapers that have collapsed or replicate the alternative media ecosystem that has mostly been destroyed. What it does offer is a way for some to either make a comfortable living or a partial living writing or, absent that, at least hunt out a fresh audience bored by what the macroculture has been disgorging over the last few years. Stripe is what makes Substack, for writers like me, viable; it’s easy for those who want to support me to pay for what I write, thus solving the great dilemma of the old blogs, which could not, for the most part, convert readers into paying customers.

What’s intriguing about Substack is what’s intriguing about modern day YouTube: growth. Like any online platform, there is a tremendous divide between the enormously popular and the anonymous, but a Substack middle class is slowly taking root as newsletters continually add new readers. There is no such thing as a Washington Post-style crisis, of an audience evaporating. The opposite is true, with those who put the work in getting rewarded with new subscribers. Whatever the pace, the numbers keep going up, not down. A knock on Substack is that the macrocultural heavyweights who end up there merely replicate their success; this is partially true. Matt Taibbi was a fairly famous Rolling Stone correspondent, Matt Yglesias had a large social media following from two decades of blogging, and Bari Weiss had sinecures at the Times and Wall Street Journal. While all of that aids in success, none of it guarantees large audiences will follow. Some have leapt from the legacy media to Substack and found, in fact, they can’t make it entirely work. And other Substack titans had no fame before migrating over to the newsletter service. Heather Cox Richardson was a history professor and author at Boston College, known chiefly in academic circles. The aforementioned Ted Gioia, who is nearing 100,000 subscribers on his own Substack, was known to jazz enthusiasts but didn’t boast a significant social media presence or decades spent on cable television. Anne Kadet, a New York-based journalist, shot past 10,000 subscribers in two short years by conducting quirky interviews and cultural excavations that the macroculture would ignore. The thrill of Substack is the sheer diversity of voices: racial, ideological, cultural, and political. It is something of an underground press, diffuse and raffish and maybe more ambitious. If only it could all be fused together into a neo-Village Voice, stuffed into a news box every week.

In the last century, the macrocultural elites would try to glom onto, appropriate, or make a study of the microcultural equivalent of its day: the counterculture. Hollywood, the large publishing houses, and Madison Avenue were all deeply interested in the protest movements, the new rock music, and the aesthetics of the baby boomer set, in part because they wanted to ensure all of it could be properly commodified. And the creators within the macro, the mainstream, wanted to learn—they were interested in advancement and innovation for its own sake, the desire to reimagine what was possible. New Hollywood, postmodernist literature, and the rising sophistication of network television were all reflective of this shift. The counterculture strengthened the mainstream.

Today, the relationship is far more fraught. Most macrocultural operators remain befuddled by Substack, wishing it all would go away or drown in its mostly nonexistent problems. CNN, the Atlantic, and NPR won’t set up on Substack. And when media conglomerates do poach YouTubers or podcasters from the microculture, as in the cases of McAfee or Lilly Singh, they hope the amorphous formats of their prior productions can be jammed into the structured world of television. Macrocultural elites rarely, though, try to learn from the success of what’s churning below, or how rapid growth is still possible when so much of the mainstream seems to be contracting. The trouble, too, is that the tech behemoths rely on the microculture for their own survival, and no longer innovate themselves. The relationship is, if not vampiric, then feudal: Google controls YouTube, Facebook controls Instagram, Elon Musk controls X, ByteDance controls TikTok, and the creators themselves till the digital fields, tirelessly generating value for their bosses while hoping some of it gets kicked back to them. At some point, this tension will break out into the open, as all of these platforms, in various forms, try to demonetize or suppress the content they do not like. Palestinian voices will find TikTok less hospitable in the coming months and years. The new platforms Big Tech tries to create will not help, either. Threads cannot replace Twitter because it hates the written word.

The microculture, though, is no ideal, because it is still a realm of haves and have-nots. Most people are not MrBeast or even a sliver of a fraction of MrBeast. Most people cannot crowdfund $1 million for their fantasy novel. Most people cannot pay their rent with Substack or Patreon income. There is the risk, as with the oversaturation of podcasts, that too many creators will go begging in front of the same audiences and monetary returns will diminish.

The microculture is, too often, a hustle culture. Many artists, rightfully, would rather not hustle.

What we need is more than a macroculture and a microculture—what thrived in the late twentieth and early twenty-first centuries, and is now dissipating.

The mesoculture.

by Ross Barkan, Political Currents |  Read more:
Image: uncredited
[ed. See also: In 2024, the Tension Between Macroculture and Microculture Will Turn into War (Honest Broker).]

Tuesday, June 25, 2024

Bruce Springsteen

Tom Petty & The Heartbreakers


Around 10 p.m. on September 25, 2017, Tom Petty told the audience at the Hollywood Bowl, “We’re almost out of time,” and struck three D chords in quick succession. “We’ve got time for this one here.”

In six minutes Petty’s public career will be over. Petty and the Heartbreakers will finish the song, thunderously and to thunderous applause. Petty will wish a good night on his audience, and then he’ll linger on stage after the band retreats. Seven days later his life will be over.

But before that we have four minutes of music.

Just as Petty’s third D starts to decay, drummer Steve Ferrone counts the band in, and Petty and the Heartbreakers lock into the last song of their fortieth anniversary tour. Petty prowls the stage playing a white Fender Electric XII, and then he steps to the mic and belts out in his late-career, Dylan-esque sneer, “She was an American Girl / raised on promises.”

"American Girl,” the final track on the Heartbreaker’s first record and the last song he’ll ever sing in public, is as perfect a rock song as there is. “Raised on promises” could be the national motto. It should adorn our currency, the contemporary American English for “In God We Trust.” Not that the phrases are synonymous. A promise is probably a poor substitute for a god, but it’s what we’ve got if we’re lucky and realistic — promises and hope. At his best Tom Petty excelled at articulating promises and hope, fulfilled and fallow. One of the things that rock offers is triumphant hope. Rock in its triumph mode, regardless of what bittersweetness resides in the lyrics, is open windows, open roads, open vistas. In America, the image of the road itself is often linked to such aspiration, with opportunity just beyond the horizon, and reinvention obtained if you can find a better spot to call home. All you need is a soundtrack to hold you to your pace. Few did this kind of hope — and the attendant rages of desperation, anger, longing, passion — like Petty. It’s an ageless passion. Tracks like “Refugee,” “The Waiting,” “Running Down a Dream” feel as vital today as when they were recorded. Petty’s best music doesn’t age into dotage like so many of his contemporaries. The songs sound clean, fresh, and vibrant affirmations that even if things get sticky, it’s ultimately gonna be alright.

Which is how “American Girl” sounds at the Hollywood Bowl.

The Commonwealth of Petty goes bonkers for this song, of course. I dropped in on some shows during the 2017 tour, and the crowds were always the same, spilling beer, smiling, maybe getting prematurely red-eyed and a bit belligerent. It’s hard to responsibly generalize about Petty’s audience because it’s like generalizing about America. Yes, it’s usually the white, middle-aged or older bulge of America, but you take the point. In any collection of 20,000 individuals, people will hold fast to irreconcilable cultural tastes, political opinions, and moral commitments; despite that reality, when the band tears into “American Girl,” the crowd, already euphoric, feels the electric thrill of shared rock ’n’ roll communion.

Throughout the band’s life, the Heartbreakers retained quite a bit of purity when it came to their stage shows. This gig could’ve been back at the Whisky a Go Go, except for the ever-present screens, several stories high, displaying real-time footage of the band or other images. But different images accompany “American Girl.”

What does the phrase “American girl” conjure in your mind? I’d wager that many of you think of a white girl. Mary Ann or Ginger, fresh-faced or sultry. The subcategory doesn’t matter as much as the likelihood that in most of your minds, your American girl is white.

Not for Petty, tonight. Just as he sings the song’s first lines about being raised on promises, the screens transition from abstract swaths of color into images of women. At first the screens show the stereotype: fresh-faced white women and the open road. But soon there’s an African American family, a Latina soldier, and Alexis Arquette, the transgender activist and actor who died from HIV-related complications in 2016. Images of dozens of women cross the screen, young and old, all ethnicities. As the song speeds toward its end, hundreds of snapshots cross the screen, growing smaller as they gain in number before dissolving into a cartoon rendering of the Statue of Liberty shrouded in the American flag. As the song ends, Lady Liberty’s torch and crown preside over the audience.

Now, we shouldn’t give Petty a round of applause for figuring out that not all women are white. But he did punctuate this tour and, however unexpectedly, his career, by playing one of his most durable creations against a backdrop that both asserts and celebrates America’s multiracial society.

Though the optimism about American racial harmony might have been naïve, and the message of solidarity and diversity delivered with a somewhat corporate accent, choosing to close the show with these images was not haphazard. I don’t think many fans ponied up for a Petty concert looking for a message. Petty frequently received plaudits for appealing across the aesthetic and political spectrum of rock ’n’ roll fans. There’s something for just about everyone. That has more to do with the muscularity of the music and the elastic way his best songs easily stretch to fit most anyone’s life. But Petty did also subtly engage in politics during his career, especially in the later years. And he learned about the power of rock ’n’ roll iconography the hard way.

In essential aspects, Petty’s final performance of “American Girl” repudiates and corrects his largest, most embarrassing misstep: his use of the Confederate Battle Flag during the 1985 tour in support of his sixth album, Southern Accents. In 2017 the stage set celebrated a vision of racial harmony; in 1985 the set deployed an embattled icon that many see as our primary homegrown symbol of race-based hatred. In much the same way that one of Petty’s final public gestures was in part a repudiation of the Confederate Flag, his career in the decades following Southern Accents was a decided rejection of his Southern Accents era’s persona and aesthetics.

Southern Accents was released in March 1985. Over the previous nine years, Petty and the Heartbreakers had released a series of successful records. All of these albums are good rock records. Some are great. But at that point, Petty’s catalog lacked any concerted, unified artistic statement, and Petty was at a crossroads. His impending crisis was more than boredom, though. Petty had all kinds of money and all kinds of fame, but he wanted to challenge himself artistically. Early in his career, when people still mistook him for a punk rocker, Petty said that rock music was just “stupid shit.” For Petty, most contemporary rock musicians — including himself — wrote and rewrote versions of the same love songs over three-chord progressions. After 1982’s Long After Dark, Petty sought to challenge himself and his artistry, and he began working on a set of ideas that became a loose concept album about the American South. Southern Accents was intended as an artistic breakthrough. On paper the record sounds like a winner. With the aura of history promised by many of the songs, its sense of place, and an expanded palette of textures including horns and a string arrangement, Southern Accents seems as if it could be the career-defining record Petty intended. And this is even before you consider the groundbreaking Alice in Wonderland–inspired video for the record’s first single, “Don’t Come Around Here No More.” Although the album contains a few of Petty’s most accomplished songs, for reasons ranging from the aesthetic to the narcotic, Southern Accents didn’t stick the landing. (...)

The failure of Southern Accents is more than a lack of coherence and a crippling reliance on 1980s production gimmicks. I say this not because it doesn’t measure up to the rare few, almost objectively brilliant concept records in music history, like The Who’s Quadrophenia. In fact, Southern Accents was always meant to be conceptually loose. Petty was not trying to create a fully formed rock version of William Faulkner’s Yoknapatawpha County in forty minutes. He wasn’t striving for a robustly detailed “novel.” In listening to Southern Accents and considering the remnants of Petty’s original idea, we find a record that is less a comprehensive story than a series of snapshots about life in the South. The songs comprising the thematic core of Southern Accents — “Rebels,” “Don’t Come Around Here No More,” “Southern Accents,” “Spike,” and “Dogs on the Run” — predominantly follow a single unnamed Southerner as he shambles through life embittered, drunk, antagonistic, but still hopeful and yearning for love and connection.

So, yes, the record presents as Southern, from the opening song “Rebels” to the Civil War–era Winslow Homer painting on the cover. That’s not the problem. Things get dicey because the South of Petty’s imagination rather blindly endorsed some of the most corrosive myths of American culture and history. Petty adopted a staggeringly uncritical stance toward commonplace historical misunderstandings of the South, and his record manages to be both too much and too little about the South. The album is deeply suffused with a long-standing, parochial, and miniaturized understanding of the American South. This is almost not Petty’s fault. It’s hard to nail down any region in a record, period, even for a consummate pop-rock writer like Petty. And with all its historical burden, the South is nearly impossible to succinctly explore. Moreover, the thirty-five-year-old Petty who made Southern Accents had spent his adult life as a rock star, so he likely didn’t have the inclination to interrogate his vision of the South. But the result was that Southern Accents promotes an aggressively narrow conception of Southern identity. To put it bluntly, Petty’s South is the white South.

by Michael Washburn, Longreads |  Read more:
Image: YouTube/Rebels

How the World Ran Out of Everything

I'm Dave Davies. Do you remember how in the early months of the pandemic, you couldn't find toilet paper or cleaning products on store shelves? And then soon enough, all kinds of other products were hard to get, from building materials to exercise equipment to new cars because automakers couldn't get computer chips. Our guest today, New York Times correspondent Peter Goodman, has spent a lot of time rummaging through the wreckage of those disruptions in the supply chain, discovering things less well known, like the 1 billion pounds of harvested almonds that California growers couldn't get to foreign buyers because hard-pressed shippers were busy with more profitable traffic.

In his new book, Goodman explores the business decisions that left the economy vulnerable to a disruption like this and the erosion of government regulation over critical transport industries that left their capacity to move freight weak and brittle. All those issues, he says, were exacerbated by the corporate drive to maximize short-term profits. The ultimate threat to the supply chain, he writes, is unregulated greed. Peter Goodman is the global economics correspondent for the New York Times. His new book is "How The World Ran Out of Everything: Inside The Global Supply Chain." (...)

DAVIES: In April 2020, when all this - the pandemic really hit us, your wife was about to give birth to your third child, and she'd had a premature birth with her second. So maybe a little more care and caution than a lot of parents would be expecting. And you write that she was unable to get some needed items, you know, rubbing alcohol, disinfectant wipes, backup baby formula. I mean, you were experiencing this as we all were. Just remind us of some of the dimensions of this breakdown in the supply chain, some of its really meaningful impact.

GOODMAN: Sure. I mean, it was cosmically bewildering. We were, you know, in London in lockdown, and my wife was pretty stoic about the fact that I couldn't be in the hospital for more than an hour. Her parents couldn't fly in from New York to look after the baby. But to then go online and look for hand sanitizer once we were home and discover there was nothing to be found. And then you couldn't even find the ingredients to make your own hand sanitizer, and, of course, this was true throughout much of the global economy, right? We didn't have personal protective gear for frontline medical workers who were dealing with COVID patients. We ran out of computer chips. We, of course, ran out of toilet paper. I'm sure everybody remembers that.

And there was just this sense that something kind of deep that we had all taken for granted, that we all agreed on, you know, that you could click on your button on Amazon or whatever e-commerce provider you liked and wait a few hours or a couple of days or whatever, and a truck would show up at your door. Well, now, even that had broken down in the middle of this public health catastrophe that was, of course, incredibly confusing. It was very disconcerting. 

DAVIES: Right. Right. You know, there's a guy whose story runs through the book, a fellow from Mississippi named Hagan Walker, who had a startup company called Glo, and his efforts to produce and get a product that was really important to his business is kind of illustrative of some of what was going on in the supply chain. What did he make?

GOODMAN: So he made these novelty cubes that light up when they're dropped in water, and this started off as a thing you could sell to bars, the bartender could look down the bar and see who needed a refill because the light went off. And then he discovered that - he heard from somebody whose child was autistic and bath time had been really just a difficult time. And somebody dropped one of these cubes in the bath, discovered that the child was transfixed by this, and that generated this idea to make these bath toys.

And when I met Hagan Walker, he had recently gotten a deal with "Sesame Street." He was making these Elmo and Julia - that's another character - themed bath toys, using factories in China to make these cubes and shipping them in the first shipping container - it was the first time he'd ever had an order big enough to fill a whole 40-foot shipping container, these, you know, boxes that are like the workhorses of the global economy. And so I ended up tracking this one container from this factory in China to his warehouse in Mississippi, and it was a harrowing journey.

DAVIES: Right. This was a make-or-break thing for his, you know, emerging company. One of the things that's interesting is that when it was time to decide how he would find someone to manufacture these little figurines, he wanted to do it in the United States. He really wanted to have jobs here. He couldn't seem to manage that. Why?

GOODMAN: So much of the productive capacity had shifted overseas and specifically to China. So, you know, Hagan Walker's in his college town, Starkville, Miss., where he got a degree in engineering, and he likes the idea of keeping the business in the country. But as he calls around, he discovers, one place can make these steel plates he needs, the kind of molds for his product. But it's 12 times the cost of China. Another place has a slightly lower cost, but it turns out they're just farming the work to China and capturing a cut for themselves. Then at one point, he wants to make this kind of - imagine a children's pop-up book, like that kind of packaging for his product, and he has a meeting with somebody in the States who says, this is just so complicated. You know, you just have to have this made in China.

DAVIES: This is illustrative of what's happened in recent decades where China has emerged as this huge manufacturing power. The numbers are really striking. Chinese companies were making 80% of the world's air conditioners, 70% of the mobile phones. And this drew a lot of criticism from Donald Trump and others, you know, the Chinese are eating our lunch. The balance of trade is terrible. You say if this was a crime, what was happening with the trade imbalance, it was an inside job, right? Meaning what?

GOODMAN: That the reason why so much productive capacity has shifted to China, why so many factory jobs end up in China, is because of what American and other Western corporate executives decided was in their best interest. I mean, they had been perpetually on the prowl for ways to cut costs. They liked the idea of getting out from under labor unions. Unions were effectively banned in China. They liked the idea that you could make your own rules. You didn't like an environmental regulation, you needed a big piece of land - as long as you cut in a local communist party official, you could do your deal. And ironically, as I argue in the book, maybe one of the greatest joint ventures in the history of global capitalism is that between Walmart, the world's largest retailer, and the People's Republic of China, this entity that comes out of a peasant-led rebellion in the name of Marxism. And this becomes really the center of the global economy for a time, making goods at an enormous scale.

So what we failed to do in the States was apportion the bounty of trade, and that's why we've had this backlash. I mean, we have had a consumer bonanza from this trade. Prices have gone down. We've got consumer choice. It's been very good for the investor class. It hasn't been good for a couple million workers who lost their jobs and who've largely been abandoned. But yes, that part is an inside job.

DAVIES: And you have a moment where you describe visiting a global procurement center for Walmart in the Chinese city of Shenzhen. Describe what you saw and what it tells us?

GOODMAN: Yeah, this was 20 years ago. What I saw was this waiting room full of the kinds of uncomfortable chairs that you'd see in an elementary school, you know, this sort of all-in-one desk chairs. People drinking tepid cups of tea out of these little plastic cups, sitting for hours and hours for their chance to go pitch a Walmart buyer on their products. And Walmart engineered this so that, you know, you would get your turn. Oh, you make Christmas trees, you make microwave ovens...

DAVIES: And these are Chinese manufacturers saying, hey, we want your business.

GOODMAN: These are Chinese - that's right. These are Chinese manufacturers saying, we can satisfy your demand for cheap goods. And Walmart would say, well, OK, here's the price we're willing to pay, and the Chinese factory reps would know full and well if they don't meet that price, even if that's a price that's so low, that they're going to have to squeeze labor, they're going to have to take shortcuts on workplace and environmental standards. They're going to have to get some credit to go get the materials. Well, they know that out there in the waiting room are representatives for all of their competitors, and somebody out there is going to be desperate enough for cash right now, and they'll take the terms of the deal.

DAVIES: Wow. So we have this situation where these hundreds and hundreds of, you know - thousands of factories all over China are making this stuff, and American investors and other investors are making a lot of money from it, and American consumers are getting really cheap goods. And one of the things, of course, that makes it work is cheap transport. These container ships, these 40-foot containers and these - I mean, the vessels that do these, some of them are as long as the Empire State Building is high. They can...

GOODMAN: Right.

DAVIES: ...Take - what? - tens of thousands of containers at a time, right? How cheap does it get to be to ship your stuff?

GOODMAN: Well, it gets to the point where, you know, as the CEO of Columbia Sportswear put it to me at one point, it feels like it's free. Like, you don't even have to think about it. I mean, the container standardizes shipping.

So, you know, before the shipping container comes along in the 1950s, loading and unloading any kind of cargo vessel is this excruciating, dangerous, grueling process. You know, we're going to put the barrel of chemicals over there. We got to figure out how to fit in the big side of beef over here, and it's very much a jigsaw puzzle.

And once the shipping container comes along, you can load factory goods or really anything into this standard-size box. That box can be lifted up by crane. It can be put on the back of a truck. It can be hoisted onto rail. It can be lifted onto the ship.

So that makes everything cheaper and quicker, and it invites these CEOs of publicly traded companies who are scouring the globe for the cheapest possible place to treat factories in China as if they might as well be in Ohio or Dusseldorf or wherever. You know, as long as a ship comes calling somewhere, and you got road and rail connections, it's all just one big grid, and it largely works that way, except when there are shocks.

DAVIES: Right. The other element of this, which sets up the disaster we experienced in the pandemic, is a change in management practices that dealt with how companies, both manufacturers and retailers, handle the inventory, how many goods they have on hand. You want to explain this?

GOODMAN: Yeah, sure. So Toyota, at the end of the second world war, pioneers this notion that's come to be known as just-in-time manufacturing or lean manufacturing, and the idea is fairly simple and sensible. It's the end of the second world war. Capital is very limited. Japan's dealing with the devastation of the war. They don't have that much developable land.

So Toyota says, well, instead of running our operations the way Ford did in the heyday of mass assembly in the States - just making as much stuff as you possibly can and letting salespeople figure out how to sell it - let's just make as many cars as we need to replenish those that are being sold. Let's get our suppliers to give us the parts and the materials we need right when we need them on the assembly line.

And this is very effective. It's very useful. And then along comes financialization, you know, the paramountcy of the shareholder interest and consultancies like McKinsey, who essentially say to the corporate executive ranks, lean manufacturing, just-in-time manufacturing - this is a way for you to just slash your inventory. Take the savings. Instead of sticking all these parts and extra products in warehouses as a hedge against troubles that aren't going to happen, you know, right now, probably, give the money to yourselves through executive compensation, you know, as a reward for being brilliant enough to hire McKinsey. Give it to shareholders in the form of dividends and share buybacks. That makes share prices go up, and everybody's happy.

And when one day, there is a shock and you run short of inventory, well, that'll be somebody else's problem. But by then, you know, you'll be presumably sleeping in a hammock on some beach hoisting a cocktail.

by Dave Davies, NPR |  Read more:
Image: Ian Taylor/Unsplash

Kate Bush (feat. Donald Sutherland)

Donald Sutherland was an irreplaceable aristocrat of cinema (Guardian).

Japanese choreographer Saburo Teshigawara and a dancer rehearse the ballet Voice of Desert as part of the Montpellier dance festival at the Théâtre de l’Agora
Image: Sylvain Thomas/AFP/Getty Images

Sunday, June 23, 2024

Are These Really ‘the World’s 50 Best Restaurants’?

To be media literate these days is to understand that no ranked list, whether it is the “100 Greatest Drummers of All Time” or the “35 Cutest Dog Breeds to Ever Exist,” should be taken too literally. We all know that the cuteness of the Maltipoo and the awesomeness of Keith Moon are matters of opinion.

When it comes to parsing the annual dining survey known as The World’s 50 Best Restaurants, though, you really have to open your mind. Forget asking whether these establishments are the best in the world. The bigger question is: Are they restaurants?

Consider some of the highest-ranking winners from this year’s edition, which was announced Wednesday night in a ceremony at the Wynn Las Vegas that began with feathered and painted dancers twirling light sticks to electronic dance music on a darkened stage.

Gaggan, in Bangkok, was named not just the ninth-best restaurant in the world but the single best restaurant in Asia. The chef, Gaggan Anand, greets diners at his 14-seat table facing the kitchen with “Welcome to my …,” completing the sentence with a term, meaning a chaotic situation, that will not be appearing in The New York Times.

What follows are about two dozen dishes organized in two acts (with intermission). The menu is written in emojis. Each bite is accompanied by a long story from Mr. Anand that may or may not be true. The furrowed white orb splotched with what appears to be blood, he claims, is the brain of a rat raised in a basement feedlot.

Brains are big in other restaurants on the list. Rasmus Munk, chef of the eighth-best restaurant in the world, Alchemist, in Copenhagen, pipes a mousse of lamb brains and foie gras into a bleached lamb skull, then garnishes it with ants and roasted mealworms. Another of the 50 or so courses — the restaurant calls them “impressions” — lurks inside the cavity of a realistic, life-size model of a man’s head with the top of the cranium removed.

Now, among the 50 Best are a number of establishments where they let you see a menu written in real words and order things you actually want to eat. Some of these, like Asador Etxebarri in Spain and Schloss Schauenstein in Switzerland, are hard to reach. Nearly all are very expensive. Still, there are places on the list where a relatively normal person might eat a relatively normal dinner and go home feeling relatively well-fed.

But the list is dominated by places that normal people can’t get into, where the few diners who will go to almost any length for reservations will go home feeling bloated and drunk. They are not restaurants, or not just restaurants. They are endurance tests, theatrical spectacles, monuments to ego and — the two most frightening words in dining — “immersive experiences.”

Whether the World’s 50 Best seeks out these spectacular spectaculars or has simply been hijacked by them is impossible to tell. The list’s website is a model that should be studied by anyone who wants to arrange words that sound important and don’t mean anything.

On the subject of what it takes to win the attention of the 1,080 “independent experts” who make up the organization’s voting body, the website has this to say: “What constitutes ‘best’ is up to each voter to decide — as everyone’s tastes are different, so is everyone’s idea of what constitutes a great restaurant experience. Of course, the quality of food is going to be central, as is the service — but the style of both, the surroundings, atmosphere and indeed the price level are each more or less important for each different individual.”

Well, that clears up that.

The World’s 50 Best Restaurants and its spinoff awards, by now almost too numerous to count, weren’t always so rarefied. In the early years, when the list was being published by Restaurant magazine, the editors saw it as a kind of anti-Michelin, and took pride in recognizing spots that would never, ever make Michelin’s little red guidebooks, spots like Carnivore, an all-you-can-eat grilled-meat hall in Nairobi that made the inaugural list in 2002.

No. 1 on the list that year, though, was the Spanish restaurant El Bulli, which set a standard for kitchen experimentation, highly manipulated food, restless change and marathon tastings to which the highest end of the business is still in thrall. The more famous the list became, the harder it was for a place like Carnivore to land a spot. Nobody much noticed, because the game that El Bulli played was starting to become the only one that mattered.

Today the list is dominated by tasting-menu restaurants, and every year those menus seem to get longer and more unforgiving. There are more courses than any rational person would choose to eat, and more tastes of more wines than anyone can possibly remember the next day. The spiraling, metastasizing length of these meals seems designed to convince you that there’s just no way a mere 10 or 15 courses could contain all the genius in the kitchen.

One well-traveled diner told me about a recent four-hour meal at Disfrutar, in Barcelona — No. 1 this year. He said he was “blown away” and at the same time he never wants to go back. “It was an assault, and not fun,” he said.

by Pete Wells, NY Times |  Read more:
Image: Sergei Gapon/AFP via Getty Images

Willow

[ed. Daughter of Will and Jada Pinkett Smith. Appreciate the complexity here, vs most stuff on the charts these days.]