Friday, March 3, 2023

Aristotle (and The Stoics)

John Sellars is the author of Lessons in Stoicism (published as The Pocket Stoic in the US), The Fourfold Remedy: Epicurus and the Art of Happiness (published as The Pocket Epicurean in the US), and numerous other books on Stoicism and Hellenistic philosophy. He is a reader in Philosophy at Royal Holloway, University of London, a visiting research fellow at King’s College London (where he is associate editor for the Ancient Commentators on Aristotle project), and a member of Common Room at Wolfson College, Oxford.

Sellars’s newest book, Aristotle: Understanding the World’s Greatest Philosopher, explores Aristotle’s central ideas on a range of topics, from morality and living the good life to biology and the political climate of Athens. It is lucid and concise, suitable for neophytes and scholars of Aristotle alike—it details the particulars of Aristotle’s thought but also reexamines his importance as a philosopher and scientist more generally.

Sellars kindly agreed to be interviewed by Riley Moore for Quillette in February. The following transcript has been edited for length.
***
Riley Moore: It’s difficult to discuss Aristotle without discussing everything, because Aristotle wrote about everything—ethics, logic, biology, politics, literature; anything knowable, he investigated it. You go through this in detail in your newest book, Aristotle: Understanding the World’s Greatest Philosopher. Let’s pretend I have never heard of Aristotle. Who was Aristotle, biographically? Was he a pupil of Plato just as Plato was a pupil of Socrates? Is there a direct lineage there?

John Sellars: Yes, there is. Aristotle was originally from northern Greece. His father was a doctor who died when Aristotle was about 10 years old. Aristotle is then brought up by his uncle, who had been a student at Plato’s Academy some years earlier. When Aristotle reaches 17 or 18, he goes to Athens to study at Plato’s Academy and stays there for 20 years. Plato is certainly the key point of reference. Socrates, Plato, and Aristotle make up this kind of triumvirate of significant Greek thinkers, and they’re all engaging with their predecessors. We see Aristotle wrestling with Plato’s ideas and ultimately trying to break away from them in order to develop his own independent views. That’s the beginning of Aristotle’s career.

Then Plato dies, and the Academy passes to Plato’s nephew. Aristotle decides that’s a good point to leave. Maybe Plato’s nephew and Aristotle didn’t get on, we don’t know, but he heads off to Asia Minor—what we would now call Turkey—with some other pupils from the Academy who left around the same time. Then he moves just a short distance to Lesbos, which is the nearest Greek island, and starts to study the natural world. In particular, he studies marine biology. He does that for a few years, and then he is invited back north to his home region by Philip of Macedon, the king, to tutor Philip’s young son, Alexander, who goes on to become Alexander the Great. It may have been that Aristotle’s father, the doctor, had been a physician at the Macedonian court. So, there may have been a family connection there.

After a few years studying with Aristotle, Alexander grows up and sets off for his great adventure in the Middle East and all the way to India. Philip of Macedon is murdered around the same time, and Aristotle, having been Philip’s guest, decides it’s not a very safe environment. So, he returns to Athens and sets up his own school, the Lyceum, as an alternative, in a sense, to the Academy. The last few years of his life were spent primarily in Athens. So, there’s a big early period in Athens and a big later period. There’s a brief interlude where he’s traveling to Lesbos and Macedonia.

RM: It’s speculated that Plato’s death prompted Aristotle to leave Athens when he didn’t inherit the Academy. Philip of Macedon’s death prompted Aristotle to return to Athens. These monumental deaths have a powerful impact on his life.

JS: Yes, that’s true. Of course, when Aristotle comes to Athens he’s an outsider. He’s not an Athenian citizen. This may have played a role in the story. As a non-Athenian, he wouldn’t have been eligible to own property. If the succession of the Academy involved the transfer of property, it may well have been that Aristotle just wouldn’t have been a plausible successor because he couldn’t have owned anything. He’s kind of an orphan and an outsider with this slightly transient lifestyle. But at the same time, his father may have been at the court of Macedon. Aristotle’s obviously got enough private income to devote his life to intellectual pursuits. He’s moving in quite high circles. But at the same time, he doesn’t really have the social stability and security that one might want.

RM: Can you sketch the ground covered by Aristotle’s work?

JS: It covers nearly everything. It covers everything that we think of as philosophy today. Metaphysics, epistemology, ethics, aesthetics, political philosophy, logic. All the things that we would think of as parts of philosophy, he does all of that. But he’s also engaged in a series of wider intellectual pursuits that have now become disciplines in their own right. There’s a sense in which his work really founds the discipline of biology. No one’s done that kind of work carefully and closely—studying particular types of creatures—before him.

He starts with metaphysics when he’s in Plato’s Academy—the most abstract and complex stuff. And then he moves on to the study of nature. One presumes that his later time with Philip of Macedon and Alexander prompted him to become more interested in political questions. He spends time studying literature, thinking about Greek tragedy, thinking about what makes a good work of art. On the one hand, we might think of Aristotle as a kind of research scientist, but these days we wouldn’t imagine a research scientist to also be interested in questions of literary theory. But Aristotle is actually doing both. In our hyper-specialized world today, that rarely happens.

RM: Could you define “metaphysics” broadly? What does it mean to Aristotle?

JS: “Metaphysics” is a modern word. It’s not one that Aristotle himself would have known. According to legend—and some might dispute the story—Aristotle’s lecture notes were lost for a while after his death. One of his pupils inherited them. The notes were just ignored for a century or two, and then they were rediscovered and edited. A series of works about the natural world were grouped together and given the title Physics. And a series of other works were then grouped together and called Metaphysica, meaning “after the physics.” So, the word doesn’t refer to something beyond the physical world or supernatural or anything like that. It’s just the work that comes after the Physics. Aristotle called the content of those works “first philosophy”—the fundamental questions that deal with the most basic facts about the nature of what exists. So, if we’re asking about the nature of being or existence, we’re asking the most fundamental question because it applies to everything. Then we can ask questions like “What’s a living being?” But that’s also a much narrower question because it doesn’t apply to everything that is, it’s just a subcategory.

by Riley Moore, Quillette |  Read more:
Image: Rembrandt's Aristotle with a Bust of Homer (1653)
[ed. See also: How to be an Aristotelian (Antigone).]

Thursday, March 2, 2023

Wayne Shorter (1933-2023)

In Memoriam: Wayne Shorter, 1933-2023 (Downbeat)

Being Hapa (Or Not)


Sunset in Waikiki: Tourists sipping mai tais crowded the beachside hotel bar. When the server spotted my friend and me, he seemed to relax. "Ah," he said, smiling. "Two hapa girls."

He asked if we were from Hawaii. We weren't. We both have lived in Honolulu — my friend lives there now — but hail from California. It didn't matter. In that moment, he recognized our mixed racial backgrounds and used "hapa" like a secret handshake, suggesting we were aligned with him: insiders and not tourists.

Like many multiracial Asian-Americans, I identify as hapa, a Hawaiian word for "part" that has spread beyond the islands to describe anyone who's part Asian or Pacific Islander. When I first learned the term in college, wearing it felt thrilling in a tempered way, like trying on a beautiful gown I couldn't afford. Hapa seemed like the identity of lucky mixed-race people far away, people who'd grown up in Hawaii as the norm, without "Chink" taunts, mangled name pronunciations, or questions about what they were.

Over time, as more and more people called me hapa, I let myself embrace the word. It's a term that explains who I am and connects me to others in an instant. It's a term that creates a sense of community around similar life experiences and questions of identity. It's what my fiancé and I call ourselves, and how we think of the children we might have: second-generation hapas.

But as the term grows in popularity, so does debate over how it should be used. Some people argue that hapa is a slur and should be retired. "[It] is an ugly term born of racist closed-mindedness much like 'half-breed' or 'mulatto,'" design consultant Warren Wake wrote to Code Switch after reading my piece on a "hapa Bachelorette."

Several scholars told me it's a misconception that hapa has derogatory roots. The word entered the Hawaiian language in the early 1800s, with the arrival of Christian missionaries who instituted a Hawaiian alphabet and developed curriculum for schools. Hapa is a transliteration of the English word "half," but quickly came to mean "part," combining with numbers to make fractions. (For example, hapalua is half. Hapaha is one-fourth.) Hapa haole — part foreigner — came to mean a mix of Hawaiian and other, whether describing a mixed-race person, a fusion song, a bilingual Bible, or pidgin language itself.

This original use was not negative, said Kealalokahi Losch, a professor of Hawaiian studies and Pacific Island studies at Kapi'olani Community College. "The reason [hapa] feels good is because it's always felt good," he told me. Losch has been one of the few to study the earliest recorded uses of the term, buried in Hawaiian-language newspapers, and found no evidence that it began as derogatory. Because the Hawaiian kingdom was more concerned with genealogy than race, he explained, if you could trace your lineage to a Hawaiian ancestor, you were Hawaiian. Mixed Hawaiian did not mean less Hawaiian.

Any use of hapa as a slur originated with outsiders, Losch said. That includes New England missionaries, Asian plantation workers and the U.S. government, which instituted blood quantum laws to limit eligibility for Hawaiian homestead lands. On the continental U.S., some members of Japanese-American communities employed hapa to make those who were mixed "feel like they were not really, truly Japanese or Japanese-American," said Duncan Williams, a professor of religion and East Asian languages and cultures at the University of Southern California. He said this history may have led some to believe the word is offensive. (...)

The desire of many Native Hawaiians to reclaim this word is often linked to a larger call for change. In Hawaii, a growing sovereignty movement maintains that the late 19th-century overthrow and annexation of the kingdom were illegal and the islands should again exercise some form of self-governance. But even within that movement opinions on hapa vary. I spoke with attorney Poka Laenui, who said he has been involved in the Hawaiian sovereignty movement for more than 40 years. He told me, in the "idea of aloha" — the complex blend that includes love, compassion and generosity — he doesn't mind if the term is shared. "If our word can be used to assist people in identifying and understanding one another, who am I to object?" he said.

Linguist and consultant Keao NeSmith told me he was shocked the first time he heard hapa outside of a Native Hawaiian context. NeSmith, who grew up on Kauai, learned more about the wider use of hapa when interviewed for a PRI podcast last year. Hearing the episode, his family and friends were shocked, too. "It's a new concept to many of us locals here in Hawaii to call Asian-Caucasian mixes 'hapa' like that," NeSmith said. "Not that it's a bad thing." (...)

That broad interpretation of the word may have its roots in Hawaii, where I have friends descended from Japanese and Chinese immigrants who grew up thinking hapa meant part Asian. Elsewhere in the islands, "hapa haole" continued to mean part Hawaiian. This makes literal sense in that "part foreigner" describes only what is different, with the dominant race or culture assumed. It's like how I might answer, "half Japanese" to "What are you?"-type questions; where whiteness is normalized, it doesn't have to be named.

by Akemi Johnson, NPR |  Read more:
Images: Jennifer Qian for NPR; and, Akemi Johnson
[ed. I grew up in Hawaii and am hapa (half Caucasian/half Japanese). All this racial slicing and dicing is a recent construct to me, important for reasons I can't quite fathom. From personal experience (in Hawaii), hapa always meant part Caucasian/part Asian, part Caucasian/part Hawaiian, part Caucasian/some sort of other race(s). Generally, you'd never hear anyone who wasn't partly Caucasian call themselves hapa if they were, say, just of mixed Asian races, or anything else (Hawaiian, Portuguese, Samoan, other Pacific Islanders, etc). Always Caucasian/something. And being hapa was valued, something aesthetically attractive, having no clear racial characteristics. If you were of mixed races, with no Caucasian element, you identified with whatever the predominant weighting was (eg: half Chinese and half a bunch of other stuff? Chinese). Not sure why this is more important these days. Eventually, we'll all be mutts anyway.]

Wednesday, March 1, 2023


Romare Bearden, Jamming at the Savoy, Brooklyn Museum 1981; and, Pittsburgh Memory (1964)
via: here and here


Brandan Henry, A Child's Dream, 2020
via:

via:

The Man Behind Bag Fees

John Thomas is a mild-mannered airline consultant, a cheerful native of Australia with a ready laugh who is known for throwing great parties at his Needham home. So why do his friends want to stick pins into a voodoo doll of his likeness?

Thomas, 54, is the guy who brought baggage fees to airlines in North America. He first advised carriers to start charging for checked luggage in 2008, setting off a chain reaction that saw one airline after another adopt the charge and opening the floodgates for a steady stream of other new fees.

Passengers fumed, but analysts say it was necessary at a time when jet fuel prices were soaring and the industry seemed near collapse.

Without the infusion of cash provided by baggage fees — which now generate more than $3 billion a year — some airlines might have shut down, said Jay Sorensen, president of the Wisconsin-based travel consulting firm IdeaWorksCompany.

“It was a tsunami of money,” Sorensen said. “I would credit bag fees with saving the industry that year.”

Bag fees don’t affect Thomas, though. He either flies on his seven-passenger Cessna Citation jet, which he keeps at Norwood Memorial Airport, or stuffs his belongings in a carry-on — even on a three-week trip to China.

His response to the irony? “Oops.” And then a sheepish giggle. (...)

Thomas works from LEK’s Boston office, where his main job is helping airlines make money, something the industry has needed desperately in recent years. [ed. This was written in 2013; fees continue to persist.]

Carriers were still recovering from losses after the 9/11 terrorist attacks when oil prices started to climb in 2007, eventually hitting record highs. Meanwhile, online travel sites made it easier for customers to compare prices and more difficult for airlines to raise fares to cover rising costs.

“They were looking at certain economic death,” Thomas said.

Thomas first proposed baggage fees to a Canadian carrier in 2006. Several low-cost airlines in Europe started imposing bag fees around that time, but none of the major North American carriers had. The Canadian airline, which Thomas can’t reveal under his contract, didn’t go along with baggage fees — or with another of Thomas’s suggestions, to sell teeth whitener to passengers on red-eye flights.

So, when a US airline came to him for ways to generate new revenue streams, Thomas had a solution in hand. He and his team did a route-by-route analysis for the carrier, which Thomas also can’t identify, determining that revenue gained from bag fees would more than offset any loss of passengers if competitors didn’t do the same.

Still, airline executives were nervous. If the airline lost more customers than he projected, Thomas’s reputation, and LEK’s, would have suffered. “It’s my head on the line,” he said.

On Feb. 4, 2008, United Airlines announced it would charge $25 for the second checked bag. Within two weeks, US Airways said it would do the same, and almost all the major carriers except Southwest Airlines followed, according to LEK.

In May that year, a week before United was to implement the fee, American Airlines said it would charge for the first bag, too.

The other carriers that were planning to make passengers pay for the second bag said they would also start charging for the first, with the exception of JetBlue Airways.

“We were ecstatic,” Thomas said. The success ushered in the era of airlines imposing fees for services once included in ticket prices, such as sitting in a window seat, while adding new charges for perks such as extra legroom and early boarding.

“Baggage fees were the first horse out of the barn and the door was never closed,” said Sorensen, of IdeaWorks.

The revenue stream that resulted has been credited with helping to stabilize the industry. From 2008 to 2011, non-ticket revenue reported by airlines around the world more than doubled, to $22.6 billion from $10.3 billion, according to IdeaWorks.

Passengers have grown resigned to these fees. Ken Lynch of Mont Vernon, N.H., usually travels with a carry-on and pays to board early so he doesn’t have to battle for space in overhead bins. The 6-foot-5 technology and banking consultant, who flies once or twice a month, also shells out for extra legroom.

“Everybody hates the airlines,” he said. “It’s the modern-day version of the stagecoach: It’s uncomfortable, cramped, and the air stinks. The only thing missing is the smell of horse manure.”

Thomas knows he’s not going to win any popularity contests among fliers. His wife, Paula Vanderhorst, gets a kick out of telling flight attendants that they have him to blame for all the passengers jamming carry-ons into overhead bins. (...)

But that won’t stop Thomas from dreaming up new ways for airlines to make money. He advised a British carrier to charge passengers $100 to guarantee that the seat next to them would be empty and recommended that another airline offer a service that picks up passengers’ bags at home and delivers them to their destination.

by Katie Johnston, Boston.com |  Read more:
Image: Josh Reynolds for The Boston Globe
[ed. God forbid some airline company goes out of business because it can't compete. I will never fly United, ever. Ever. And this guy? It's never one person, and these luggage fees and other pricing decisions were made by the airlines themselves, but hopefully there's some special place in hell for people who gladly make the lives of millions of others more miserable and costly (and are proud of it).]

via:

Subjective Ageing

The puzzling gap between how old you are and how old you think you are. There are good reasons you always feel 20 percent younger than your actual age.

This past Thanksgiving, I asked my mother how old she was in her head. She didn’t pause, didn’t look up, didn’t even ask me to repeat the question, which would have been natural, given that it was both syntactically awkward and a little odd. We were in my brother’s dining room, setting the table. My mother folded another napkin. “Forty-five,” she said.

She is 76.

Why do so many people have an immediate, intuitive grasp of this highly abstract concept—“subjective age,” it’s called—when randomly presented with it? It’s bizarre, if you think about it. Certainly most of us don’t believe ourselves to be shorter or taller than we actually are. We don’t think of ourselves as having smaller ears or longer noses or curlier hair. Most of us also know where our bodies are in space, what physiologists call “proprioception.”

Yet we seem to have an awfully rough go of locating ourselves in time. A friend, nearing 60, recently told me that whenever he looks in the mirror, he’s not so much unhappy with his appearance as startled by it—“as if there’s been some sort of error” were his exact words. (High-school reunions can have this same confusing effect. You look around at your lined and thickened classmates, wondering how they could have so violently capitulated to age; then you see photographs of yourself from that same event and realize: Oh.) The gulf between how old we are and how old we believe ourselves to be can often be measured in light-years—or at least a goodly number of old-fashioned Earth ones. (...)

But “How old do you feel?” is an altogether different question from “How old are you in your head?” The most inspired paper I read about subjective age, from 2006, asked this of its 1,470 participants—in a Danish population (Denmark being the kind of place where studies like these would happen)—and what the two authors discovered is that adults over 40 perceive themselves to be, on average, about 20 percent younger than their actual age. “We ran this thing, and the data were gorgeous,” says David C. Rubin (75 in real life, 60 in his head), one of the paper’s authors and a psychology and neuroscience professor at Duke University. “It was just all these beautiful, smooth curves.”
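[ed. The paper’s headline number reduces to a one-line rule of thumb. Here is a minimal Python sketch of that 20 percent average, using the mother’s ages quoted above — the 0.8 multiplier is the study’s aggregate finding, not a per-person prediction, and the snippet is just an illustration of the arithmetic, not anything from Rubin and Berntsen’s analysis:

    def predicted_subjective_age(actual_age: float) -> float:
        """Average subjective age implied by the ~20 percent rule (adults over 40)."""
        return 0.8 * actual_age

    # Ages quoted above: the author's mother is 76 and answers 45.
    rule_average = predicted_subjective_age(76)   # 60.8
    print(rule_average)
    # Her reported 45 is a bigger gap than the average -- the 20 percent
    # figure describes the population, not any one person.
]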

Why we’re possessed of this urge to subtract is another matter. Rubin and his co-author, Dorthe Berntsen, didn’t make it the focus of this particular paper, and the researchers who do often propose a crude, predictable answer—namely, that lots of people consider aging a catastrophe, which, while true, seems to tell only a fraction of the story. You could just as well make a different case: that viewing yourself as younger is a form of optimism, rather than denialism. It says that you envision many generative years ahead of you, that you will not be written off, that your future is not one long, dreary corridor of locked doors.

I think of my own numbers, for instance—which, though a slight departure from the Rubin-Berntsen rule, are still within a reasonable range (or so Rubin assures me). I’m 53 in real life but suspended at 36 in my head, and if I stop my brain from doing its usual Tilt-A-Whirl for long enough, I land on the same explanation: At 36, I knew the broad contours of my life, but hadn’t yet filled them in. I was professionally established, but still brimmed with potential. I was paired off with my husband, but not yet lost in the marshes of a long marriage (and, okay, not yet a tiresome fishwife). I was soon to be pregnant, but not yet a mother fretting about eating habits, screen habits, study habits, the brutal folkways of adolescents, the porn merchants of the internet.

I was not yet on the gray turnpike of middle age, in other words. (...)

Ian Leslie, the author of Conflicted and two other social-science books (32 in his head, 51 in “boring old reality”), took a similar view to mine and Richard’s, but added an astute and humbling observation: Internally viewing yourself as substantially younger than you are can make for some serious social weirdness.

“30 year olds should be aware that for better or for worse, the 50 year old they’re talking to thinks they’re roughly the same age!” he wrote. “Was at a party over the summer where average was about 28 and I had to make a conscious effort to remember I wasn’t the same—they can tell of course, so it’s asymmetrical.”

Yes. They can tell. I’ve had this unsettling experience, seeing little difference between the 30-something before me and my 50-something self, when suddenly the 30-something will make a comment that betrays just how aware she is of the age gap between us, that this gap seems enormous, that in her eyes I may as well be Dame Judi Dench.

by Jennifer Senior, The Atlantic |  Read more:
Image: Klaus Kremmerz
[ed. For me, it varies. Mostly around 45-55. But sometimes (say, where risk or self-control is involved) it's more like 17-21.]

OpenAI's "Planning For AGI And Beyond"

Planning For AGI And Beyond

Imagine ExxonMobil releases a statement on climate change. It’s a great statement! They talk about how preventing climate change is their core value. They say that they’ve talked to all the world’s top environmental activists at length, listened to what they had to say, and plan to follow exactly the path they recommend. So (they promise) in the future, when climate change starts to be a real threat, they’ll do everything environmentalists want, in the most careful and responsible way possible. They even put in firm commitments that people can hold them to.

An environmentalist, reading this statement, might have thoughts like:
  • Wow, this is so nice, they didn’t have to do this.
  • I feel really heard right now!
  • They clearly did their homework, talked to leading environmentalists, and absorbed a lot of what they had to say. What a nice gesture!
  • And they used all the right phrases and hit all the right beats!
  • The commitments seem well thought out, and make this extra trustworthy.
  • But what’s this part about “in the future, when climate change starts to be a real threat”?
  • Is there really a single, easily-noticed point where climate change “becomes a threat”?
  • If so, are we sure that point is still in the future?
  • Even if it is, shouldn’t we start being careful now?
  • Are they just going to keep doing normal oil company stuff until that point?
  • Do they feel bad about having done normal oil company stuff for decades? They don’t seem to be saying anything about that.
  • What possible world-model leads to not feeling bad about doing normal oil company stuff in the past, not planning to stop doing normal oil company stuff in the present, but also planning to do an amazing job getting everything right at some indefinite point in the future?
  • Are they maybe just lying?
  • Even if they’re trying to be honest, will their bottom line bias them towards waiting for some final apocalyptic proof that “now climate change is a crisis”, of a sort that will never happen, so they don’t have to stop pumping oil?
This is how I feel about OpenAI’s new statement, Planning For AGI And Beyond.

OpenAI is the AI company behind ChatGPT and DALL-E. In the past, people (including me) have attacked them for seeming to deprioritize safety. Their CEO, Sam Altman, insists that safety is definitely a priority, and has recently been sending various signals to that effect.

Planning For AGI And Beyond (“AGI” = “artificial general intelligence”, ie human-level AI) is the latest volley in that campaign. It’s very good, in all the ways ExxonMobil’s hypothetical statement above was very good. If they’re trying to fool people, they’re doing a convincing job!

Still, it doesn’t apologize for doing normal AI company stuff in the past, or plan to stop doing normal AI company stuff in the present. It just says that, at some indefinite point when they decide AI is a threat, they’re going to do everything right.

This is more believable when OpenAI says it than when ExxonMobil does. There are real arguments for why an AI company might want to switch from moving fast and breaking things at time t to acting all responsible at time t + 1. Let’s explore the arguments they make in the document, go over the reasons they’re obviously wrong, then look at the more complicated arguments they might be based off of.

Why Doomers Think OpenAI Is Bad And Should Have Slowed Research A Long Time Ago

OpenAI boosters might object: there’s a disanalogy between the global warming story above and AI capabilities research. Global warming is continuously bad: a temperature increase of 0.5 degrees C is bad, 1.0 degrees is worse, and 1.5 degrees is worse still. AI doesn’t become dangerous until some specific point. GPT-3 didn’t hurt anyone. GPT-4 probably won’t hurt anyone. So why not keep building fun chatbots like these for now, then start worrying later?

Doomers counterargue that the fun chatbots burn timeline.

That is, suppose you have some timeline for when AI becomes dangerous. For example, last year Metaculus thought human-like AI would arrive in 2040, and superintelligence around 2043.


Recent AIs have tried lying to, blackmailing, threatening, and seducing users. AI companies freely admit they can’t really control their AIs, and it seems high-priority to solve that before we get superintelligence. If you think that’s 2043, the people who work on this question (“alignment researchers”) have twenty years to learn to control AI.

Then OpenAI poured money into AI, did ground-breaking research, and advanced the state of the art. That meant that AI progress would speed up, and AI would reach the danger level faster. Now Metaculus expects superintelligence in 2031, not 2043 (although this seems kind of like an over-update), which gives alignment researchers eight years, not twenty.

So the faster companies advance AI research - even by creating fun chatbots that aren’t dangerous themselves - the harder it is for alignment researchers to solve their part of the problem in time.

This is why some AI doomers think of OpenAI as an ExxonMobil-style villain, even though they’ve promised to change course before the danger period. Imagine an environmentalist group working on research and regulatory changes that would have solar power ready to go in 2045. Then ExxonMobil invents a new kind of super-oil that ensures that, nope, all major cities will be underwater by 2031 now. No matter how nice a statement they put out, you’d probably be pretty mad!

Why OpenAI Thinks Their Research Is Good Now, But Might Be Bad Later

OpenAI understands the argument against burning timeline. But they counterargue that having the AIs speeds up alignment research and all other forms of social adjustment to AI. If we want to prepare for superintelligence - whether solving the technical challenge of alignment, or solving the political challenges of unemployment, misinformation, etc - we can do this better when everything is happening gradually and we’ve got concrete AIs to think about:
We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios […] As we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.
You might notice that, as written, this argument doesn’t support full-speed-ahead AI research. If you really wanted this kind of gradual release that lets society adjust to less powerful AI, you would do something like this:
  • Release AI #1
  • Wait until society has fully adapted to it, and alignment researchers have learned everything they can from it.
  • Then release AI #2
  • Wait until society has fully adapted to it, and alignment researchers have learned everything they can from it.
  • And so on . . .
Meanwhile, in real life, OpenAI released ChatGPT in late November, helped Microsoft launch the Bing chatbot in February, and plans to announce GPT-4 in a few months. Nobody thinks society has even partially adapted to any of these, or that alignment researchers have done more than begin to study them.

The only sense in which OpenAI supports gradualism is the sense in which they’re not doing lots of research in secret, then releasing it all at once. But there are lots of better plans than either doing that, or going full-speed-ahead.

So what’s OpenAI thinking? I haven’t asked them and I don’t know for sure, but I’ve heard enough debates around this that I have some guesses about the kinds of arguments they’re working off of. I think the longer versions would go something like this:

by Scott Alexander, Astral Codex Ten |  Read more:
Image: Metaculus/Sam Altman et al., uncredited
[ed. See also: How to navigate the AI apocalypse as a sane person (Intrinsic Perspective); and, Microsoft's new AI chatbot has been saying some 'crazy and unhinged things' (NPR).]

Tuesday, February 28, 2023

Why Is Everything So Ugly?

We live in undeniably ugly times. Architecture, industrial design, cinematography, probiotic soda branding — many of the defining features of the visual field aren’t sending their best. Despite more advanced manufacturing and design technologies than have existed in human history, our built environment tends overwhelmingly toward the insubstantial, the flat, and the gray, punctuated here and there by the occasional childish squiggle. This drab sublime unites flat-pack furniture and home electronics, municipal infrastructure and commercial graphic design: an ocean of stuff so homogenous and underthought that the world it has inundated can feel like a digital rendering — of a slightly duller, worse world.

If the Situationists drifted through Paris looking to get defamiliarized, today a scholar of the new ugliness can conduct their research in any contemporary American city — or upzoned American Main Street, or exurban American parking lot, or, if they’re really desperate, on the empty avenues of Meta’s Horizon Worlds. Our own walk begins across the street from our apartment, where, following the recent demolition of a perfectly serviceable hundred-year-old building, a monument to ugliness has recently besieged the block. Our new neighbor is a classic 5-over-1: retail on the ground floor, topped with several stories of apartments one wouldn’t want to be able to afford. The words THE JOSH have been appended to the canopy above the main entrance in a passionless font.

We spent the summer certain that the caution tape–yellow panels on The Josh’s south side were insulation, to be eventually supplanted by an actual facade. Alas, in its finished form The Josh really is yellow, and also burgundy, gray, and brown. Each of these colors corresponds to a different material — plastic, concrete, rolled-on brick, an obscure wood-like substance — and the overall effect is of an overactive spreadsheet. Trims, surfaces, and patterns compete for attention with shifty black windows, but there’s nothing bedazzling or flamboyant about all this chaos. Somehow the building’s plane feels flatter than it is, despite the profusion of arbitrary outcroppings and angular balconies. The lineage isn’t Bauhaus so much as a sketch of the Bauhaus that’s been xeroxed half a dozen times.

The Josh is aging rapidly for a 5-month-old. There are gaps between the panels, which have a taped-on look to them, and cracks in the concrete. Rust has bloomed on surfaces one would typically imagine to be rustproof. Every time it rains, The Josh gets conspicuously . . . wet. Attempts have been made to classify structures like this one and the ethos behind their appearance: SimCityist, McCentury Modern, fast-casual architecture. We prefer cardboard modernism, in part because The Josh looks like it might turn to pulp at the first sign of a hundred-year flood. (...)

The urban building boom that picked up in the wake of the Great Recession wasn’t a boom at all, at least not by previous booming standards: in the early 2010s, multifamily housing construction was at its lowest in decades. But low interest rates worked in developers’ favor, and what had begun as an archipelago of scattered development had coalesced, by the end of the Obama years, into a visual monoculture. At the global scale, supply chains narrowed the range of building materials to a generic minimum (hence The Josh’s pileup of imitation teak accents and synthetic stucco antiflourishes). At the local level, increasingly stringent design standards imposed by ever-more-cumbersome community approval processes compelled developers to copy designs that had already been rubber-stamped elsewhere (hence that same fake teak and stucco in identical boxy buildings across the country). The environment this concatenation of forces has produced is at once totalizing and meek — an architecture embarrassed by its barely architected-ness, a building style that cuts corners and then covers them with rainscreen cladding. For all the air these buildings have sucked up in the overstated conflict between YIMBYs (who recognize that new housing is ultimately better than no housing) and NIMBYs (who don’t), the unmistakable fact of cardboard modernism is that its buildings are less ambitious, less humane, and uglier than anyone deserves.

They’re also really gray. The Josh’s steel railings are gray, and its plastic window sashes are a slightly clashing shade of gray. Inside, the floors are made of gray TimberCore, and the walls are painted an abject post-beige that interior designers call greige but is in fact just gray. Gray suffuses life beyond architecture: television, corporate logos, product packaging, clothes for babies, direct-to-consumer toothbrushes. What incentives — material, libidinal, or otherwise — could possibly account for all this gray? In 2020, a study by London’s Science Museum Group’s Digital Lab used image processing to analyze photographs of consumer objects manufactured between 1800 and the present. They found that things have become less colorful over time, converging on a spectrum between steel and charcoal, as though consumers want their gadgets to resemble the raw materials of the industries that produce them. If The Man in the Gray Flannel Suit once offered a warning about conformity, he is now an inspiration, although the outfit has gotten an upgrade. Today he is The Man in the Gray Bonobos, or The Man in the Gray Buck Mason Crew Neck, or The Man in the Gray Mack Weldon Sweatpants — all delivered via gray Amazon van. The imagined color of life under communism, gray has revealed itself to be the actual hue of globalized capital. “The distinct national colors of the imperialist map of the world have merged and blended in the imperial global rainbow,” wrote Hardt and Negri. What color does a blended rainbow produce? Greige, evidently.

A lot of ugliness accretes privately, in the form of household goods, which can make it hard to see — except on the first of the month. Today’s perma-class of renters moves more frequently than ever before (inevitably to smaller apartments), and on moving day the sidewalks are transformed into a rich bazaar of objects significant for ugliness studies. We stroll past discarded pottery from wild sip ’n’ spin nights; heaps of shrunken fast fashion from Shein; dead Strategist-approved houseplants; broken Wirecutter-approved humidifiers; an ergonomic gaming chair; endless Ikea BILLYs, MALMs, LACKs, SKUBBs, BARENs, SLOGGs, JUNQQs, and FGHSKISs. Perhaps this shelf is salvageable — ? No, just another mass of peeling veneer and squishy particleboard. On one stoop sits a package from a direct-to-consumer eyewear company, and we briefly fantasize about a pair of glasses that would illuminate, They Live–style, the precise number of children involved in manufacturing each of these trashed items, or maybe the acreage of Eastern European old-growth trees.

It occurs to us, strolling past a pair of broken BuzzFeed Shopping–approved AirPods, that the new ugliness has beset us from both above and below. Many of the aesthetic qualities pioneered by low-interest-rate-era construction — genericism, non-ornamentation, shoddy reproducibility — have trickled down into other realms, even as other principles, unleashed concurrently by Apple’s slick industrial-design hegemon, have trickled up. In the middle, all that is solid melts into sameness, such that smart home devices resemble the buildings they surveil, which in turn look like the computers on which they were algorithmically engineered, which resemble the desks on which they sit, which, like the sofas at the coworking space around the corner, put the mid in fake midcentury modern. And all of it is bound by the commandment of planned obsolescence, which decays buildings even as it turns phones into bricks.

Beyond the sidewalk, the street — which is mostly for cars, key technology of the 20th-century assault on the city. Barthes wrote that the 1955 CitroĂ«n DS marked a welcome shift in the appearance in cars toward the “homely,” meaning that they’d begun to carry the comfortable livability of kitchens and household equipment. Today’s automobiles, far from being “the supreme creation of an era,” are homely in the other sense of the word. A contemporary mythologist could sort them into either hamsters or monoliths. Hamster cars (the Honda Fit, the Toyota Prius) are undoubtedly ugly, but in a virtuous way. The monolith cars (the Cadillac Escalade, the Infiniti QX80) possess a militaristic cast, as if to get to Costco one must first stop off at the local black site. No brand has embraced the ethos more than Tesla, with its tanklike Cybertruck. Even Musk’s more domesticated offerings feel like they’re in the surveillance business: sitting inside a Tesla is not unlike sitting inside a smartphone, while also staring at a giant smartphone.

by The Editors, N+1 |  Read more:
Image: Mark Krotov

Northern Lights Dazzle

Turnagain Arm, AK
Northern Lights dazzle in big swath of Alaska (ADN)
Image: Loren Holmes/ADN
[ed. See also: Dazzling aurora lit up Sunday-night sky (WPO/ADN)]

Obama Meet and Greet

[ed. Stumbled onto this again this morning and it never gets old (for me anyway) haha. Jordan Peele smashes it... "1/8th black." "Afternoon, my octoroon!" lol]

Yellow Tree
Image: Marz62 (Wikimedia Commons)

Monday, February 27, 2023

Everything Everywhere All at Once

It’s a testament to how far Hollywood has come in recent years that a mind-scrambling sci-fi action comedy, about a stressed Chinese American immigrant who has to save the multiverse, is leading the Oscars race with 11 nominations and is the favourite to win best picture – a standing reinforced by its sweep at the Screen Actors Guild awards on Sunday. The Academy likes serious prestige dramas; Everything Everywhere All at Once is anything but. It’s a ridiculously silly, outrageously hilarious and profoundly weird fantasy. And that’s exactly why it would be a worthy winner.

Made on a relatively modest budget of $25m by directing duo Daniel Kwan and Daniel Scheinert (collectively known as the Daniels), the surreal martial arts adventure seemingly came out of nowhere to become one of the biggest box office triumphs of the pandemic years. It’s increasingly rare these days for independent films to become commercial hits, but Everything Everywhere All at Once grossed more than $100m worldwide thanks to good old-fashioned word of mouth, with many fans heading back to the cinema for multiple viewings.

In an industry clogged with never-ending comic book adaptations, sequels, prequels and spin-offs, it takes balls, a febrile imagination and lots of googly eyes to come up with something genuinely surprising. Where else would you see a love scene enacted with plump hotdog fingers? Or fight sequences using a giant butt plug and a fanny pack as weapons? Or a lofty philosophical idea like nihilism represented by a huge, spinning bagel? (...)

All those ideas would be dismissed as mere gimmicks if the film didn’t have any heart to it, and that’s something Everything Everywhere All at Once has in buckets. If you take away the eye-popping visuals, multiverse battles and spectacular martial arts choreography, it boils down to a wholesome, universal story about family and the healing power of love and kindness.

by Ann Lee, The Guardian | Read more:
Image: YouTube
[ed. Looks interesting. Available on Showtime (and as an add-on to Amazon Prime if you sign up for a free 7-day trial). Also to be re-released in some theaters for some period of time (mileage may vary). See also: Screen Actors Guild awards 2023: Everything Everywhere All at Once breaks record for wins (Guardian).]

Saturday, February 25, 2023

Ming Smith, Julius+Joanne; and, Sun Ra Space II
[ed. See also: On Ming Smith (LRB); and, Ming Smith Shook Up Photography in the ’70s. Now, She Is Coming into Full View (ArtNews).]

I Think We’re Alone Now

I once drove to Forest Lawn Memorial Park. It was before Michael Jackson had his crypt there, but I remember finding Walt Disney’s grave and that of Gutzon Borglum, the sculptor of Mount Rushmore. A few writers are there too: Theodore Dreiser, who wrote well about department stores in Sister Carrie, and Clifford Odets, who believed shopping was one of America’s chronic diseases. After seeing the graves and spending an hour in the sweltering heat I went to the Glendale Galleria, not only a shopping mall of epic proportions but a space of infinite reprieve, with the world’s best air-conditioning.

‘The nature of these vast retail combinations,’ Dreiser wrote in 1900, ‘should they ever permanently disappear, will form an interesting chapter in the commercial history of our nation.’ Ray Bradbury saw the shopping strip as a ‘flowering out of a modest trade principle’, and his influence on the architects of the Glendale Galleria (built in 1976) was acknowledged by Jon Jerde, its principal designer, who was also responsible for the Mall of America in Minnesota (1992), the largest in the Western hemisphere, and the Bellagio Hotel and Casino in Las Vegas (1998). Jerde asked Bradbury to help him think about a project in San Diego, and he replied with a manifesto called ‘The Aesthetics of Lostness’, which still provides the best definition of the ambience of shopping malls, a feeling of comforting distraction and exciting misplacedness akin to foreign travel. ‘Jerde’s strongest precedent,’ Alexandra Lange writes in Meet Me by the Fountain: An Inside History of the Mall (Bloomsbury, £23), ‘came from the same environments for which Bradbury had already written scenarios: world’s fairs and theme parks, which shamelessly mashed up countries, decades, architectural styles and artificial topography in the interest of creating the most exciting visual narrative in the minimum quantity of space.’ ‘Artificial topography’ is very good; it precisely describes so many postwar built environments, from retail plazas to new towns, all of them founded on an idea of the way we might live if we were much better at living. (...)

Lange makes an interesting point about the patriotism of shopping. ‘During World War Two, female consumers were encouraged to plant victory gardens, cook with less meat, collect their scraps and save their pennies. In the postwar era, they were the target of a very different message: the patriotic thing to do was to spend.’ By the 1980s, this was a religion that included religion itself, but to focus too much on consumption would be to miss the special ambience of malls, where the form is so much more fun than the function. As with high flats or holiday camps, we begin to see the essence of these places only in the moment of their passing. Malls are playgrounds with parking. They are nightclubs without drinks and with muzak for music. They are billboards of aspiration and churches of boredom. You don’t wander round a shopping mall in order to be thrilled, but to overcome the wish to be thrilled; if you buy something, that’s fine, but you belong there just as much when you don’t. (To say you’re only shopping when you’re buying stuff is like saying you’re only a sexual person when you’re having sex.) That’s what teenagers understood: the mall was freedom with walls, a habitat much closer to their wants and not-wants than anything built by their parents.

Non-fans say they get lost in them, but getting lost is part of the point. You find your way back to the big stores, or you meet at the fountain. When a child is abducted, the mall can suddenly seem part of the abduction, having failed to protect those passing through its human engineering. That was the feeling in 1993 when the Merseyside toddler James Bulger was taken from the Strand Shopping Centre, as if the building itself was guilty of some terrible anomie. If you liked malls as much as I did as a teenager – Rivergate Mall in Irvine New Town, eat your heart out, and the shopping centre in the ‘plug-in city’ of Cumbernauld, now set to be demolished – you find it quite hard to admit all the bad things about them. ‘Go to the mall!’ the Jack Black character in the film of High Fidelity tells a naff customer who asks for an uncool record. That stung, but I knew what he meant. Malls had rubbish record shops. Malls had rubbish shops, full stop, but the shops were pretty much irrelevant. Malls are closing now, one after the other, but Lange is right when she tells us that the US is ‘over malled: the country has approximately 24 square feet of retail space for every American compared with ... 4.6 in the UK and 2.8 in China.’ As that space shrinks in real time, it grows in the imagination, and we think of Amazon aisles that stretch out beyond an invisible horizon, even as shopping malls become the industrial wastelands of the post-Trump era.

And so we look back. ‘During the 1970s,’ Lange writes, ‘a widening split developed between the commercial and academic branches of architecture. Malls ended up on the wrong side of the tracks: good architects design museums; bad architects design malls.’ That was the prevailing attitude, and Rem Koolhaas once referred to Jon Jerde, the Glendale architect, as Frank Gehry’s ‘evil twin’. This was just snobbery, of course: people who go to museums are thought to engage with the building they are in, while shoppers are thought not to notice they’re in a big shed or a bad copy of an Italian village. First: fuck off. Second: Gehry in fact was happy to design a mall in his early days, Santa Monica Place (1980), before the Disneyfying of ‘significant’ public buildings became a cultural clichĂ©. Pop culture has an admirable ability to make its own monuments, and from Dawn of the Dead and Fast Times at Ridgemont High through Mean Girls to The OC, the shopping mall is a place where human beings can be spotted at their most inscrutably social, their most poignantly alone, their most desirous and their most innocent. 

by Andrew O'Hagan, LRB |  Read more:
Image: uncreditable via web
[ed. I couldn't decide whether to include this or the next essay (both are great) on the Glendale mall wars, so included both.]

The Great LA Dumpling Drama

Perhaps we should start with the dumplings themselves, which are, of course, delicious. Worth the trip. Worth planning the trip around. Particularly the soup dumplings, or xiao long bao, which are — you could argue, and I would — the platonic ideal of the form: silky, broth-filled little clouds that explode inside your mouth upon impact. An all-timer of a dumpling.

And that, more or less, is the most you will hear about the food made at the wildly popular Taiwanese dumpling chain Din Tai Fung: It’s great, it’s a draw, it’s the reason for everything that follows.

The remainder of our story begins and ends and pretty much exclusively takes place in Glendale, California — a city of close to 200,000 that sits just 10 miles north of downtown Los Angeles.

Glendale, like other cities within the Greater LA region, is often unfairly provincialized. For example, my 101-year-old grandmother, a native Angeleno, still calls Glendale “Dingledale” and still complains about briefly living there about eight decades ago. These cities are — again, unfairly — given a kind of shorthand: Santa Monica’s got beaches; West Hollywood’s got good nightlife and (relatedly) the gays; Studio City’s got… a studio? So does Burbank. But Glendale: Glendale’s got more Armenians than almost anywhere but Armenia and also, malls.

Specifically, the two huge malls that dominate its downtown: the Glendale Galleria and the Americana at Brand. These malls are neighbors, separated by a single street (Central Avenue) and are even immediately next door to each other in places. And yet, they could not possibly be more different, in terms of… well, everything. Both have Apple Stores. And a Wetzel’s. But really, after Wetzel’s, that’s about it.

Since 2013, the sole San Fernando Valley outpost of Din Tai Fung has been located within the Americana at Brand, a glitzy outdoor mall that opened in 2008 and is owned and operated by Caruso, a real estate company named after its founder, CEO, and lone shareholder, Rick Caruso. Perhaps you’ve heard of him? He recently ran to be mayor of Los Angeles, spent $104 million of his estimated $4 billion doing so, and lost by nearly 10 points.

Late last summer, as Caruso’s campaign was gearing up to spend more on local TV ads than any mayoral candidate in the city’s history, word got out that Din Tai Fung was leaving Caruso’s biggest mall (in square footage), the Americana. Not just leaving. Din Tai Fung was moving across the street. To the much more indoor, much less “cool” mall: the Galleria.

This was odd — definitely unexpected — and great gossip for a certain type of Angeleno who is aware of both the Americana and the Galleria and the garlic green bean situation at Din Tai Fung. In the 1980s teen rom-com movie version of this, it was like the most attractive, high-achieving girl in high school — Din Tai Fung — suddenly dating someone — the Galleria — from a whole different social clique; the Lloyd Dobler of malls.

Part of this image of the Galleria as somehow lower status than the Americana is simply that it’s an older mall, from an older era of mall design and philosophy. When it opened, in 1976, the Galleria’s principal designer, Jon Jerde, was heavily influenced by an essay by the novelist Ray Bradbury, published in The Los Angeles Times WEST Magazine and titled “Somewhere to Go.” For another Jerde mall, in San Diego, Bradbury even wrote a manifesto of sorts called “The Aesthetics of Lostness” — a phrase that, as the writer Andrew O’Hagan recently put it, “still provides the best definition of the ambience of shopping malls, a feeling of comforting distraction and exciting misplacedness akin to foreign travel.”

When I consider the aesthetics of lostness, Jerde’s Galleria immediately springs to mind. Specifically, its many-leveled, labyrinthine parking garage where — once, and never again — I forgot to take a photo of where I’d parked my car and ended up walking from floor to floor, pressing my keys and trying to hear it honk for — and I’m not even exaggerating one little bit here — two hours and 50-some-odd minutes.

The absolute horror and confusion brought about by the Galleria’s parking structure is also a running joke on the Americana at Brand Memes account, a popular parody Twitter account that goofs on not just the Americana, but the Galleria and other malls throughout Los Angeles, as well as countless other extremely specific details about living in LA. It’s the sort of hyperlocal humor that, particularly in LA — which is not one city but many, and vast, and often lonely — helps bind the place together, reminding us of our common, shared experiences, like losing our car in a mall parking lot.

Last August, moments after news of the Din Tai Fung move broke, the man who runs the Americana at Brand Memes Twitter account was out to breakfast with his mother-in-law when his phone began buzzing. Something was up. The buzzing did not stop. Hmm, he thought. This is probably big. This man — let’s just call him Mike — checked his phone. Oh, wow, yes. “This was like when LeBron left Cleveland,” he said, recalling the moment he saw his replies and learned the news. This was months later; we were talking on the phone. I reminded Mike that LeBron left Cleveland twice: first for Miami, then Los Angeles — two cities that are quite a bit flashier than Cleveland. Was he saying the Galleria was like those cities?

“Right,” Mike told me. “Right. No. You know, I don’t really follow sports.” Also, the Americana is nothing like Cleveland. I mean, it’s got one of those Vegas Bellagio-style fountains that fires off streams of choreographed dancing water. Also: a whimsical steampunk parking lot elevator. And a Cheesecake Factory. And a trolley! The Americana’s aesthetics are decidedly not of lostness. There is no “excited misplacedness,” no sense of the foreign. It’s all quite calming and familiar because it’s more or less Walt Disney’s Main Street, U.S.A., a place that, even if you’ve never been, you know. “So, what city’s like the Galleria?” Mike asked me. I said I wasn’t sure. Milwaukee, maybe? (...)

The reasons behind Din Tai Fung up and leaving the Americana are, from one angle, pretty cut-and-dried. This was a business decision. Din Tai Fung had needed “more space for equipment upgrades” (their words, echoed by the official line from the Caruso camp: “[T]hey inquired about additional space [which] … we were unable to accommodate…”). The lease was coming up, and Brookfield Properties — which owns the Galleria — offered Din Tai Fung a location that was much bigger, with higher visibility, just across the street from the Americana’s Cheesecake Factory, smack in the middle of Central Avenue, and right at the main entrance of the Galleria where a Gap used to be. Keith Isselhardt, the Senior Vice President of Leasing at Brookfield who oversaw the deal, told me it was as simple as “one plus one equals three,” that the Galleria was, according to him, a property with “masses of asses,” and that they could put Din Tai Fung right on the corner of “Main and Main.”

by Ryan Bradley, Eater | Read more:
Image: Wonho Frank Lee