Sunday, May 25, 2025

On Life in the Shadow of the Boomers

Ideology, which was once the road to action, has become a dead end.
—Daniel Bell (1960)

Yuval Levin’s 2017 book The Fractured Republic: Renewing America’s Social Contract in the Age of Individualism has several interesting passages inside it, but none so interesting as Levin’s meditation on the generational frame that clouds the modern mind. Levin maintains that 21st-century Americans largely understand the last decades of the 20th century, and the first decades of the 21st, through the eyes of the Boomers. Many of the associations we have with various decades (say, the fifties with innocence and social conformity, or the sixties with explosive youthful energy), says Levin, had more to do with the life-stage in which Boomers experienced these decades than with anything objective about the decades themselves:
Because they were born into a postwar economic expansion, they have been an exceptionally middle-class generation, targeted as consumers from birth. Producers and advertisers have flattered this generation for decades in an effort to shape their tastes and win their dollars. And the boomers’ economic power has only increased with time as they have grown older and wealthier. Today, baby boomers possess about half the consumer purchasing power of the American economy, and roughly three-quarters of all personal financial assets, although they are only about one-quarter of the population. All of this has also made the baby boomers an unusually self-aware generation. Bombarded from childhood with cultural messages about the promise and potential of their own cohort, they have conceived of themselves as a coherent group to a greater degree than any generation of Americans before them.

Since the middle of the twentieth century they have not only shaped the course of American life through their preferences and choices but also defined the nation’s self-understanding. Indeed, the baby boomers now utterly dominate our understanding of America’s postwar history, and in a very peculiar way. To see how, let us consider an average baby boomer: an American born in, say, 1950, who has spent his life comfortably in the broad middle class. This person experienced the 1950s as a child, and so remembers that era, through those innocent eyes, as a simple time of stability and wholesome values in which all things seemed possible.

By the mid-1960s, he was a teenager, and he recalls that time through a lens of youthful rebellion and growing cultural awareness—a period of idealism and promise. The music was great, the future was bright, but there were also great problems to tackle in the world, and he had the confidence of a teenager that his generation could do it right. In the 1970s, as a twenty-something entering the workforce and the adult world, he found that confidence shaken. Youthful idealism gave way to some cynicism about the potential for change, recreational drugs served more for distraction than inspiration, everything was unsettled, and the future seemed ominous and ambiguous. His recollection of that decade is drenched in cold sweat.

In the 1980s, in his thirties, he was settling down. His work likely fell into a manageable groove, he was building a family, and concerns about car loans, dentist bills, and the mortgage largely replaced an ambition to transform the world. This was the time when he first began to understand his parents, and he started to value stability, low taxes, and low crime. He looks back on that era as the onset of real adulthood. By the 1990s, in his forties, he was comfortable and confident, building wealth and stability. He worried that his kids were slackers and that the culture was corrupting them, and he began to be concerned about his own health and wellness as fifty approached. But on the whole, our baby boomer enjoyed his forties—it was finally his generation’s chance to be in charge, and it looked to be working out.

As the twenty-first century dawned, our boomer turned fifty. He was still at the peak of his powers (and earnings), but he gradually began to peer over the hill toward old age. He started the decade with great confidence, but found it ultimately to be filled with unexpected dangers and unfamiliar forces. The world was becoming less and less his own, and it was hard to avoid the conclusion that he might be past his prime. He turned sixty-five in the middle of this decade, and in the midst of uncertainty and instability. Health and retirement now became prime concerns for him. The culture started to seem a little bewildering, and the economy seemed awfully insecure. He was not without hope. Indeed, in some respects, his outlook on the future has been improving a little as he contemplates retirement. He doesn’t exactly admire his children (that so-called “Generation X”), but they have exceeded his expectations, and his grandchildren (the youngest Millennials and those younger still) seem genuinely promising and special. As he contemplates their future, he does worry that they will be denied the extraordinary blend of circumstances that defined the world of his youth.

The economy, politics, and the culture just don’t work the way they used to, and frankly, it is difficult for him to imagine America two or three decades from now. He rebelled against the world he knew as a young man, but now it stands revealed to him as a paradise lost. How can it be regained? This portrait of changing attitudes is, of course, stylized for effect. But it offers the broad contours of how people tend to look at their world in different stages of life, and it shows how Americans (and, crucially, not just the boomers) tend to understand each of the past seven decades of our national life. This is no coincidence. We see our recent history through the boomers’ eyes. Were the 1950s really simple and wholesome? Were the 1960s really idealistic and rebellious? Were the 1970s aimless and anxious? Did we find our footing in the 1980s? Become comfortable and confident in the 1990s? Or more fearful and disoriented over the past decade and a half? As we shall see in the coming chapters, the answer in each case is not simply yes or no. But it is hard to deny that we all frequently view the postwar era in this way—through the lens of the boomer experience.

The boomers’ self-image casts a giant shadow over our politics, and it means we are inclined to look backward to find our prime. More liberal-leaning boomers miss the idealism of the flower of their youth, while more conservative ones, as might be expected, are more inclined to miss the stability and confidence of early middle age—so the Left yearns for the 1960s and the Right for the 1980s. But both are telling the same story: a boomer’s story of the America they have known. The trouble is that it is not only the boomers themselves who think this way about America, but all of us, especially in politics. We really have almost no self-understanding of our country in the years since World War II that is not in some fundamental way a baby-boomer narrative. [1]
When I first read this passage in 2018 I experienced it as a sort of revelation that suddenly unlocked many mysteries then turning in my mind.

To start with: The 1950s did not seem like an age of innocent idyll or bland conformity to the adults who lived through it. It was a decade when intellectual life was still attempting to come to terms with the horrors of World War II and the Holocaust. Consider a few famous book titles: Orwell’s 1984 (published 1949), Hersey’s The Wall (1950), Arendt’s The Origins of Totalitarianism (1951), Chambers’ Witness (1952), Miller’s The Crucible (1953), Bradbury’s Fahrenheit 451 (1953), Golding’s Lord of the Flies (1954), Pasternak’s Doctor Zhivago (1957), and Shirer’s Rise and Fall of the Third Reich (1960) were all intensely preoccupied with the weaknesses of liberalism and the allure of totalitarian solutions. For every optimistic summons to Tomorrowland, there was a Lionel Trilling, Reinhold Niebuhr, or Richard Hofstadter ready to declare Zion forever out of reach, hamstrung by the irony and tragedy of the American condition. Nor was it the wholesome era of memory. An age we associate with childlike obedience saw its children as anything but obedient—witness the anxiety of the age in films like The Wild One (1953), Rebel Without a Cause (1955), and Blackboard Jungle (1955). This age of innocence saw the inaugural issue of Playboy, the books Lolita (1955) and Peyton Place (1956) hitting the New York Times Fiction best seller list, the Kinsey reports topping the Non-fiction best seller list, and Little Richard inaugurating rock ‘n roll with the lyrics
Good Golly Miss Molly, sure like to ball
When you’re rocking and rolling
Can’t hear your mama call.
And that is all without considering a lost war in Korea, the tension of the larger Cold War, and the tumult of the Civil Rights revolution. We may think of the 1950s as an age of conformity, purity, and stability, but those who lived through it as adults experienced it as an age of fragmentation, permissiveness, and shattered innocence.[2]

Levin explains why our perception of the era differs so much from the perceptions of the adults who lived through it. We see it as an age of innocence because we see it through the eyes of the Boomers, who experienced this age as children. But his account also helps explain something else—that odd feeling I have whenever I watch YouTube clips of a show like What’s My Line. Though products of American pop culture, those shows seem like relics from an alien world, an antique past more different in manners and morals from the America of 2020 than many foreign lands today. However, this eerie feeling of an alien world does not descend upon me when I see a television show from the 1970s. The past may be a different country, but the border is not crossed until we hit 1965.

This observation is not mine alone. In his new book, The Decadent Society: How We Became Victims of Our Own Success, Ross Douthat describes it as a more general feeling, a feeling expressed in many corners on the 30th anniversary of the 1985 blockbuster Back to the Future. The plot of that film revolves around a contemporary teenager whisked back via time machine to the high school of his parents, 30 years earlier. When the film’s anniversary hit in 2015, many commented that the same plot could not work today. The 1980s simply seemed far too similar to the 2010s for the juxtaposition to entertain. Douthat explains why this might be so:
A small case study: in the original Back to the Future, Marty McFly invaded his father’s sleep dressed as “Darth Vader from the planet Vulcan.” The joke was that the pop culture of the 1960s and 1970s could be passed off as a genuine alien visitation because it would seem so strange to the ears of a 1950s teen. But thirty years after 1985, the year’s biggest blockbuster was a Star Wars movie about Darth Vader’s grandkid… which was directed by a filmmaker, J. J. Abrams, who was coming off rebooting Star Trek… which was part of a wider cinematic landscape dominated by “presold” comic-book properties developed when the baby boomers were young. A Martina McFly visiting the Reagan-era past from the late 2010s wouldn’t have a Vader/ Vulcan prank to play, because her pop culture and her parents’ pop culture are strikingly the same….
by Tanner Greer, The Scholar's Stage |  Read more:
Image: via

Friday, May 23, 2025

The Linda Problem

Conjunctive fallacies

A conjunction effect or Linda problem is a bias or mistake in reasoning where adding extra details (an "and" statement, or logical conjunction) to a sentence makes it appear more likely. Logically, this is not possible, because adding more claims can make a true statement false, but cannot make a false statement true: If A is true, then A+B might be false (if B is false). However, if A is false, then A+B will always be false, regardless of what B is. Therefore, A+B cannot be more likely than A.
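In probability notation, this is the standard conjunction rule (a textbook identity, included here for clarity, not part of the original article):

```latex
P(A \wedge B) = P(A) \cdot P(B \mid A) \le P(A), \quad \text{since } 0 \le P(B \mid A) \le 1
```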

Definition and basic example

The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman.
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.
The majority of those asked chose option 2. However, this is logically impossible: if Linda is a bank teller active in the feminist movement, then she is a bank teller. Therefore, it is impossible for 2 to be true while 1 is false, so the probabilities are at most equal. (...)

Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic (an easily calculated procedure) called representativeness to make this kind of judgment: Option 2 seems more "representative" of Linda from the description of her, even though it is clearly mathematically less likely.

In other demonstrations, they argued that a specific scenario seemed more likely because of representativeness, but each added detail would actually make the scenario less and less likely. In this way it could be similar to the misleading vividness fallacy. More recently, Kahneman has argued that the conjunction fallacy is a type of extension neglect.

Joint versus separate evaluation

In some experimental demonstrations, the conjoint option is evaluated separately from its basic option. In other words, one group of participants is asked to rank-order the likelihood that Linda is a bank teller, a high school teacher, and several other options, and another group is asked to rank-order whether Linda is a bank teller and active in the feminist movement versus the same set of options (without "Linda is a bank teller" as an option). In this type of demonstration, different groups of subjects still rank-order Linda as a bank teller and active in the feminist movement more highly than Linda as a bank teller.

Separate evaluation experiments preceded the earliest joint evaluation experiments, and Kahneman and Tversky were surprised when the effect was observed even under joint evaluation.

Other examples

While the Linda problem is the best-known example, researchers have developed dozens of problems that reliably elicit the conjunction fallacy.

Tversky & Kahneman (1981)

The original report by Tversky & Kahneman (later republished as a book chapter) described four problems that elicited the conjunction fallacy, including the Linda problem. There was also a similar problem about a man named Bill (a good fit for the stereotype of an accountant — "intelligent, but unimaginative, compulsive, and generally lifeless" — but not a good fit for the stereotype of a jazz player), and two problems where participants were asked to make predictions for events that could occur in 1981.

Policy experts were asked to rate the probability that the Soviet Union would invade Poland, and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%.

In an experiment conducted in 1980, respondents were asked the following:
Suppose Björn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely. 
  • Borg will win the match 
  • Borg will lose the first set
  • Borg will lose the first set but win the match 
  • Borg will win the first set but lose the match
On average, participants rated "Borg will lose the first set but win the match" more likely than "Borg will lose the first set". However, winning the match is only one of several potential eventual outcomes after having lost the first set. The first and the second outcome are thus more likely (as they only contain one condition) than the third and fourth outcome (which depend on two conditions).
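The same rule governs the Borg rankings. With purely hypothetical numbers chosen for illustration: even if Borg were a heavy favorite to come back after dropping the first set, the conjunction can never overtake its single condition:

```latex
P(\text{lose set 1} \wedge \text{win match}) = P(\text{lose set 1}) \cdot P(\text{win match} \mid \text{lose set 1}) \le P(\text{lose set 1})
```

For instance, if P(lose set 1) = 0.2 and P(win match | lose set 1) = 0.9 (made-up values), the conjunction comes to 0.18, still below 0.2.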

Tversky & Kahneman (1983)

Tversky and Kahneman followed up their original findings with a 1983 paper that looked at dozens of new problems, most of these with multiple variations. The following are a couple of examples.
Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you choose appears on successive rolls of the die.
  1. RGRRR
  2. GRGRRR
  3. GRRRRR
65% of participants chose the second sequence, though option 1 is contained within it and is shorter than the other options. In a version where the $25 bet was only hypothetical, the results did not significantly differ. Tversky and Kahneman argued that sequence 2 appears "representative" of a chance sequence (compare to the clustering illusion).
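A minimal Monte Carlo sketch (our illustration, not from the paper; the trial count is an arbitrary choice) makes the containment argument concrete. Because the last five letters of GRGRRR are RGRRR, every 20-roll run containing option 2 also contains option 1, so option 1 must win the $25 bet at least as often:

```python
import random

# Estimate the probability that each candidate string appears somewhere
# in 20 rolls of a die with four green (G) and two red (R) faces.
def appearance_prob(target: str, trials: int = 100_000, rolls: int = 20) -> float:
    hits = 0
    for _ in range(trials):
        seq = "".join(random.choices("GR", weights=[4, 2], k=rolls))
        if target in seq:
            hits += 1
    return hits / trials

for option in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(option, f"{appearance_prob(option):.3f}")

# Expected ordering: RGRRR scores highest, because any run of the die
# containing GRGRRR necessarily contains RGRRR as well -- the conjunction
# cannot be more likely than its conjunct.
```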

by Wikipedia |  Read more:
[ed. Very common. Mexican immigrant/Illegal Mexican immigrant.]

Patrick Fry, Suwun
via:

Thursday, May 22, 2025

Abundance and China

Could America pursue an abundance agenda without the threat of the PRC? And can podcasters change the world?

To discuss, ChinaTalk interviewed Ezra Klein and Derek Thompson, who need no introduction, as well as Dan Wang, who has written all those beautiful annual letters and is back in the US as a research fellow at Kotkin’s Hoover History Lab. He has an excellent book called Breakneck coming out this August, but we’re saving that show for a little later this year.

Today, our conversation covers…
  • The use of China as a rhetorical device in US domestic discourse,
  • Oversimplified aspects of Chinese development, and why the bipartisan consensus surrounding Beijing might fail to produce a coherent strategy,
  • The abundance agenda and technocratic vs prophetic strategies for policy change,
  • How to conceptualize political actors complexly, including unions, corporations, and environmental groups,
  • The value of podcasting and strategies for positively impacting the modern media environment.

Ezra Klein: ... Going back to at least the 2010s, probably before, I’ve begun to really notice this feeling in American politics that they can build and we can’t. This became a pathway through which different kinds of bipartisan legislation that would not otherwise have been bipartisan began to emerge.

The re-emergence of industrial policy in America is 100% about China. Take China out of the equation, and there is no re-emergence of American industrial policy. It’s reasonable from the American perspective, when you’re trying to understand American politics, to understand China as an American political object, because that’s what it actually is in our discourse.

American policymakers don’t understand China at all. Most of what they think about it has a high chance of proving to be dangerously misguided. Dan will be much more expert here than I will, but I’m very skeptical of the bipartisan consensus that has emerged. Nevertheless, it’s completely trackable that China exerts a force on American politics. It has reshaped the American political consensus, often in ways that operate in the shadows because they don’t become part of the major partisan fights of modern American politics. (...)

Dan Wang: I would always be the first person to put my hand up to say I know nothing about what’s going on in China. That is always true...

China is very messy. That is always my first proposition about China — it is very big, and many things are true about China all at the same time. They are a country that claims to be pursuing “socialism with Chinese characteristics,” which is still one of the most wonderful political science terms ever.

What sort of socialism is this? In my view, this is one of the most right-wing regimes in the world. A country that would make any American conservative salivate in terms of its immigration restrictions, its incredible amount of manufacturing prowess, and its enforcement of very traditional gender roles in which men have to be very macho and women have to bear children.

China is all of these things. It is also a place where there are really wonderful bike paths, specifically in Shanghai. This year, Shanghai has completed around 500 parks. By 2030, they want to create 500 more parks. It is a country that is getting better and getting worse all at the same time.

Ezra Klein: This goes back to this idea of envy — the degree to which the right envies China is fascinating. It doesn’t just want to compete with it or beat it. It’s not just afraid of it. What it wants is to be more like it.

America’s politicians are so obsessed with trying to take manufacturing back from China, which I don’t think they have a well-thought-through approach to doing, that they look quite ready to give up America’s financial power. They seem to have reconceived of dollar dominance, which used to be called the “exorbitant privilege” because we got so many advantages from it, as some sort of terrible weakness that has hollowed out our industrial base and that we need to shatter.

Throughout history, being the power that controls the money flows has proven to be an extraordinary lever of control. But it has been recast in current New Right thinking as a sort of feminized decadence — something that “not real” countries and “not real” powers do, a distraction from the “real economy” and the “real work” of making things.

I’m not against bringing back manufacturing. I support the CHIPS Act. There are many aspects of manufacturing that I would like to bring back. But we can become so envious that it becomes hard to see our own advantages and strengths, and then make serious policy built on what we are doing well. That strikes me as one of the profound weaknesses of Washington’s approach to policymaking. It is so obsessed with what we are not doing well that it seems ready to set fire to what we are doing well.

Dan Wang: Edward Luttwak has this term “great state autism,” which he created regarding the US thinking about the Soviet Union. There is certainly an aspect, once you are a “superpower,” of becoming obsessed with the other party. You have to choose your enemies very carefully because you will end up looking quite a lot like them.


I wonder in which way the US is actually quite mimetic in thinking about how to be like the other superpower. In my sense, China — after the 2008 financial crisis, or perhaps after 2012 when Xi came into power — Beijing decided it does not really want to look too much like the US, which has been driven by Wall Street on one coast and Silicon Valley on the other in terms of economic growth.

Rather, Beijing has this purely mercantilist view, which would be recognizable to anyone in the 18th century, which is, “Let’s just make a ton of products. That is our source of power, that is our source of advantage.” (...)

I definitely want to defend the dulcet tones of both Ezra and Derek, but as an amateur member of the community of China watchers, there are debates that aren’t easily resolved. For example, a question I would pose to US policymakers would be: Do you judge it is in America’s interest that China is richer, or is America better off if China is poorer? Having that answer would help structure many subsequent policy choices.

There is debate within the China community about how expansionist China is. They certainly want Taiwan — no question there. But is the next step that they want to take Vietnam, Philippines, as well as Japan? People are extensively debating this. When we can answer these more technocratic questions and reach some agreement, many things become easier.

This isn’t about Ezra’s show, but in the US there aren’t many experts really trying to debate and resolve these questions. In my field studying Chinese technology development and manufacturing, policymakers frequently use the laziest trope that China got where it is totally through stealing. This is easily disprovable, yet we hear it all the time. As long as we can’t move beyond these tropes, it becomes much more difficult to resolve even the harder questions.

Ezra Klein: ... My views are actually quite weak on many of these things. There are areas where I have very strong views about how America should build more and faster. A big portion of the book Derek and I wrote is fundamentally motivated, as we say at the end, by competition with China. We believe we won’t continue thriving as a nation in terms of our own strength if we don’t get better at manufacturing, construction, deployment, innovation, and cyclical experimental policy. There’s something for us to learn and compete with there.

On the narrower level, there’s a view that has taken hold in Washington that some version of decoupling is the way forward. One place where I’m uncertain — not certain I disagree, but the conventional view is so dominant that I’m more interested in the counter-argument — is Tom’s argument from the Huawei campus and his other experiences. He suggests we should do with China in the 2020s what we did with Japan in the 1980s and 1990s when they were outcompeting us on cars: create joint ventures in America where we develop their technological and manufacturing processes and embed them in our own companies. China did this with us too.

In Washington, this is considered virtually unsayable. I’d like to hear a better argument against it than I’ve heard because it’s not obvious that our current approach will accelerate the sophistication of our manufacturing chains.

My view is similar to Dan’s — I’d like us to have more precise conversations about means and ends. But that’s difficult in the current political atmosphere where you have to out-compete others to be symbolically tough or hawkish. (...)

Regarding what we need to do to accelerate our manufacturing and innovative ecosystems, the question of whether we should be decoupling or trying to couple and do tech transfer, engaging in more direct competition with products like Chinese EVs while heavily subsidizing our own industries with clear goals — that doesn’t seem completely crazy to me.

by Jordan Schneider, Ezra Klein, Derek Thompson and Dan Wang, ChinaTalk |  Read more:
Image: YouTube/Zhong Zaiben (钟在本)
[ed. Nice to read such a thoughtful discussion of US/China policy. More informed than most. I've exchanged a couple emails with Dan Wang and am very much looking forward to his upcoming book Breakneck. If you're unfamiliar with Dan and his annual essays on everything China, I highly recommend you check out his: 2023 letter and 2022 letter.]

Don McCullin, Taking Gifts to the Sea Gods (Bali, 1982)

Krasnov Theory

In February 2025, a rumor circulated online that U.S. President Donald Trump was recruited as an "asset" by Russian intelligence in the late 1980s and given the codename "Krasnov," following allegations from a former Soviet and Kazakh security official, Alnur Mussayev. 

The claim spread on TikTok, Facebook and X, where one account published a thread in response to the rumor, purporting to tie together evidence to support it. (...)

That user wrote: "Now that it's been reveals that Trump has been a Russian asset for 40 years named Krasnov by the FSB, I will write a simple thread of various pieces of information that solidifies the truth of everything I've written." At the time of publishing this article, the thread had been viewed more than 10 million times.
  • In February 2025, Alnur Mussayev, a former Soviet and Kazakh security official, claimed in a Facebook post that U.S. President Donald Trump was recruited in 1987 by the KGB, the intelligence agency of the Soviet Union, and assigned the code name "Krasnov."
  • Mussayev's post didn't state whether he personally recruited Trump or simply knew about the recruitment, nor did it state whether Trump actively participated in espionage or was just a potential asset.
  • Trump did visit Moscow in 1987, but there is no clear evidence suggesting he was actively recruited by the KGB during that trip or at any other time.
  • Mussayev's allegations that Trump was recruited by the KGB at that time don't line up with Mussayev's documented career path. Several biographies of him on Russian-language websites suggest that at the time Trump was supposedly recruited, Mussayev was working in the Soviet Union's Ministry of Internal Affairs, not the KGB.
  • Trump's pro-Russia stance (compared with other U.S. presidents) has fed into past allegations that he is a Russian asset — for instance, the 2021 book "American Kompromat" featured an interview with a former KGB spy who also claimed the agency recruited Trump as an asset. Again, however, there is no clear evidence supporting this claim.
The claim gained traction when the news website The Daily Beast published a now-deleted story (archived) titled "Former Intelligence Officer Claims KGB Recruited Trump," using only Mussayev's Facebook post as a source. The article described Mussayev's allegations as "unfounded." We contacted The Daily Beast to ask why the story was deleted and will update this story if we receive a response.

We also reached out to Mussayev for comment on the story and will update if he responds.

Meanwhile, Snopes readers wrote in and asked us whether the rumor that Trump was recruited to be a Russian asset was true. Here's what to know:

The allegations don't line up with official records

The allegations originated from a Facebook post that Mussayev published on Feb. 20, 2025 (archived). The post alleged that in 1987, the KGB recruited a "40-year-old businessman from the USA, Donald Trump, nicknamed 'Krasnov.'" Mussayev claimed he was serving in the KGB's Moscow-based Sixth Directorate at the time, and it was "the most important direction" of the department's work to recruit businessmen from "capitalist countries."

Mussayev's post didn't specify whether Trump participated in any spying, only that he was recruited. In an earlier post (archived) from July 18, 2018, he described Trump's relationship with Russian President Vladimir Putin as follows:
Based on my experience of operational work at the KGB-KNB, I can say for sure that Trump belongs to the category of perfectly recruited people. I have no doubt that Russia has a compromise on the President of the United States, that for many years the Kremlin promoted Trump to the position of President of the main world power.
Trump did visit Moscow in 1987, reportedly to look at possible locations for luxury hotels. However, several Russian-language websites (of unknown trustworthiness) with short biographies of Mussayev revealed a discrepancy: While Mussayev claimed he worked in the Sixth Directorate of the KGB in 1987, those online biographies, including one from the Moscow State Institute of International Relations, placed him in Kazakh KGB counterintelligence from 1979 until 1986, when he moved to the Soviet Union's Ministry of Internal Affairs.

It is absolutely possible that the public timeline of Mussayev's work history was established by the KGB as a cover for more covert activities. At face value, however, information on Mussayev's background does not completely align with what he claims.

Other sources corroborated that the Sixth Directorate's main focus was not foreign intelligence. The journalist and author W. Thomas Smith Jr.'s book "Encyclopedia of the Central Intelligence Agency" states that the directorate was responsible for "enforcing financial and trade laws, as well as guarding against economic espionage," in line with the counterintelligence descriptions present in the online biographies. Meanwhile, the First Chief Directorate was the KGB's main espionage arm.

by Amelia Clarke and Jack Izzo, Snopes |  Read more:
Image: Twitter
[ed. True? Does it matter? Results are the same.]

Wednesday, May 21, 2025

AI, Cartoons and Animation


[ed. See also: What if Making Cartoons Becomes 90% Cheaper? (NYT). Better cat videos?]
via: YouTube/X

No part of the entertainment business has more to lose — and gain — from A.I. than animation.

The $420 billion global animation industry (movies, television cartoons, games, anime) has long been dominated by computer-generated imagery; Walt Disney Animation hasn’t released a hand-drawn film since 2011. In other words, unlike much of Hollywood, animation companies are not technophobic.

Even with computers, however, the process of making an animated movie (or even a cartoon) remains extraordinarily expensive, requiring squadrons of artists, animators, graphic designers, 3-D modelers and other craftspeople. Studios have a big incentive to find a more efficient way, and A.I. can already do many of those things far faster, with far fewer people.

Jeffrey Katzenberg, a former chairman of Walt Disney Studios and a co-founder of DreamWorks Animation, has predicted that by next year, it will take only about 50 people to make a major animated movie, down from 500 a decade ago. If he were founding DreamWorks today, Mr. Katzenberg said of A.I. on a recent episode of the podcast “The Speed of Culture,” he would be “jumping into it hook, line and sinker.” (NYT)

via:

Cool Tips

Cool tips
via:
[ed. Probably won't remember any of them (except maybe the broken key).]

Kazuo Ishiguro Reflects on Never Let Me Go, 20 Years Later

On the Decades-Long Creative Process Behind His Most Successful Novel

While I’d been busy writing my fourth and fifth novels, my study had mysteriously transformed itself around me into a kind of miniature indoor jungle. Everywhere were dusty mountains of scribbled-on pages and precarious towers of folders.

In the spring of 2001, however, I began work on my new novel with renewed energy, having just had the room entirely refurbished to my own exacting specifications. I now had well-ordered shelves up to the ceiling and—something I’d wanted for years—two writing surfaces that met at a right angle. My study felt, if anything, even smaller than before (I’ve always preferred to write in small rooms, my back to any view), but I was immensely pleased with it. I’d tell anyone interested how it was like being ensconced in the sleeper compartment of a period luxury train: all I had to do was revolve my chair and reach out a hand to get whatever it was I needed.

One such item now readily accessible was a box file on the shelf to my left marked “Students Novel.” It contained handwritten notes, spidery diagrams, and some typed pages deriving from two separate attempts I’d made—in 1990, then in 1995—to write the novel that was to become Never Let Me Go. On each occasion I’d abandoned the project and gone on to write a completely unrelated novel.

Not that I needed to bring down the file very often: I was quite familiar with its contents. My “students” had no university anywhere near them, nor resembled at all the sort of characters encountered in, say, The Secret History or the “campus novels” of Malcolm Bradbury and David Lodge. Most importantly, I knew they were to share a strange destiny, one that would drastically shorten their lives, yet make them feel special, even superior.

But what was this “strange destiny”—the dimension I hoped would give my novel its unique character?

The answer had continued to elude me throughout the previous decade. I’d toyed with scenarios involving a virus, or exposure to nuclear materials. I even dreamt up once a surreal sequence in which a young hitchhiker, late at night on a foggy motorway, thumbs down a convoy of vehicles and is given a lift in a lorry hauling nuclear missiles across the English countryside.

Despite such flourishes, I’d remained dissatisfied. Every conceit I came up with felt too “tragic,” too melodramatic, or simply ludicrous. Nothing I could conjure would come close to matching the needs of the novel I felt I could see dimly before me in the mists of my imagination.

But now in 2001, as I returned to the project, I could feel something important had changed—and it was not just my study. (...)
***
There might have been other factors around at that time: Dolly the Sheep, history’s first cloned mammal, adorning the fronts of newspapers in 1997; the writing of my two previous novels (The Unconsoled, When We Were Orphans) making me feel more sure-footed about taking deviations from everyday “reality.” In any case, my third attempt at “the Students Novel” went differently to before.

I even had a kind of “eureka” moment—though I was in the shower, not a bath. I suddenly felt I could see before me the entire story. Images, compressed scenes, ran through my mind. Oddly I didn’t feel triumphant or even especially excited. What I recall today is a sense of relief that a missing piece had finally fallen into place, and along with it a kind of melancholy, mixed with something almost like queasiness.

I went about auditioning three different voices for my narrator, having each one narrate the same event over a couple of pages. When I showed the three samples to Lorna, my wife, she picked one without hesitation—a choice that concurred with my own.

After that I worked, by my standards, pretty rapidly in my refurbished study, completing a first draft (albeit in horribly chaotic prose) within nine months. I then worked on the novel for a further two years, throwing away around eighty pages from near the end, and going over and over certain passages.
***
In the twenty years since its publication in 2005, Never Let Me Go has become my most-read book. (In hard sales terms, it overtook quite quickly The Remains of the Day despite the latter’s sixteen years’ head start, Booker Prize win, and the acclaimed James Ivory film.) The novel has been widely studied in schools and universities, and translated into over fifty languages. It has been adapted into a movie (with Carey Mulligan, Keira Knightley, and Andrew Garfield as Kathy, Ruth, and Tommy—and a superb screenplay, appropriately, by Alex Garland); a Japanese stage play directed by the great Yukio Ninagawa; a ten-part Japanese TV series starring Haruka Ayase; and most recently a British stage play written by Suzanne Heathcote.

This has meant that over the years I’ve been asked many questions about the novel, not just from a range of readers, but from writers, directors, and actors wrestling with the task of transferring this story into a new medium. Reflecting on these questions today, it occurs to me that the great majority of them can be gathered into two broad categories.

The first might be summarized by this question: “Given the awful fate that hangs over these young people, why don’t they run away, or at least show more signs of rebellion?”

The second group of FAQs is slightly harder to characterize, but essentially comes down to: “Is this a sad, bleak book or is it an uplifting, positive one?”

I’m not going to attempt here to answer either of the above, partly because I don’t wish to give spoilers in an introduction, but also because I feel quite content, even proud, that this novel should provoke such questions in readers’ minds. I will however make the following observation—which may possibly make greater sense after you’ve finished the book.

It seems to me that these most-asked questions about Never Let Me Go arise because of tensions concerning its metaphorical identity. Is this story a metaphor about evil man-made systems that already exist today—or are in imminent danger of existing—ushered in by uncontrolled innovations in science and technology? Or, alternatively, is the novel offering a metaphor for the fundamental human condition—the necessary limits of our natural lifespans; the inescapability of aging, sickness, and death; the various strategies we adopt to give our lives meaning and happiness in the time we have allotted to us?

It may be both a strength and a weakness of this novel that it often wishes to be both of the above at one and the same time, thereby setting certain elements of the story in conflict with one another.

by Kazuo Ishiguro, Lit Hub |  Read more:
Image: Never Let Me Go 

Tuesday, May 20, 2025

'News' in 2025: Eye of the Beholder

Our qualitative research (conducted with 57 Americans in August 2024) and a survey of 9,482 U.S. adults in March 2025 confirm the idea that what news is varies greatly from person to person. And each decides what news means to them and which sources they turn to based on a variety of factors — including their own identities and interests.

These findings build on decades of our own and others’ research on the changing dynamics of news consumption, illuminating key distinctions between what news was and what it is today.

What was news?

Before the rise of digital and social media, researchers had long approached the question of what news is from the journalist perspective. Ideas of news were generally tied to the institution of journalism as media “gatekeepers” determined what was newsworthy, producing and packaging information with a particular tone or set of values for a passive audience.

“The journey we’re on is 30 years ago, the platforms or places where you could be told something you don’t know aside from being personally told by a friend of yours was very limited,” said Nicholas Johnston, the publisher of Axios. “Now, it’s essentially infinite.”

What is news?

In the digital age, when people are exposed to more information from more sources than ever, researchers — including at Pew Research Center — increasingly study news from the audience perspective, as audiences themselves define what “news” is.

“That makes everyone like a wire service editor,” said ProPublica managing editor Tracy Weber. “That’s good. That’s also bad because they’re not trained to be the best wire service editor.”

The reality of exactly how audiences make these assessments is complicated. News is less central to most people’s experiences on digital platforms and social media than journalists often hope. Some work — including our own — finds the definitions people hold for news are not always consistent with their actual behaviors. There are also clashes between what people believe others think of as news and what “feels” like news to them.

Our study finds that people generally say something is more likely to be news if it is factual, up to date, important to society and unbiased. But more than half of Americans also say it’s at least somewhat important for their news sources to have political views similar to their own.

Opinions on whether something is “news” or “not news” also aren’t black and white. Research has found that people view “news” as more of a continuum than a simple yes-or-no question. People classify content as more or less “news-like,” and this varies across platforms and sources, as well as from one person to the next.

Platforms and sources

The rise of digital and social media platforms has changed how people experience news. Where people encounter information can influence what they experience as news, and their standards often vary depending on how and where they find it.

For instance, when our qualitative research participants scrolled through their own social media feeds, they drew from cues related to both the topic and its source — including that source’s political orientation — to make case-by-case evaluations about whether the information was news or not.

It is a reflection of the changing nature of how news is distributed that audiences can, and regularly do, make distinctions about what kinds of content they see online. Part of the scrolling experience is making snap judgments about whether content is news and whether to engage with it.

Individuals: Each person brings their own mindset and approach to navigating today’s information environment. For instance, many of our participants acknowledged that their personal identities — including their age, race, ethnicity, gender, religion and especially their political leaning — influence how they consume and think about news.

It also influences how they feel about it. Many of the most common emotions people associate with the news they get are negative: angry, sad, scared, confused.

These feelings can vary depending on people’s political opinions and the political context. Our March 2025 survey found that Democrats are more likely than Republicans to say the news they get makes them feel angry, sad and scared.

But the vast majority of Americans also say news makes them feel informed at least sometimes. And despite all the complications and challenges of the current news environment, many of our participants also view news as an essential part of their life.

by Kirsten Eddy, Nieman Lab |  Read more:
Image: via
[ed. See also: What is news, anyway? (NL).]


A pair of slip shade wall light fixtures from the late 1920s and early 1930s. Manufactured by the Lincoln Company.
via:

Art Deco house, San Francisco
via:

Monday, May 19, 2025


via:

Farhad Moshiri (Iranian, 1963-2024), Control Room, 2004. Embroidery on black velvet, 50 x 68 cm

Iggy Pop


“Iggy Pop, lead singer of The Stooges, a late 1960s/early 1970s rock band influential in the development of the nascent hard rock and punk rock genres. In this interview he talks about the method behind the madness.” [ed. here's the song they played that night - Five Foot One]

Wiktor Jackowski (Polish, 1987), suspended river, 2025, oil on canvas, 90 x 120 cm
via:

May 18, 2025: Big Bad Billionaire Bill

AKA: Medicaid Death Watch

Tonight, late on a Sunday night, the House Budget Committee passed what Republicans are calling their “Big, Beautiful Bill” to enact Trump’s agenda, although it had failed on Friday when far-right Republicans voted against it, complaining it did not make deep enough cuts to social programs.

The vote tonight was a strict party line vote, with 16 Democrats voting against the measure, 17 Republicans voting for it, and 4 far-right Republicans voting “present.” House speaker Mike Johnson (R-LA) said there would be “minor modifications” to the measure; Representative Chip Roy (R-TX) wrote on X that those changes include new work requirements for Medicaid and cuts to green energy subsidies.

And so the bill moves forward.

In The Bulwark today, Jonathan Cohn noted that Republicans are in a tearing hurry to push that Big, Beautiful Bill through Congress before most of us can get a handle on what’s in it. Just a week ago, Cohn notes, there was still no specific language in the measure. Republican leaders didn’t release the piece of the massive bill that would cut Medicaid until last Sunday night and then announced the Committee on Energy and Commerce would take it up not even a full two days later, on Tuesday, before the nonpartisan Congressional Budget Office could produce a detailed analysis of the cost of the proposals. The committee markup happened in a 26-hour marathon in which the parts about Medicaid happened in the middle of the night. And now, the bill moves forward in an unusual meeting late on a Sunday night. (...)

Cohn explains that Medicaid cuts are extremely unpopular, and the Republicans hope to jam those cuts through by claiming they are cutting “waste, fraud, and abuse” without leaving enough time for scrutiny. Cohn points out that if they are truly interested in savings, they could turn instead to the privatized part of Medicare, Medicare Advantage. The Congressional Budget Office estimates that cutting overpayments to Medicare Advantage when private insurers “upcode” care to place patients in a higher risk bracket, could save more than $1 trillion over the next decade.

Instead of saving money, the Big, Beautiful Bill actually blows the budget deficit wide open by extending the 2017 tax cuts for the wealthy and corporations. The Congressional Budget Office estimates that those extensions would cost at least $4.6 trillion over the next ten years. And while the tax cuts would go into effect immediately, the cuts to Medicaid are currently scheduled not to hit until 2029, enabling the Republicans to avoid voter fury over them in the midterms and the 2028 election. [ed. emphasis added]

The prospect of that debt explosion led Moody’s on Friday to downgrade U.S. credit for the first time since 1917, following Fitch, which downgraded the U.S. rating in 2023, and Standard & Poor’s, which did so back in 2011. “If the 2017 Tax Cuts and Jobs Act is extended, which is our base case,” Moody’s explained, “it will add around $4 trillion to the federal fiscal primary (excluding interest payments) deficit over the next decade. As a result, we expect federal deficits to widen, reaching nearly 9% of GDP by 2035, up from 6.4% in 2024, driven mainly by increased interest payments on debt, rising entitlement spending and relatively low revenue generation.” (...)

The continuing Republican insistence that spending is out of control does not reflect reality. In fact, discretionary spending has fallen more than 40% in the past 50 years as a percentage of gross domestic product, from 11% to 6.3%. What has driven rising deficits are the George W. Bush and Donald Trump tax cuts, which had added $8 trillion and $1.7 trillion, respectively, to the debt by the end of the 2023 fiscal year.

But rather than permit those tax cuts to expire— or even to roll them back— the Republicans continue to insist Americans are overtaxed. In fact, the U.S. is far below the average of the 37 other nations in the Organization for Economic Cooperation and Development, an intergovernmental forum of democracies with market economies, in its tax levies. According to a report by the Center for American Progress in 2023, if the U.S. taxed at the average OECD level, over ten years it would have an additional $26 trillion in revenue. If the U.S. taxed at the average of European Union nations, it would have an additional $36 trillion. (...)

So with the current Big, Beautiful Bill, we are looking at a massive transfer of wealth from ordinary Americans to those at the top of American society. The Democratic Women’s Caucus has dubbed the measure the “Big Bad Billionaire Bill.” (...)

Speaker Johnson hopes to pass the bill through the House of Representatives by this Friday, before Memorial Day weekend.

by Heather Cox Richardson, Letters from an American |  Read more:
Image: Speaker of the House Mike Johnson (R-La.). Bill Clark/CQ-Roll Call, Inc via Getty Images

Sunday, May 18, 2025

The Internet is for Extremism

Everything seems insane on the internet.

It’s 2023, and Donald Trump still dominates American political discussion. The internet is filled with wild MAGA nonsense, and if you follow politics online you’ve probably also learned what a Tankie is against your will. But it’s not just politics - everything seems insane. Influencers are doing crazier and crazier stunts to go viral. Pop culture fights happen more often and with more venom. Niche communities seem to fall into deranged niche drama more easily than ever.

To understand how Donald Trump used the internet to take over American politics - and why everything else is also going insane - we first need to understand MrBeast.

You have to go bigger

YouTuber MrBeast could fairly claim to be the biggest online content creator in the world. He’s the most-subscribed individual creator on YouTube. He has more than 290 million followers across his YouTube channels and his videos have collected more than 45 billion views. And it’s possible that no one in the world has thought as deeply about how to go viral as he has.

MrBeast has talked at length about his obsession with YouTube, producing content, and going viral. He often talks about how he’s been uploading videos since he was 11 years old, how he’s probably spent 40,000 hours discussing content creation tactics for the YouTube platform. He faked going to community college to live at home with his parents and make content 15 hours a day. He’s the kind of guy who has extremely detailed (and evidence backed) opinions about the facial expressions that go on video thumbnails, how often a video should jump cut, and what types of videos will get views. So it’s worthwhile to think about some of his earliest viral videos, what he’s making now, and what it says about the nature of virality. (...)

MrBeast was 19 and a small-time YouTuber, nowhere near a household name. He was offered the biggest sponsorship he’d ever been offered to date - 5,000 dollars - and his immediate reaction was ‘Double that and let me give it away to a random homeless person’. He ended up being right, and the video went insanely viral. He knew that the bigger the number (especially if it could break into five digits and be 10,000 dollars) the better the video would do.

The instinct to go bigger has informed virtually everything MrBeast has done since then. He soon had a new video giving away $20,000 to homeless people, then $100,000, then an actual house. He is always pushing the limits, doing bigger and wilder and more, and not just when it comes to giving away money. He’s driven through the same drive-through 1000 times straight. He spent four million dollars to enact a real life Squid Game and bought a train so he could run it off a cliff. He spent two days buried alive in a coffin and a week stranded on a raft in the middle of the ocean. He’s given away a private island and cured 1000 blind and deaf people.

The biggest and probably most knowledgeable content creator on the planet has one philosophy - if you want people to watch, push things to the extreme. And this rule doesn’t just govern YouTube videos. It governs everything we do online.

The MrBeastification of Everything

Think of a relatively normal and uncontroversial thing to be a fan of. Let’s pick hot sauce - imagine yourself as a hot sauce aficionado. If you were living 30 years ago before social media, your options were pretty limited. Maybe you’d know a couple restaurants nearby with pretty spicy food. Maybe you read an extremely niche hot sauce magazine that would publish twice a year for a tiny audience. Maybe you knew of a mail order company that sold some really hot stuff, hotter than you could get at the supermarket. But fundamentally, even as an obsessive fan, your options were pretty limited.

Today, your options are not limited. There are dedicated hot sauce forums online. There are mountains of social content analyzing hot sauce, discussing hot sauce, watching celebrities eat incredibly hot chicken wings. There’s a hot sauce subreddit with hundreds of thousands of subscribers where people have incredibly strong opinions about this topic.

We can even measure this empirically - the hottest pepper in the world today is up to 10x as hot as the world’s hottest pepper in the 1990s. And it’s also far more accessible. You can buy hotter sauces than ever before, easier than ever before, and it’s all thanks to the power of the internet. Hot sauce is undergoing MrBeastification - always pushing for hotter.

There are times where it seems like this extremism is happening to everything online. Celebrity fandoms are more extreme than ever - in fact, it’s no longer enough to be a wildly deranged stan, you must also engage in the anti-fandoms that are now common. Sorority rush has gone from a relatively understated affair to a giant social media production requiring intense planning. Our financial scams have gone from straightforward ponzi schemes to meme stocks and cryptocurrencies that border on being actual cults. Fringe beliefs in every field - economics, vaccinology, history - are flourishing. The more conspiratorial and extreme the view, the better it tends to do on the internet. Everything is being pushed to be the most extreme version of itself.

The Incentives We’re Chasing

What’s causing this? It’s the social web. There are a couple of structural ways in which the internet empowers and incentivizes extremism. (...)

Fundamentally, the social dynamics of the internet turbo-charge this extremism. When the amount of content available online is near-infinite, why wouldn’t you gravitate towards the content that is the most of whatever it is you’re searching for? Why watch a video where a cook bakes a 10 pound cake when you could watch a 50 pound cake? Why watch someone give away a thousand dollars when you could watch someone else give away a hundred thousand dollars? MrBeast recognized this early on and he’s correct - this is how things work with social content. People always want bigger and crazier and more extreme.

by Jeremiah Johnson, Infinite Scroll |  Read more:
Image: MrBeast
[ed. Reveal parties, celebrity outfits, weed, $400 million gift planes... everything.]