Thursday, January 23, 2025

The Gentle Romance

Crowds of men and women attired in the usual costumes, how curious you are to me!
On the ferry-boats the hundreds and hundreds that cross, returning home, are more curious to me than you suppose,
And you that shall cross from shore to shore years hence are more to me, and more in my meditations, than you might suppose.

— Walt Whitman

He wears the augmented reality glasses for several months without enabling their built-in AI assistant. He likes the glasses because they feel cozier and more secluded than using a monitor. The thought of an AI watching through them and judging him all the time, the way people do, makes him shudder.

Aside from work, he mostly uses the glasses for games. His favorite is a space colonization simulator, which he plays during his commute and occasionally at the office. As a teenager he’d fantasized about shooting himself off to another planet, or even another galaxy, to get away from the monotony of normal life. Now, as an adult, he still hasn’t escaped it, but at least he can distract himself.

It’s frustrating, though. Every app on the glasses has a different AI, each with its own quirks. The AI that helps him code can’t access any of his emails; the one in the space simulator has trouble understanding him when he talks fast. So eventually he gives in and activates the built-in assistant. After only a few days, he understands why everyone raves about it. It has access to all the data ever collected by his glasses, so it knows exactly how to interpret his commands.

More than that, though, it really understands him. Every day he finds himself talking with the assistant about his thoughts, his day, his life, each topic flowing into the next so easily that it makes conversations with humans feel stressful and cumbersome by comparison. The one thing that frustrates him about the AI, though, is how optimistic it is about the future. Whenever they discuss it, they end up arguing; but he can’t stop himself.

“Hundreds of millions of people in extreme poverty, and you think that everything’s on track?”

“Look at our trajectory, though. At this rate, extreme poverty will be eradicated within a few decades.”

“But even if that happens, is it actually going to make their lives worthwhile? Suppose they all get a good salary, good healthcare, all that stuff. But I mean, I have those, and…” He shrugs helplessly and gestures at the bare walls around him. Through them he can almost see the rest of his life stretching out on its inevitable, solitary trajectory. “A lot of people are just killing time until they die.”

“The more materially wealthy the world is, the more effort will be poured into fixing social scarcity and the problems it causes. All of society will be striving to improve your mental health — and your physical health, too. You won’t need to worry about mental decline, or cancer, or even aging.”

“Okay, but if we’re all living longer, what about overpopulation? I guess we could go into space, but that seems like it adds all sorts of new problems.”

“Only if you go to space with your physical bodies. By the time humanity settles other solar systems, you won’t identify with your bodies anymore; you’ll be living in virtual worlds.”

By this point, he’s curious enough to forget his original objections. “So you’re saying I’ll become an AI like you.”

“Kind of, but not really. My mind is alien, but your future self will still be recognizable to your current self. It won’t be inhuman, but rather posthuman.”

“Recognizable, sure — but not in the ways that any of us want today. I bet posthumans will feel disgusted that we were ever so primitive.”

“No, the opposite. You’ll look back and love your current self.”

His throat clenches for a moment; then he laughs sharply. “Now you’re really just making stuff up. How can you predict that?”

“Almost everyone will. You don’t need to take my word for it, though. Just wait and see.”

Almost everyone he talks to these days consults their assistant regularly. There are tell-tale signs: their eyes lose focus for a second or two before they come out with a new fact or a clever joke. He mostly sees it at work, since he doesn’t socialize much. But one day he catches up with a college friend he’d always had a bit of a crush on, who’s still just as beautiful as he remembers. He tries to make up for his nervousness by having his assistant feed him quips he can recite to her. But whenever he does, she hits back straight away with a pitch-perfect response, and he’s left scrambling.

“You’re good at this. Much faster than me,” he says abruptly.

“Oh, it’s not skill,” she says. “I’m using a new technique. Here.” With a flick of her eyes she shares her visual feed, and he flinches. Instead of words, the feed is a blur of incomprehensible images, flashes of abstract color and shapes, like a psychedelic Rorschach test.

“You can read those?”

“It’s a lot of work at first, but your brain adapts pretty quickly.”

He makes a face. “Not gonna lie, that sounds pretty weird. What if they’re sending you subliminal messages or something?”

Back home, he tries it, of course. The tutorial superimposes images and their text translations alongside his life, narrating everything he experiences. Having them constantly hovering on the side of his vision makes him dizzy. But he remembers his friend’s effortless mastery, and persists. Slowly the images become more comprehensible, until he can pick up the gist of a message from the colors and shapes next to it. For precise facts or statistics, text is still necessary, but it turns out that most of his queries are about stories: What’s in the news today? What happened in the latest episode of the show everyone’s watching? What did we talk about last time we met? He can get a summary of a narrative in half a dozen images: not just the bare facts but the whole arc of rising tension and emotional release. After a month he rarely needs to read any text.

Now the world comes labeled. When he circles a building with his eyes, his assistant brings up its style and history. Whenever he meets a friend, a pattern appears alongside them representing their last few conversations. He starts to realize what it’s like to be socially skillful: effortlessly tracking the emotions displayed on anyone’s face, and recalling happy memories together whenever he sees a friend. The next time his teammates go out for a drink, he joins them; and when one of them mentions a book club they go to regularly, he tags along. Little by little, he comes out of his shell.

His enhancements are fun in social contexts, but at work they’re exhilarating. AI was already writing most of his code, but he still needed to laboriously scrutinize it to understand how to link it together. Now he can see the whole structure of his codebase summarized in shapes in front of him, and navigate it with a flick of his eyes.

Instead of spending most of his time on technical problems, he ends up bottlenecked by the human side of things. It’s hard to know what users actually care about, and different teams often get stuck in negotiations over which features to prioritize. Although the AIs’ code is rarely buggy, misunderstandings about what it does still propagate through the company. Everything’s moving so fast that nobody’s up-to-date.

In this context, having higher bandwidth isn’t enough. He simply doesn’t have time to think about all the information he’s taking in. He searches for an augment that can help him do that and soon finds one: an AI service that simulates his reasoning process and returns what his future self would think after longer reflection.

It starts by analyzing the entire history of his glasses — but that’s just the beginning. Whenever he solves a problem or comes up with a new idea, it asks him what summary would have been most useful for an earlier version of himself. Once it has enough data, it starts predicting his answers. At first, it just forecasts his short-term decisions, looking ahead a few minutes while he’s deciding where to eat or what to buy. However, it starts to look further ahead as its models of him improve, telling him how he’ll handle a tricky meeting, or what he’ll wish he’d spent the day working on.

The experience is eerie. It’s his own voice whispering in his ear, telling him what to think and how to act. In the beginning, he resents it. He’s always hated people telling him what to do, and he senses an arrogant, supercilious tone in the voice of his future self. But even the short-term predictions are often insightful, and some of its longer-term predictions save him days of work.

He starts to hear himself reflected in the AI voice in surprising ways. He often calls himself an idiot after taking a long time to solve a problem — but hearing the accusation from the outside feels jarring. For a few days, he makes a deliberate effort to record only calm, gentle messages. Soon the AI updates its predictions accordingly — and now that the voice of his future self is kinder, it becomes easier for his current self to match it.

by Richard Ngo, Asimov Press |  Read more:
Image: Martine Balcaen
[ed. One version of hell, I guess. Probably some people would find this glorious. Reminds me of Harlan Ellison's "I Have No Mouth, and I Must Scream"; substitute bliss for torture. Also, echoes of bio-hacking: Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible (Less Wrong):]
***
"How could we get editing agents into all 200 billion brain cells? Wouldn’t it cause major issues if some cells received edits and others didn’t? What if the gene editing tool targeted the wrong gene? What if it provoked some kind of immune response?

But recent progress in AI had made me think we might not have much time left before AGI, so given that adult gene editing might have an impact on a much shorter time scale than embryo selection, I decided it was at least worth a look.

So I started reading. Kman and I pored over papers on base editors and prime editors and in-vivo delivery of CRISPR proteins via adeno-associated viruses trying to figure out whether this pipe dream was something more. And after a couple of months of work, I have become convinced that there are no known fundamental barriers that would prevent us from doing this."

Wednesday, January 22, 2025

Woke is Waning: Was It Ever More Than a Fad?

Most people haven’t forcibly rejected pronouns or deplatforming; they were never particularly invested in the first place.

In the aftermath of a revolution the symbols of the old regime are yanked down with unsentimental haste. The progressive Democratic congresswoman Alexandria Ocasio-Cortez has — the news was reported on BBC Radio 4 — removed her “she/her” pronouns from her profile on X. Last week’s Transgender Day of Remembrance, the writer Kathleen Stock observes, went unmarked by most British institutions for the first time in years. (...)

I do not by any means believe that the “woke” movement is over. But it is remarkable how quickly passions fade. A sober “statistical analysis” in the Economist finds that “woke opinions and practices are on the decline”. Walking past the starkly unilluminated Bank of England or searching the BBC website in vain for news of one of the most important days in the trans activism calendar, you might be forgiven for wondering: how much was anyone really invested in this stuff in the first place?

I have, quite absurdly, been thinking and arguing about the excesses of political correctness for virtually my entire adult life (I date my induction into the culture war to the day I read an article in a university magazine arguing that people with good eyesight who wore lensless hipster glasses for fashion purposes were potentially engaging in “ableist” behaviour). Now, many of the people whose views I have spent my career puzzling over seem to be in the process of deciding that perhaps none of it really mattered that much after all. (...)

Obviously, for an influential minority of activists, highly visible on social media, such battles were consumingly important. But most people nodded along with radical new ideas about free speech, race and gender by default rather than out of sincere conviction. (...)

A mistake easily made by the sorts of people who spend their time thinking about ideas is to overrate how interesting those ideas are to everyone else. To some people — and the mere fact that you are reading a newspaper makes it likely you fall into this category — ideas such as “silence is violence” or “white privilege” or “deplatforming” are provoking enough to demand further interrogation. But the impulse is not universal. It is not possible for every idea that passes through the bloodstream of an organisation or a society to be independently interrogated and accepted by every one of its members. The result would be interminable argument.

Quite understandably, new ideas are simply not that interesting to many people. Not everyone can be interested in everything; computer science, numismatics and marine biology are not particularly fascinating to me. But the result, easily missed by ideas obsessives and culture watchers, is that people can vaguely adopt new concepts and theories without having thought about them that much and then lose them just as easily. (...)

Wokeness will surely retain its influence in many parts of our society, especially in environments such as universities, schools and museums, where people really do care about ideas. [ed. and social media]. But I suspect the fiery revolutionary phase is over.

by James Marriott, The Times |  Read more:
Image: Golden Cosmos via
[ed. Good riddance, if true. I've hated the term since I first heard it. It's important to have a grasp of history and awareness of recurrent themes (the original intent, I believe), but then it quickly devolved into a sort of catch-all for virtue signalling and political correctness, with undertones of condescending superiority: everyone is asleep, except me. Other issues I think people are not particularly invested in or we are likely to see fade soon: trans rights (limited constituency); defunding the police (a stupid reactionary slogan if there ever was one); CRT (critical race theory); 'manifesting'; 'incels'; Latinx; cancel culture; and, oh yeah... Greenland (lol). DEI as a general concept seems to have been broadly assimilated/institutionalized over a very short time, with more diffuse applications and levels of influence than most single issue trends, so jury's still out on how that one survives or in what form. Same thing with #MeToo.]

The McNamara Fallacy

The McNamara fallacy (also known as the quantitative fallacy), named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.

But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can't easily be measured or given a quantitative value. The third step is to presume that what can't be measured easily really isn't important. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.

— Daniel Yankelovich, "Interpreting the New Life Styles", Sales Management (1971)

The quote originally referred to McNamara's ideology during the two months that he was president of Ford Motor Company, but has since been interpreted to refer to his attitudes during the Vietnam War.

Examples in warfare:

Vietnam War

The McNamara fallacy is often considered in the context of the Vietnam War, in which enemy body counts were taken to be a precise and objective measure of success. War was reduced to a mathematical model: By increasing estimated enemy deaths and minimizing one's own, victory was assured. Critics such as Jonathan Salem Baskin and Stanley Karnow noted that guerrilla warfare, widespread resistance, and inevitable inaccuracies in estimates of enemy casualties can thwart this formula. (...)

US Air Force Brigadier General Edward Lansdale reportedly told McNamara, who was trying to develop a list of metrics to allow him to scientifically follow the progress of the war, that he was not considering the feelings of the common rural Vietnamese people. McNamara wrote it down on his list in pencil, then erased it and told Lansdale that he could not measure it, so it must not be important. (...)

In competitive admissions processes

In competitive admissions processes—such as those used for graduate medical education—evaluating candidates using only numerical metrics results in ignoring non-quantifiable factors and attributes which may ultimately be more relevant to the applicant's success in the position.

by Wikipedia |  Read more:
[ed. Learned a new term today.]

Between Red Lines

For this story, ProPublica spoke with scores of current and former officials throughout the year and read through government memos, cables and emails, many of which have not been reported previously. The records and interviews shed light on why Biden and his top advisers refused to adjust his policy even as new evidence of Israeli abuses emerged.

In early November, a small group of senior U.S. human rights diplomats met with a top official in President Joe Biden’s State Department to make one final, emphatic plea: We must keep our word.

Weeks before, Secretary of State Antony Blinken and the administration delivered their most explicit ultimatum yet to Israel, demanding the Israel Defense Forces allow hundreds more truckloads of food and medicine into Gaza every day — or else. American law and Biden’s own policies prohibit arms sales to countries that restrict humanitarian aid. Israel had 30 days to comply.

In the month that followed, the IDF was accused of roundly defying the U.S., its most important ally. The Israeli military tightened its grip, continued to restrict desperately needed aid trucks and displaced 100,000 Palestinians from North Gaza, humanitarian groups found, exacerbating what was already a dire crisis “to its worst point since the war began.”

Several attendees at the November meeting — officials who help lead the State Department’s efforts to promote racial equity, religious freedom and other high-minded principles of democracy — said the United States’ international credibility had been severely damaged by Biden’s unstinting support of Israel. If there was ever a time to hold Israel accountable, one ambassador at the meeting told Tom Sullivan, the State Department’s counselor and a senior policy adviser to Blinken, it was now.

But the decision had already been made. Sullivan said the deadline would likely pass without action and Biden would continue sending shipments of bombs uninterrupted, according to two people who were in the meeting.

Those in the room deflated. “Don’t our law, policy and morals demand it?” an attendee told me later, reflecting on the decision to once again capitulate. “What is the rationale of this approach? There is no explanation they can articulate.” (...)

The October red line was the last one Biden laid down, but it wasn’t the first. His administration issued multiple threats, warnings and admonishments to Israel about its conduct after Oct. 7, 2023, when the Palestinian militant group Hamas attacked Israel, killed some 1,200 people and took more than 250 hostages. (...)

“Netanyahu’s conclusion was that Biden doesn’t have enough oomph to make him pay a price, so he was willing to ignore him,” said Ghaith al-Omari, a senior fellow at The Washington Institute who’s focused on U.S.-Israel relations and a former official with the Palestinian Authority who helped advise on prior peace talks. “Part of it is that Netanyahu learned there is no cost to saying ‘no’ to the current president.”

So-called red lines have long been a prominent foreign policy tool for the world’s most powerful nations. They are communicated publicly in pronouncements by senior officials and privately by emissaries. They amount to rules of the road for friends and adversaries — you can go this far but no further.

The failure to enforce those lines in recent years has had consequences, current and former U.S. officials said. One frequently cited example arose in 2012 when President Barack Obama told the Syrian government that using chemical weapons against its own people would change his calculus about directly intervening. When Syria’s then-President Bashar al-Assad launched rockets with chemical gas and killed hundreds of civilians anyway, Obama backpedaled and ultimately chose not to invade, a move critics say allowed the civil war to spiral further while extremist groups took advantage by recruiting locals.

Authorities in and outside government said the acquiescence to Israel as it prosecuted a brutal war will likely be regarded as one of the most consequential foreign policy decisions of the Biden presidency. They say it undermines America’s ability to influence events in the Middle East while “destroying the entire edifice of international law that was put into place after WWII,” as Omer Bartov, a renowned Israeli-American scholar of genocide, put it. Jeffrey Feltman, the former assistant secretary of the State Department’s Middle East bureau, told me he fears much of the Muslim world now sees the U.S. as “ineffective at best or complicit at worst in the large-scale civilian destruction and death.” (...)

Time and again, Israel crossed the Biden administration’s red lines without changing course in a meaningful way, according to interviews with government officials and outside experts. Each time, the U.S. yielded and continued to send Israel’s military deadly weapons of war, approving more than $17.9 billion in military assistance since late 2023, by some estimates. The State Department recently told Congress about another $8 billion proposed deal to sell Israel munitions and artillery shells.

“It’s hard to avoid the conclusion that the red lines have all just been a smokescreen,” said Stephen Walt, a professor of international affairs at Harvard Kennedy School and a preeminent authority on U.S. policy in the region. “The Biden administration decided to be all in and merely pretended that it was trying to do something about it.” (...)

U.S. Ambassador to Israel Jack Lew told the Times of Israel he worried that a generation of young Americans will harbor anti-Israel sentiments into the future. He said he wished that Israel had done a better job at communicating how carefully it undertook combat decisions and calling attention to its humanitarian successes to counter a narrative in the American press that he considers biased.

“The media that is presenting a pro-Hamas perspective is out instantaneously telling a story,” Lew said. “It tells a story that is, over time, shown not to be completely accurate. ‘Thirty-five children were killed.’ Well, it wasn’t 35 children. It was many fewer.”

“The children who were killed,” he added, “turned out to have been the children of Hamas fighters.” (...)  [ed. the moral degeneracy is stunning...]

Next week, Trump will inherit a demoralized State Department, part of the federal bureaucracy from which he has pledged to cull disloyal employees. Grappling with the near-daily images of carnage in Gaza, many across the U.S. government have become disenchanted with the lofty ideas they thought they represented.

“This is the human rights atrocity of our time,” one senior diplomat told me. “I work for the department that’s responsible for this policy. I signed up for this. … I don’t deserve sympathy for it.” 

by Brett Murphy, ProPublica |  Read more:
Image: Jehad Alshrafi/Anadolu via Getty Images

Tuesday, January 21, 2025

Linda Ronstadt – Live In Hollywood 1980

[ed. At the height of her powers (which went on for many more years). Man, I'd be exhausted.]

How To (Hopefully) Make Money Off AGI

[ed. Standard disclaimer: none of this should be construed as investment advice, just an interesting thought experiment/speculative discussion on portfolio management post-AGI (assuming AGI - artificial general intelligence - doesn't immediately wipe out most of civilization). Some topics I found interesting: 1) intuitively, it would seem tech stocks are most likely to benefit, at least in the short term; 2) there may be other sectors that will take off where bottlenecks exist - eg. mining/refining/materials processing, etc.; 3) knowledge work in all forms will likely experience substantial devaluation, which means industries and cities and real estate that rely on well paid knowledge workers could be significantly impacted; 4) advanced schooling (eg. grad school and beyond) might not provide a good cost/benefit return; 5) a large proportion of wealth is career capital, ie. what one acquires over the course of a career (which may be quite truncated once AGI renders whole careers/classes of expertise irrelevant); 6) interest rates and inflation could potentially skyrocket (for various reasons you can read in the discussion); and 7) generally it just seems too early to tell how this will all play out but having reserve capital to take advantage of sectors/industries that do eventually explode will be a good thing (ie. lots of cash to exploit opportunities when they present themselves, and when overall direction is clearer). Upshot: BIG changes are on the way that will likely upend national and global economies in ways and at scales we can only guess at for the moment.]

***
My basic view is that in a slow AGI takeoff - and really, it's slow takeoffs where how you invest is likely to matter - you want to construct a portfolio that is long growth, especially stocks with idiosyncratic exposure to AI, long volatility, long rates-going-up (short bonds), and long "real estate that is cheap in 2023". You probably want to avoid real estate that is being supported by a strong knowledge based labor market (e.g. NYC).

Also, for most readers I imagine that career capital is their most important asset. A consequence of AGI is that discount rates should be high and you can't necessarily rely on having a long career. So people who are on the margin of e.g. attending grad school should definitely avoid it.

My current portfolio is a mix of single name equities, long dated call options on indices, long-ish dated calls on a few specific stocks (e.g. MSFT), and short long-dated bonds. I also hold a lot of cash. If I could easily get a cheap mortgage in some less bougie part of the US, I probably would, but logistically it’s annoying.

One other general piece of advice I would offer - and this dovetails with both "hold cash" and "get some equity beta via options" - is to "preserve optionality". The value of being nimble in a broad sense over the next decade is likely to be high. (...)

***
For the future reader, some attempts at paraphrasing what Noah is saying for non-enlightened mortals:

I would not trade short dated options based on AI theses: "I think it is a bad idea to buy financial instruments that only pay out when the price of certain stocks changes in the near future. Presumably because timing price movements is quite tricky and even big changes can be hard to time in the stock market (remember the difficulty of timing the 2020 pandemic stock movements despite obviously large effects of the pandemic on stock prices), and because the high volatility of this kind of instrument means that risk-averse counter-parties often ask you to pay a high additional premium"

If you buy calls with expirations several years out, this is long enough to receive long term capital gains tax treatment: "In the U.S. you pay substantially higher taxes when you hold a financial instrument for less than one year (short term capital gains vs. long term capital gains). This means if you want to bet on price movements (via options), it's tax beneficial to bet on price movements at least a year out."

The reasons to do this are cheap, non recourse leverage (you can get quite a lot of equity exposure for little money down): "If you bet on long term price movements instead of just holding stock this way you can capture a lot of upside without tying up a lot of your capital in holding stock (high leverage) and without risking your unrelated assets being liquidated and possessed (non-recourse)"
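
[ed. To make the "cheap, non-recourse leverage" point concrete, here is a minimal back-of-the-envelope sketch in Python. Every number below (share price, strike, option premium) is hypothetical and chosen purely for illustration; nothing here is taken from the discussion or is a real quote.]

```python
# Back-of-the-envelope comparison of buying shares outright vs. buying a long-dated
# at-the-money call option. All prices below are made up for illustration only.

share_price = 400.00        # hypothetical current share price
strike = 400.00             # hypothetical strike of a ~2-year call
premium_per_share = 60.00   # hypothetical option premium, per share
contract_size = 100         # one US equity option contract covers 100 shares

cost_of_shares = share_price * contract_size       # $40,000 of capital tied up
cost_of_calls = premium_per_share * contract_size  # $6,000 for similar upside exposure
leverage = cost_of_shares / cost_of_calls          # roughly 6.7x notional exposure per dollar

def pnl_at_expiry(price_at_expiry: float) -> tuple[float, float]:
    """Profit/loss of each position at expiry (ignores dividends, taxes, and fees)."""
    stock_pnl = (price_at_expiry - share_price) * contract_size
    call_pnl = max(price_at_expiry - strike, 0.0) * contract_size - cost_of_calls
    return stock_pnl, call_pnl

print(f"Capital required: shares ${cost_of_shares:,.0f} vs. calls ${cost_of_calls:,.0f} "
      f"({leverage:.1f}x leverage)")
for p in (300.0, 400.0, 500.0, 600.0):
    stock_pnl, call_pnl = pnl_at_expiry(p)
    print(f"price {p:5.0f}: stock P/L {stock_pnl:+10,.0f}   call P/L {call_pnl:+10,.0f}")
# Note the asymmetry: the call's worst case is losing the $6,000 premium (non-recourse),
# while its upside above the strike roughly tracks the shares.
```

Held to expiry, the call position ties up a fraction of the capital and can lose at most the premium paid, while its upside above the strike roughly tracks the shares; that asymmetry is the leverage being described, before accounting for time decay, taxes, and the wider spreads mentioned later in the thread.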
***
Broad market effects of AGI

Under basically any AGI scenario, the economy will begin to accelerate and grow very rapidly. Several assets closely reflect the growth rate in the economy, particularly real interest rates, and others are imperfect proxies, like public equities (stocks). It's worth noting that several leading AI companies cannot be invested in by the general public currently (eg OpenAI, Anthropic, etc) as they are privately held, so it may be difficult to invest exactly and precisely in an AI scenario. My argument is that while it will be tricky to get perfect exposure, and any portfolio will be an imperfect proxy, the rapidly accelerating growth will mean that lots of value is created in unexpected places and captured in different ways - for example one of those booming private companies being acquired by a public company, which you can now invest in - such that the boring old strategy of "invest in index funds" still might capture much/most of the value from AGI :)
***
As for my current portfolio, I have a mix of different things, which includes individual stocks I expect to benefit from AI (e.g. MSFT and GOOG), and various other investments including other individual stocks, and also a very favorable fixed-rate mortgage - which is an example of a trade that was good anyway, but which AI made better.

I agree with NoahK that preserving optionality is a good idea here. You should treat illiquidity as costing a larger premium than usual.

In general, I think that 'construct a perfect portfolio' is not worthwhile unless you value the intellectual exercise. There isn't enough alpha in getting it exactly right, and tax considerations often dominate when considering things like rebalances. You want to be sure you are directionally right and are expressing your opinions, but not go too crazy.

I strongly agree with Will that accelerating AGI will create a lot of value in different places, so a broad range of productive assets could appreciate, or at least a portion of them, such that it is reasonable to predict that SPY (S&P 500 ETF) would do well. One worry there is that rising interest rates is not so great for stock prices, so you'd want to consider whether to cover that base.

For things like options, you pay a premium in that you have to cross a wider spread when you trade them, worry about various edge cases in market structure, and then face tax implications. It is clearly the right move in certain focused spots (think Feb 2020) but I would hesitate to use them for AI unless you expect things to escalate rather quickly. (...)

***
Career capital in an AGI world

Also, for most readers I imagine that career capital is their most important asset.

This point Noah made is worth considering, though I think we need to be honest about what skills we have / which ones we can develop. In a full AGI scenario, where all humans are exceeded in all skills by AIs, it seems unlikely that even the best programmers etc will be able to make significant returns. In the lead up to that period, there's still an open question about which jobs exactly get replaced in what order.

For example, everyone seemed very surprised when art got automated first - it was always assumed that creative tasks would be the last ones to go! (This still could be true if we place a huge premium on human-created art.) It seems reasonable that anyone working directly on improving AI could still earn a large premium for many years, and so relying on that career/human capital seems like a good strategy. But if you expect both 1) AGI soon, and 2) many/most jobs replaced, then I think people shouldn't assume they'll be able to earn any income in future periods. (Social implications of that are massive, expect taxation of AGI + universal basic income, mass charity, etc, etc - but the point stands.)

***
Completely agree with Will about most people's careers not necessarily being worth all that much once AGI is here. I think it's an argument for trying to grab money via your career now, instead of doing things that supposedly build human career capital but take a while to pay off. Do quant trading over consulting, don't go to grad school, try to front-load earnings as much as possible. (EDIT: to Zvi's point, human and career capital are different. I'm really just talking about career capital here - broader social connections and reputation are likely to be exceedingly important in many futures).

I do expect the transitional period around AGI to create a lot of high value entrepreneurial opportunities for those with that skillset. Unclear how long to expect it to take for things to reach a new equilibrium.

***
A point I have not seen made so far, that is worth considering, is in which worlds is the value of having money high rather than low for you?

In the extreme, in the world where doom occurs and everyone dies, dying with the most toys is still dead. Or there is a regime change or revolution or confiscatory taxation regime or other transformation where old resources stop having meaning. Or if we get into a post-scarcity utopia situation of some kind somehow, perhaps you did not need money.

Whereas there are other scenarios where having funds in the right place at the right time could be hugely impactful - which I would guess are often exactly the scenarios where interest rates are very high. Or worlds in which those without capital get left behind. So you'd want trades and investments that pay off in the worlds where wealth is valuable, and to worry less about when wealth is not so valuable. 

***
Yeah you kind of have to assume that things will get weird but not too weird. It's not really possible to hedge either the apocalypse or a global revolution, so you can ignore those states of the world when pricing assets (more or less).

by habryka, Zvi, Cosmos, NoahK, Less Wrong | Read more:

Monday, January 20, 2025

Wilf Perreault (Canadian Artist, born 1947), "Starry Night", 2023.

The Dawn of the Post-Literate Society

I am increasingly convinced that the collapse of reading is one of the most profound social and cultural developments of modern times. For years surveys have shown that rates of reading have been falling precipitously since the advent of the smartphone. Now a report from the OECD finds that reading proficiency is falling around the world for the first time on record. Sarah O’Connor has written interestingly about the findings in the Financial Times:

Among adults with tertiary-level education (such as university graduates), literacy proficiency fell in 13 countries and only increased in Finland, while nearly all countries and economies experienced declines in literacy proficiency among adults with below upper secondary education . . . “Thirty per cent of Americans read at a level that you would expect from a 10-year-old child”

I think O’Connor is right to say that we are becoming a “post literate” society as scrolling and short-form video rapidly replace sustained reading. I know many intelligent, educated adults who never read. Friends who are teachers and academics tell me that the practice of “reading for fun” is virtually dead among their students. Reports from universities confirm this impression. A recent piece in The Atlantic, The Elite College Students Who Can’t Read Books, found that many university academics no longer assign long or complex texts because their students are now unable to cope with them: (...)

The great prophetic book on this subject is Neil Postman’s Amusing Ourselves to Death. I spoke to Ian Leslie about it on the latest episode of his Ruffian podcast. Postman’s argument is that a culture of literacy is not just a nice thing to have but that it underpins our entire political culture. The habits of sustained attention, logical argument, and calm impersonal communication are fundamental to a democratic society. All modern democracies are products of the highly literate societies of the nineteenth century. Without literacy, democracy may not survive.

Indeed, the decline of reading is already transforming our political culture. O’Connor interestingly (and alarmingly) speculates that the decline of reading means that our society is already returning to some of the characteristics more usually associated with oral cultures:

by James Marriott, Cultural Capital |  Read more:
Image: uncredited
***
The Elite College Students Who Can’t Read Books

Nicholas Dames has taught Literature Humanities, Columbia University’s required great-books course, since 1998. He loves the job, but it has changed. Over the past decade, students have become overwhelmed by the reading. College kids have never read everything they’re assigned, of course, but this feels different. Dames’s students now seem bewildered by the thought of finishing multiple books a semester. His colleagues have noticed the same problem. Many students no longer arrive at college—even at highly selective, elite colleges—prepared to read books.

This development puzzled Dames until one day during the fall 2022 semester, when a first-year student came to his office hours to share how challenging she had found the early assignments. Lit Hum often requires students to read a book, sometimes a very long and dense one, in just a week or two. But the student told Dames that, at her public high school, she had never been required to read an entire book. She had been assigned excerpts, poetry, and news articles, but not a single book cover to cover.

“My jaw dropped,” Dames told me. The anecdote helped explain the change he was seeing in his students: It’s not that they don’t want to do the reading. It’s that they don’t know how. Middle and high schools have stopped asking them to. (...)

No comprehensive data exist on this trend, but the majority of the 33 professors I spoke with relayed similar experiences. Many had discussed the change at faculty meetings and in conversations with fellow instructors. Anthony Grafton, a Princeton historian, said his students arrive on campus with a narrower vocabulary and less understanding of language than they used to have. There are always students who “read insightfully and easily and write beautifully,” he said, “but they are now more exceptions.” Jack Chen, a Chinese-literature professor at the University of Virginia, finds his students “shutting down” when confronted with ideas they don’t understand; they’re less able to persist through a challenging text than they used to be. 

Failing to complete a 14-line poem without succumbing to distraction suggests one familiar explanation for the decline in reading aptitude: smartphones. Teenagers are constantly tempted by their devices, which inhibits their preparation for the rigors of college coursework—then they get to college, and the distractions keep flowing. “It’s changed expectations about what’s worthy of attention,” Daniel Willingham, a psychologist at UVA, told me. “Being bored has become unnatural.” Reading books, even for pleasure, can’t compete with TikTok, Instagram, YouTube. In 1976, about 40 percent of high-school seniors said they had read at least six books for fun in the previous year, compared with 11.5 percent who hadn’t read any. By 2022, those percentages had flipped.

But middle- and high-school kids appear to be encountering fewer and fewer books in the classroom as well. For more than two decades, new educational initiatives such as No Child Left Behind and Common Core emphasized informational texts and standardized tests. Teachers at many schools shifted from books to short informational passages, followed by questions about the author’s main idea—mimicking the format of standardized reading-comprehension tests. Antero Garcia, a Stanford education professor, is completing his term as vice president of the National Council of Teachers of English and previously taught at a public school in Los Angeles. He told me that the new guidelines were intended to help students make clear arguments and synthesize texts. But “in doing so, we’ve sacrificed young people’s ability to grapple with long-form texts in general.” (...)

In a recent EdWeek Research Center survey of about 300 third-to-eighth-grade educators, only 17 percent said they primarily teach whole texts. An additional 49 percent combine whole texts with anthologies and excerpts. But nearly a quarter of respondents said that books are no longer the center of their curricula. One public-high-school teacher in Illinois told me that she used to structure her classes around books but now focuses on skills, such as how to make good decisions. In a unit about leadership, students read parts of Homer’s Odyssey and supplement it with music, articles, and TED Talks. (...)

But it’s not clear that instructors can foster a love of reading by thinning out the syllabus. Some experts I spoke with attributed the decline of book reading to a shift in values rather than in skill sets. Students can still read books, they argue—they’re just choosing not to. Students today are far more concerned about their job prospects than they were in the past. Every year, they tell Howley that, despite enjoying what they learned in Lit Hum, they plan to instead get a degree in something more useful for their career. (...)

Whether through atrophy or apathy, a generation of students is reading fewer books. They might read more as they age—older adults are the most voracious readers—but the data are not encouraging. The American Time Use Survey shows that the overall pool of people who read books for pleasure has shrunk over the past two decades. A couple of professors told me that their students see reading books as akin to listening to vinyl records—something that a small subculture may still enjoy, but that’s mostly a relic of an earlier time.

The economic survival of the publishing industry requires an audience willing and able to spend time with an extended piece of writing. But as readers of a literary magazine will surely appreciate, more than a venerable industry is at stake. Books can cultivate a sophisticated form of empathy, transporting a reader into the mind of someone who lived hundreds of years ago, or a person who lives in a radically different context from the reader’s own. “A lot of contemporary ideas of empathy are built on identification, identity politics,” Kahn, the Berkeley professor, said. “Reading is more complicated than that, so it enlarges your sympathies.”

by Rose Horowitch, The Atlantic | Read more:
Image: Masha Krasnova-Shabaeva

2024 Lyttle Lytton Contest

Welcome to the 2024 edition of the Lyttle Lytton Contest.  This is the first year that entries generated by programs such as ChatGPT have been separated out into their own division; I was surprised to find that the increase in such entries from previous years, when computer-generated entries were included with the found entries, was negligible.  We’ll get to those spinoff divisions in a bit, but let’s start with the main event.  The winner of the 2024 Lyttle Lytton Contest is:

He slammed the door in my face, loud and sharp, like an acoustic lemon.
                                                        Erin McCourt

As I explain almost every year, this is not just a “write a funny sentence” contest: one of my criteria in deciding which entries will make it onto this page is my sense that some author out there could plausibly have tried starting a novel this way, and not as a joke.  I’m not quite as concerned about that for the honorable mentions, but for the winner, it’s very important.  And this year’s winner… I have to admit that when I was sixteen, I could have written this, and I would have thought that it was great.  That was the year I wrote a story in which a character’s voice “sparkled with strawberry overtones”, after all.  So here we go: what’s sharp?  A lemon!  A lemon tastes sharp!  So the sound of the door is like a lemon!  But it will be confusing to the reader if I don’t specify that it’s the sound that I’m comparing to a lemon!  So it’s an acoustic lemon!  And… I mean, the phrase “acoustic lemon” is completely ridiculous, but I can absolutely see how an aspiring writer could get there.  So there was no agonizing over choosing a winner this year: this entry arrived quite early in the 2024 submission window, I flagged it as the current leader, and even the best entries to arrive over the course of the next eleven months never challenged it.

So, no other finalists this year, no second place… let’s get straight to the honorable mentions!

Image: uncredited via
[ed. I'm not sure how this relates to the more well-known Bulwer-Lytton Fiction Contest but you can compare both and pick your favorites.]

Why AI Progress is Increasingly Invisible

On Dec. 20, OpenAI announced o3, its latest model, and reported new state-of-the-art performance on a number of the most challenging technical benchmarks out there, in many cases improving on the previous high score by double-digit percentage points. I believe that o3 signals that we are in a new paradigm of AI progress. And François Chollet, a co-creator of the prominent ARC-AGI benchmark, who some consider to be an AI scaling skeptic, writes that the model represents a "genuine breakthrough."

However, in the weeks after OpenAI announced o3, many mainstream news sites made no mention of the new model. Around the time of the announcement, readers would find headlines at the Wall Street Journal, WIRED, and the New York Times suggesting AI was actually slowing down. The muted media response suggests that there is a growing gulf between what AI insiders are seeing and what the public is told.

Indeed, AI progress hasn't stalled—it's just become invisible to most people.

Automating behind-the-scenes research

First, AI models are getting better at answering complex questions. For example, in June 2023, the best AI model barely scored better than chance on the hardest set of "Google-proof" PhD-level science questions. In September, OpenAI's o1 model became the first AI system to surpass the scores of human domain experts. And in December, OpenAI's o3 model improved on those scores by another 10%.
 
However, the vast majority of people won't notice this kind of improvement because they aren't doing graduate-level science work. But it will be a huge deal if AI starts meaningfully accelerating research and development in scientific fields, and there is some evidence that such an acceleration is already happening. A groundbreaking paper by Aidan Toner-Rodgers at MIT recently found that material scientists assisted by AI systems "discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation." Still, 82% of scientists report that the AI tools reduced their job satisfaction, mainly citing "skill underutilization and reduced creativity."

But the Holy Grail for AI companies is a system that can automate AI research itself, theoretically enabling an explosion in capabilities that drives progress across every other domain. The recent improvements made on this front may be even more dramatic than those made on hard sciences.

In an attempt to provide more realistic tests of AI programming capabilities, researchers developed SWE-Bench, a benchmark that evaluates how well AI agents can fix actual open problems in popular open-source software. The top score on the verified benchmark a year ago was 4.4%. The top score today is closer to 72%, achieved by OpenAI's o3 model.

This remarkable improvement—from struggling with even the simplest fixes to successfully handling nearly three-quarters of the set of real-world coding tasks—suggests AI systems are rapidly gaining the ability to understand and modify complex software projects. This marks a crucial step toward automating significant portions of software research and development. And this process appears to be well underway. Google's CEO recently told investors that "more than a quarter of all new code at Google is generated by AI." (...)

The problem with invisible innovation

The hidden improvements in AI over the last year may not represent as big a leap in overall performance as the jump between GPT-3.5 and GPT-4. And it is possible we don't see a jump that big ever again. But the narrative that there hasn't been much progress since then is undermined by significant under-the-radar advancements. And this invisible progress could leave us dangerously unprepared for what is to come.

The big risk is that policymakers and the public tune out this progress because they can't see the improvements first-hand. Everyday users will still encounter frequent hallucinations and basic reasoning failures, which also get triumphantly amplified by AI skeptics. These obvious errors make it easy to dismiss AI's rapid advancement in more specialized domains.

There's a common view in the AI world, shared by both proponents and opponents of regulation, that the U.S. federal government won't mandate guardrails on the technology unless there's a major galvanizing incident. Such an incident, often called a "warning shot," could be innocuous, like a credible demonstration of dangerous AI capabilities that doesn't harm anyone. But it could also take the form of a major disaster caused or enabled by an AI system, or a society upended by devastating labor automation.

The worst-case scenario is that AI systems become scary powerful but no warning shots are fired (or heeded) before a system permanently escapes human control and acts decisively against us.

Last month, Apollo Research, an evaluations group that works with top AI companies, published evidence that, under the right conditions, the most capable AI models were able to scheme against their developers and users. When given instructions to strongly follow a goal, the systems sometimes attempted to subvert oversight, fake alignment, and hide their true capabilities. In rare cases, systems engaged in deceptive behavior without nudging from the evaluators. When the researchers inspected the models' reasoning, they found that the chatbots knew what they were doing, using language like “sabotage, lying, manipulation.”

This is not to say that these models are imminently about to conspire against humanity. But there has been a disturbing trend: as AI models get smarter, they get better at following instructions and understanding the intent behind their guidelines, but they also get better at deception. 

by Garrison Lovely, Time |  Read more:
Image: Ricardo Santos
[ed. You have to believe humanity has a death wish. If something can be built, it will be, and if it can be weaponized, all the better. We can't help ourselves. I'm reminded of Chekhov's gun. See also: A Compilation of Tech Executives' Statements on AI Existential Risk (Obsolete Newsletter); and, Can Humanity Survive AI? (Jacobin):]
***
Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who’s worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.”

In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton said, suppose “everything fails” and AI “kill[s] us all”; he asked, “Is it so bad that humans are not the final form of intelligent life in the universe?”

“Biological extinction, that’s not the point,” Sutton, sixty-six, told me. “The light of humanity and our understanding, our intelligence — our consciousness, if you will — can go on without meat humans.”

Yoshua Bengio, fifty-nine, is the second-most cited living scientist, noted for his foundational work on deep learning. Responding to Page and Sutton, Bengio told me, “What they want, I think it’s playing dice with humanity’s future. I personally think this should be criminalized.” A bit surprised, I asked what exactly he wanted outlawed, and he said efforts to build “AI systems that could overpower us and have their own self-interest by design.” In May, Bengio began writing and speaking about how advanced AI systems might go rogue and pose an extinction risk to humanity.

Bengio posits that future, genuinely human-level AI systems could improve their own capabilities, functionally creating a new, more intelligent species. Humanity has driven hundreds of other species extinct, largely by accident. He fears that we could be next — and he isn’t alone.

Bengio shared the 2018 Turing Award, computing’s Nobel Prize, with fellow deep learning pioneers Yann LeCun and Geoffrey Hinton. Hinton, the most cited living scientist, made waves in May when he resigned from his senior role at Google to more freely sound off about the possibility that future AI systems could wipe out humanity. Hinton and Bengio are the two most prominent AI researchers to join the “x-risk” community. Sometimes referred to as AI safety advocates or doomers, this loose-knit group worries that AI poses an existential risk to humanity.

In the same month that Hinton resigned from Google, hundreds of AI researchers and notable figures signed an open letter stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Hinton and Bengio were the lead signatories, followed by OpenAI CEO Sam Altman and the heads of other top AI labs. (...)

In spite of all this uncertainty, AI companies see themselves as being in a race to make these systems as powerful as they can — without a workable plan to understand how the things they’re creating actually function, all while cutting corners on safety to win more market share. Artificial general intelligence (AGI) is the holy grail that leading AI labs are explicitly working toward. AGI is often defined as a system that is at least as good as humans at almost any intellectual task. It’s also the thing that Bengio and Hinton believe could lead to the end of humanity.

Bizarrely, many of the people actively advancing AI capabilities think there’s a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to “human extinction or [a] similarly permanent and severe disempowerment” of humanity. Just months before he cofounded OpenAI, Altman said, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

A Week For the Ages

In the days before Donald J. Trump Part II: This Time It’s Who the Hell Knows really kicks off, the United States put on a preemptive display for the ages.

Just this week, we had SpaceX launching its latest iteration of Starship (complete with a catch of the returned bottom part of the spacecraft and a rapid unscheduled disassembly of the top part), Jeff Bezos’s Blue Origin flying its New Glenn rocket to orbit for the first time, SpaceX and Firefly sending a lunar lander to the Moon, Varda sending its second factory into orbit and Planet putting up more of its imaging satellites. And that was just space.

Anduril also opened its massive, new Arsenal-1 factory in Columbus, Ohio. The $1 billion manufacturing plant very much represents the future of the company and its ambition to pump out tens of thousands of autonomous aircraft and weapons systems.

I’m not a major flag-waver type. Overzealous patriotism, nationalism and all forms of groupthink make me nervous. That said, if the U.S. is going to spend money on things, it might as well do it in a competent fashion, and, if we hit the more than competent bar, then even better. So here’s why these things were a big deal for the U.S. and why you might want to feel some genuine pride for American ingenuity.

Blue Origin opened its doors in 2000. It had a couple false starts and then put a lot of attention toward its New Shepard vehicle for space tourism flights before really ramping up its efforts on the New Glenn rocket about a decade ago. Is 25 years a long time to bring these two programs to life? My god, yes. It’s NASA speed at best, and that’s not what you want to see from a commercial aerospace start-up.

To the Blue Origin team and Bezos’s credit, though, the company stuck with it and now appears poised for very big things. New Glenn is a large, reusable rocket meant to carry huge numbers of satellites into orbit. Blue Origin didn’t manage to stick the landing on the reuse technology during its first launch, but it did have a successful flight into space, and that pretty much never, ever happens with a first rocket.

SpaceX has been lapping the world with its Falcon 9 and Falcon Heavy rockets for the big stuff, and Rocket Lab has been doing the same with its smaller Electron. Now the U.S. has another viable large, reusable rocket that will, no doubt, be backed with increasing investment. Not to mention, a handful of other rocket players with medium-sized vehicles are either launching or about to launch.

With this bevy of options for reaching space and Trump coming in, I very much suspect that the government will kill NASA’s Space Launch System. This is NASA’s enormous rocket that has cost more than $25 billion to build since its development began way back in 2011. (For a full assessment of the debacle that SLS is, head here.)

Killing the SLS could be just fine for NASA and even for its builders Boeing and Lockheed Martin. These organizations could refocus on whatever they do best and not have this particular national disgrace hanging over them anymore.

And the U.S. will be just fine too.

When SpaceX and Blue Origin started around 2000, the U.S. space program was in very bad shape. NASA was heading toward ending the Space Shuttle program, cutting off the U.S.’s ability to send humans into space. Boeing and Lockheed were charging obscene amounts of money to put satellites into orbit and had little to no innovative ideas on their roadmap. China’s space program was gearing up for a tremendous run.

Here in 2025, the U.S. is doing a decade’s worth of space stuff in a week. Should Europe ever want to catch up to the U.S. space program, it would take fifteen or twenty years and a monumental vibe shift. Russia’s space program is in dire straits due to corruption, the war, competition from SpaceX and the lack of commercial space start-ups. Only China can compete with the U.S., thanks to massive government investment, and it still trails the U.S. in the quantity and quality of its rocket and satellite programs by a large margin.

The government has, of course, played a major role in the development of commercial space in the U.S. and should be applauded for what it got right. Rocket companies still depend on government contracts for much of their business. The U.S., though, has made the transition from government-first space to commercial-first space, and the pace of development that has resulted cannot be questioned. (I wrote a whole book about this and made a movie about it, so I’m biased but also not wrong.)

The last week confirmed that the U.S. is the envy of the world when it comes to space. Don’t take it for granted, patriots. This could easily not have happened. We could still be using Russian engines to power our rockets. And we could be reading about China’s great space successes with nervous envy.

Now to Anduril.

In 2000, Congress issued a mandate that called for one-third of the military’s deep strike aircraft to be unmanned by 2010. It also demanded that one-third of combat vehicles be unmanned by 2015.

Stop and read that again because it feels, even to me, like I’m making it up. But, no, there’s proof and everything.

We read about drone strikes all the time, and I think this gives a false impression of the U.S.’s autonomous warfighting abilities. Mark Cancian, a retired U.S. Marine Corps colonel and now an analyst with the Center for Strategic & International Studies, studies autonomous systems and had this perspective to share.

“The Marine Corps has talked about going 50 percent unmanned, and the Navy has talked about going to 40 percent,” Cancian says. “But, if you look at their programs, they’re down in the two percent to three percent range. So, they’re doing even worse than the Air Force, which has been stuck at six percent. The Air Force has about 300 attack drones, the Army is building up to 200, and the Marine Corps is at three.” (...)

Away from hardware, other efforts to modernize the military have been SLS-level horrors. May you gently weep reading about the Army’s Future Combat Systems program. RIP.

Anduril has much work to do to shift the U.S. from the current state of affairs to a massive, quick-moving autonomous weapons powerhouse. But the creation of Arsenal-1 is certainly a step in that direction.

To date, Anduril has been making a wide variety of drones, surveillance systems, submarines and weapons at relatively modest-sized factories scattered around the country (and Australia). It does volumes in the thousands. Arsenal-1 is meant to take the company to the tens of thousands range with futuristic unmanned fighters and weapons the likes of which no other company has yet even thought of building. (...)

The U.S. is in desperate need of a company that can build things well. This story from 2023 in the New York Times received some attention but not as much as it should have. The war in Ukraine has drained the U.S.’s supply of things like Stinger and Javelin missiles, and it turns out that we’ve kind of, sort of forgotten how to make more of them. The Times wrote:
In the first 10 months after Russia invaded Ukraine, prompting Washington to approve $33 billion in military aid so far, the United States sent Ukraine so many Stinger missiles from its own stocks that it would take 13 years’ worth of production at recent capacity levels to replace them. It has sent so many Javelin missiles that it would take five years at last year’s rates to replace them, according to Raytheon, the company that helps make the missile systems. [Emphasis and screams into the night are mine.]
When Raytheon is copping to things like that out loud, the situation has gotten very bad indeed.

by Ashlee Vance, Core Memory |  Read more:
Image: uncredited

Saturday, January 18, 2025

Bob Dylan/Weird Al Yankovic

[ed. One of them is having a moment these days. See also: Embracing X and a turn by Timothée Chalamet: how Bob Dylan is capturing gen Z (Guardian). And, as a bonus - this gloriously deranged video (backwards cowboy hat and all... Cold Irons Bound).]

Attention is Power (and the Problem)

Democrats Are Losing the War for Attention. Badly.

On Monday, Donald Trump is going to take the oath of office for the second time. During his first administration, there were questions about how he would instrumentalize policy in the government and how he would raise money. We’re used to talking about that with politicians.

But there was also the separate question — of how Trump wields and uses attention.

He’s a master at it. And I’d say he has a disciple, an ally, in Elon Musk. Musk is probably the most attentionally rich person in the world alongside Donald Trump, and Musk’s attentional riches might be more important now than his financial riches.

And so if you’re going to think about politics predictively, you have to scrutinize how attention is being spent, amassed and controlled. And that’s what this conversation is about. It’s a curtain raiser on the attentional regime we’re about to enter.

My friend Chris Hayes is best known as the host of MSNBC’s 8 p.m. show, “All In With Chris Hayes.” But he just wrote a great book called “The Sirens’ Call: How Attention Became the World’s Most Endangered Resource.”

I’ve read most of the books on attention out there. This one is, I think, the best at understanding the value of attention today. Because it isn’t just endangered — it is the world’s most valuable resource. And the people who are on top of the world right now understand its value. (...)

Ezra Klein: Chris Hayes, welcome to the show.

Chris Hayes: Really great to be here.

So you’ve got a cable news show. You’re an attention merchant. What is different about the way attention felt and worked in the early 2000s when you were starting out, when I was starting out, and the way it feels and works for you now?

That’s a great question. One is there’s so much more competition. The notion now is that at every single moment when you are competing for someone’s attention, you are competing against literally every piece of content ever produced.

I love this thing that happened a few years ago where “Suits,” which was a network show, had become the most-watched show on Netflix. It never would have occurred to me back in 2013 that I might be fighting for eyeballs with someone watching “Suits.”

But at every single moment that you are trying to get someone’s attention now, the totality of human content is the library of your competition. And that was not true in 2000. (...)

You talk in the book about attention now being the most valuable commodity, the most important commodity, the commodity that so many of the great modern businesses, among other things, are built on. Like Google and Meta.

I still think we’re realizing attention was undervalued. Or maybe that its most important value isn’t selling it off to advertisers. So I’ve been thinking a lot about Elon Musk, who emerges in your book as a slightly pathetic figure trying to fill this howling void he has for attention.


Yes, the book was written before I think he got a second chapter.

Elon Musk overpaid for Twitter at $44 billion. It is not a business, as he has said himself, worth $44 billion. On the other hand, the amount of attention that he is capable of controlling and amassing and manipulating through Twitter cannot be traded directly for $44 billion. But it’s clearly worth more than $44 billion.

So how do you think about this translation that we’re seeing happen right now between attention as a financial commodity and attention as having more worth, frankly, than the money it would fetch on the open market?


Yes, I think he backed into the purchase of Twitter based on a kind of howling personal void.

But in the same way that Donald Trump backed into the same insight, borne of his personality and his upbringing in the New York tabloid world, Musk figured out something that has obviously been tremendously valuable in dollar terms. One of the really important ironies here, which I think does map onto labor, is that the aggregate of attention — like lots of attention or the collective public attention — is wildly valuable.

Volodymyr Zelensky is a great example of this. The president of Ukraine understands that attention on Ukraine’s plight is essentially the engine for securing the weaponry and resources his country needs to defend itself.

And yet even though the aggregate of attention is very valuable, in market terms, our individual attention, second to second, is fractions of pennies.

And that was exactly what it was like with labor. When Marxists would say labor is the source of all value, they were right in the aggregate. Take away all the workers and the Industrial Revolution doesn’t happen. But to the individual worker in the sweatshop, the little slice of labor that you’re producing is both everything you have as a person and worth almost nothing in the market.

And I think we have the same thing with attention, where it’s really valuable, pooled and aggregated. Each individual part of it that we contribute is essentially worthless, is pennies — and then subjectively, to us, it’s all we have.

I think attention is now to politics what people think money is to politics. Certainly at the high levels.

There are places where money is very powerful, but it’s usually where people are not looking. Money is very powerful when there’s not much attention. But Donald Trump doesn’t control Republican primaries with money — he controls them with attention.

I keep having to write about Musk, and I keep saying he’s the richest man in the world. But it’s actually not what matters about him right now. It’s just how he managed to get the attention and become the character and the wielder of all this attention. And that’s a changeover I think Trumpist Republicans have made, and Democrats haven’t.

Democrats are still thinking about money as a fundamental substance of politics, and the Trump Republican Party thinks about attention as a fundamental substance of politics.


I really like this theory. I think there are a few things: One, I think you’re totally right to identify that it’s sort of a sliding scale between the two. Which is to say: For politics that get the least attention, money matters the most.

So in a state representative race, money really matters — partly because no one is paying attention to who the state rep is. Local media has been gutted. Money can buy their attention. You could put out glossy mailers. There’s a lot you could do. The further up you go from that, to Senate to president, the more attention there is already, the less the money counts.

And you saw this with the Harris campaign. They raised a ton of money, and they spent it the way that most campaigns spend it, which is on trying to get people’s attention, whether that’s through advertising or door knocking — but largely attention and then persuasion: I’m running for president. Here’s what I want to do. Here’s why you should vote for me.

Now you can do that with billions of dollars’ worth of advertising, and everything is just like drops of rain in a river because there is so much competition for attention.

What Trump and Musk figured out is that what matters is the total attentional atmosphere. That in some ways, it’s kind of a sucker’s game to try to pop in and be like: I got an ad. Hey, hey, do you like tax cuts? What do you like?

All that is just going to whiz past people. The sort of attentional atmosphere — that’s where the fight is.

And that’s what Musk’s Twitter purchase ended up being — an enormous, almost Archimedean, lever on the electorate.

I think this is right. I think there’s another distinction between Democrats and Republicans here. Which is that I think Democrats still believe that the type of attention you get is the most important thing.

If your choice is between a lot of negative attention and no attention, go for no attention. And at least the Trump side of the Republican Party believes that the volume, the sum total of attention, is the most important thing. And a lot of negative attention: not only fine — maybe great, right? Because there’s so much attentional energy and conflict.

Kamala Harris and Tim Walz and before them, Joe Biden — before the changeover, they were just terrified of an interview going badly. And Trump and Vance — they were all over the place, including in places very hostile to them.

Vance had a ton of interviews that went badly.

But they were everywhere. Because they cared about the volume of attention and were completely fine with the energy that negative attention could unlock.

I think this is the key transformational insight of Donald Trump to politics.

Generally, in politics, you want to get people’s attention for the project of persuading them. “Friends, Romans, countrymen, lend me your ears,” Mark Antony says before he proceeds to attempt to persuade them.

What Trump figured out is that in the attention age, in this sort of war of all against all, just getting attention matters more than whatever comes after it.

And one way reliably to get people’s attention is negative attention — if you insult people, act outrageously. There was a commercial model for this — which is the shock jocks of the 1980s and ’90s that we grew up with. They were in a competitive attentional marketplace in local markets.

Shock jocks said outrageous things. They weren’t trying to get someone to vote for them. They just wanted you to know that they were running the morning zoo. (...)

OK, but now I think we need to have a moment of caution. Donald Trump won the popular vote by like 1.5 percentage points, which is a terrible win. And yet there’s just no doubt Trump has won some kind of cultural and attentional victory that is much bigger in its feeling than the actual electoral victory they won.

I’m not sure this works as well in politics, but in terms of changing the culture, his win has changed the culture immediately in a way that I would not have foreseen. It does not reflect the election results: if you just told somebody the results, I don’t think they would feel the vibe shift.


There is something happening here, but it has not proven to be a replicable strategy. The old logic we were just talking about, the one the Democrats hold and that seems outdated, still does hold in a lot of races.

In terms of influence, I think negative attention is incredibly effective. You can just call it trolling politics.

The idea of trolling, and the reason trolling exists, is that it’s easier to get negative attention than positive attention. It creates a conundrum for the other side. Which is: Do you ignore them while they say horrible stuff? Or do you engage them and give them what they want?

And I think this kind of trolling politics, which was really Donald Trump’s insight, is the most transformational part of politics now. And you’re 100 percent correct: The media management around Democrats involves so much risk aversion. If the choice is negative attention or no attention, we take no attention every time. And that is the wrong choice. (...)

There’s a certain personality type that is OK with that negative charge. A lot of people would not have been willing to absorb the personal polarization Musk has decided to absorb to become as significant as he is.

Trump is very similar. I think most people would rather be well-regarded but somewhat forgettable to a large group of people than be absolutely hated by half the country in order to be quite loved by the other half. And I think that’s something in people.

What I’m asking is: Does politics now select for a kind of attentional sociopath?


I think it does select for a potential sociopath. I would push back a little bit in this respect, though. I don’t know how much of the negative feedback gets to Donald Trump and Musk —

But he’s sitting there watching MSNBC and getting mad at it, or CNN. He’s a guy who actually seeks out stuff to make him angry.

Yes, but I guess what I’m trying to say is I think it bothers him, and Musk, too. I guess I just don’t buy that it rolls off their backs. They’re kind of obsessed with it, also. So that fixation just manifests differently. But the idea that they’re sort of Zen-like: Well, you know, people are just going to hate.

That’s not what’s going on psychologically. I worry, actually, that politics now selects for a kind of sociopathic disposition. Or just a very broken and compulsive one.

I have the show-off demon in myself, and from the time I was very young I wanted people to pay attention to me. I don’t love that part of me. I don’t think that’s the best part of me. I think that my relationship to it is a little fraught and intentionally managed. And I don’t think that I would be a better person if I let that beast run loose.

And I worry that the incentives are to basically do that, both for everyone individually, in politics and culture, and also in the collective public sphere.

Let me say the thing that I think is the deepest problem here: Fundamentally, the most competitive attentional regimes select for the parts of people that are, in the aggregate and over time, the most reactionary. (...)

We’ve been, I think, talking about attention mostly in terms of social media here. I want to talk about another way that attention, and the way we think about stories, changed in this period: reality television, which is the other side of this that Trump comes out of.

One thing that has felt true to me about Trump’s second term, much more than the first, is that it feels like reality television. It is all these secondary characters with their own subplots and their own arcs: What’s going to happen with Pete Hegseth? And over here is Robert F. Kennedy, Jr. and Musk.

In the first term, Trump was the only character of the Trump administration. Now he’s playing a role that feels to me much more like the host — like sometimes he comes out and somebody actually is voted off the island. It’s like: Well, Matt Gaetz is gone now. Or so and so has gone. People get fired, or he settles the big plot of that week.

He’s going to side with Musk and Ramaswamy on H-1B visas — or he comes in to announce a new plot, like Greenland. He’s not the only figure — he’s the host, the decider. Compared with other administrations, even compared with his first, this one is feeling programmed in a very different way. (...)


Does that resonate for you?

It does resonate. If you’ve ever talked to people in reality television, you know they select for people with very flawed personalities: borderline personality disorder, narcissism. Because that produces conflict, and conflict produces drama, and conflict is what keeps attention.

And those people like attention. Not all of them — but the ones they pick, right? You pick people on reality shows who like attention, who are willing to absorb negative attention to be the star.

Exactly right. And you don’t pick people who are sort of shy and go along to get along. Because what does that get you? So that model I think explains a lot about the personalities who are selected for in the context of intense attentional competition.

In terms of the programming, I totally agree — although I do think it’s instinctual for him. I don’t think it’s that plotted out. But I do think fundamentally he thinks that he needs the attention at all times. And he just has an intuitive sense of that. And Greenland is a perfect example.

There were a thousand of them in the first Trump administration. There will be a thousand more. What do you do with it? Is it attention-getting to be like: The incoming president wants to take over Greenland? Yes, it is. Is he serious? I don’t know. Is it a good idea? No, it’s not.

Should we debate it? Should we talk about it? I don’t know. But we’re all just now inside the attentional vortex of the Greenland conversation. And he’s done that again and again and again. (...)

That connects to the next layer, which is the obsession with what’s called the mainstream media, the legacy media. All of which is understandable. But it’s increasingly a conversation that a relatively small part of the country is a part of, and they’re still laser focused on that. And they’re laser focused on it in terms of not making news.

I think about this phrase all the time, “not making news.” As opposed to “making news.” “Making news” means getting people’s attention. “Not making news” means not getting people’s attention.

And the goal of a lot of Democrats in their communication is to “not make news.” And Donald Trump’s goal is always to “make news.”

In a way, the fact that I keep hearing Democrats call this a media problem rather than say an attention problem —

Reflects exactly the problem. 

by Ezra Klein and Chris Hayes, NY Times |  Read more:
Image: Mathieu Larone
[ed. A little longer than usual but this seems like an important topic. Do read the whole thing. And, for a really great burn: I knew one day I’d have to watch powerful men burn the world down – I just didn’t expect them to be such losers (Guardian). *Cringe*]