Friday, March 7, 2025

From the Gut: A Literary History of Indigestion

Irritable Bowel Syndrome.

A decade ago, I would’ve been mortified to type those words. Recent years, however, have seen a surge in awareness of digestive disorders—IBS, celiac disease, lactose intolerance, ulcerative colitis—such that I find myself constantly trading war stories. A journalist friend doesn’t leave the house without his Lactaid. An art-historian pal goes days on white bread and rice, her stomach requiring the blandest fare possible. A musicologist colleague grimly claims he can’t eat “anything with a skin.”

Science and commerce have risen to meet the need: wide-ranging medical research into once arcane procedures like fecal-matter transplants, an over-the-counter digestive-remedy industry valued at more than $20 billion, and endless Instagram and #GutTok gurus hawking stomach cures like aloe-vera juice, ice-water baths, and left-side sleeping. A 2020 survey by the Rome Foundation, which promotes GI health, found that more than 40 percent of people worldwide suffer from a digestive disorder. Almost half the population, seemingly, feels something isn’t sitting right.

Some months ago, I began to wonder, Where has this crisis come from? And why, given all of this alimentary advocacy, and all of my own dietary austerity, is my stomach, at forty-five, still rioting? Doctors and the internet provided only partial answers, so I went looking in books, where I found a sprawling body of medical history and, surprisingly, literary history on these indelicate matters. The gastrointestinal agonies of writers, it turns out, form practically a canon of their own, one that dates back almost to the beginning of Western science’s attempts to understand the digestive tract. For more than two hundred years, countless bizarre theories and treatments were adopted and feverishly promoted by men of letters, including such esteemed figures as Voltaire, Coleridge, Twain, Henry James, Kafka, and Beckett.

While the root causes of our collective dyspepsia eluded me, never mind a cure, I did find strange comfort in such company. And I got some context and understanding. The literary history of indigestion, I came to see, has much to tell us about why we seem to be living, once again, in an age of the stomach.
***
In 1700, Bernardino Ramazzini, a professor of medicine in Modena, Italy, published his seminal Diseases of Workers. His study of laborers—porters, bakers, blacksmiths, mirror makers—first identified what we now call repetitive-stress injuries: bowed legs, rounded shoulders, humped backs. Ramazzini also considered the travails of “the learned.” By hunching over books for hours on end, he argued, scholars and philosophers brought on arthritis, weak eyesight, and, through compression of their pancreatic juices, dire ventral infirmities. Ramazzini held that the stomach couldn’t properly mulch its food while the brain was busy digesting its own sustenance. Indeed, so deleterious was a life of contemplation, he contended, that it was even possible to “die of wisdom.”

Early Enlightenment anatomists saw the gut as the seat of the imagination; it processed emotions and perception and was so spiritually attuned it perhaps even contained the soul. Any disruption below was of grave import. Following Ramazzini’s study of the learned, physicians theorized and investigated a set of conditions known as les maladies des gens de lettres, wherein mental exertion and overeating led to “engorgement of the viscera of the lower abdomen,” as well as “hypochondria, melancholy, and hysteria.”

Naturally, the patriarchal nexus of medicine and letters produced further absurdities. Women, due to supposed softness in their “cerebral pulp,” were thought incapable of the intellectual endeavor necessary to truly injure the bowels. Even as hysteria came to be seen as a feminine complaint, doctors remained stubbornly fascinated by the straining of men. Or, as Anne Vila, a scholar of French literature, puts it, science found a way for les gens de lettres to be “nervous in a manly way.”  (...)

It wasn’t just food that became productively unsettling in the nineteenth century. Writers were similarly agitated by the expulsive and inspirational effects of coffee and tea. Balzac wrote that coffee “acts like a food and demands digestive juices…it brutalizes these beautiful stomach linings as a wagon master abuses ponies; the plexus becomes inflamed; sparks shoot all the way up to the brain.” Across Europe, gut trouble became a mark of the consummate artist.

Literary men in America, however, were less sure. In 1858, an advice column, “Manly Health and Training,” appeared in the New York Atlas newspaper. One Mose Velsor stumped for early rising, fresh air, and bare-knuckle boxing, and warned of “The Great American Evil—Indigestion.” Velsor advised against fried potatoes, prostitutes, condiments, and “too much brain action and fretting”—all of which might result in a sickly male specimen whose “bowels are clogged with accumulations of fearful impurity.”

Mose Velsor was one of Walt Whitman’s many pseudonyms, one he used for the hackwork he undertook after the first printing of Leaves of Grass received little notice. The Atlas columns can read less like advice than a wounded man’s self-exhortation. “To you, clerk, literary man, sedentary person, man of fortune.… Up!” And “Eat enough, and when you eat that, stop!”

Mark Twain also took up the cause. Motivated by either his notoriously bad business sense or his own frequent stomach pain (or, more likely, both), he peddled a digestive powder, licensed from the English Plasmon company, as both a treatment for dyspepsia and a superfood: “One teaspoon is equivalent to an ordinary beefsteak.” George Bernard Shaw was convinced and “generally dined off a Plasmon biscuit and a bean.” To the novelist William Dean Howells, Twain instructed, “stir it into your soup…use any method you like, so’s you get it down.” Perhaps unsurprisingly, given such appeals, the Plasmon Company of America, like many of Twain’s other commercial ventures, quickly went bust.

Others fared far better. Particularly in America, new ideas about purity, diet, and hygiene proliferated in the second half of the century. A Connecticut minister, Sylvester Graham, introduced his Graham Bread (later, Graham Cracker), which was bland enough to stomach easily while also aiding in the avoidance of drink and masturbation. (Graham believed that sugar fueled intemperate feelings like lust and greed.) At his Battle Creek, Michigan, sanitarium, John Harvey Kellogg (of Corn Flakes fame) treated his digestively ailing patients by prescribing them abstinence, hydrotherapy, electrotherapy, phototherapy, and yogurt enemas. At one point, Kellogg claimed his sanitarium hosted more than seven thousand patients a year, many of them wealthy and willing to pay the rapidly increasing fees. A gastric boom was well underway and about to take another curious turn. (...)

By the late nineteenth century, constipation was dreaded as the “disease of diseases.” In 1895, an entrepreneur named Horace Fletcher set out to cure it. Fletcher had been a gifted athlete in his youth but at forty found his energies sapped by stoppages below. His solution: Produce as little bodily waste as possible. His system: digesting “in the head” by chewing his food at least two hundred times, until it became a slurry that slid down unaided.

Newly energized by his method, Fletcher set about promoting it, performing somersaults and high-dives in his underwear before crowds, mailing his own ash-like turds—no more odor than “a hot biscuit”—to scientists. He eventually converted Kellogg to his regimen; every meal at Battle Creek began with a “chewing song.” Other celebrity chewers included John D. Rockefeller, King Edward VII, and writers like Sir Arthur Conan Doyle, Twain (again), both William and Henry James, and Upton Sinclair, who called Fletcherism “one of the great discoveries of my life.” Its benefits, apparently, weren’t only physical. In 1903, Fletcher observed a “literary test subject” who subsisted off a glass of milk and four exhaustively chewed corn muffins per day. After eight days, the subject had made just one hot biscuit but had written sixty-four thousand words.

Legitimate medicine would soon discredit Fletcher, but the literary world’s infatuation with mastication lingered in the imagination of an insurance lawyer from Prague. In Franz Kafka’s “A Hunger Artist,” an anonymous professional faster starves himself in view of the public, at first to great curiosity and acclaim, then absolute indifference, until finally he dies unnoticed and unmourned. The story has been interpreted as a religious parable, a self-portrait, and a representation of modern man’s alienation from family and nation. But perhaps we might read it more directly.

Kafka struggled with constipation. He visited sanitariums, tried laxatives made from powdered seaweed, and obsessively notated his meals and bowel movements, or lack of them. No surprise, then, that he was drawn to Fletcherism. Kafka’s father was so disgusted by his son’s incessant chewing he would hide behind his newspaper at dinner. The hunger artist is likewise unimpressed by his own feats of starvation. “I have to fast,” he says, “I can’t help it.… I couldn’t find the food I liked. If I had found it, believe me, I should have made no fuss and stuffed myself like you or anyone else.”

I read this line with a shudder of recognition. In the worst nights of my sleepless stomach, I ate nothing for days at a time. Sometimes wasting away seemed preferable to enduring yet more intestinal agony. I’m struck, too, by how much the family-dinner scene early in The Metamorphosis resembles Fletcherism. “It seemed remarkable to Gregor that among the various noises coming from the table he could always distinguish the sound of their masticating teeth.” Fletcher’s method seems insane now, but in the grip of dyspepsia, you’ll try anything. The internal turbulence doesn’t just “mar the soul’s serenest hour,” as Twain once wrote; it seems to choke the very life and joy out of you. (...)

Of course, indigestion hasn’t been the torment solely of the literati. Other artists suffered, too, some famously. There has been a surprising amount of scientific investigation into whether Beethoven had IBS. Kurt Cobain harbored for years an undiagnosed stomach pain. His love of Kraft mac & cheese and strawberry milk probably didn’t help, but lactose intolerance wasn’t much discussed, or not so publicly, in the 1990s. “Thank you all,” Cobain wrote in his suicide note in 1994, “from the pit of my burning, nauseous stomach.”

by Will Boast, VQR | Read more:
Image: Margeaux Walter
[ed. See also: Flushed Away: The crappy lie Americans still believe about their toilets (Slate).]

You Should Start Worrying About the Raid on Social Security. Now.


Perhaps the most frequently cited quote from President Donald Trump relevant to his purported efforts to root out government waste has been “we’re not touching Social Security,” or variations thereof.

I expressed skepticism about this pledge shortly after the election by listing all the oblique ways the Trump administration could hack away at the program.

It gives me no pleasure to update my observation with the words, “I told you so.”

Among the weapons Trump could wield, I wrote, was starving the program of administrative resources — think money and staff. Sure enough, on Feb. 28 the agency, which is currently led by acting Commissioner Leland Dudek, announced plans to reduce its employee base to 50,000 from 57,000.

Its news release about the reduction referred to the program’s “bloated workforce.”

To anyone who knows anything about the Social Security Administration, calling its workforce “bloated” sounds like a sick joke. The truth is that the agency is hopelessly understaffed, and has been for years.

In November, then-Commissioner Martin O’Malley told a House committee that the agency was serving a record number of beneficiaries with staffing that had reached a 50-year low. (...)

Nearly 69 million Americans were receiving benefits as of Dec. 31, according to the agency. That figure encompassed 54.3 million retired workers, their spouses and their children, nearly 6 million survivors of deceased workers and more than 8.3 million disabled workers and their dependents. Agency employment peaked in 2009 at about 67,000, when it served about 55 million people. (...)

It isn’t only beneficiaries who could be affected by Trump’s raid on Social Security. About 183 million people pay Social Security taxes on their earnings. Their right to collect what they’re entitled to based on their contributions is dependent on the system recording those payments and calculating their benefits accurately, to the last penny. Any incursion by DOGE into the program’s systems or the scattershot firings that Dudek forecasts puts all that at risk.

In his testimony, O’Malley talked about how the agency had struggled to establish an acceptable level of customer service. In 2023, he said, wait times on the program’s 800 number had ballooned to nearly an hour. Of the average 7 million clients who called the number each month for advice or assistance, 4 million “hung up in frustration after waiting far too long.” The agency had worked the wait down to an average of less than 13 minutes, in part by encouraging customers to wait off the line for a call back.

Disability applicants faced the worst frustrations, O’Malley said. The backlog of disability determinations, which often require multiple rounds of inquiries, hearings and appeals, had reached a near-record 1.2 million. The program estimated that about 30,000 applicants had died in 2023 while awaiting decisions.

O’Malley had asked for a budget increase in fiscal 2025 to add at least 3,000 workers to the customer-service ranks, but it wasn’t approved.

Make no mistake: The starving of Social Security’s administrative resources, which is currently taking place under the guise of ferreting out fraud and waste, is no accident. It’s part of a decades-long Republican project aimed at undermining public confidence in the program.

Back in 1983, for example, the libertarian Cato Institute published an article by Stuart Butler and Peter Germanis calling for a “Leninist” strategy to “prepare the political ground” for privatizing Social Security on behalf of “the banks, insurance companies, and other institutions that will gain from providing such plans to the public.” Political opposition, as it happens, resulted in the death of George W. Bush’s push to privatize Social Security in 2005.

Germanis has since become a fierce critic of conservative economics and politics. Butler, who had spent 35 years at the right-wing Heritage Foundation before joining the Brookings Institution in 2014, told me by email he now advocates a private retirement system as an “add-on” private option rather than an alternative to Social Security. He also said he thinks “cutting staff and the claim that Social Security is rife with fraud and abuse are both ridiculous.” (...)

By the way, the search for waste, fraud and abuse — call it WFA — has a long and discreditable history. Ronald Reagan pledged to ferret out enough WFA to cut the federal budget by more than 6% (sometimes he said 10%). One of his first steps, however, was to fire 15 departmental inspectors general, whose jobs involved finding WFA. Sound familiar? One of Trump’s first orders upon taking office was to fire inspectors general at 17 federal agencies. (...)

The truth is that Social Security is one of the most efficient agencies in the federal government. Its administrative costs are one-half of one percent of its total costs, which include benefit payments.

by Michael Hiltzik, LA Times/Seattle Times | Read more:
Image: Wrecking Crew by Thomas Frank via
[ed. Like everything else they're doing to hobble government and make it seem inefficient (which it definitely will be once they're done with it, ripe for privatization). Reposting this image in case anyone missed it. The ultimate wet dream of Republicans and Wall Street.]

On Cousins

And the great cousin decline

Perhaps you’ve heard: Americans are having fewer children, on average, than they used to, and that has some people concerned. In the future, the elderly could outnumber the young, leaving not enough workers to pay taxes and fill jobs. Kids already have fewer siblings to grow up with, and parents have fewer kids to care for them as they age.

Oh, and people also have fewer cousins. But who’s talking about that?

Within many families—and I’m sorry to have to say this—cousins occupy a weird place. Some people are deeply close to theirs, but others see them as strangers. Some cousins live on the same block; some live on opposite sides of the world. That can all be true about any family relationship, but when it comes to this one, the spectrum stretches especially far. Despite being related by blood and commonly in the same generation, cousins can end up with completely different upbringings, class backgrounds, values, and interests. And yet, they share something rare and invaluable: They know what it’s like to be part of the same particular family. 

... cousin connections can be lovely because they exist in that strange gray area between closeness and distance—because they don’t follow a strict playbook. That tenuousness means you often need to opt in to cousin relationships, especially as an adult. And the bond that forms when you do might not be easy to replace. (...)

Your “vertical,” intergenerational bonds can be tight and tremendously meaningful, but they also tend to come with care duties, and a clear hierarchy: Think of a grandparent babysitting their child’s toddler, or an adult tending to their aging parent. At the same time, siblings can easily develop fraught dynamics because of their intense familiarity: Perhaps in childhood you fight over toys, and in adulthood, you argue over an inheritance or your parents’ eldercare.

The classic cousin relationship, relative to that, is amazingly uncomplicated. ... Pop culture is full of sibling antics: bickering, pranking, sticking up for one another in school. Fewer models demonstrate how cousins are supposed to interact.

Without a clear answer, some cousins just … don’t interact often. Only about 6 percent of adult cousins live in the same census tract (typically about the size of a neighborhood); the rest live an average of 237 miles apart. Jonathan Daw, a Penn State sociologist, told me that the rate at which adults donate a kidney to a cousin is quite low: While siblings make up 25 percent of living kidney-donor relationships, cousins constitute less than 4 percent. That’s likely not because they’d decline to give up a kidney, but because many people wouldn’t ask a cousin for something that significant in the first place. Organ donations, he told me, raise the question “What do we owe to each other?” For cousins, the answer might be “Not much.”

Still, a bond that’s light on responsibility doesn’t need to be weak. Researchers told me that cousins can be deeply important—perhaps because of the potential distance in the relationship, not in spite of it. (...)

They might also play a specific role in your larger support network (even if you wouldn’t ask them for a kidney). In one study, Reed and her colleagues found that in the fall of 2020, in the midst of pandemic isolation, about 14 percent of participants reported increasing communication with at least one cousin. The relationship, she said, seemed to be “activated in this time of crisis.” She thinks the fact that cousins are less likely to depend on one another for material help might actually make them well suited to give emotional solace. That can be especially relevant when family difficulties come up; a cousin might be one of the few people who understand your relatives’ eccentricities, virtues, and role within the clan. When a parent dies, Verdery told me, many people bond with their cousins, who just get it in a way others don’t.

That’s the funny thing about cousins: In all other areas of your life, you might not be alike at all. But knowing the nuances of your family ties through decades of exposure—however sporadic—is a form of closeness in itself. The low stakes of your own relationship can make you perfect allies—but the potential for detachment also means you have to work for it. You can intentionally insert yourselves into each other’s lives, or you can slowly fade out of them.

The latter scenario can be understandable. A lot of people, when they’re kids, might run around with their cousins on special occasions—and then go months without seeing them. Perhaps they start to realize that their bonds are somewhat arbitrary; they grow less and less relevant, and ever more awkward. Consider this, though: In middle age and older, the cohesion of a whole family can begin to depend on the bonds between cousins. Along with siblings, cousins become the ones organizing the reunions and the Thanksgiving meals. The slightly random houseguests in your younger years become the stewards of the family in your older ones—as do you.

by Faith Hill, The Atlantic |  Read more:
Image: Stella Blackmon
[ed. Repost. Just got back from a family reunion last week that included tons of cousins I hadn't seen since we were kids (60+ years ago). So much fun, especially since many had memories I'd forgotten or never known.]

Thursday, March 6, 2025

Luciano Caggianello, SAX

The Government Deficits Land in the Deepest Pockets

With our most reliable valuation measures more extreme than both the 1929 and 2000 market peaks, we continue to believe that the stock market is tracing out the extended peak of the third great speculative bubble in U.S. history. Since the initial January 2022 market peak, the equal-weighted S&P 500 has clocked a cumulative total return less than 2.4% ahead of Treasury bills, while the small-cap Russell 2000 has lagged T-bills by more than -10.6% since then. The capitalization-weighted S&P 500 Index has performed better during this period only by driving the price/revenue multiple of the information technology sector to levels that easily exceed the 2000 extreme.

While record valuations, unfavorable market internals, and recurring warning flags have held us to a bearish outlook since the June comment, You Can Ring My Bell, our investment discipline has benefited despite a further market advance since then, partly as a result of the hedging implementation we introduced in the fourth quarter (see the section titled “Good News and Good News” in the October comment, Subsets and Sensibility).

What appears to be an endless bull market advance is actually a classic two-tiered blowoff in speculative glamour stocks. If you missed Bill Hester’s excellent analysis of large-cap market concentration, Slimming Down a Top-Heavy Market, now may be a worthwhile opportunity to recognize how extreme the current situation has become. (...)

From 2016 through 2020, $8.4 trillion in new 10-year deficit spending was approved – a combination of tax cuts and net spending increases, with about $3.6 trillion of that related to pandemic response. From 2020 through 2024, another $4.3 trillion in 10-year deficit spending was approved, about $2.1 trillion of that being pandemic response (Committee for a Responsible Federal Budget). Government deficits, again, are always matched by surpluses in other sectors. In recent years, the primary beneficiaries have been business profits and household savings, though the household savings have gradually been spent down, largely becoming business profits as well.
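
[ed. The point that deficits are "always matched by surpluses in other sectors" is the standard sectoral-balances identity from national accounting, not anything exotic. A minimal derivation (my gloss, not Hussman's), with Y output, C consumption, I investment, G government spending, T taxes, S private saving, X exports, M imports; subtracting the two identities shows the deficit (G - T) must equal private net saving (S - I) plus the foreign sector's surplus (M - X), which is where the corporate profits come in:]

    \begin{aligned}
    Y &= C + I + G + (X - M) \quad &&\text{(output by expenditure)} \\
    Y &= C + S + T \quad &&\text{(income: consumed, saved, or taxed)} \\
    G - T &= (S - I) + (M - X) \quad &&\text{(subtracting the two)}
    \end{aligned}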

With regard to the record ratio of financial market capitalization to GDP, the government deficits of the past eight years have bloated the corporate profits on which investors are placing extreme price/earnings multiples while calling it “stock market capitalization.” Meanwhile, the highest income earners have also accumulated the cash and securities that the government issued to finance the deficits. As a result, the massive deficits of recent years are a significant portion of what the deepest pockets presently call “wealth.”

While both the federal debt and federal deficits have declined as a share of GDP since 2021, even deficits of the present size are unsustainable. Still, it matters enormously how the shortfalls are created. What matters isn’t only whether one borrows, but how revenue is obtained, and how the funds are spent. Long-time readers may recall that I advised members of Congress during the pandemic, and suggested that recovery of subsidies should be largely based on actual economic damage sustained during that crisis. Instead, many businesses received PPP subsidies even though they continued to operate without any shortfall in business, while profit margins soared. As I’ve detailed previously, much of the government support acted as a pure subsidy to corporate profits.

I’m sorry, but there’s a certain irony in the ruse of billionaires consolidating their power under the pretense of “reducing deficits” when it’s exactly the massive deficits of recent years that have boosted their income, profits, and financial market “wealth.” There’s a certain irony in seeing foxes with mouths full of feathers claiming that they’re defending the henhouse; minding the store while their fingers are deep in the cookie jar of government contracts and foreign quid pro quo. They’re selling a house of cards, and everybody’s merrily picking out furniture.

It adds insult to injury to propose that corporate taxes should be reduced even further, despite the fact that higher corporate profit margins in recent decades have been associated with decidedly lower net business investment as a percentage of revenues. I’ll leave it to conscience to justify it as reasonable to cut meager support for humanitarian aid, social services, medical research, students with disabilities, and other vulnerable populations in order to finance further tax reductions, disproportionately for the wealthy, when 67% of U.S. net worth is already held by the top 10% of Americans and only 2.5% is held by the bottom 50%. Meanwhile, 50% of U.S. equities are held by the top 1% of income earners, with 87% held by the top 10%. Only 1% of U.S. equities are held by the bottom 50%.

Among the pseudo-analyses making the rounds is that a report last year from the Government Accountability Office “found” government fraud amounting to $233-521 billion a year. The fascinating aspect, if one actually reads the report, is that this figure isn’t actually a finding at all. It’s a model simulation, based on arbitrary assumptions that explicitly rejected informed input, statistically valid sampling, or data analytics.

That’s not to say that there isn’t fraud in government spending. Indeed, confirmed fraud, as reported by the Office of Management and Budget, amounts to several billion a year: $7.3 billion in the pandemic year of 2020, $4.5 billion in 2021, and $4.4 billion in 2022. Given federal outlays on the order of $7 trillion, much more likely goes undetected. That’s certainly a reason for careful accounting and auditing. It’s not a justification for slash-and-burn by a brigade of drunk guys from the end of the bar ranting about the guvmint. I’ll say this – if one is looking for waste in government, the place to start would be billion-dollar government contracts with corporations, not experienced, career federal workers who comprise less than 1% of the population and less than 2% of the civilian labor force – half the 1950 level and close to the smallest share in U.S. history. The point of the firing isn’t to save money, it’s to install loyalists.

This isn’t who we are

Some people prefer me to “stay in my lane,” but here’s the thing – I went into finance over 40 years ago for two reasons: to serve long-term investors, and to fund efforts to reduce the suffering of vulnerable populations, much of it toward disease eradication, disability, poverty, basic education, homelessness, hospice care, and communities uprooted by conflict. Most of what I’ve earned across decades of market cycles – becoming a leveraged bull after the 1990 plunge, anticipating the tech bubble collapse, navigating the mortgage bubble and its collapse – has gone to those efforts. We’ve been able to do less during the bubble of recent years, though we’ve adapted enough that I expect our work to thrive over the completion of this cycle. In the meantime, as long as I have a voice, I would consider silence in this moment to be a betrayal of the vulnerable communities that we call partners.

Call me naïve, but I still believe that Americans are more alike than different. I think we’re getting played, misdirected by wildly amplified divisions that have made many of us willing to surrender rule of law, balance of power, human rights, and the defense of democratic allies, while allowing and even encouraging billionaires to enrich themselves at the expense of the vulnerable. We’re fed a constant slideshow of extremes, curated by algorithms, bots, and outrage theatre masquerading as news, that convinces us that the worst extremes are representative of the other “side.” Our world is stretched, bent, and distorted by fun house mirrors, and we tell ourselves that we’re only believing our own eyes.

By now, some of us are so angry at the other “side” that we’re perfectly willing to dehumanize others if we can be sold on the idea that cruelty is justice. Retribution. Somehow our divisions have allowed us to buy into the idea that greatness means tolerating an America that abandons its allies, embraces authoritarianism, contemplates the potential benefits of ethnic cleansing and territorial expansionism, elevates parasitic extraction to a core value in foreign relations, and approaches change as a matter of torches, pitchforks, and chainsaws, wielded against the enemy du jour.

This isn’t who we are.

Last week, Theunis Bates, the Editor-in-Chief of The Week observed, “it has become apparent that America is not simply moving past the excesses of progressivism – the compulsory stating of pronouns, the hawking of anti-racism books for babies, the pretending that Emilia Perez is a good movie – but beyond the idea that it’s good to care for others at all. On social media, people have rejoiced at the slashing of U.S. food aid and medicine for people suffering genocide and famine. Musk once told his biographer how his favorite video game had taught him the ‘life lesson’ that ‘empathy is not an asset.’ We’re now seeing what happens when that mantra becomes a governing philosophy.”

This isn’t who we are.

As imperfectly as we’ve pursued our ideals, America has fundamentally stood up for the idea that we share a common humanity, that all of us are connected by a common thread that asks for little but decency toward each other, and to promote the general welfare, realizing that “us” could easily have been “them” if the circumstances, wealth, race, or simple realities of our birth had been different. The world has admired America, and we’ve had pride in ourselves, not because of race, religion, party, wealth, or even common roots, but because we stood for the idea that government by the people – not a despotic, throne-obsessed monarch – and respect for the equal rights of other human beings were self-evident. We’ve failed at that vision a thousand ways, but we’ve never abandoned it. We can’t abandon it now if we hope for America to endure as anything approaching “great.”

“Long live the King.” Sure, and I’m Batman. This isn’t who we are.

My friend Ben Hunt recently described the situation this way: “For the better part of two centuries (I put the Monroe Doctrine of 1823 as my starting point) the United States has tried – sincerely tried as a common goal and as a prominent semantic signature existing through and across political lines – to be both a Great Power and a Good Power. Trumpism is an embrace of America as a Great Power and a rejection of America as a Good Power, in all its forms, both domestic and internationally. More than that, it is an ideological embrace of America as a Great Power, that this is everything America should be, and an ideological rejection of America as a Good Power, that this is something America should never be… I absolutely think this is a tragedy, because the pursuit of great power for great power’s sake transforms every American policy, both foreign and domestic, into a protection racket of one form or another.”

This isn’t who we are.

Only by saying that, out loud, and standing up for it again and again, can we avoid becoming something else entirely.

by John P. Hussman Ph.D., Hussman Funds |  Read more: 
Image: uncredited
[ed. I'm as guilty as anyone of railing against the stupidity, avarice and profiteering we see prevalent in politics and human relations today. And while I rail against MAGA true believers, I was once as idealistic and true-believing as they are now, convinced everything was as binary as I imagined. Age and a careful study of history cured that mistake. But now I'm convinced this infighting is more dangerous (and distracting) than we realize. We might be sliding into authoritarianism, but if so it'll be a short-lived version of what we've historically experienced. Instead, I fear it'll be more like a blow-off top for humanity. While we're fighting each other over scraps, billionaires are building bunkers and tech bros are building AI overlords. Guess who survives and rules in the end? Hint: it ain't us. See also: Democrats Must Become the Workers’ Party Again (TNR); excellent advice from former Ohio Senator Sherrod Brown (recently defeated).]

Etel Adnan, Unshudat al-Matar, (leporello (detail), watercolour and Indian ink on Japanese paper, 20 pages), 2001 [poetry by Badr Shakir al-Sayyab, drawing and handwriting by Etel Adnan]

Paul Celan, Memory Rose into Threshold Speech. The Collected Earlier Poetry: A Bilingual Edition, Mohn und Gedächtnis (1952)

Yes, Shrimp Matter

I left private equity to work on shrimp welfare. When I tell anyone this, they usually think I've lost my mind. I know the feeling — I’ve been there. When I first read Charity Entrepreneurship's proposal for a shrimp welfare charity, I thought: “Effective altruists have gone mad — who cares about shrimp?”

The transition from analyzing real estate deals to advocating for some of the smallest animals in our food system feels counterintuitive, to say the least. But it was the same muscle I used converting derelict office buildings into luxury hotels that allowed me to appreciate an enormous opportunity overlooked by almost everyone, including those in the animal welfare space. I still spend my days analyzing returns (though they’re now measured in suffering averted). I still work to identify mutual opportunities with industry partners. Perhaps most importantly, I still view it as paramount to build trust with people who — initially — sit on opposite sides of the table.

After years of practicing my response to the inevitable raised eyebrows, I now sum it up simply: ignoring shrimp welfare would have been both negligent and reckless.

This may seem like an extreme stance. Shrimp aren't high on the list of animals most people think about when they consider the harms of industrial agriculture. For a long time — up until the last few years — most researchers assumed shrimp couldn't even feel pain. Yet as philosopher Jonathan Birch explains in The Edge of Sentience, whenever a creature is a sentience candidate and we cannot rule out its capacity for conscious experience, we have a responsibility to take its potential for suffering seriously.

We don’t know what it is like to be a shrimp. We do know that if shrimp can suffer, they are doing so in the hundreds of billions.

Counting billions

Why worry about shrimp in a world where so many mammals and birds live in torturous conditions due to industrial agriculture? The answer is that shrimp farming dwarfs other forms of animal agriculture by sheer numbers. An estimated 230 billion shrimp of various species are alive in farms at any given moment — compared to the 779 million pigs, 1.55 billion cattle, 33 billion chickens, and 125 billion farmed fish.

Shrimp are harvested at around 6 months of age, which puts the estimated number slaughtered annually for human consumption at 440 billion. For perspective: that’s more than four times the number of humans who have ever walked the earth. At sea, the numbers are even more staggering. Globally, 27 trillion shrimp are caught in the wild every year, compared to 1.5 trillion fish.
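
[ed. The 440 billion figure follows roughly from the standing population and the six-month grow-out. A back-of-envelope check in Python; the arithmetic is mine, not the article's:]

    standing_population = 230e9   # farmed shrimp alive at any given moment
    grow_out_months = 6           # approximate age at harvest
    cycles_per_year = 12 / grow_out_months           # ~2 production cycles a year
    implied_annual_harvest = standing_population * cycles_per_year
    print(f"~{implied_annual_harvest / 1e9:.0f} billion slaughtered per year")
    # ~460 billion, in line with the ~440 billion estimate once
    # pre-harvest mortality is accounted for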

Despite their size, shrimp are the proverbial “elephant in the room” when discussing animal welfare in food systems. (...)

Shrimp’s nervous system, behavior, and estimated welfare capacity all point toward meaningful sentience. The fact that they haven't been studied as extensively as some other animals should not blind us to the evidence we do have, nor to their evident similarities with better-studied relatives. (...)

Beyond the water

In modern shrimp farming, a shrimp begins life in a hatchery, born to a mother who has endured one of the industry's most severe practices: eyestalk ablation. This procedure involves physically cutting off the appendage from which her eyes protrude — imagine having your optic nerve severed and your entire eye removed, all without anaesthesia. This mutilation, designed to induce spawning, sets the tone for a life marked by intensive farming practices.

At just a few days old, the young shrimp is transferred to a grow-out pond where it will spend the next three to six months of its life. In super-intensive systems, which represent a non-negligible portion of the industry in some regions, 500 to 1,000 shrimp are packed into each square meter. For a creature that grows to 13 centimeters in length, this density makes it impossible to perform natural behaviors like burrowing or resting on the bottom. Instead, the shrimp must swim continuously in the crowded space.

Poor water quality poses a persistent threat, regardless of stocking density. Just as humans need clean air to breathe, shrimp require clean water to survive. Their environment is often impacted by fluctuating oxygen levels and the presence of toxic gases — ammonia and hydrogen sulfide from accumulated waste, to name two. In densely packed systems, these challenges become especially dangerous as a single water quality mishap can rapidly cascade into a mass mortality event. These conditions weaken their immune systems, leading to widespread disease. In what is considered a successful harvest, 20 to 30% of the population may die before reaching market size.

Those who survive face an ending that recent research suggests may be more cruel than previously thought. In the best-case scenario, they're immersed in ice slurry, a practice long considered humane. However, emerging evidence from EEG studies indicates this method merely paralyses the shrimp while still leaving them conscious and capable of feeling pain for an extended period. In many cases, the reality is even harsher — some are left on crates for several minutes to drain excess water weight, while others are forced to endure long-distance transport in severely under-oxygenated barrels for up to eight hours, journeys that can stretch hundreds of kilometers from farm to market.

To be clear, not all shrimp experience every welfare concern listed here, nor do most shrimp suffer from all of them simultaneously. However, these issues arise so frequently that nearly all shrimp will likely experience two or more of these welfare violations during their lives.

by Andrés Jiménez Zorrilla, Asterisk |  Read more:
Image: uncredited
[ed. See also: Shrimp: The animals most commonly used and killed for food production (RP).]

The Dragon in My Garage

"The Dragon in My Garage" is a chapter in Carl Sagan's 1995 book The Demon-Haunted World, which presents an analogy where the existence of God is equated with a hypothetical insistence that there is a dragon living in someone's garage. This is similar to Russell's Teapot in the way it forms an apt analogy for the concepts of the burden of proof and falsifiability. The main thrust of how Sagan develops the garage-dwelling dragon example is that the proponent employs increasingly ad hoc reasoning to describe their belief in the face of further questions. Eventually, the goalposts are moved in such a way as to render the initial assertion practically unfalsifiable. In a more general sense, this part may be done during the initial definition of the belief, or as when replying to critical examination of the belief in question.

Dragon-style arguments originate in what Daniel Dennett terms "belief in belief": rather than actually holding a belief, you think you should hold the belief — or "fake it till you make it". The post hoc justifications come from cognitive dissonance between what the believers think they should believe and how these beliefs would actually manifest in practical terms. While such justifications need to be made quickly on an ad hoc basis, someone declining all these tests must, somewhere in their head, have a model that makes them not expect to see this sort of evidence at all. This is tantamount to not really holding the belief (since you'd expect to see something if you really did believe), but just thinking that they do, hence "belief in belief". This is often rationalised away in much the same manner that the metaphorical dragon is, by changing the rules to say that the dragon doesn't really need to have a real effect on our lives to have a real effect on our lives. What? Exactly.

In the case of the dragon, we expect footprints and flames, in the case of miracles and prayer we expect the ability to test them — and proponents subsequently attempt to hide these things from experimental scrutiny.

Sagan described the discussion as follows:

"A fire-breathing dragon lives in my garage"

Suppose I seriously make such an assertion to you. Surely you'd want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

"Show me," you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle — but no dragon.

"Where's the dragon?" you ask.

"Oh, she's right here," I reply, waving vaguely. "I neglected to mention that she's an invisible dragon."

You propose spreading flour on the floor of the garage to capture the dragon's footprints.

"Good idea," I say, "but this dragon floats in the air."

Then you'll use an infrared sensor to detect the invisible fire.

"Good idea, but the invisible fire is also heatless."

You'll spray-paint the dragon and make her visible.

"Good idea, but she's an incorporeal dragon and the paint won't stick."

And so on. I counter every physical test you propose with a special explanation of why it won't work.

Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there's no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I'm asking you to do comes down to believing, in the absence of evidence, on my say-so.

How do I do this?

It's easy to create your own unfalsifiable belief. Just follow these steps:
  1. Express a belief
  2. Someone proposes a way in which the belief can be tested
  3. Add or change an attribute of the belief to render the proposed test invalid, and simply reiterate step 1
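
[ed. The three steps amount to a loop. A toy sketch in Python (mine, not Sagan's or RationalWiki's), using the dragon dialogue above as data; each proposed test is met by bolting a new attribute onto the belief, so the claim never yields a testable prediction:]

    belief_attributes = []    # grows with every dodge (step 3)

    # Each proposed test maps to the ad hoc attribute that defeats it.
    counters = {
        "look inside the garage":      "she's an invisible dragon",
        "spread flour for footprints": "this dragon floats in the air",
        "scan the fire with infrared": "the invisible fire is also heatless",
        "spray-paint her visible":     "she's incorporeal; the paint won't stick",
    }

    def respond(test):
        dodge = counters.get(test, "the dragon is beyond that sort of measurement")
        belief_attributes.append(dodge)    # the belief mutates to survive the test
        return f"Good idea, but {dodge}."

    for test in counters:
        print(f"You: {test}. Me: {respond(test)}")

    # After the loop the "dragon" predicts nothing observable -- which is the point.
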
by RationalWiki |  Read more:
Image: YouTube via
[ed. Applicable to a variety of situations and professions (lawyers and politicians in particular) but especially prevalent among MAGA supporters (lawyers and politicians know they're purposely being evasive). Facts are useless. They've already drunk deeply from the religion of Trump, and like any religion, their support and commitment is grounded in faith. Facts don't and won't matter until they're the ones getting shafted (which'll happen soon enough), and even then they'll dragonize them away. For fun, just ask them to define "Make", "Great" and "Again" and watch the froth fly. See also: Elon Musk and the Useless Spending-Cut Theater of DOGE (NYT):]
***
Riedl: I think Donald Trump is a big government populist who reflects where the Republican Party is today. Today’s Republican Party is older, lower income, more dependent on not just Social Security and Medicare, but programs like Medicaid and SNAP. It also includes a lot of veterans who want veteran spending and a lot of people concerned about defense.

So, overall, you have a big government populist party. But what’s interesting in this populism is, while they’re definitely more comfortable with government spending than past Republicans, they’re also accelerating the tax cut rhetoric. And as an economist, I look at that and say something’s got to give. (...)

French: So, what is your best estimate about the DOGE savings right now?

Riedl: Perhaps $2 billion, which they claim is $55 billion. Even that $2 billion may not ultimately happen because technically speaking, DOGE cannot impound and unilaterally reduce federal spending. Any spending cuts legally have to be reprogrammed elsewhere unless Congress goes in and reduces the spending levels. So right now I would say DOGE has saved $2 billion, which, to put it in context, is one-thirty-fifth of 1 percent of the federal budget, otherwise known as budget dust. [ed. emphasis added]
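
[ed. Riedl's "one-thirty-fifth of 1 percent" checks out against the roughly $7 trillion in annual federal outlays he cites later in the interview:]

    savings = 2e9     # DOGE savings Riedl actually credits
    outlays = 7e12    # approximate annual federal spending
    print(f"{savings / outlays:.4%}")   # 0.0286%
    print(f"{0.01 / 35:.4%}")           # one-thirty-fifth of 1 percent: also 0.0286%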

Riedl: The budget resolution mostly consists of $4.5 trillion in tax cuts over 10 years.

They’re also indicating they’ll offset this with cuts to Medicaid, SNAP and other nutrition spending, and likely student loans. I’m skeptical that Congress can actually pass this. If they don’t, it will be a $4.5 trillion cost over 10 years.

The budget also promises discretionary savings far into the future, but there’s nothing enforcing that and there’s no reason to take it seriously. The budget also assumes a huge growth in tax revenues from economic growth. That is more of a gimmick. It’s not going to happen.

French: It’s been a while since I’ve had a math class, but it sounds like what you’re saying is they’re cutting $2 billion for savings but they’re adding $4,500 billion in deficit. It’s $2 billion versus $4,500 billion. Those are very, very different numbers.

Trump and DOGE have been focused on reducing the number of federal employees. What would the impact be on the federal deficit of, say, cutting 300,000 or 400,000 federal employees?

Riedl: Here’s one way to look at it: There are 2.3 million civilian employees. If we eliminated one quarter of them — which would be remarkable, that would be laying off nearly 600,000 workers and not replacing them — you would save 1 percent of federal spending. (...)
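
[ed. Riedl's "1 percent" is easy to sanity-check; the average compensation figure below is my assumption, not his:]

    employees = 2.3e6                # civilian federal workforce
    laid_off = employees / 4         # ~575,000 workers in Riedl's scenario
    avg_cost = 120_000               # assumed fully loaded salary + benefits per worker
    savings = laid_off * avg_cost    # ~$69 billion
    print(f"{savings / 7e12:.1%} of ~$7 trillion in annual outlays")   # ~1.0%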

French: Suppose you wanted to be serious about cutting the deficit. Where does federal money go? And as a corollary to that, what has to be cut or what kind of revenue has to be raised to meet these obligations?

Riedl: When I explain where the money goes, it’ll be clear why we’re not cutting it.

Seventy-five percent of all federal spending goes to six items: Social Security, Medicare, Medicaid, defense, veterans and interest. That is 75 cents of every dollar. Everything the government does besides that — education, health research, housing, justice, homeland security — that’s all the other 25 percent. But Social Security, Medicare and Medicaid are the big drivers. That’s really the ballgame.

Wednesday, March 5, 2025

How To Make Superbabies

Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research.

We’ve spent the better part of the last two decades unravelling exactly how the human genome works and which specific letter changes in our DNA affect things like diabetes risk or college graduation rates. Our knowledge has advanced to the point where, if we had a safe and reliable means of modifying genes in embryos, we could literally create superbabies. Children that would live multiple decades longer than their non-engineered peers, have the raw intellectual horsepower to do Nobel prize worthy scientific research, and very rarely suffer from depression or other mental health disorders.

The scientific establishment, however, seems to not have gotten the memo. If you suggest we engineer the genes of future generations to make their lives better, they will often make some frightened noises, mention “ethical issues” without ever clarifying what they mean, or abruptly change the subject. It’s as if humanity invented electricity and decided the only interesting thing to do with it was make washing machines.

I didn’t understand just how dysfunctional things were until I attended a conference on polygenic embryo screening in late 2023. I remember sitting through three days of talks at a hotel in Boston, watching prominent tenured professors in the field of genetics take turns misrepresenting their own data and denouncing attempts to make children healthier through genetic screening. It is difficult to convey the actual level of insanity if you haven’t seen it yourself.

As a direct consequence, there is low-hanging fruit absolutely everywhere. You can literally do novel groundbreaking research on germline engineering as an internet weirdo with an obsession and sufficient time on your hands. The scientific establishment is too busy with their washing machines to think about light bulbs or computers.

This blog post is the culmination of a few months of research by myself and my cofounder into the light bulbs and computers of genetics: how to do large-scale, heritable editing of the human genome to improve everything from diabetes risk to intelligence. I will summarize the current state of our knowledge and lay out a technical roadmap examining how the remaining barriers might be overcome.

We’ll begin with the topic of the insane conference in Boston; embryo selection.

by Gene Smith, Less Wrong |  Read more:
Image: via

What We're Fighting For

[ed. Or more to the point, what we're fighting against - the Rot Economy, as the author terms it (also known as enshittification).]

A great deal of what I write feels like narrating the end of the world — watching as the growth-at-all-costs, hyper-financialized Rot Economy seemingly tarnishes every corner of our digital lives. My core frustration isn't just how shitty things have gotten, but how said shittiness has become so profitable for so many companies. (...)

The business of making our shit worse to increase revenue growth year-over-year is booming. The products you use every day are more confusing and frustrating to use because everything must grow, which means that product decisions are now driven, in many cases, by companies trying to make you do something rather than do something for you, which in turn means that basic product quality — things like "usability" or "functionality" — are secondary considerations.
 
It’s why your Facebook newsfeed doesn’t show you posts from friends and family, but is happy to bombard you with AI-generated images of weirdly shiny-faced old people celebrating their birthday alone, replete with a heartstring-tugging caption. It’s why whenever you search for something — not just on Google, but anywhere — the keywords you provide aren’t treated as an explicit instruction of something you want to see, but randomly disregarded with no rhyme or reason.

We do not "use" the computer — we negotiate with it to try and make it do the things we want it to do, because the incentives behind modern software development no longer align with the user.

Too often when you open an app you start bargaining with the company behind it — like a popup from Dropbox saying you could save money switching to an annual plan, securing annual recurring revenue and locking you into something it hopes you'll forget. Tech companies have the perseverance and desperate hunger for your money of a timeshare salesman, and they’re not sorry.

And that’s assuming it even loads. We’re all familiar with the tense moment where you open Microsoft Teams and hope that it doesn't crash, or that your audio or video works. We live in a constant state of digital micro-aggressions, and as I wrote last year, it's everywhere: banking apps that now have "helpful assistants" that get in the way of, well, banking; pop-ups during online shopping that promise discounts in exchange for our emails and phone numbers so they can spam us; notifications from apps that are built to push us to interact further (like Instagram's "someone just posted a comment on someone else's post" notifications); or the emails we get from Amazon about an order shipping that don't include any of the actual information about the purchase — a product decision allegedly made to stop Google from scraping your emails and selling that info to other parties, which is Amazon's business, not Google's.

Yet my — and I'd imagine your — frustration isn't borne of a hatred of technology, or a dislike of the internet, or a lack of appreciation of what it can do, but the sense that all of this was once better, and that these companies have turned impeding our use of the computer into an incredibly profitable business.

So much of the pushback I get in my work — and the pushback I've seen toward others — is that I "hate" technology, when I'd like to argue that my profound disgust is borne of a great love of technology, and a deep awareness of the positive effects it's had on my life. I do not turn on my computer every day wanting to be annoyed, and I don't imagine any of you do either. We're not logging onto whatever social networks we're on because we are ready to be pissed off. If anything, we'd love to be delighted by the people we choose to connect with and the content we consume, and want to simply go about our business without a litany of microaggressions created by growth-desperation and a lack of responsibility toward the user. (...)

The problem is that we, as a society, still act like technology is some distinct thing separate from our real lives, and that in turn “technology” is some sort of hobbyist pursuit. Mainstream media outlets have a technology section, with technology reporters that are hired to cover “the technology industry,” optimizing not for any understanding or experience in using technology, but for a 30,000-foot view of “what the computer people are doing.”

This may have made more sense 20 years ago — though I’d add that back in 2008 you had multiple national newspapers with technology columnists, and computers were already an integral part of our working and personal lives — but in the year 2025 it is a fundamental failure of modern media. Every single person you meet in every single part of your life likely interfaces with technology as much as, if not more than, they do with other people in the real world, and the technology coverage they read in their newspaper or online doesn’t represent that. It’s why a relatively modest software update for Android or Windows earns vastly more column inches than the fact that Google, a product that we all use, doesn’t really work anymore.

As a result, it’s worth considering that billions of people actually really like what technology does for them, and in turn are extremely frustrated with what technology does to them.

The problem is that modern tech media has become oriented around companies and trends rather than the actual experience of a person living in reality. Generative AI would never have been any kind of “movement” or “industry” if the media had approached it from the perspective of a consumer and said “okay, sure, but what does this actually do?” and the same goes for both the metaverse and cryptocurrency. (...)

Worse still, regular people are also furious at the state of software, and are fully aware that they’re being conned. The tech media continually frames the “growing distrust” of the tech industry as some result of political or social change or an accumulation of scandals, rather than the big, unspoken scandal called “how the tech industry made things worse in the pursuit of growth,” and the greater scandal of exactly how much contempt tech regularly treats its customers with.

And more importantly, regular people feel like they’re being gaslit by the tech media. I am regularly told that people are glad to have *someone* say simple things like “hey the apps you use that feel like they’re fucking with you? They are actually doing that!” with regularity. The feedback I regularly receive is that there are too many articles about technology that seem fundamentally disconnected from reality, or at the very least disconnected from the people at the receiving end of the product.

by Ed Zitron, Where's Your Ed At |  Read more:
Image: uncredited
[ed. Also known as enshittification. Unfortunately, it looks like there's no turning back; we're all junkies now, forced to accept whatever bad product is around because there's no alternative. See also: As Internet enshittification marches on, here are some of the worst offenders (Ars Technica).]

Tuesday, March 4, 2025

Sesame: Eerily Realistic AI Voice Demo

In late 2013, the Spike Jonze film Her imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved.

"I tried the demo, and it was genuinely startling how human it felt," wrote one Hacker News user who tested the system. "I'm almost a bit worried I will start feeling emotionally attached to a voice assistant with this level of human-like sound."

In late February, Sesame released a demo for the company's new Conversational Speech Model (CSM) that appears to cross over what many consider the "uncanny valley" of AI-generated speech, with some testers reporting emotional connections to the male or female voice assistant ("Miles" and "Maya").

In our own evaluation, we spoke with the male voice for about 28 minutes, talking about life in general and how it decides what is "right" or "wrong" based on its training data. The synthesized voice was expressive and dynamic, imitating breath sounds, chuckles, and interruptions, even sometimes stumbling over words and correcting itself. These imperfections are intentional.

"At Sesame, our goal is to achieve 'voice presence'—the magical quality that makes spoken interactions feel real, understood, and valued," writes the company in a blog post. "We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding." (...)

Browsing reactions to Sesame online, we found many users expressing astonishment at its realism. "I've been into AI since I was a child, but this is the first time I've experienced something that made me definitively feel like we had arrived," wrote one Reddit user. "I'm sure it's not beating any benchmarks, or meeting any common definition of AGI, but this is the first time I've had a real genuine conversation with something I felt was real." Many other Reddit threads express similar feelings of surprise, with commenters saying it's "jaw-dropping" or "mind-blowing."

While that sounds like a bunch of hyperbole at first glance, not everyone finds the Sesame experience pleasant. Mark Hachman, a senior editor at PCWorld, wrote about being deeply unsettled by his interaction with the Sesame voice AI. "Fifteen minutes after 'hanging up' with Sesame's new 'lifelike' AI, and I'm still freaked out," Hachman reported. He described how the AI's voice and conversational style eerily resembled an old friend he had dated in high school.

Gavin Purcell, co-host of the AI for Humans podcast, posted an example video on Reddit in which a human pretends to be an embezzler arguing with his boss. The exchange is so dynamic that it's difficult to tell which speaker is the human and which is the AI model. Judging by our own demo, it's entirely capable of what you see in the video.

by Benji Edwards, Ars Technica |  Read more:
Image: Moor Studio via Getty Images
[ed. Been hearing about this, and...wow. Pretty realistic (and just getting started). Give it a try (here).]

America Is Pushing Its Workers Into Homelessness

At 10 p.m., a hospital technician pulls into a Walmart parking lot. Her four kids — one still nursing — are packed into the back of her Toyota. She tells them it’s an adventure, but she’s terrified someone will call the police: “Inadequate housing” is enough to lose your children. She stays awake for hours, lavender scrubs folded in the trunk, listening for footsteps, any sign of trouble. Her shift starts soon. She’ll walk into the hospital exhausted, pretending everything is fine.

Across the country, men and women sleep in their vehicles night after night and then head to work the next morning. Others scrape together enough for a week in a motel, knowing one missed paycheck could leave them on the street.

These people are not on the fringes of society. They are the workers America depends on. The very phrase “working homeless” should be a contradiction, an impossibility in a nation that claims hard work leads to stability. And yet, their homelessness is not only pervasive but also persistently overlooked — excluded from official counts, ignored by policymakers, treated as an anomaly rather than a disaster unfolding in plain sight.

Today, the threat of homelessness is most acute not in the poorest regions of the country, but in the richest, fastest-growing ones. In places like these, a low-wage job is homelessness waiting to happen.

For an increasing share of the nation’s work force, a mix of soaring rents, low wages and inadequate tenant protections has forced workers into a brutal cycle of insecurity in which housing is unaffordable, unstable or entirely out of reach. A recent study analyzing the 2010 census found that nearly half of people experiencing homelessness while staying in shelters, and about 40 percent of those living outdoors or in other makeshift conditions, had formal employment. But that’s only part of the picture. These numbers don’t capture the full scale of working homelessness in America: the many who lack a home but never enter a shelter or wind up on the streets.

I’ve spent the past six years reporting on men and women who work in grocery stores, nursing homes, day care centers and restaurants. They prepare food, stock shelves, deliver packages and care for the sick and elderly. And at the end of the day, they return not to homes but to parking lots, shelters, the crowded apartments of friends or relatives and squalid extended-stay hotel rooms.

America has been experiencing what economists have described as a historically tight labor market, with a national unemployment rate of just 4 percent. And all the while, homelessness has soared to the highest level on record.

What good is low unemployment when workers are a paycheck away from homelessness?

A few statistics succinctly capture why this catastrophe is unfolding: Today there isn’t a single state, city or county in the United States where a full-time minimum-wage worker can afford a median-priced two-bedroom apartment. An astounding 12.1 million low-income renter households are “severely cost burdened,” spending at least half of their earnings on rent and utilities. Since 1985, rent prices have exceeded income gains by 325 percent.

According to the National Low Income Housing Coalition, the average “housing wage” required to afford a modest two-bedroom rental home across the country is $32.11, while nearly 52 million American workers earn less than $15 an hour. And if you’re disabled and receive S.S.I., it’s even worse: Those payments are currently capped at $967 a month nationwide, and there is hardly anywhere in the country where this form of fixed income is enough to afford the average rent.
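[ed. For a rough sense of the arithmetic, assuming the standard affordability convention I believe NLIHC uses — rent counts as "affordable" at 30 percent of income, and full-time means 40 hours a week, 52 weeks a year:

\[
\text{affordable monthly rent} \;=\; \frac{0.30 \times w \times 40 \times 52}{12}
\]

\[
w = \$32.11 \;\Rightarrow\; \approx \$1{,}670 \qquad
w = \$15.00 \;\Rightarrow\; \approx \$780 \qquad
\text{SSI: } 0.30 \times \$967 \;\approx\; \$290
\]

The first figure is, by construction, roughly the rent on that modest two-bedroom; at $15 an hour the "affordable" rent is less than half of it, and on SSI it wouldn't cover a room.]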

But it’s not just that wages are too low; it’s that work has become more precarious than ever. Even for those earning above the minimum wage, job security has eroded in ways that make stable housing increasingly out of reach.

This results in a devastating pattern: As cities gentrify and become “revitalized,” the nurses, teachers, janitors and child care providers who keep them running are being systematically priced out. Unlike in earlier periods of widespread immiseration, such as the recession of 2008, what we’re witnessing today is a crisis born less of poverty than of prosperity. These workers aren’t “falling” into homelessness. They’re being pushed. They’re the casualties not of a failing economy but of one that’s thriving — just not for them.

And yet, even as this calamity deepens, many families remain invisible, existing in a kind of shadow realm: deprived of a home, but neither counted nor recognized by the federal government as “homeless.”

More and more workers now face volatile schedules, unreliable hours and a lack of benefits such as sick leave. The rise of “just in time” scheduling means employees don’t know how many hours they’ll get week to week, making it impossible to budget for rent. Entire industries have been gigified, leaving ride-share drivers, warehouse workers and temp nurses working without benefits, protections or reliable pay. Even full-time jobs in retail and health care — once seen as dependable — are increasingly contracted out, turned into part-time roles or made contingent on meeting ever-shifting quotas.

For millions of Americans, the greatest threat isn’t that they’ll lose their jobs. It’s that the job will never pay enough, never provide enough hours, never offer enough stability to keep them housed. (...)

This exclusion was by design. In the 1980s, as mass homelessness surged across the United States, the Reagan administration made a concerted effort to shape public perception of the crisis. Officials downplayed its severity while muddying its root causes. Federal funding for research on homelessness was steered almost exclusively toward studies that emphasized mental illness and addiction, diverting attention from structural forces — gutted funding for low-income housing, a shredded safety net. Framing homelessness as a result of personal failings didn’t just make it easier to dismiss; it was also less politically threatening. It obscured the socioeconomic roots of the crisis and shifted blame onto its victims. And it worked: By the late 1980s, at least one survey showed that many Americans attributed homelessness to drugs or unwillingness to work. Nobody mentioned housing.

Over the decades, this narrow, distorted view persisted, embedding itself in the federal government’s annual homeless census. Before something can be counted, it must be defined — and one way the United States has “reduced” homelessness is by defining entire groups of the homeless population out of existence. Advocates have long decried the census’ deliberately circumscribed definition: only those in shelters or visible on the streets are tallied. As a result, a relatively small but conspicuous fraction of the total homeless population has come to stand, in the public imagination, for homelessness itself. Everyone else has been written out of the story. They literally don’t count.

The gap between what we see and what’s really happening is vast. Recent research suggests that the true number of people experiencing homelessness — factoring in those living in cars or motel rooms, or doubled up with others — is at least six times as high as official counts. As bad as the reported numbers are, the reality is far worse. The tents are just the tip of the iceberg, the most glaring sign of a far more entrenched crisis.

This willful blindness has caused incalculable harm, locking millions of families and individuals out of vital assistance. But it’s done more than that. How we count and define homelessness dictates how we respond to it. A distorted view of the problem has led to responses that are inadequate at best and cruelly counterproductive at worst. (...)

Because when work no longer provides stability, when wages are too low and rents are too high, when millions of people are one medical bill, one missed paycheck, one rent hike away from losing their homes — who, exactly, is safe?

by Brian Goldstone, New York Times | Read more:
Image: Derek Miller Hurtado
[ed. Wait until Medicaid gets gutted (it's coming), and all the federal workers currently being dismissed are suddenly unable to pay their bills. The homelessness problem in America is going to explode.] 

Good Advice

Monday, March 3, 2025

We Really Are Entering a New Age of Romanticism

An update on the war against algorithms and technocratic manipulation

Two hundred years ago, people got fed up with algorithms. And they went to war against them.

That’s a prototype for what we need today. And we will get it.

Buckle up, my friends, it will happen again! That’s because people now see the sterility and human waste created by a culture of brutal algorithms—imposed by a consortium of billionaires operating without accountability or constraint.

The backlash is already underway, and will gain momentum with each passing year—maybe even with each passing month.

This is the new Age of Romanticism that I predicted more than a year ago.

Since I made this bold forecast, several other thinkers have joined with me. This is no longer a hypothesis—it’s an actual movement.

Here are some of the participants:
  • Ross Barkan has taken a leading role in defining the New Romanticism. He wrote about it in The Guardian shortly after my essay was published. Barkan shares my sense that it represents an inevitable backlash to overreaching tech, but he adds several new twists drawn from his own experience on the media frontlines. He revisited the subject a few days ago, in a smart assessment of how this emerging worldview is reflected in a wide range of trends and attitudes.
  • Dr. Anjan Chatterjee followed up a few months later with his article “The New Romantics” in Psychology Today. “Like two centuries ago, an unlikely group of people are converging to combine science, nature, art, and aesthetics,” he declares. They are committed to the “ideas that nature can be restorative and the arts can be transformative.”
  • Megha Lillywhite offered her endorsement in January in an essay entitled “Rx: Romanticism.” She sees this new attitude as a valuable way of dealing with pressing problems “without devolving into the simplistic dichotomy of bipartisan politics.” It addresses a “longing for the human” in an age of degrading and manipulative digital technology.
  • Campbell Frank Scribner added to the conversation one week later with his article “Romanticism and the Soul of Learning.” He also sees Romanticism as a pathway out of current deadlocks in society. He believes that conservatives have typically opposed Romanticist worldviews, but argues that this might be the time to reconsider their allegiances—especially when educating the next generation.
  • Kate Alexander released a video on the momentum building for a resurgent Romanticism in December. And it got almost a half million views. So clearly this is more than a fringe attitude among demented Substackers. (...)
For a quick summary (with views both pro and con), you can consult an update on the “Hopeful Romantics” published a week ago by the Wisdom of Crowds substack. (...)

In the old days, movie villains were mobsters or crime syndicates. Nowadays they are tech innovators. This kind of shift in the popular imagination does not happen by chance.

Now let’s revisit the (even older) history.

Back in the 1700s, ruthless algorithms had a different name. They called them Rationalism—and the whole Western world was under the sway of the Age of Reason. But like today’s algorithms, the new systems of the Rationalists attempted to replace human wisdom and experience with intrusive and inflexible operating rules.

It didn’t work.

“This rationalistic philosophy, which had been expected to solve all the problems, had failed to rescue society from either despotism or poverty,” explains Edmund Wilson in his masterful study To the Finland Station.

“The mechanical inventions of which it had been expected that they would vastly improve the lot of humanity were obviously making many people miserable,” he continues.

(By the way, it’s no coincidence that recent tech overreach has been accompanied by a New Rationalism, championed by crypto swindler Sam Bankman-Fried and his many fellow travelers. But that subject deserves a whole article of its own….Now let’s return to history.)

The Rationalists of the 1700s (and today) put their faith in three things—and they all backfired.

(1) The most obvious failure was the attempt to impose rational rules on the political system. This led to the French Revolution, which soon collapsed in terrible bloodshed, and resulted in the dictatorship of Napoleon.

Millions of people died because the dominant algorithms didn’t work.

(2) The second obsession of the Rationalists in the 1700s was the total systematization of all knowledge. (Does that sound familiar?)

They didn’t have ChatGPT back then. But they did the best they could with the immense efforts of the French Encyclopedists and German taxonomists.

Everything got classified, codified, quantified, named, and placed on a chart. Foucault later mocked this as an “archaeology of the human sciences.”

That’s because this way of understanding the world failed to grasp anything that evolved or grew or changed or lived. As in the tech-gone-wild ethos of the current day, the messy human element was removed from the Rationalist systems.

(3) But the Rationalists of the 1700s made one more mistake—and it reminds us again of our current situation. They let a brutal technocracy destroy people’s lives—driven by dreams of profit maximization, and ignoring the human cost.

It wasn’t called Silicon Valley back then. The name given to the technocracy in the 1700s was the Industrial Revolution.

We don’t fully grasp the horrors of the factory sweatshops today—because the Romanticists worked on fixing the problems of industrialism in the 1820s and 1830s. This new generation of artists, humanists, and compassionate critics of the technocracy passed laws against child labor, unsafe working conditions, abusive hours, and other exploitative practices.

In other words, the Romanticists replaced the algorithm with humanist values. Rationalism on its own would never do that.

I want to emphasize this next point—so I am putting it in boldface.

The last age of Romanticism did not destroy the technology—it merely prevented technocrats from abusing people in their pursuit of profits.

by Ted Gioia, The Honest Broker |  Read more:
Image: uncredited

No Good Deed Should Go Unpunished

When Victor Wembanyama swapped jerseys with a young fan after the San Antonio Spurs’ 96-87 win over the Brooklyn Nets at the Barclays Center on Dec. 27, it created a viral image that delighted the internet.

What’s followed has turned a sweet interaction into a hotly debated issue that reached New York’s Supreme Court.

Here’s what we know about the situation.

How did the jersey swap happen?

The five-year-old boy, who was wearing a tiny Spurs Wembanyama jersey, and his father attended the game with a sign that read “Victor Wembanyama will you swap jerseys with me?” After the game, they were invited onto the court where the swap was carried out in front of cameras, with the boy getting Wembanyama’s game-worn City Edition top and the 7-foot-3 Wembanyama getting the comically small replica.

What happened to the game-used jersey after it was given to the boy?

On Jan. 14, it was announced the jersey would be a featured item in the Goldin 100 auction opening on Jan. 22, with a starting bid of $10,000. This sparked a public backlash. Many people who found the original interaction so endearing felt it was distasteful to cash in on the gifted jersey so quickly. However, those who defended the move pointed out that the jersey would likely bring in a significant amount of money for the family and potentially have a life-changing impact for the child.

What did Wembanyama think of his jersey going to auction?

The day after the auction was announced, Wemby quote-tweeted the news and added a crying emoji. So he didn't seem pleased by the decision.

What did the seller think of the sale?

On Monday, after the auction had concluded, Frankie Desideri Sr., the father of the boy who swapped jerseys with Wembanyama, filed a lawsuit for a temporary restraining order against Goldin Auctions through the New York State Supreme Court to stop the sale, as first reported by Cllct. In the documents Desideri filed, he said that “multiple attempts to withdrawal from auction (sic)” were made. He also said Goldin used images of him and his son to promote the auction “despite clear, prior instructions that no images be attached to the sale,” which caused “widespread exposure and emotional harm” as a result of “public scrutiny and harassment” that led to his son suffering “severe emotional distress, avoiding basketball games and believing his favorite player dislikes him after he posted about the sale also.” Desideri said it was the use of their images that prompted him to revoke consent to the auction before it concluded. In addition, the filing said the jersey was gifted to the boy, making him its sole owner, and that “under New York law, contracts involving minors are voidable at the discretion of the legal guardian.”

In Goldin’s court filings in response, the company said Desideri contacted Goldin two days after the swap took place and “voluntarily and without solicitation” entered into a consignment agreement to auction the jersey. It also accused Desideri of “experiencing seller’s remorse.” The filing went on to say the jersey was “sold, paid for, and shipped to the buyer” before Goldin was made aware of the suit. It also contended the company should not be subject to New York law since it is based in New Jersey.

by Brooks Peck, The Athletic |  Read more:
Image: X
[ed. Poor kid (and Wembanyama). Just props in another loser's money-making scam. Also not buying the "life-changing" blah blah blah...if they have enough money for courtside seats the kid will never see a dime.]