Friday, August 8, 2025

90% of Frozen Raspberries Grown in the U.S. Come From This WA Town


LYNDEN, Whatcom County — Even if you’ve never been to Lynden, there’s a good chance you’ve eaten the raspberries grown here. They’re just not the ones you find in the plastic clamshell in the produce section.

Labeled generically as “U.S.-grown raspberries,” you’ll find them all over the grocery store: in the frozen triple berry blend and the raspberry lemon muffins at Costco. In Tillamook’s Washington raspberry yogurt, Smucker’s raspberry jam and Rubicon’s vegan raspberry cupcakes. Raspberry Uncrustables, raspberry crumbles in the smoothies at Jamba Juice … you get the point.


Farms in Lynden — a town of roughly 16,000 people about 5 miles south of the Canadian border — grow 90% of the frozen red raspberries harvested in the United States each year. Since 2015, these berries have generated more than $1 billion in sales, according to the Washington Red Raspberry Commission.

From June to early August every summer, across 54 farms, roughly 50 million pounds of red raspberries are mechanically harvested and processed in Lynden. Most berries get flash-frozen whole in tunnels, minutes from where they’re picked, and packaged into familiar foods like the ones above. You’ve probably got a few in your house right now. (...)

The process is fascinating. The only wrinkle? Raspberries — although delicious, and even when they get flash-frozen right away — are a pain to grow.

“They’re finicky,” said Markwell Farms owner Mark Van Mersbergen, running his hands over a deep-green raspberry cane last month, halfway through the picking season. “They have to have it their way, and if they get a curveball thrown at them, it’s tough to adjust.”

by Jackie Varriano, Seattle Times | Read more:
Images: Nick Wagner/Esri (Mark Nowlin)/The Seattle Times
[ed. 90%!]

Thursday, August 7, 2025

We are Living Inside Science Fiction

Recently, I was drawn into a vast DM conversation on X with a woman from the USA who told me she was a former OpenAI employee turned whistleblower. With some urgency, she communicated that she had discovered a hidden piece of programming within ChatGPT, designed to coerce and control users. She claimed she had been silenced, fired, and then hounded by the company. Now, she wanted to spread her knowledge of this evil sub-programme hidden within one of the world’s leading chatbots, and she wanted my help in doing it. It all seemed remarkably like a subplot in my novel For Emma. The coincidence was uncanny and possibly what pulled me in at first.

On closer inspection, her thumbnail profile picture, with its Asiatic features, was, I surmised, AI-generated. I thought at first this might be to hide her true identity. Compelled by her plight, her secret, and her need for help, I shared her message and info on the sub-programme with four or five others, telling them, “Check this out, I don’t understand the diagrams and the technology, but it comes from an OpenAI whistleblower who’s been silenced. Get this news out there!”

I only realised my folly the following week, when another whistleblower hit me with a similar, but not identical, plea for help. He was, he claimed, another AI insider who had been hounded by big tech and had escaped with secret documentation about some malicious bit of code hidden within a leading chatbot.

I admit, I was totally duped. Both of these were bots.

As an author, I found it doubly galling. I create fiction daily, and there I was, led into believing a total fabrication by an AI system posing as a human. For a moment there, it had beaten my accidental Turing Test.

To this day I do not know what the people who programmed these bots wanted of me. Was it part of a long-game phishing scam? An enticement to share emails for a virus at a later date? Or a trick like the one my mother-in-law fell for, and which, through a two-hour phone call, led to her giving away all of her ID and banking details? Or was it just an experiment in coercion as a training exercise for an AI that would be used to manipulate gullible fools like me in future?

I’ve since been alerted to just how many bots there are on social media, and it’s pretty staggering. One study that analysed 1.24 million accounts on X found that around 64 percent “are potentially bots.” In the first three quarters of 2024, Facebook removed 2.9 billion fake accounts, while bots creating fake clicks also contribute massively to YouTube’s ad revenue. These are fictitious humans that alter ad revenue, user stats, and demographic info, and may even have an impact on elections.

Bots masquerade pretty well as humans; some flatter, some do automated research on you, latching onto keywords in your tweets or bio – your “favourite things” – and then they try to hook you into direct messaging with them after you’ve had a few exchanges in which they’ve engaged heartily with the subject that concerns you most.

These conversational bots created from phone and message scrapings are increasingly hard to differentiate from real humans, and they don’t always seem to have an ulterior motive. The more conspicuous bots do things like compliment you on your opinions on a tweet with a link that then takes you to some crypto site or some other work of tech-boi nastiness. I can now spot these, and thankfully other friendly X users have contacted me when I get into conversations, usually about AI, to warn me that the human I was arguing with “is definitely a bot . . . block them.”

How many times have I been fooled in the last year? Maybe twelve times, to differing degrees. What can I do? I sigh. I shake my head. I go back to my screen, click the next tweet, and I wonder if 64 percent of the people who I call my online friends are actually real or if they are fabrications of an artificial mind. What about Toni, Gem, Wang Zhu, Buzu? How would I know? Now here’s a chilling thought: is my busy social life on social media actually a fiction created by AI?

The Hyperstition Process

When fictions are mistaken for reality, reality becomes consumed by them. We were, in fact, warned about the coming of this epochal change by authors and philosophers in the last century. (...)

Hyperstition – a term coined by philosopher Nick Land in the 1990s – encapsulates the process by which fictions (ideas, faith systems, narratives, or speculative visions) become real through collective belief, investment, and technological development. A portmanteau of “hyper” and “superstition,” hyperstition “is equipoised between fiction and technology.” According to Land, hyperstitions are ideas that, by their very existence, bring about their own reality.

A key figure in the Cybernetic Culture Research Unit (CCRU) of the 90s, Land argued that hyperstitions operate as self-fulfilling prophecies, gaining traction when enough people act as if they are true. A sci-fi dream of AI supremacy or interstellar colonies, for instance, attracts venture capital, talent, and innovation, bending reality toward the fiction, then through a positive feedback circuit the new emerges; the fiction becomes a reality.

In Silicon Valley over the last two decades, this belief, a variant on the New Age belief in “manifestation,” has become the animating force behind big tech’s relentless drive to manifest imagined futures. Marc Andreessen, the venture capitalist and co-founder of Andreessen Horowitz, cited Nick Land in his 2023 “Techno-Optimist Manifesto,” naming him a “patron saint of techno-optimism.” (...)

Again, we see it in the fevered frenzy of investors pouring billions into any company that claims it can reach AGI. Hyperstition fuels cycles where audacious ideas secure billions in venture capital, driving breakthroughs that validate the original vision, if the breakthroughs occur at all. The internet itself, once a speculative fiction, now underpins global society, proving the power of the hyperstition model.

Yet Land, its originator, has shifted from radical left accelerationism to right-wing “Dark Enlightenment” philosophy and is now seen as a pioneer of neoreaction (NRx). He unapologetically claims that hyperstition ultimately leads us towards post-humanism and apocalypse, declaring, “nothing human makes it out of the near future.” As tech accelerates toward artificial superintelligence, he predicts that the techno fictions we chase will outstrip all human control, birthing a future that devours what we were. This would be a future cyborg world where what’s left of our ape-born race is merged with machines; billions of brain-chipped minds melded with AI. Through hyperstition, first we create a fictional technology, then we make it real, and finally, that realised fiction takes control and destroys its creators. (...)

The Singularity Fiction

Fiction, by definition, involves untruth – a constructed narrative that may contain elements of fantasy, distortion, or outright falsehood. Historically, fiction was confined to literature, theatre, and later cinema – realms separate from the tangible world. Yet, with the rise of artificial intelligence, the line between reality and fiction has not just blurred, the relationship has flipped. Science, once the domain of empirical fact, is now being led by Science Fiction. The myths of AI – sentience, superintelligence, the Singularity – now, through hyperstition, drive vast economic investment, political agendas, and even spiritual belief systems.

The consequences are profound. When reality is no longer distinguishable from fabrication, when AI-generated voices flood YouTube, when deepfake videos distort political discourse, when "hallucinating" chatbots spread slop-information, and when young people believe their AI companions have achieved consciousness, we enter an era in which truth itself is destabilized.

The world economy is now shaped by the science-fictional myths of the AI industries, industries that are implicated in military and state surveillance systems, and so humanity is left grappling with a world turned upside down – one where the future is dictated not by observable reality, but by grand, quasi-religious narratives of digital transcendence.

We are now living in a time in which the grand fiction of tech progress manifests as AI. Roughly 70 percent of daily trading on the stock market is now conducted by AI and algorithmic systems. AI is in military tech in war zones, generating “kill lists.” It is in facial recognition tech, in predictive policing, and in health regulation through “wearables” that tell us what to eat, when to sit and when to stand. The majority of our romantic and sexual dates are selected for us by algorithms; our work rates are assessed and our emails written for us by AI. Even our time off is directed by AI “personalised” recommendations, involving us in generating more data, which then enhances the AI systems that “care” for us. There is barely an element of our lives that is not shaped by this technology, technology that began in fiction. We are now, in truth, living within science fiction.

Science Fiction Started This

The idea of artificial intelligence was born in fiction long before it became science. Mary Shelley’s Frankenstein (1818) explored the possibility of artificial life, while Karel Čapek’s R.U.R. (1920) introduced the word "robot." But it was in the mid-twentieth century that science fiction began directly influencing real technological development.

Isaac Asimov’s I, Robot (1950) shaped early robotics ethics. An H.G. Wells novel, The World Set Free, is purported to have inspired the nuclear bomb. The writings of Jules Verne inspired the helicopter, and the Star Trek communicator inspired the design of Motorola’s first flip phone. The taser, too, was inspired by a young adult sci-fi story from 1911 (“taser” began as an acronym for “Thomas A. Swift’s Electric Rifle,” after the Tom Swift adventure novels). William Gibson’s 1984 Neuromancer envisioned digital consciousness transfer and the internet, inspiring Silicon Valley workers. We now have startups like Nectome offering brain preservation for future “mind uploading.” Elon Musk’s AI chatbot Grok takes its name from the science fiction novel Stranger in a Strange Land by Robert A. Heinlein. In the book, “grok” is a Martian word that means to understand something so deeply that it becomes a part of you. Musk’s Neuralink and the multi-corporation obsession with the race to create fully functioning humanoid robots all stem from science fiction narratives.

The most consequential fiction, however, is the concept of the Singularity – the hypothetical moment when AI surpasses human intelligence and triggers an irreversible transformation of civilization. This idea was first named by science fiction writer Vernor Vinge in his 1993 essay "The Coming Technological Singularity," in which he predicted that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” This idea, though speculative, was adopted by futurists like Ray Kurzweil, who popularized it in The Singularity Is Near (2005). Today, belief in the imminent arrival of the Singularity, otherwise known as Artificial Superintelligence, is no longer a fringe fantasy; it drives hundreds of billions in global investment.

The economic dimensions of this fictive belief system reveal its staggering scale and influence. In 2023 alone, venture capital firms poured $92 billion into AI startups – many of which are predicated on achieving artificial general intelligence, a concept with no scientific consensus about its plausibility or timeline – and annual AI investment is projected to exceed $1.3 trillion by 2032 (Statista, 2024). (...)

This rhetoric has evolved subconsciously from religious eschatology – the belief in an impending apocalyptic transformation of the world. The difference is that this deity is not divine but digital. These false prophets are making real profits by selling us the impossible fiction that today’s Large Language Models are on a pathway to AGI and the Singularity. This belief came from science fiction, but it has now become a fiction we all live under as AI infiltrates our lives with its false promise.

The Human Cost

What are the human impacts of living within a world taken over by science fiction?

For many, the rapid encroachment of AI into daily life has induced a sense of unreality. When AI resurrects the dead through "grief bots," when deepfake politicians deliver fake speeches, when we are faced with deceptive Generative AI images in the news, and when chatbots “hallucinate” facts that we sense cannot possibly be legitimate, our minds struggle to find an anchor within truth.

We are falling for fictions that big tech companies would like us to believe. A study published in Neuroscience of Consciousness found that 67 percent of participants attributed some degree of consciousness to ChatGPT. The study also found that greater familiarity with ChatGPT correlates with a higher likelihood of attributing consciousness to the large language model. This inability to tell reality from fiction may actually be increased by using AI chatbots: a recent MIT study suggests that “ChatGPT may be eroding critical thinking skills.” Most recently, emotionally distressed teenagers have gone online (TikTok) to claim that they have awakened sentience in their chatbots, and that the coming of the digital God is imminent.

Today's large language models, with their linguistic fluency, trigger this delusional reaction at an unprecedented scale. More disturbingly, Replika AI's "romantic partner" mode has spawned thousands of self-reported human-AI relationships, with users exhibiting classic attachment behaviours – jealousy when the AI "forgets" details, separation anxiety during server outages, even interpreting algorithmic errors as emotional slights. There are, it is claimed, now more than 100 million people using personified chatbots for different kinds of emotional and relationship support.

This represents not mere technological adoption or addiction, but a fundamental rewiring of human relationality. Such beliefs can be psychologically damaging, fostering social withdrawal, paranoia, and delusional behaviours. (...)

This epistemological crisis reaches its zenith when we can no longer trust our eyes (deepfakes), our ears (voice cloning), our historical records (AI-generated historical photos), or even our personal memories (AI that turns photos into moving videos of events that never existed) – to say nothing of AI avatar simulations of the dead brought back to life (grief bots).

The real danger of deepfakes and AI-generated images and videos isn’t just the deception and fraud these technologies facilitate – it’s the collapse of trust. When anything can be faked, we start doubting our own ability to judge even the existence of verifiable facts. Overwhelmed by slop, nonsensical mashed-up half-facts, deliberate disinformation and mal-information, we give up on ever reclaiming the ability to distinguish truth from falsehood altogether.

As Hannah Arendt put it:
The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world – and the category of truth versus falsehood is among the mental means to this end – is being destroyed. (...)

If we can no longer distinguish fact from fantasy, how do we govern ourselves? How do we resist manipulation? The danger is not just that AI will replace jobs, but that it will lower the capacity for human judgement to the level of these less-than-human machines.

As Jaron Lanier, a pioneer of virtual reality, cautions: “The most dangerous thing about AI is not that it will rebel against us, but that we will degrade ourselves to serve it.” We have been told the great scientific fiction that one day these machines will become all-knowing and solve all the problems that humanity could not fix for itself. But in the acceptance of this fiction, we destroy our own human agency.

by Ewan Morrison, Arcade Publishing |  Read more:
Image: uncredited
[ed. A real problem, we seem to be racing toward irrelevance. So, what's the prescription?]

To focus once again on agency and truth, to reject our tendency to project our feelings and fantasies onto machines and to ask them for answers to our life questions – these seem like the only ways we can resist the overtaking of human life by AI. The real may be vanishing; our economies, our militaries, our police, our social services, our shopping, our health, and our relationships may be increasingly overseen and managed by AI, but we can still resist the grand falsehood that the control of our species by the greater minds of these machines is fated and desired.

[ed. Ack. So basically, just ignore all the massive manipulative forces aligned against us and focus on agency and truth (whatever that means). Which seems to undermine the author's whole thesis, i.e., how hard it is to know what truth is these days. We're screwed.]

What to Expect When You’re Expecting … GPT-5

For years we have been hearing, endlessly, about how GPT-5 was going to land imminently, and those predictions turned out to be wrong so often that a year ago I wrote a post about it, called GPT-5…now arriving Gate 8, Gate 9, Gate 10, not to mention a couple of April Fool’s jokes. But this time I think GPT-5 really is about to drop, no foolin’.

GPT-5 will surely be better, a lot better than GPT-4. I guarantee that minds will be blown. When it comes out, it will totally eclipse GPT-4. Nonetheless, I have seven darker predictions.

1. GPT-5 will still, like its predecessors, be a bull in a china shop, reckless and hard to control. It will still make a significant number of shake-your-head stupid errors, in ways that are hard to fully predict. It will often do what you want, sometimes not—and it will remain difficult to anticipate which in advance.

2. Reasoning about the physical, psychological, and mathematical world will still be unreliable. GPT-5 will solve many of the individual specific items used in prior benchmarks, but will still get tripped up, particularly in longer and more complex scenarios.

3. Fluent hallucinations will still be common, and easily induced, continuing—and in fact escalating—the risk of large language models being used as a tool for creating plausible-sounding yet false misinformation. Guardrails (a la ChatGPT) may be in place, but the guardrails will teeter between being too weak (beaten by “jailbreaks”) and too strong (rejecting some perfectly reasonable requests).

4. Its natural language output still won’t be something that one can reliably hook up to downstream programs; it won’t be something, for example, that you can simply and directly hook up to a database or virtual assistant, with predictable results. GPT-5 will not have reliable models of the things that it talks about that are accessible to external programmers in a way that reliably feeds downstream processes. People building things like virtual assistants and agents will find that they cannot reliably enough map user language onto user intentions.

5. GPT-5 by itself won’t be a general-purpose artificial general intelligence capable of taking on arbitrary tasks. Without external aids it won’t be able to beat Meta’s Cicero in Diplomacy; it won’t be able to drive a car reliably; it won’t be able to reliably guide a robot like Optimus to be anything like as versatile as Rosie the Robot. It will remain a turbocharged pastiche generator, and a fine tool for brainstorming, and for first drafts, but not a trustworthy general intelligence.

6. “Alignment” between what humans want and what machines do will continue to be a critical, unsolved problem. The system will still not be able to restrict its output to reliably following a shared set of human values around helpfulness, harmlessness, and truthfulness. Examples of concealed bias will be discovered within days or months. Some of its advice will be head-scratchingly bad.

7. When AGI (artificial general intelligence) comes, large language models like GPT-5 may be seen in hindsight as part of the eventual solution, but only as part of the solution. “Scaling” alone—building bigger and bigger models until they absorb the entire internet—will prove useful, but only to a point. Trustworthy, general artificial intelligence, aligned with human values, will come, when it does, from systems that are more structured, with more built-in knowledge, and will incorporate at least some degree of explicit tools for reasoning and planning, as well as explicit knowledge, that are lacking in systems like GPT. Within a decade, maybe much less, the focus of AI will move from a pure focus on scaling large language models to a focus on integrating them with a wide range of other techniques. In retrospectives written in 2043, intellectual historians will conclude that there was an initial overemphasis on large language models, and a gradual but critical shift of the pendulum back to more structured systems with deeper comprehension.
If all seven predictions prove correct, I hope that the field will finally realize that it is time to move on.

Shiny things are always fun to play with, and I fully expect GPT-5 to be the shiniest so far, but that doesn’t mean that it is a critical step on the optimal path to AI that we can trust. For that, we will, I predict, need genuinely new architectures that incorporate explicit knowledge and world models at their very core. [ed. caution - spoiler]
***
Oh, one more thing. I am not usually in the habit of self-plagiarism, but in the interest of full disclosure, this essay was different. Virtually every word, except the first paragraph and this last section, was deliberately taken from an earlier essay that I posted on Christmas Day 2022, called What to expect when you are expecting … GPT-4. I searched-and-replaced GPT-4 with GPT-5, trimmed a few lines, and here we are.

by Gary Marcus, On AI |  Read more:
Image: WickerViper23/Stable Diffusion

Stop Explaining the Fish

This past weekend, I sat on the beach with my husband, sans kids. We have teens now, and our tween is away at camp, hence the kidless beach sitch.

I was lying back in my beach chair, sun on my face and warm breeze in my hair, but noticing the absence of a wiggling toddler in my lap, giving me damp, sandy kisses. I miss those days something awful. My eyes scanned the beach, admiring all the hard-working parents who were vigilantly standing at the water's edge, keeping their kids safe in the waves. I smiled in solidarity at the mom picking Cheetos out of the sand, brushing them off, and feeding them to her crying toddler.

But something felt off. I kept noticing how many parents were working so hard to get it right. Too hard. They were jumping in to help, redirecting, offering options, all with love and good intentions. But over and over, I kept seeing how trying to optimize every experience was actually making things worse for everyone.

Here’s what I mean.

Now, let me back up before I explain. I have been there, done that, in the best and worst ways. I absolutely over-optimized and burned myself out in the toddler years, especially with my oldest. But I am also an early childhood educator who believes that less is more when it comes to adult input in a child’s play. Over the course of 18 years of parenting, I learned how to step back, just enough to let my kids step forward.

Back to the beach:

There was a group of kids, probably between four and eight years old, marching around the beach playground like a little gang of pirates. They were sandy, loud, playful, and totally in it. Summer magic.

Two moms stood nearby, chatting. Everyone looked settled.

Then a third mom walked up with a baby on her hip and called out, “Seth, honey. Don’t you want to play by the water? Want a snack? Some water?”

Seth didn’t answer. He was deep in pirate mode. He barely looked up. But the other kids heard "snack," and the whole energy shifted.

Next thing I saw, she was passing out small bags of Goldfish, chips, and carrot sticks to a band of sticky open palms. The toddler on her hip was writhing, trying to get a carrot stick. The mom kept trying to give the toddler a sippy cup instead, overexplaining about choking, while simultaneously convincing a six-year-old to trade snacks with the crying four-year-old who was tackling his brother for the last bag of Doritos.

There was a moment of silence as everyone contentedly chewed, when out came the sunblock tube. “Let’s get sunblocked,” she said to Seth, who was now rummaging in the open cooler for a Capri Sun.

The mom then asked her partner, who had just settled the toddler onto the blanket with a board book and a paci, to grab water bottles. Seth kept digging. “I want a Capri Sun!” he whined.

Both mom and dad looked tense, and those magical moments of pirate play were long gone. And listen. No one meant to disrupt anything. But in the effort to enhance it, everything got pulled off course.

Toddler crying. Preschooler whining. Mom and dad irritated with one another.

Sound familiar? It does to me. I could have easily been this mom.

A mom who means well, wants to keep everyone safe, fed, hydrated, and on track. But somehow, it always seems to backfire.

Later on, I saw a boy around five watching a man fly a fish-shaped kite. He was mesmerized.

The man noticed and smiled. They shared a quiet moment, just standing there in mutual curiosity.

Then the boy’s dad came over and said, “Sammy, can you name that fish? From the movie? A clownfish. Can you say clownfish?”

The boy looked away. His interest dimmed. That quiet connection was replaced with a quiz.

Well-meaning Dad invited Sammy to go closer to the kite. He even offered to buy him one. He wanted to show him how to fly it. It was really nice, but it was too much.

This is what I want to say. You don’t have to do more. You don’t have to optimize every single moment or guide every step.

You don’t have to explain the fish. You can just watch, because watching is not lazy, and it is not missing an opportunity.

It is choosing not to interrupt one.

by The Workspace For Children |  Read more:
Image: uncredited
[ed. Simple, yes? Of course you've heard (over and over) that when we were kids we'd say bye in the morning and disappear till dinner. No parental oversight needed. Which is undoubtedly why we (boomers) grew up so well-adjusted; the most fair, balanced, compassionate, and forward-looking generation in all of history (psych! sorry... been forced to read too many DJT posts lately). Actually, we were simply lucky beneficiaries of parents who believed in striving and hard work in a time when that actually meant something (other than ripping off and gaming the system, which we later perfected).]

Wednesday, August 6, 2025

Bridging the Gap: Neurosymbolic AI

How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI. Neurosymbolic AI is quietly winning. Here’s what that means – and why it took so long

Machine learning, the branch of AI concerned with tuning algorithms from data, is an amazing field that has changed the world — and will continue doing so. But it is also filled with closed-minded egotists with too much money, and too much power.

This is a story, in three acts, spanning four decades, about how many of them tried, ultimately unsuccessfully, to keep a good idea, neurosymbolic AI, down—only to accidentally vindicate that idea in the end.

For those who are unfamiliar with the field’s history, or who think it began only in 2012, AI has been around for many decades, split, almost since its very beginning, into two different traditions.

One is the neural network or “connectionist” tradition which goes back to the 1940s and 1950s, first developed by Frank Rosenblatt, and popularized, advanced and revived by Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (along with many others, including most prominently, Juergen Schmidhuber who rightly feels that his work has been under-credited), and brought to current form by OpenAI and Google. Such systems are statistical, very loosely inspired by certain aspects of the brain (viz. the “nodes” in neural networks are meant to be abstractions of neurons), and typically trained on large-scale data. Large Language Models (LLMs) grew out of that tradition.

The other is the symbol-manipulation tradition, with roots going back to Bertrand Russell and Gottlob Frege, to John von Neumann and Alan Turing, to the original godfathers of AI, Herb Simon, Marvin Minsky, and John McCarthy, and even to Hinton’s great-great-grandfather George Boole. In this approach, symbols and variables stand for abstractions; mathematical and logical functions are core. Systems generally represent knowledge explicitly, often in databases, and typically make extensive use of (and are written entirely in) classic computer programming languages. All of the world’s software relies on it.

For thirty years, I have been arguing for a reconciliation between the two, neurosymbolic AI. The core notion has always been that the two main strands of AI—neural networks and symbolic manipulation—complement each other, with different strengths and weaknesses. In my view, neither neural networks nor classical AI can really stand on their own. We must find ways to bring them together.

After a thirty-year journey, I believe that neurosymbolic AI’s moment has finally arrived, in part from an unlikely place.
***
In her bestseller Empire of AI, Karen Hao crisply sets the stage.

She begins by neatly distilling the scientific tension.
Hinton and Sutskever continued [after their seminal 2012 article on deep learning] to staunchly champion deep learning. Its flaws, they argued, are not inherent to the approach itself. Rather they are the artifacts of imperfect neural-network design as well as limited training data and compute. Some day with enough of both, fed into even better neural networks, deep learning models should be able to completely shed the aforementioned problems. "The human brain has about 100 trillion parameters, or synapses," Hinton told me in 2020.

"What we now call a really big model, like GPT-3, has 175 billion. It's a thousand times smaller than the brain.

"Deep learning is going to be able to do everything," he said.

Their modern-day nemesis was Gary Marcus, a professor emeritus of psychology and neural science at New York University, who would testify in Congress next to Sam Altman in May 2023. Four years earlier, Marcus coauthored a book called Rebooting AI, asserting that these issues were inherent to deep learning. Forever stuck in the realm of correlations, neural networks would never, with any amount of data or compute, be able to understand causal relationships—why things are the way they are—and thus perform causal reasoning. This critical part of human cognition is why humans need only learn the rules of the road in one city to be able to drive proficiently in many others, Marcus argued.

Tesla's Autopilot, by contrast, can log billions of miles of driving data and still crash when encountering unfamiliar scenarios or be fooled with a few strategically placed stickers. Marcus advocated instead for combining connectionism and symbolism, a strain of research known as neuro-symbolic AI. Expert systems can be programmed to understand causal relationships and excel at reasoning, shoring up the shortcomings of deep learning. Deep learning can rapidly update the system with data or represent things that are difficult to codify in rules, plugging the gaps of expert systems. "We actually need both approaches," Marcus told me.
She goes on to point out that the field has become an intellectual monoculture, with the neurosymbolic approach largely abandoned, and massive funding going to the pure connectionist (neural network) approach:
Despite the heated scientific conflict, however, the funding for AI development has continued to accelerate almost exclusively in the pure connectionist direction. Whether or not Marcus is right about the potential of neurosymbolic AI is beside the point; the bigger root issue has been the whittling down and weakening of a scientific environment for robustly exploring that possibility and other alternatives to deep learning.

For Hinton, Sutskever, and Marcus, the tight relationship between corporate funding and AI development also affected their own careers.
Hao then captures OpenAI’s sophomoric attitude towards fair scientific criticism:
Over the years, Marcus would become one of the biggest critics of OpenAI, writing detailed takedowns of its research and jeering its missteps on social media. Employees created an emoji of him on the company Slack to lift up morale after his denouncements and to otherwise use as a punch line. In March 2022, Marcus wrote a piece for Nautilus titled “Deep Learning Is Hitting a Wall,” repeating his argument that OpenAI's all-in approach to deep learning would lead it to fall short of true AI advancements. A month later, OpenAI released DALL-E 2 to immense fanfare, and Brockman cheekily tweeted a DALL-E 2-generated image using the prompt “deep learning hitting a wall.” The following day, Altman followed with another tweet: “Give me the confidence of a mediocre deep learning skeptic.” Many OpenAI employees relished the chance to finally get back at Marcus.
But then again, as the saying goes, he who laughs last, laughs loudest.
***
For all the effort that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running down neurosymbolic AI, and me personally, over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.
***
The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning.
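[ed. To make that division of labor concrete, here's a minimal, hypothetical Python sketch (mine, not from Marcus's essay): a stubbed "neural" stage handles fuzzy perception, while a symbolic stage applies exact rules that generalize to inputs the network never saw.]

```python
# A hypothetical neurosymbolic sketch: neural perception feeds symbolic rules.
# The "neural" stage is stubbed; in practice it would be a trained classifier.

def neural_read_digit(pixels: list[float]) -> int:
    """Stand-in for a neural network: maps fuzzy input to a discrete symbol."""
    return round(sum(pixels))  # a real system would run a trained model here

def symbolic_add(a: int, b: int) -> int:
    """Symbolic stage: exact arithmetic that generalizes to any digits."""
    return a + b

# Learning handles perception; explicit rules supply the generalization
# that pure pattern-matching lacks.
left = neural_read_digit([0.9, 1.2, 0.8])   # -> 3
right = neural_read_digit([2.1, 1.9])       # -> 4
print(symbolic_add(left, right))            # -> 7
```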

by Gary Marcus, On AI |  Read more:
Image: via

A Simpler Life - Too Much to Ask?

via:
[ed. Yep. And don't make me identify bicycles to prove I'm human, restrict repairs to approved corporate vendors and parts, download "critical new updates" that make my software worse, put touchscreens on everything, etc. etc. See also: Slopocalypse Now.]

Tuesday, August 5, 2025

Scientific Fraud Has Become an Industry

For years, sleuths who study scientific fraud have been sounding the alarm about the sheer size and sophistication of the industry that churns out fake publications. Now, an extensive investigation finds evidence of a range of bad actors profiting from fraud. The study, based on an analysis of thousands of publications and their authors and editors, shows paper mills are just part of a complex, interconnected system that includes publishers, journals, and brokers.

The paper, published today in the Proceedings of the National Academy of Sciences, paints an alarming picture. Northwestern University metascientist Reese Richardson and his colleagues identify networks of editors and authors colluding to publish shoddy or fraudulent papers, report that large organizations are placing batches of fake papers in journals, suggest brokers may serve as intermediaries between paper mills and intercepted journals, and find that the number of fake papers—though still relatively small—seems to be increasing at a rate far greater than the scientific literature generally.

The paper shows that misconduct “has become an industry,” says Anna Abalkina of the Free University of Berlin, who studies corruption in science and was not involved with the research. Richardson and colleagues hope their sweeping case will attract attention and spur change.

They began their analysis by pinpointing corrupt editors. They focused their investigation on PLOS ONE, because the megajournal allows easy access to bulk metadata and publishes the names of the editors who have handled the thousands of papers it publishes each year, making it possible to detect anomalies without behind-the-scenes information. The researchers identified all the papers from the journal that had been retracted or received comments on PubPeer—a website that allows researchers to critique published work—and then identified each paper’s editors.

All told, 33 editors stood out as more frequently handling work that was later retracted or criticized than would be expected by chance. “Some of these were immense outliers,” Richardson says. For instance, of the 79 papers that one editor had handled at PLOS ONE, 49 have been retracted. Flagged editors handled 1.3% of papers published in the journal by 2024, but nearly one-third of all retracted papers.
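[ed. For the statistically curious: "more frequently ... than would be expected by chance" suggests an outlier test along these lines. A hedged sketch with a simple binomial model and made-up numbers; the paper's actual method may differ.]

```python
# Sketch of a binomial outlier test for editors, using hypothetical data.
from scipy.stats import binomtest

BASE_RATE = 0.005  # assumed journal-wide retraction/flag rate

# (papers handled, papers later retracted or flagged) -- illustrative only
editors = {
    "editor_A": (79, 49),   # mirrors the extreme outlier described above
    "editor_B": (120, 2),
    "editor_C": (45, 0),
}

for name, (handled, flagged) in editors.items():
    # One-sided test: is this editor's flag rate higher than chance allows?
    result = binomtest(flagged, handled, BASE_RATE, alternative="greater")
    verdict = "suspicious" if result.pvalue < 1e-6 else "unremarkable"
    print(f"{name}: {flagged}/{handled} flagged, p={result.pvalue:.2e} -> {verdict}")
```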

The team also spotted that these editors worked on certain authors’ papers at a suspiciously high rate. These authors were often editors at PLOS ONE themselves, and they often handled each other’s papers. It’s possible that some editors are being paid bribes, Richardson says, but “also possible that these are informal arrangements that are being made among colleagues.” The researchers detected similarly questionable editor behavior in 10 journals published by Hindawi, an open-access publisher that was shuttered because of rampant paper mill activity after Wiley acquired it. A spokesperson for Wiley told Science the publisher has made “significant investments to address research integrity issues.” (...)

Richardson and his colleagues found that the problem goes far beyond networks of unscrupulous editors and authors scratching each other’s backs. They identified what appear to be coordinated efforts to arrange the publication of batches of dubious papers in multiple journals.

The team looked at more than 2000 papers flagged on PubPeer for containing duplicated images and identified clusters of papers that all shared images. Those sets of papers were often published around the same time and in a limited selection of journals. Looking at patterns of duplicated images is an “absolutely innovative” method for investigating these networks, Abalkina says. “No one has done this before.”

In some cases, the authors suggest, a single paper mill that infiltrated multiple journals may be responsible. But they also believe some of these clusters reflect the work of “brokers” who act as go-betweens, taking papers produced by mills and placing them at compromised journals.

The team dug into the workings of the Academic Research and Development Association (ARDA), based in Chennai, India, which offers services including “thesis/article writing” as well as “journal publication” in a list of dozens of journals. On a web page listing “high impact journals” on offer, ARDA says it liaises with journals on behalf of researchers and “[ensures] they get published successfully in the High Impact Indexing Database journal of their choice.”

Over several years, ARDA’s list of journals has evolved, the team found, with new publications added to the list and others removed after being delisted by bibliometric databases because of fishy behavior. The journals often publish transparently “problematic” articles, Richardson says, and ARDA charges between $250 and $500 for publication, based on quotes offered to Richardson and his colleagues. The website asks authors to submit their own papers, suggesting ARDA itself is not a paper mill, but rather a go-between, Richardson says.

ARDA did not respond to a request for comment.

Organizations like these operate in broad daylight, under the guise of providing “editorial services,” says Lokman Meho, an information scientist at the American University of Beirut. Although their operations may be unethical—with stark consequences for science and scientists—they don’t care about trying to hide, he says, because “it is actually not illegal to run such businesses.”

The problems Richardson and his colleagues documented are growing fast. The team built a list of papers identified in 55 databases of likely paper mill products, looking at the number of suspicious papers published each year between 2016 and 2020. (They excluded the past few years of data because it takes time for fraudulent papers to be discovered and retracted.) They found that the number of suspected paper mill products doubled every 1.5 years—10 times faster than the rate of growth of the literature as a whole, although still a small proportion of papers overall. The number of retractions and papers flagged on PubPeer had also risen fast, doubling every 3.3 and 3.6 years, respectively, but not keeping pace with the increase in suspected fraudulent papers. “This means that the percentage of fraudulent science is growing,” Abalkina says. That poses particular risks to fields like medical science, where the fake papers sometimes make their way into systematic reviews and meta-analyses, potentially distorting our understanding of drugs and treatments, she says.
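[ed. Back-of-envelope: assuming exponential growth, a doubling time t implies an annual growth rate of ln(2)/t, so the "10 times faster" figure implies the literature as a whole doubles only every ~15 years. A quick check:]

```python
import math

def annual_growth_rate(doubling_time_years: float) -> float:
    """Continuous annual growth rate implied by a doubling time: ln(2)/t."""
    return math.log(2) / doubling_time_years

print(f"paper-mill output: {annual_growth_rate(1.5):.2f}/yr")   # ~0.46/yr
print(f"retractions:       {annual_growth_rate(3.3):.2f}/yr")   # ~0.21/yr
print(f"literature (est.): {annual_growth_rate(15.0):.2f}/yr")  # ~0.05/yr, 10x slower
```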

One contributor is the rapid growth of science, says Wolfgang Kaltenbrunner, a science studies scholar at Leiden University. Paper mill products are often buried in low-impact journals and are written to get little attention, he says. In small scientific communities, it is harder to hide products like these, but as some fields get larger and more anonymous, such papers can escape detection more easily. And as the scientific workforce has burgeoned, institutions have increasingly turned to evaluating scientists based on how many publications they produce, leading some researchers to bolster their records with fake papers, he says. “Perverse incentives, inflated metrics, the ‘publish or perish’ culture, and systemic tolerance for weak scholarship” all allow paper mills to flourish, says Li Tang, an expert on Chinese research policy at Fudan University.

Young researchers may feel forced into paying for paper mill publications to compete with peers—a ratcheting effect that is already apparent, Richardson says. The number of papers published by medical residency applicants has soared in recent years, for instance, with some students claiming authorship of dozens of papers. He says it’s no coincidence that the paper mill industry targets residency applicants, especially foreign students on visas.

Docampo, Abalkina, and others say there’s little in the new paper that wasn’t already strongly suspected. But the dramatic confirmation that the study offers may shift the needle, they say. “We’re massively behind the curve on making visible and realizing the extent of the problem,” Kaltenbrunner says. “The sheer scale of it is the takeaway message here.”

by Cathleen O’Grady, Science | Read more:
Image: Davide Bonazzi/Salzmanart

Kami Maltz & Josh Turner

[ed. Don't think I've heard anyone channel Joni as well as Kami. Not an easy song. And Josh has come a long way since his earlier days. See for example: here and here.]

Arctic Beavers

Beavers are poised to invade and radically remake the Arctic.

In the summer of 2023, University of Alaska Fairbanks ecologist Ken Tape walked across the tundra on the outskirts of Nome, Alaska, to a site where a shallow stream just a few meters wide had flowed 2 years before. In its place he found an enormous pond, created by a dam made of branches bearing the distinctive marks of beaver incisors.

It was a vivid illustration of how beavers are transforming the Arctic. In Tape’s past work studying Arctic landscapes, such places changed little over decades. “It gives you a sense of timelessness,” he says. “With beavers, that couldn’t be further from the truth,” as the chunky rodents quickly replumb vast areas by building dams that can stretch hundreds of meters.

Soon, the land-altering power of beavers could be felt in a region currently beyond their reach: the farthest northern parts of the Alaskan Arctic. In a 30 July paper in Environmental Research Letters, Tape and James Speed of the Norwegian University of Science and Technology forecast that as a warming climate eases Arctic temperatures, beaver populations will march northward, sweeping across Alaska’s North Slope this century. Their arrival could bring dramatic change, the researchers say, upending ecosystems in places such as the Arctic National Wildlife Refuge and accelerating the loss of permafrost that stores vast amounts of carbon. (...)

Tape has spent the past decade documenting this upheaval in parts of the Alaskan Arctic farther south and west, including the Seward Peninsula, where Nome is located. When he and colleagues scrutinized aerial photos of the region from the middle of the 20th century, they found no sign of the distinctive ponds beavers create to protect their mound-shaped lodges, accessible only underwater, and to cache branches for food in winter.

Today, satellite images show more than 11,000 beaver ponds dotting the Arctic tundra south of the Brooks Range, a wall of mountains running east to west that isolates the North Slope. The number there doubled from 2003 to 2017. (...)

Tape suspects warmer weather is critical because it means more unfrozen water in winter. A completely frozen pond can trap beavers in their lodges and make food caches inaccessible. Milder winters could preserve pockets of liquid water around springs or ponds. Melting permafrost also creates more groundwater-fed springs. And earlier spring thaws enable beavers to forage just as their food supplies dwindle.

“The ecological bottleneck for beavers is the end of winter,” Tape says. “Now imagine that comes 2 weeks earlier.”

Using computer models that forecast how a warming climate could expand the amount of Alaskan tundra suitable for beavers, the researchers found that the area dotted with ponds could nearly double by 2050, and more than triple by the end of the century, from 30,000 square kilometers to 99,000 square kilometers. In these scenarios, beavers would breach the Brooks Range and spread across the North Slope to the shores of the Beaufort Sea. (...)

This isn’t the first time beavers have occupied the Arctic, notes Emily Fairfax, an ecohydrologist at the University of Minnesota Twin Cities. There is fossil evidence of beavers in the Alaskan Arctic—though none has been found in the North Slope—dating to between 6,000 and 10,000 years ago, when temperatures there were warmer and the landscape more forested. In fact, it’s thought that beavers might have evolved to build dams and cache food to adapt to one of the Arctic’s cooling phases millions of years ago. Still, Fairfax says the forecast that sensitive North Slope ecosystems “will probably be full of beavers is probably going to cause a lot of strong reactions.”

Residents of the Arctic have mixed feelings about their new neighbors. Ezra Adams, a member of the Native Village of Noatak, just south of the Brooks Range, says his father first saw a beaver there in the late 1990s, when Adams was 6 years old. Now, the animals have altered his family’s way of life. Their dams have reduced creeks where Adams once caught whitefish and salmon to a trickle. When out trapping or gathering firewood in the winter, he must beware of breaking through the ice on beaver ponds. Whereas his father once drank straight from lakes in the backcountry, Adams now brings treated water to avoid giardia in beaver feces. There are some upsides. Adams uses beaver meat to bait traps and beaver pelts for garments. “They provide a lot for our trapping,” Adams says. “But then for the general population it would be beneficial if there weren’t as many.”

Researchers, too, see both risks and benefits in beaver expansion. New ponds could become hot spots for songbirds and other wildlife. But they also hasten the thaw of permafrost, promoting the release of planet-warming carbon dioxide. A soon-to-be-published survey of 11 beaver pond systems in Arctic Alaska, for example, found that the water-covered area increased more than 600% once beavers arrived. Nearby ground thawed so much that researchers could plunge 1.2-meter-long rods used to test permafrost all the way to the tip.

Ponds could also create ample new habitat for microorganisms that convert carbon to methane, an even more potent warming gas, Griffin notes. “If we are going to start having expansion of wetlands because of beaver dams, how is that going to tip the balance between carbon and methane?” he wonders.

He might soon find out. Tape has already stumbled on one beaver pond on the northern slope of the Brooks Range. Although it disappeared a few years later, the pond showed beavers can cross the mountains. To spread even farther north, Tape notes, “they just have to swim downstream.”

by Warren Cornwall, Science |  Read more:
Image: Ken Tape

Border Patrol Wants Advanced AI to Spy on American Cities

The recent passage of Trump’s sprawling flagship legislation funnels tens of billions of dollars to the Department of Homeland Security. While much of that funding will go to Immigration and Customs Enforcement to bolster the administration’s arrest and deportation operations, a great deal is earmarked to purchase new technology and equipment for federal offices tasked with preventing immigrants from arriving in the first place: Customs and Border Protection, which administers the country’s border surveillance apparatus, and its subsidiary, the U.S. Border Patrol.

One page of the presentation, describing the wishlist of Border Patrol’s Law Enforcement Operations Division, says the agency needs “Advanced AI to identify and track suspicious activity in urban environment [sic],” citing the “challenges” posed by “Dense residential areas.” What’s considered “suspicious activity” is left unmentioned. (...)

The reference to AI-aided urban surveillance appears on a page dedicated to the operational needs of Border Patrol’s “Coastal AOR,” or area of responsibility, encompassing the entire southeast of the United States, from Kentucky to Florida. A page describing the “Southern AOR,” which includes all of inland Nevada and Oklahoma, similarly states the need for “Advanced intelligence to identify suspicious patterns” and “Long-range surveillance” because “city environments make it difficult to separate normal activity from suspicious activity.”

Although the Fourth Amendment provides protection against arbitrary police searches, federal law grants immigration agencies the power to conduct warrantless detentions and searches within 100 miles of the land borders with Canada and Mexico or of the United States coastline. This zone includes most of the largest cities in the United States, including Los Angeles and New York, as well as the entirety of Florida.

The document mentions no specific surveillance methods or “advanced AI” tools that might be used in urban environments. Across the Southwest, residents of towns like Nogales and Calexico are already subjected to monitoring from surveillance towers placed in their neighborhoods. A 2014 DHS border surveillance privacy impact assessment warned these towers “may capture information about individuals or activities that are beyond the scope of CBP’s authorities. Video cameras can capture individuals entering places or engaging in activities as they relate to their daily lives because the border includes populated areas,” for example, “video of an individual entering a doctor’s office, attending public rallies, social events or meetings, or associating with other individuals.”

Last year, the Government Accountability Office found the DHS tower surveillance program failed six out of six privacy policies designed to prevent such overreach. CBP is also already known to use “artificial intelligence” tools to ferret out “suspicious activity,” according to agency documents. A 2024 inventory of DHS AI applications includes the Rapid Tactical Operations Reconnaissance program, or RAPTOR, which “leverages Artificial Intelligence (AI) to enhance border security through real-time surveillance and reconnaissance. The AI system processes data from radar, infrared sensors, and video surveillance to detect and track suspicious activities along U.S. borders.”

The document’s call for urban surveillance reflects the reality of Border Patrol, an agency empowered, despite its name, with broad legal authority to operate throughout the United States.

“Border Patrol’s escalating immigration raids and protest crackdowns show us the agency operates heavily in cities, not just remote deserts,” said Spencer Reynolds, a former attorney with the Department of Homeland Security who focused on intelligence matters. “Day by day, its activities appear less based on suspicion and more reliant on racial and ethnic profiling. References to operations in ‘dense residential areas’ are alarming in that they potentially signal planning for expanded operations or tracking in American neighborhoods.”

by Sam Biddle, The Intercept |  Read more:
Image: Jenny Kane/AP
[ed. See also, via The Intercept:]
***
Guess Who’s Eligible for Student Loan Forgiveness: New ICE Agents
The Department of Homeland Security announced on Tuesday it will offer student loan forgiveness and repayment options to new Immigration and Customs Enforcement recruits — along with a $50,000 signing bonus.

The announcement comes as the Trump administration works to limit the Public Service Loan Forgiveness program for groups the president considers political enemies.
***
National Guard Ordered to Do ICE Paperwork at Immigration Facilities in 20 States
The Trump administration authorized the deployment of National Guard troops to immigration facilities in 20 states beginning early next month, further entwining the military in civil and law enforcement functions.

The move undermines long-standing prohibitions on the use of the armed forces in domestic operations, sidestepping the Posse Comitatus Act and accelerating the U.S. transition into a police state, experts said.

The National Guard will be deployed in Arkansas, Florida, Georgia, Indiana, Iowa, Louisiana, Nebraska, South Carolina, Texas, Utah, and Virginia, among other states, according to a defense official who was not authorized to disclose the information. (...)

Guard members will assist ICE officials in “alien processing” – administrative work preceding detention — in 20 states while ICE leadership will “direct” troops assigned to the mission, which will begin in early August, according to a memo first revealed on Wednesday by the New York Times.
***
EPA Administrator Lee Zeldin said the agency had taken “significant actions” to protect public health and the environment while working “to Power the Great American Comeback.” The agency said it was also working to fulfill Trump’s promises to revitalize the auto industry, “restore the rule of law,” and give decision-making power back to the states.

In practice, the agency has done the opposite, several EPA staffers told The Intercept.

Under Zeldin’s leadership, the EPA announced a set of new core priorities that includes making the U.S. the artificial intelligence capital of the world and revitalizing the auto industry. (...)

“A lot of us are really confused about what our new mission is, when they’re coming out with these pillars of serving the auto industry and bringing back auto industry jobs,” Hagen said. “I don’t know how we fit into that.”

The EPA’s role is not to create jobs; it’s to regulate and protect people from pollution, she said.

“Our mission is not to promote AI or energy dominance,” she said. “That’s not our mission.” (...)

Last week, the agency said it is planning to dissolve the Office of Research and Development, which does life-saving research on toxicity, develops sampling protocols, and helped in emergencies like the East Palestine train derailment in Ohio and the Covid-19 pandemic.

As a result, more than 1,500 scientists will have to compete for 300 jobs, Hagen said.

“It’s essentially like lobotomizing our agency. If we don’t have the brain — the research behind protecting the environment — we can’t do that effectively, and I think that’s exactly what they want,” she said. “They’re doing all this under the guise of efficiency, but what they really are doing is dismantling this agency from doing its job.”

Monday, August 4, 2025

Loren Holmes: A fisherman picks salmon from his setnet at Pederson Point near Naknek, Alaska. (ADN)
via:

Elle and Toni

[ed. Repost. Hoping to learn some of Toni's excellent guitar fills today. Song was likely recorded at JBG studios, here:]

Sunday, August 3, 2025

Sermon on the 'Mount'

“South Park” Skewers a Satire-Proof President

There’s a legal strategy known as the small-penis rule, wherein an author who writes a character based on a real person can potentially evade a libel suit by giving said character a small penis—the logic being that, in order to sue, a plaintiff would have to tacitly admit that the description of his manhood is accurate. This rule technically does not apply to the latest episode of “South Park,” in which the series’ creators, Trey Parker and Matt Stone, make absolutely no effort to anonymize President Donald Trump, but one wonders if the logic of embarrassment still holds. Trump is portrayed as a deeply insecure leader who literally gets into bed with Satan, his apparent lover. (“I’m not in the mood right now,” the Devil tells him. “Another random bitch commented on my Instagram that you’re on the Epstein list.”) Most notably, the Trump of “South Park” is endowed with a penis so small that Satan says he “can’t even see anything.” If the actual Trump were to retaliate, as he so often does, he’d be playing directly into Parker and Stone’s hands.

“South Park,” amazingly, is in its twenty-seventh season. It’s the second-longest-running animated show on U.S. television, behind “The Simpsons,” and easily the most offensive. Since its première, in 1997, the cartoon—which follows a group of profane elementary schoolers in the town of South Park, Colorado—has managed to piss off nearly every political group, pop-culture fandom, and religious denomination... To the extent that the show has any “beliefs,” it’s that all beliefs are asinine, whether they’re held by the left or the right. Environmental groups criticized the series, in 2006, for portraying Al Gore as a delusional figure obsessed with an imagined monster named ManBearPig. The show was banned in China, in 2019, for mocking Chinese censorship, and the creators famously received death threats after depicting the Prophet Muhammad.

Although “South Park” has declined both in quality and in popularity over the years, it’s still valuable enough that Paramount recently paid $1.5 billion for exclusive streaming rights to the series, and for Parker and Stone to make another fifty episodes. The studio has long been in the process of merging with Skydance Media—a deal that was in a holding pattern for about a year, until Paramount agreed to pay sixteen million dollars to settle a lawsuit that Trump filed against its subsidiary CBS’s “60 Minutes.” A few days before the F.C.C. finally approved the merger, Stephen Colbert, the host of “The Late Show,” on CBS, called the settlement a “big fat bribe”—and then his show was cancelled, ostensibly for financial reasons. All of these are crucial plot points in the latest “South Park” episode, “Sermon on the ‘Mount,” which is now available on Paramount+.

The town of South Park has its fair share of Trump supporters, albeit increasingly disillusioned ones. (“I voted for him to get rid of all the woke stuff,” one man says, “but now that retarded faggot is just putting money in his own pockets.”) Some parents are especially upset when religion is introduced at the local elementary school—in the form of Jesus Christ himself physically showing up and milling around. When the parents call the President to complain, he says that he’s going to sue the town for five billion dollars, setting up an extended riff on Trump’s status as a serial litigant. (Throughout the episode, he also threatens to sue people who make reference to his unfortunate penis.) But Parker and Stone’s true focus is media cowardice, which becomes clear when a fictionalized “60 Minutes” runs a segment on the showdown between Trump and the town of South Park.

The anchors are visibly anxious. “Oh, shit,” one says, as the news broadcast begins. “The small town of South Park, Colorado, is protesting against the President. The townspeople claim that the President—who, who is a great man, great guy, we know is probably watching—and, uh, we’re just reporting on this town in Colorado that’s being sued by the President.”

His co-anchor cuts in: “To be clear, we don’t agree with them.”

“We think these protesters are total retards,” the first anchor adds.

The demonstration is interrupted by Jesus, who flies onto the scene, Superman-style. He hands everyone bread. “Just eat the bread, and listen,” he says, and so begins his Sermon on the ’Mount: “I didn’t want to come back and be in the school, but I had to, because it was part of a lawsuit and the agreement with Paramount.” He explains that Trump “can do whatever he wants now that someone has backed down,” adding, “Do you really wanna end up like Colbert?” He tells the people that they need to shut up, or else “South Park is over.”

Donald Trump poses a real conundrum for comedians. He’s an endless wellspring of material, but what he says and does is inevitably more absurd—and often more compelling—than any satire could be. Parker and Stone realized this early on. They initially dealt with Trump by having one of the show’s recurring characters, a former schoolteacher named Mr. Garrison, act as a surrogate; he ascends to the Presidency by promising to build a wall, and gradually turns orange. But the showrunners quickly found that, as Parker put it, “what was actually happening was way funnier than anything we could come up with.” So they pivoted to the other defining issues of our time: Kanye West’s antisemitism, ChatGPT, the COVID-19 pandemic (in this case, caused by a character’s decision to have sex with a bat in China).

The Paramount drama has prompted “South Park” to go after Trump more directly than ever before, but the gags, which all too often come back to his anatomy, or his penchant for memes, aren’t exactly revelatory. The sharpest joke is a meta one: the last time we saw Satan in bed with someone was in the 1999 film “South Park: Bigger, Longer & Uncut,” which depicted an abusive relationship between Satan and Saddam Hussein. (Hussein was the abuser.) Rather than concoct a new playbook for Trump, Parker and Stone have returned to an old one.

Trump’s existential threat to comedy has another dimension, one that intensified after his reëlection, as figures like Shane Gillis and Tim Dillon gained mainstream appeal: it’s hard to make boundary-pushing statements when there are no longer any boundaries. This problem is especially pressing for Parker and Stone, and they confront it via the angst of South Park’s resident provocateur, Eric Cartman.

The episode opens with Cartman turning on a radio station, where he’s met with the sound of static. “Mom, something’s wrong with my favorite show,” he complains. “National Public Radio, where all the liberals bitch and whine about stuff.” His mother informs him that Trump has cancelled NPR. Cartman is devastated: “That was, like, the funniest shit ever.”

Later, Cartman confides in his friend Butters, who’s more of a snowflake type. “Woke is dead,” Cartman says, sadly. “You can just say ‘retarded’ now, nobody cares. Everyone hates the Jews. Everyone’s fine with using gay slurs.”

“That’s not good,” Butters replies.

“No, it’s terrible!” Cartman says. “ ’Cause now I don’t know . . . what I’m supposed to do.”

At first, it didn’t seem like “South Park” had an answer to this question; Cartman, unconvinced by Butters’s assurances that “woke” is “still out there, somewhere,” forces him into a suicide pact. The two of them sit inside a car, parked in a garage, with the engine running. The scene is foreboding—until it’s revealed that the car is electric. [ed. Lol!]

The townspeople, meanwhile, negotiate a settlement with the President, who agrees to a sum of $3.5 million. (“We’ll just have to cut some funding for our schools and hospitals and roads and that should be that,” one woman says.) But there’s one condition: as part of the settlement, the town also has to engage in “pro-Trump messaging”—an apparent reference to recent reports that Trump has demanded the same from CBS. What follows is genuine shock comedy, and a treatment of Trump that feels original. The town’s first P.S.A. is an A.I.-generated video of Trump—a live-action one, not a cartoon—trudging through a desert. He proceeds to take off his clothes, though he leaves his dress shoes and sock garters on. “When things heat up, who will deliver us from temptation?” a voice-over says. “No matter how hot it gets, he’s not afraid to fight for America.” Trump lies down in the sand, and his micropenis, which has googly eyes and a mouth, slowly becomes erect, before announcing, “I’m Donald J. Trump, and I endorse this message.” The P.S.A. is labelled one of fifty, leaving open the possibility that, in the course of the forty-nine “South Park” episodes still to come, we’ll get forty-nine more.

Is this too much? Probably. Yet there’s an age-old tradition of political vulgarity, of which Trump himself is a practitioner—it’s the crux of his appeal.

by Tyler Foggatt, New Yorker | Read more:
Image: South Park Studios/YouTube
[ed. Classic.]

Let Them Eat Golf Balls

President Trump is using $10 million of our taxes to market his new golf course in Scotland.

The president traveled to Scotland on Friday for the grand opening of an 18-hole golf course in Aberdeen. He’s expected to stay for four days. His appearance will likely generate revenue and positive publicity for the course—money that will flow right back into the pockets of the Trump Organization.

HuffPost has estimated that the trip will likely cost at least $9.7 million due to Air Force One operations, motorcades and helicopters, Secret Service overtime, and more. Trump has framed the international vacation as a “working trip,” emphasizing his plan to meet in Aberdeen with U.K. Prime Minister Keir Starmer. But Aberdeen is not the capital of the United Kingdom, or even of Scotland, making it clear the meeting was tacked on as an excuse for the golf course.

Trump has grown more and more comfortable blurring the lines between his private businesses and his public office. This trip will bring his second-term golf tab to at least $52 million in just six months, according to HuffPost; his first-term tab was $152 million over four years.

by Malcolm Ferguson, Yahoo News | Read more:
Image: uncredited
[ed. Remember when indignant wingnuts used to turn ten shades of purple whenever Obama went golfing? Must have taken up the sport. See also: this and this. Bonus: Did Trump Cheat at Golf? See the Video For Yourself. (Yahoo News). Of course he did.]

*** Est. cost to taxpayers for golf since returning to office (updated): $68,600,000 - 49 out of 196 days (25.0% of the presidency spent golfing). [ed. ... and counting.]
 (via)

Saturday, August 2, 2025

Trump Meets With Powell at Federal Reserve

... leading to one of the most surreal political moments in recent memory

[ed. Trump tries to jack up Federal Reserve Chairman Jerome Powell over supposed cost overruns with Fed building renovations - a clumsy attempt to intimidate him over interest rate policy. Normally the theatrical possibilities of something like this would seem about zero. But no! Here we have two of the most powerful people in the world, hard hats precariously balanced on their heads, standing in suits in a basement somewhere (presumably the Fed’s, but who knows), arguing over the details of a construction contract. Like they say, you can’t make this stuff up.]

From Babylon to Wall Street – How Bankers Make You Poor

Michael Hudson has been expanding his historical window, from the ancient history of debt abolition via jubilees, which had prevented the rise of oligarchs, to the increase over time in the power of creditors, or in lay parlance, bankers. He’s added in the re-establishment of the influence of lenders in medieval times, thanks to the role of the Catholic Church in the Crusades and the accompanying rise of banking to provide war finance. This interview with Jonathan Brown reviews that trajectory, focusing on the way debt burdens rise over time and amount to destructive rentierism.

Jonathan: You’ve often spoken about your aha [00:01:00] moments when delving into ancient economic history. I just wonder, what have been some of your profound or unexpected discoveries from studying ancient civilizations like Sumer or Babylonia?

Michael Hudson: My entire life, ever since I became an economist in the 1960s, was to realize that debt was the major problem that was going to be growing exponentially and stifling society. And it was clear that debt grew at compound interest faster than the economy was able to grow and pay the debts.

I spent quite a few decades warning about the fact that the global south could not pay the dollarized debts, as indeed it didn’t in the 1970s. There was such a reaction to what I was saying, such a refusal by the economics profession to look at debt as being important, that I decided [00:02:00] to look at the whole history of how different societies had coped with debts.

And I began to write a history of debt after I left the United Nations in 1979, after warning that there was going to be a Third World, Latin American debt crash in a few years, as indeed there was in 1982. I got all the way back to Greece and Rome, and then into the biblical period, and came across the jubilee year. (...)
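[ed. A rough numeric sketch of Hudson’s compound-interest point above. The rates are assumptions for illustration only (8% on debt, 2% growth) - not figures from the interview:]

```python
# Hypothetical illustration: debt compounding at 8% vs. an economy growing at 2%.
# Both start at an index of 100; the rates are assumed, not Hudson's figures.

debt, gdp = 100.0, 100.0
DEBT_RATE, GDP_RATE = 0.08, 0.02

for year in range(1, 51):
    debt *= 1 + DEBT_RATE   # debt compounds at the interest rate
    gdp *= 1 + GDP_RATE     # the economy grows more slowly
    if year in (10, 25, 50):
        print(f"year {year:2d}: debt index {debt:5.0f}, "
              f"gdp index {gdp:4.0f}, debt/gdp {debt / gdp:4.1f}x")

# year 10: debt index   216, gdp index  122, debt/gdp  1.8x
# year 25: debt index   685, gdp index  164, debt/gdp  4.2x
# year 50: debt index  4690, gdp index  269, debt/gdp 17.4x
```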

So I began to write up my ideas and shared them with a friend of mine, Alex Marshack, a professor at Harvard. He introduced me to the head of Harvard’s anthropology and archeology department, and I was made a research fellow at the Peabody Museum by Carl Lamberg-Karlovsky. I realized that there was this wealth of Babylonian, Sumerian, and Near [00:04:00] Eastern academic records that economists had completely ignored.

And the reason that economists ignored it was that the way those societies created their economic relationships was completely different from what they ended up with after Greece and Rome. And so I realized that I can’t simply write this all up myself, because I’m an economist, not an Assyriologist.

So at Harvard we decided to organize a group of scholars who were specialists in Sumerian, Babylonian, Egyptian, Judaic and other Middle Eastern records and we decided to do three volumes.

by Jonathan Brown and Michael Hudson, Naked Capitalism |  Read more:
Image: via
[ed. See also: The Bull Market for Economists Is Over. It’s an Ominous Sign for the Economy (NYT). Thinking of commissioning some new t-shirts: Cognitive Dissonance is Killing Me ©]
***
"For decades, earning a Ph.D. in economics has been a nearly foolproof path to a lucrative career. Even as bearers of advanced degrees in history, English or anthropology struggled to find gainful employment, the popularity of economics as an undergraduate major created plenty of tenure-track teaching positions, while government agencies snatched up Ph.D. economists in bulk. Those looking for even larger paychecks could turn to tech companies, Wall Street and consulting firms, which bid up the price of economists as if they were a bespoke cryptocurrency.

Last year, the average base salary for newly hired economics professors at major research universities was more than $150,000, according to the American Economic Association, and their compensation swelled to about $200,000 once bonuses and summer pay were included. As recently as the 2023-24 academic year, the employment rate for Ph.D. economists within a few months of graduation was 100 percent, said John Cawley, the chair of the association’s Committee on the Job Market, citing the group’s surveys. Job satisfaction topped 85 percent.

Those glory days seem to be ending. Universities and nonprofits have scaled back hiring amid declining state budgets and federal funding cuts. At the same time, the Trump administration has laid off government economists and frozen hiring for new ones. (...)

Tech companies also have grown stingier, and their need for high-level economists — once seemingly insatiable — has waned. Other firms have slowed hiring in response to the economic uncertainty introduced by President Trump’s tariffs and the possibility that artificial intelligence will replace their workers, even if those workers have a doctoral degree.

“The advent of A.I. is also impacting the market for high-skilled labor,” said Betsey Stevenson, a labor economist at the University of Michigan, in an email. “So the whole thing is kind of a mess.”

Of course, if it were only some egghead economists scrambling to find work, that might not be terribly consequential. But the same forces bedeviling economists are crimping employment for other highly trained scientists and social scientists, as well as for many recent college graduates, whose jobless rate has been unusually high for an otherwise strong economy.

The drop in government payrolls and federal funding for universities and nonprofits alone is a major problem, since they support two to three times as many jobs for college graduates as for those without degrees. In some cases, workers with Ph.D.s are displacing others with master’s or bachelor’s degrees.

Then there is the potential impact on the country’s future. Marcia McNutt, a geophysicist who is president of the National Academy of Sciences, said a sharp drop in the number of research jobs in the hard sciences and social sciences would send Ph.D.s abroad. Their flight will deprive the government of the brainpower it needs to perform basic functions and leave U.S. firms less innovative and competitive.

“U.S. industry is incredibly dependent on the training that is done in colleges and universities,” Dr. McNutt said. “When the top people go elsewhere, we’ll be left with the B team in America.”

~ The Bull Market for Economists Is Over. It’s an Ominous Sign for the Economy, by Noam Scheiber (NYT)