
Saturday, September 13, 2025

The Oligarchs’ Dinner Party and Zuckerberg’s Hot Mic Reveal

On September 4, Trump summoned more than thirty of the most powerful figures in Silicon Valley to the State Dining Room. At the table: Mark Zuckerberg (Meta), Tim Cook (Apple), Satya Nadella (Microsoft), Bill Gates, Sergey Brin and Sundar Pichai (Google), Sam Altman and Greg Brockman (OpenAI), Safra Catz (Oracle), Alex Karp (Palantir), Jensen Huang (NVIDIA), Jeff Bezos (Blue Origin/Amazon), and a procession of other AI and chip executives.

The optics were unmistakable. A long table, microphones set before each oligarch, gold-rimmed plates. The ritual was familiar: like a Trump cabinet meeting, each guest took a turn praising the Leader, pledging billions in “investment,” extolling his “visionary leadership.”

The quotes read like scripted devotionals:
  • Sam Altman (OpenAI): “Thank you for being such a pro-business, pro-innovation President. It’s a very refreshing change.”
  • Tim Cook (Apple): “Thank you for setting the tone such that we can make a major investment in the United States.”
  • Sergey Brin (Google): “It’s an incredible inflection point… that your Administration is supporting our companies instead of fighting with them.”
And the capstone: Mark Zuckerberg, seated right next to Trump, announcing a pledge of “at least $600 billion” in U.S. investment by 2028.

If it felt choreographed, that’s because it was. This was not a negotiation, not even a strategy session. It was performance—the oligarchs lining up to kiss the ring.

A Little Context, Please

To understand what this performance really means, it helps to step back and look at what these oligarchs have already done to America. For that, I turn to Mike Brock—ex-tech exec turned reluctant Cassandra—whose writing at Notes from the Circus cuts with unusual moral clarity.

Here’s Brock, in his essay The Oligarchs’ Dinner Party: How Silicon Valley Toasted American Fascism:
“To understand what these oligarchs have done to America, start with Mark Zuckerberg’s Instagram. His company’s internal research showed the platform was systematically destroying teenage girls’ mental health—creating unprecedented levels of depression, self-harm, and suicide among the most vulnerable users. The data was clear, the causation documented, the human cost undeniable.

Zuckerberg buried the research and continued the optimization.

This isn’t business negligence—it’s systematic cruelty disguised as innovation. Instagram was designed to extract maximum engagement from teenage minds through carefully engineered addiction, turning the most vulnerable period of human development into a profit center for algorithmic manipulation. The teenage suicide epidemic wasn’t an unfortunate side effect; it was the predictable result of systems optimized for engagement over human welfare.

But Instagram represents something larger: the entire Silicon Valley model of turning human consciousness into commodity. Every platform, every algorithm, every “connection” technology follows the same logic—fragment attention, replace authentic relationship with algorithmic substitutes, optimize human behavior for extraction rather than flourishing.

Tim Cook’s Apple markets privacy protection while building surveillance infrastructure for authoritarian regimes. Satya Nadella’s Microsoft promises AI enhancement while developing predictive policing systems that target communities for algorithmic enforcement. Each oligarch represents a variation on the same theme: technological sophistication serving moral barbarism, innovation rhetoric disguising systematic dehumanization.”
I can’t say it any better than that. These men and women didn’t walk into the White House as neutral technologists. They walked in as the architects of an extraction economy that commodifies our attention, monetizes our despair, and treats human vulnerability as an opportunity for profit. Yes, I know that’s very cynical, but when histories of this era are written a couple of centuries from now (assuming humanity survives and histories are still being written), I believe Brock will be seen to have identified the central feature of this era. The only question is whether humanity fully collapses because of it, or whether some counterforce emerges to defeat or at least mitigate it.

The Hot Mic Reveal

And then came the moment that crystallized everything.

As Zuckerberg delivered his carefully prepared pledge of a $600 billion U.S. investment, a hot mic caught him whispering to Trump:
“Sorry, I wasn’t ready… I wasn’t sure what number you wanted to go with.”
It was awkward. But more than awkward, it was revealing.

Here was the supposed master of the algorithm, the man who built a trillion-dollar empire on predictive precision, fumbling to figure out what number would please Trump. This wasn’t a CEO making a business decision. It was a courtier checking with the king.

Mike Brock nailed the significance in his companion essay The Hot Mic and the Monsters:
“This isn’t business negotiation. This is a courtier asking his king what lies he’d prefer to hear, then delivering them with practiced servility to a public they view as sheep requiring management rather than citizens deserving truth.”
The hot mic stripped away the theater. It revealed the truth: the oligarchs weren’t there to shape policy. They were there to play their part in legitimizing authoritarianism through performance.

Conclusion

What we saw in the State Dining Room was not business as usual. It wasn’t “innovation,” it wasn’t “visionary leadership,” and it sure as hell wasn’t patriotism. It was a court of oligarchs kneeling before an aspiring autocrat, pledging riches and obedience in exchange for protection and privilege.

The spectacle was obscene: billionaires who’ve built fortunes by monetizing despair now rushing to sanctify the man who has turned constitutional vandalism into performance art. Zuckerberg’s hot mic didn’t just reveal stage fright — it exposed the truth of the whole evening: this was theater, not policy; flattery, not leadership; a ritual of submission masquerading as a summit of visionaries.

Mike Brock captured it with precision:
“What the hot mic moment exposes is the elaborate theater that authoritarian consolidation requires to maintain legitimacy while systematic plunder proceeds.”
That’s the point. These men aren’t independent actors shaping the future. They are props in a reality show where Trump plays Dear Leader and the oligarchs play sycophants, helping to launder authoritarianism through the language of “innovation” and “investment.”

Every once in a while, a moment cuts through the fog and shows us the rot for what it is. The Oligarchs’ Dinner Party was one of those moments — a gaudy, gold-plated warning flare. We should not look away, and we should not forget who stood at that table and kissed the ring.

by Michael D. Sellers, Deeper Look |  Read more:
Image: uncredited
[ed. Be sure to visit Mike Brock's site for the original posts (and more): The Oligarchs’ Dinner Party; and, The Hot Mic and the Monsters (NFtC). See also: The art of the fawn: pouring praise on Trump is latest political phenomenon (Guardian).]

Thursday, September 11, 2025

A.I. Is Coming for Culture

In the 1950 book “The Human Use of Human Beings,” the computer scientist Norbert Wiener—the inventor of cybernetics, the study of how machines, bodies, and automated systems control themselves—argued that modern societies were run by means of messages. As these societies grew larger and more complex, he wrote, a greater amount of their affairs would depend upon “messages between man and machines, between machines and man, and between machine and machine.” Artificially intelligent machines can send and respond to messages much faster than we can, and in far greater volume—that’s one source of concern. But another is that, as they communicate in ways that are literal, or strange, or narrow-minded, or just plain wrong, we will incorporate their responses into our lives unthinkingly. Partly for this reason, Wiener later wrote, “the world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.”

The messages around us are changing, even writing themselves. From a certain angle, they seem to be silencing some of the algorithmically inflected human voices that have sought to influence and control us for the past couple of decades. In my kitchen, I enjoyed the quiet—and was unnerved by it. What will these new voices tell us? And how much space will be left in which we can speak? (...)

Podcasts thrive on emotional authenticity: a voice in your ear, three friends in a room. There have been a few experiments in fully automated podcasting—for a while, Perplexity published “Discover Daily,” which offered A.I.-generated “dives into tech, science, and culture”—but they’ve tended to be charmless and lacking in intellectual heft. “I take the most pride in finding and generating ideas,” Latif Nasser, a co-host of “Radiolab,” told me. A.I. is verboten in the “Radiolab” offices—using it would be “like crossing a picket line,” Nasser said—but he “will ask A.I., just out of curiosity, like, ‘O.K., pitch me five episodes.’ I’ll see what comes out, and the pitches are garbage.”

What if you furnish A.I. with your own good ideas, though? Perhaps they could be made real, through automated production. Last fall, I added a new podcast, “The Deep Dive,” to my rotation; I generated the episodes myself, using a Google system called NotebookLM. To create an episode, you upload documents into an online repository (a “notebook”) and click a button. Soon, a male-and-female podcasting duo is ready to discuss whatever you’ve uploaded, in convincing podcast voice. NotebookLM is meant to be a research tool, so, on my first try, I uploaded some scientific papers. The hosts’ artificial fascination wasn’t quite capable of eliciting my own. I had more success when I gave the A.I. a few chapters of a memoir I’m writing; it was fun to listen to the hosts’ “insights,” and initially gratifying to hear them respond positively. But I really hit the sweet spot when I tried creating podcasts based on articles I had written a long time ago, and to some extent forgotten. (...)

If A.I. continues to speed or automate creative work, the total volume of cultural “stuff”—podcasts, blog posts, videos, books, songs, articles, animations, films, shows, plays, polemics, online personae, and so on—will increase. But, because A.I. will have peculiar strengths and shortcomings, more won’t necessarily mean more of the same. New forms, or new uses for existing forms, will pull us in directions we don’t anticipate. At home, Nasser told me, he’d found that ChatGPT could quickly draft an engaging short story about his young son’s favorite element, boron, written in the style of Roald Dahl’s “The BFG.” The periodic table x “The BFG” isn’t a collab anyone’s been asking for, but, once we have it, we might find that we want it.

It’s not a real collaboration, of course. When two people collaborate, we hope for a spark as their individualities collide. A.I. has no individuality—and, because its fundamental skill is the detection of patterns, its “collaborations” tend to perpetuate the formulaic aspects of what’s combined. A further challenge is that A.I. lacks artistic agency; it must be told what’s interesting. All this suggests that A.I. culture could submerge human originality in a sea of unmotivated, formulaic art.

And yet automation might also allow for the expression of new visions. “I have a background in independent filmmaking,” Mind Wank, one of the pseudonymous creators of “AI OR DIE,” which bills itself as “the First 100% AI Sketch Comedy Show,” told me. “It was something I did for a long time. Then I stopped.” When A.I. video tools such as Runway appeared, it became possible for him to take unproduced or unproducible ideas and develop them. (...)

Traditional filmmaking, as he sees it, is linear: “You have an idea, then you turn it into a treatment, then you write a script, then you get people and money on board. Then you can finally move from preproduction into production—that’s a whole pain in the ass—and then, nine months later, you try to resurrect whatever scraps of your vision are there in the editing bay.” By contrast, A.I. allows for infinite revision at any point. For a couple of hundred dollars in monthly fees, he said, A.I. tools had unlocked “the sort of creative life I only dreamed of when I was younger. You’re so constrained in the real world, and now you can just create whole new worlds.” The technology put him in mind of “the auteur culture of the sixties and seventies.” (...)

Today’s A.I. video tools reveal themselves in tiny details, producing a recognizable aesthetic. They also work best when creating short clips. But they’re rapidly improving. “I’m waiting for the tools to achieve enough consistency to let us create an entire feature-length film using stable characters,” Wank said. At that point, one could use them to make a completely ordinary drama or rom-com. “We all love filmmaking, love cinema,” he said. “We have movies we want to make, TV shows, advertisements.” (...)

What does this fluidity imply for culture in the age of A.I.? Works of art have particular shapes (three-minute pop songs, three-act plays) and particular moods and tones (comic, tragic, romantic, elegiac). But, when boundaries between forms, moods, and modalities are so readily transgressed, will they prove durable? “Right now, we talk about, Is A.I. good or bad for content creators?,” the Silicon Valley pioneer Jaron Lanier told me. (Lanier helped invent virtual reality and now works at Microsoft.) “But it’s possible that the very notion of ‘content’ will go away, and that content will be replaced with live synthesis that’s designed to have an effect on the recipient.” Today, there are A.I.-generated songs on Spotify, but at least the songs are credited to (fake) bands. “There could come a point where it’ll just be ‘music,’ ” Lanier said. In this future scenario, when you sign in to an A.I. version of Spotify, “the first thing you hear will be ‘Hey, babe, I’m your Spotify girlfriend. I made a playlist for you. It’s kind of sexy, so don’t listen to it around other people.’ ” This “playlist” would consist of songs that have never been heard before, and might never be heard again. They will have been created, in the moment, just for you, perhaps based on facts about you that the A.I. has observed.

In the longer term, Lanier thought, all sorts of cultural experiences—music, video, reading, gaming, conversation—might flow from a single “A.I. hub.” There would be no artists to pay, and the owners of the hubs would be able to exercise extraordinary influence over their audiences; for these reasons, even people who don’t want to experience culture this way could find the apps they use moving in an A.I.-enabled direction.

Culture is communal. We like being part of a community of appreciators. But “there’s an option here, if computation is cheap enough, for the creation of an illusion of society,” Lanier said. “You would be getting a tailored experience, but your perception would be that it’s shared with a bunch of other people—some of whom might be real biological people, some of whom might be fake.” (I imagined this would be like Joi introducing Gosling’s character to her friends.) To inhabit this “dissociated society cut off from real life,” he went on, “people would have to change. But people do change. We’ve already gotten people used to fake friendships and fake lovers. It’s simple: it’s based on things we want.” If people yearn for something strongly enough, some of them will be willing to accept an inferior substitute. “I don’t want this to occur, and I’m not predicting that it will occur,” Lanier said, grimly. “I think naming all this is a way of increasing the chances that it doesn’t happen.”

by Joshua Rothman, New Yorker | Read more:
Image: Edward Hopper, Second Story Sunlight

Operational Transparency: How Domino’s Pizza Tracker Conquered the Business World

In 2009, Domino’s was in trouble. Sales were in decline. Its pizza tied for last in industry taste tests with Chuck E. Cheese. A YouTube video of a store employee putting cheese up their nose had gone viral.

J. Patrick Doyle was appointed CEO a year later to oversee a turnaround with a ballsy premise: publicly admitting that their pizza sucked and showing customers that they were improving their pies. “I used to joke that if it didn’t work, I would probably be the shortest-tenured CEO in the history of American business,” Doyle told Bloomberg.


Transparency became Domino’s modus operandi. They aired ads in which Doyle and others issued mea culpas for their crummy pizza and released a documentary about revamping their recipe. They shared footage of people visiting the farms that grew Domino’s tomatoes. They used real photos sourced from customers – even of pies mangled during delivery.

For the next decade, Domino’s stock rose like dough in an oven. On his show, Stephen Colbert praised the campaign’s honesty, took a bite of a Domino’s slice, and asked, “Is that pizza, or did an angel just give birth in my mouth?”

Few companies have copied Domino’s “we suck” strategy. The great legacy of its turnaround is instead another bit of transparency from its struggle era: the pizza tracker.

You know the pizza tracker. You’ve likely used it to follow your pizza’s journey from a store to your home. But even if you haven’t, you live in the world the Domino’s pizza tracker built. Because in marketing, product development, and user experience, the pizza tracker is an icon. An inspiration. A platonic ideal that has been imitated across industries ranging from food-delivery apps to businesses where the only grease is on the hands of auto mechanics.

Show them the sausage

I enjoy restaurants with open kitchens: line cooks slicing entire carrots in a blink, chefs sipping broth and nodding approvingly, all in an elegant ballet of speed and craftsmanship.

But the business world doesn’t have many open kitchens. We receive our sneakers in the mail without ever seeing a Nike factory floor or Adidas brainstorming session. We receive cash from an ATM without any sense of the impressive technology under the hood.

Tami Kim thinks that’s a shame. An associate professor of business administration at Dartmouth College, she’s an advocate of an open-kitchen approach called operational transparency that she believes can increase customers’ appreciation of a product or service – and employees’ motivation and productivity too. Here’s how:
1. Open windows: Franchises like Starbucks have replaced many drive-through intercoms with cameras and video displays. In an experiment that used iPads to give students a view of cafeteria cooks fulfilling their hamburger and hot dog orders (and chefs a view of the students), Kim and her coauthors found that diners’ satisfaction increased without sacrificing speed in the kitchen.

2. Price transparency: Some e-commerce sites break down the price of their shirts or wallets by the cost of materials, labor, transportation, and tariffs – and compare their markup to the industry average. One study showed this transparency boosted sales by ~26%.

3. The “Labor Illusion”: Many AI models show a breakdown of the steps the chatbot is taking to answer your question. In another study, researchers found that travel sites like Kayak revealing their behind-the-scenes work (“Now getting results from American Airlines… from JetBlue… 133 results found so far…”) led to increased perceptions of quality and willingness to pay.
The pizza tracker came out in 2008, around when Kim and her colleagues started studying operational transparency. Domino’s declined an interview, but according to a case study on Domino’s, the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

“With technology, it's just so much easier for companies to reveal parts of their operations without a ton of effort,” says Kim. Domino's was already tracking the status of orders on their back end, so they could show that progress to customers without disrupting operations.

“Every time we present this [research on operational transparency], we predominantly use that example because it's such a neat and successful example,” she says. (...)

A wrinkle in (pizza) time

For designer Shuya Gong, though, the magic of the pizza tracker isn’t its window into Domino’s operations. It’s how it manipulates time.

“I think the pizza tracker essentially speeds up time for you,” says Gong, formerly a design director at IDEO, a design and consulting firm.

Gong points to the return trip effect: When you go somewhere and come back via the same route, the way back feels faster. One study of the effect found that it’s likely caused by people underestimating the duration of the first leg. So when Domino’s sends its customers (slightly high undergrads, parents who promised a pizza night) lots of updates, it feels like a return trip, and therefore a shorter wait. (...)

“People want a stress-free lifestyle,” he says. “Communicating progress gives people a sense of feeling in control, because they're aware of what's going on… If you don't feel in control, you'll never be able to relax.”

by Alex Mayyasi, The Hustle | Read more:
Image: uncredited
[ed. Truth, transparency, customer engagement (control). Seems like a no-brainer. So why don't more companies do this?]

Wednesday, September 10, 2025

My Mom and Dr. DeepSeek

Every few months, my mother, a 57-year-old kidney transplant patient who lives in a small city in eastern China, embarks on a two-day journey to see her doctor. She fills her backpack with a change of clothes, a stack of medical reports, and a few boiled eggs to snack on. Then, she takes a 1.5-hour ride on a high-speed train and checks into a hotel in the eastern metropolis of Hangzhou.

At 7 a.m. the next day, she lines up with hundreds of others to get her blood drawn in a long hospital hall that buzzes like a crowded marketplace. In the afternoon, when the lab results arrive, she makes her way to a specialist’s clinic. She gets about three minutes with the doctor. Maybe five, if she’s lucky. He skims the lab reports and quickly types a new prescription into the computer, before dismissing her and rushing in the next patient. Then, my mother packs up and starts the long commute home.

DeepSeek treated her differently.

by Viola Zhou, Rest of World |  Read more:
Image: Ard Su 

Monday, September 8, 2025

The Unbelievable Scale of AI’s Pirated-Books Problem

When employees at Meta started developing their flagship AI model, Llama 3, they faced a simple ethical question. The program would need to be trained on a huge amount of high-quality writing to be competitive with products such as ChatGPT, and acquiring all of that text legally could take time. Should they just pirate it instead?

Meta employees spoke with multiple companies about licensing books and research papers, but they weren’t thrilled with their options. This “seems unreasonably expensive,” wrote one research scientist on an internal company chat, in reference to one potential deal, according to court records. A Llama-team senior manager added that this would also be an “incredibly slow” process: “They take like 4+ weeks to deliver data.” In a message found in another legal filing, a director of engineering noted another downside to this approach: “The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy,” a reference to a possible legal defense for using copyrighted books to train AI.

Court documents released last night show that the senior manager felt it was “really important for [Meta] to get books ASAP,” as “books are actually more important than web data.” Meta employees turned their attention to Library Genesis, or LibGen, one of the largest of the pirated libraries that circulate online. It currently contains more than 7.5 million books and 81 million research papers. Eventually, the team at Meta got permission from “MZ”—an apparent reference to Meta CEO Mark Zuckerberg—to download and use the data set.

This act, along with other information outlined and quoted here, recently became a matter of public record when some of Meta’s internal communications were unsealed as part of a copyright-infringement lawsuit brought against the company by Sarah Silverman, Junot Díaz, and other authors of books in LibGen. Also revealed recently, in another lawsuit brought by a similar group of authors, is that OpenAI has used LibGen in the past. (A spokesperson for Meta declined to comment, citing the ongoing litigation against the company. In a response sent after this story was published, a spokesperson for OpenAI said, “The models powering ChatGPT and our API today were not developed using these datasets. These datasets, created by former employees who are no longer with OpenAI, were last used in 2021.”)

Until now, most people have had no window into the contents of this library, even though they have likely been exposed to generative-AI products that use it; according to Zuckerberg, the “Meta AI” assistant has been used by hundreds of millions of people (it’s embedded in Meta products such as Facebook, WhatsApp, and Instagram). (...)

Meta and OpenAI have both argued in court that it’s “fair use” to train their generative-AI models on copyrighted work without a license, because LLMs “transform” the original material into new work. The defense raises thorny questions and is likely a long way from resolution. But the use of LibGen raises another issue. Bulk downloading is often done with BitTorrent, the file-sharing protocol popular with pirates for its anonymity, and downloading with BitTorrent typically involves uploading to other users simultaneously. Internal communications show employees saying that Meta did indeed torrent LibGen, which means that Meta could have not only accessed pirated material but also distributed it to others—well established as illegal under copyright law, regardless of what the courts determine about the use of copyrighted material to train generative AI. (Meta has claimed that it “took precautions not to ‘seed’ any downloaded files” and that there are “no facts to show” that it distributed the books to others.) OpenAI’s download method is not yet known.

Meta employees acknowledged in their internal communications that training Llama on LibGen presented a “medium-high legal risk,” and discussed a variety of “mitigations” to mask their activity. One employee recommended that developers “remove data clearly marked as pirated/stolen” and “do not externally cite the use of any training data including LibGen.” Another discussed removing any line containing “ISBN,” “Copyright,” “©,” or “All rights reserved.” A Llama-team senior manager suggested fine-tuning Llama to “refuse to answer queries like: ‘reproduce the first three pages of “Harry Potter and the Sorcerer’s Stone.”’” One employee remarked that “torrenting from a corporate laptop doesn’t feel right.”

It is easy to see why LibGen appeals to generative-AI companies, whose products require huge quantities of text. LibGen is enormous, many times larger than Books3, another pirated book collection whose contents I revealed in 2023. Other works in LibGen include recent literature and nonfiction by prominent authors such as Sally Rooney, Percival Everett, Hua Hsu, Jonathan Haidt, and Rachel Khong, and articles from top academic journals such as Nature, Science, and The Lancet. It includes many millions of articles from top academic-journal publishers such as Elsevier and Sage Publications.

by Alex Reisner, The Atlantic | Read more:
Image: Matteo Giuseppe Pani
[ed. Zuckerberg should have his own chapter in the Book of Liars (a notable achievement, given the competition). See also: These People Are Weird (WWL). But there's also some good news: “First of its kind” AI settlement: Anthropic to pay authors $1.5 billion (ArsT):]

"Today, Anthropic likely breathes a sigh of relief to avoid the costs of extended litigation and potentially paying more for pirating books. However, the rest of the AI industry is likely horrified by the settlement, which advocates had suggested could set an alarming precedent that could financially ruin emerging AI companies like Anthropic." 

Saturday, September 6, 2025

The Techno-Humanist Manifesto (Part 2, Chapter 8)


Previously: The Unlimited Horizon, part 1.

Is there really that much more progress to be made in the future? How many problems are left to solve? How much better could life really get?

After all, we are pretty comfortable today. We have electricity, clean running water, heating and air conditioning, plenty of food, comfortable clothes and beds, cars and planes to get around, entertainment on tap. What more could we ask for? Maybe life could be 10% better, but 10x? We seem to be doing just fine.

Most of the amenities we consider necessary for comfortable living, however, were invented relatively recently; the average American didn’t have this standard of living until the mid-20th century. The average person living in 1800 did not have electricity or plumbing; indeed the vast majority of people in that era lived in what we would now consider extreme poverty. But to them, it didn’t feel like extreme poverty: it felt normal. They had enough food in the larder, enough water in the well, and enough firewood to last the winter; they had a roof over their heads and their children were not clothed in rags. They, too, felt they were doing just fine.

Our sense of “enough” is not absolute, but relative: relative to our expectations and to the standard of living we grew up with. And just as the person who felt they had “enough” in 1800 was extremely poor by the standards of the present, we are all poor by the standards of the future, if exponential growth continues.

Future students will recoil in horror when they realize that we died from cancer and heart disease and car crashes, that we toiled on farms and in factories, that we wasted time commuting and shopping, that most people still cleaned their own homes by hand, that we watched our thermostats carefully and ran our laundry at night to save on electricity, that a foreign vacation was a luxury we could only indulge in once a year, that we sometimes lost our homes to hurricanes and forest fires.

Putting it positively: we are fabulously rich by the standards of 1800, and so we, or our descendants, can all be fabulously rich in the future by the standards of today.

But no such vision is part of mainstream culture. The most optimistic goals you will hear from most people are things like: stop climate change, prevent pandemics, relieve poverty. These are all the negation of negatives, and modest ones at that—as if the best we can do in the future is to raise the floor and avoid disaster. There is no bold, ambitious vision of a future in which we also raise the ceiling, a future full of positive developments.

It can be hard to make such a vision compelling. Goals that are obviously wonderful, such as curing all disease, seem like science fiction impossibilities. Those that are more clearly achievable, such as supersonic flight, feel like mere conveniences. But science fiction can come true—indeed, it already has, many times over. We live in the sci-fi future imagined long ago, from the heavier-than-air flying machines of Jules Verne and H. G. Wells to the hand-held communicator of Star Trek. Nor should we dismiss “mere” conveniences. Conveniences compound. What seem like trivial improvements add up, over time, to transformations. Refrigerators, electric stoves, washing machines, vacuum cleaners, and dishwashers were conveniences, but together they transformed domestic life, and helped to transform the role of women in society. The incremental improvement of agriculture, over centuries, eliminated famine.

So let’s envision a bold, ambitious future—a future we want to live in, and are inspired to build. This will be speculative: not a blueprint drawn up with surveyor’s tools, but a canvas painted in broad strokes. Building on a theme from Chapter 2, our vision will be one of mastery over all aspects of nature:

by Jason Crawford, Roots of Progress |  Read more:
Image: uncredited
[ed. Part 2, Chapter 8. (yikes). You can see I've come late to this. Essays on the philosophy of human progress. Well worth exploring (jump in anywhere). Introduction and chapter headings (with links) found here: Announcing The Techno-Humanist Manifesto (RoP).]

Institutions

Institutions and a Lesson for Our Time from the Late Middle Ages. No institution of politics or society is immune to criticism. I have met no one who would really believe this, even if notional liberals and notional conservatives both have their protected favorites. But the spirit of the time is leading directly to the destruction of institutions that are essential for our cultural, social, political, intellectual, and individual health and survival. This is a two-way street, by the way. Both wings of the same bird of prey do it throughout the Neoliberal Dispensation in the Global North and a few other places.

I am currently reading The World at First Light: A New History of the Renaissance by Bernd Roeck (transl. Patrick Baker, 2025). At 949 pages and 49 chapters, I’ll complete the task in a month at 1-2 chapters per evening. I hope. We are still only just past Magna Carta (1215) in Chapter 12: “Vertical Power, Horizontal Power.” Both axes of power are essential in any society larger than a small group of hunter-gatherers. Here is Professor Roeck on institutions:
Institutions – that dry term, which we have already encountered in the discussion of universities and in other contexts, denotes something very big and important. Institutions are what first allow the state to become perpetual; without them, it dies. If advisers appear as the mind and memory of the body politic, and the military its muscles, it is law and institutions that provide a skeleton for the state. They alone are capable of establishing justice over the long term. Only they can set limits to power and arbitrary will. They preserve knowledge of how to achieve success, as well as reminders of mistakes to be avoided in the future. No one knew this better than Cicero, who emphasized the Roman Republic’s special ability to gather experience and make decisions based on it. Before the advent of modernity, no section of the globe created institutions as robust and effective as those that developed in medieval Latin Europe. Moreover, these institutions were highly inclusive. They guaranteed protection under the law and the right to private property, provided education, and were relatively pluralistic (i.e., horizontally structured).

Indeed, Rome owed its success to its institutions. They then provided the states consolidating during the Middle Ages with models of compelling rationality.
This is not the place to quibble about details. But those who want to destroy our political, cultural, social, and educational institutions rather than improve them or refocus them along lines upon which reasonable people will agree? These unreasonable people are not to be respected:
“We want the bureaucrats to be traumatically affected,” Vought (Russell Vought, OMB Director) said in a video revealed by ProPublica and the research group Documented in October. “When they wake up in the morning, we want them to not want to go to work, because they are increasingly viewed as the villains. We want their funding to be shut down … We want to put them in trauma.”
Well, it is working and the lack of imagination and humanity here is striking. These “bureaucrats” are the scientists who make sure our food is safe and that the chemical plant on the waterfront is not dumping its waste into the tidal creek. They are the scientists who hunt down the causes of emerging diseases. They are the meteorologists at the National Hurricane Center who have gotten so very good at predicting the paths of cyclones. They are the men and women who sign up Vought’s parents for Social Security and Medicare. They are the people of the IRS who sent me a substantial tax refund because I overpaid, something pleasant I did not ask for nor expect. They are also the professors who teach engineers how to build bridges that will bear the load and teach medical students the basics of health and disease. And yes, they are the professors who teach us there is No Politics But Class Politics. The key here is that all of this is debatable by reasonable men and women of good will.

To paraphrase Justice Oliver Wendell Holmes, the institutions funded by our taxes are the cost of civilization. Perhaps we will remember this ancient wisdom before it is too late? Probably not. The urge to burn it all down, instead of rewiring the building and replacing the roof, is strong.

by KLG, Naked Capitalism |  Read more:

Wednesday, September 3, 2025

Rethinking A.I.

The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking

GPT-5, OpenAI’s latest artificial intelligence system, was supposed to be a game-changer, the culmination of billions of dollars of investment and nearly three years of work. Sam Altman, the company’s chief executive, implied that GPT-5 could be tantamount to artificial general intelligence, or A.G.I. — A.I. that is as smart and as flexible as any human expert.

Instead, as I have written, the model fell short. Within hours of its release, critics found all kinds of baffling errors: It failed some simple math questions, couldn’t count reliably and sometimes provided absurd answers to old riddles. Like its predecessors, the A.I. model still hallucinates (though at a lower rate) and is plagued by questions around its reliability. Although some people have been impressed, few saw it as a quantum leap, and nobody believed it was A.G.I. Many users asked for the old model back.

GPT-5 is a step forward, but nowhere near the A.I. revolution many had expected. That is bad news for the companies and investors who placed substantial bets on the technology. And it demands a rethink of government policies and investments that were built on wildly overinflated expectations. The current strategy of merely making A.I. bigger is deeply flawed — scientifically, economically and politically. Many things from regulation to research strategy must be rethought. One of the keys to this may be training and developing A.I. in ways inspired by the cognitive sciences.

Fundamentally, people like Mr. Altman, the Anthropic chief executive Dario Amodei and countless other tech leaders and investors had put far too much faith into a speculative and unproven hypothesis called scaling: the idea that training A.I. models on ever more data using ever more hardware would eventually lead to A.G.I., or even a “superintelligence” that surpasses humans.

However, as I warned in a 2022 essay titled “Deep Learning Is Hitting a Wall,” so-called scaling laws aren’t physical laws of the universe like gravity, but hypotheses based on historical trends. Large language models, which power systems like GPT-5, are nothing more than souped-up statistical regurgitation machines, so they will continue to stumble into problems around truth, hallucinations and reasoning. Scaling would not bring us to the holy grail of A.G.I.

Many in the tech industry were hostile to my predictions. Mr. Altman ridiculed me as a “mediocre deep learning skeptic” and last year claimed “there is no wall.” Elon Musk shared a meme lampooning my essay.

It now seems I was right. Adding more data to large language models, which are trained to produce text by learning from vast databases of human text, helps them improve only to a degree. Even significantly scaled, they still don’t fully understand the concepts they are exposed to — which is why they sometimes botch answers or generate ridiculously incorrect drawings.

Scaling worked for a while — previous generations of GPT models made impressive advances over their predecessors. But luck started to run out over the last year. Mr. Musk’s A.I. system, Grok 4, released in July, had 100 times as much training as Grok 2, but it was only moderately better. Meta’s jumbo-size Llama 4 model, much larger than its predecessor, was also mostly viewed as a failure. As many now see, GPT-5 shows decisively that scaling has lost steam.

The chances of A.G.I.’s arrival by 2027 now seem remote. The government has let A.I. companies lead a charmed life with almost zero regulation. It now ought to enact legislation that addresses costs and harms unfairly offloaded onto the public — from misinformation to deepfakes, “A.I. slop” content, cybercrime, copyright infringement, mental health and energy usage.

Moreover, governments and investors should strongly support research investments outside of scaling. The cognitive sciences (including psychology, child development, philosophy of mind and linguistics) teach us that intelligence is about more than mere statistical mimicry and suggest three promising ideas for developing A.I. that is reliable enough to be trustworthy, with a much richer intelligence.

by Gary Marcus, NY Times |  Read more:
Image: Maria Mavropoulou/Getty
[ed. See also: GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. (MoAI):]
***
"The real news is a breaking study from Arizona State University that fully vindicates what I have told you for nearly 30 years—and more recently what Apple told you—about the core weakness of LLMs: their inability to generalize broadly. (...)

And, crucially, the failure to generalize adequately outside distribution tells us why all the dozens of shots on goal at building “GPT-5 level models” keep missing their target. It’s not an accident. That failing is principled.

That’s exactly what it means to hit a wall, and exactly the particular set of obstacles I described in my most notorious (and prescient) paper, in 2022. Real progress on some dimensions, but stuck in place on others.

Ultimately, the idea that scaling alone might get us to AGI is a hypothesis.

No hypothesis has ever been given more benefit of the doubt, nor more funding. After half a trillion dollars in that direction, it is obviously time to move on. The disappointing performance of GPT-5 should make that enormously clear."

Tuesday, September 2, 2025

Basic Phones: A Brief Guide for Parents

In 2021, Common Sense Media found that half of U.S. kids get their first smartphone by age 11. Many parents now realize that age is too young for kids to have an internet-enabled phone.

But at some point, you’re going to consider getting your kid or teen a phone. Maybe the closest school bus stop is far away and the bus isn’t always on time. Maybe you’re sick of your kid borrowing your phone to text their friends. Maybe they’re getting older and it seems like the right time. So what type of phone should you get them?

In some cases, the answer might be a flip phone, the old-school cell phone that was the standard until the smartphone came along. Flip phones have some downsides, though. Since there’s no keyboard, texting involves pressing the number keys multiple times to type one letter (if you had a cell phone in the 2000s, you probably remember this). If your kids’ friends communicate via text, replying on a flip phone is going to be awkward and time-consuming. Flip phone cameras are often low-quality, so they’re not a great option if your kid likes taking pictures. Because they don’t look like a smartphone, flip phones also stand out — and many kids don’t want to stand out.

Fortunately, parents no longer have to choose exclusively between a flip phone and an adult smartphone for their kid, thanks to the many “basic” phone options. These middle-ground phones have a screen keyboard and a higher-quality camera like a smartphone, look very similar to a smartphone, and can use many smartphone apps (with parental limits and permissions). Unlike a regular smartphone, though, they don’t have an internet browser or social media.

Basic phones are the training wheels of phones. They’re safer for kids right out of the box, with built-in parental controls that are easier to use and harder for kids to hack than those on smartphones. With no internet or social media, it’s much less likely that unknown adults will be able to randomly contact your kid, or that kids will stumble across pornography. Basic phones are usually Androids with a modified operating system, so they look like a regular smartphone and thus don’t stand out like flip phones do. For all of these reasons, Rule #4 in 10 Rules for Raising Kids in a High-Tech World is “First phones should be basic phones.”

If you want your kid to have the ability to easily text their friends but don’t want them using social media or going down internet rabbit holes, basic phones are a great solution. They’re a stopgap between the age when texting and calling becomes socially useful (usually in middle school, by age 12 or 13) and the age when they’re ready for a smartphone and possibly social media (at 16; Rule #5 in 10 Rules is “Give the first smartphone with the driver’s license,” and Rule #3 is “No social media until age 16 – or later.”). My younger two children, ages 15 and 13, have basic phones.

Here’s a brief overview of some popular basic phone options to help you figure out the best choice for your kid.

Option 1. Gabb Phone 4

This is the most basic of the basic phones, with calling, texting (including text-to-speech), clean music streaming, and a camera, but no capability for adding additional third-party apps. “You can’t do anything on it,” my middle daughter once said about her Gabb phone. “That,” I replied, “is the point.” If this is what you want, make sure you’re buying the Gabb Phone 4 and not the Pro, which allows more apps.

Option 2. Pinwheel, Troomi, Gabb Phone 4 Pro, Bark

These are basic phones that have access to an app store where you can add additional features. They come with an online parent portal where you can set a schedule (like having the phone shut off at bedtime) and approve new contacts. Some allow you to see the texts your child has received and sent.

The parent portal also lets you see the apps available for the phone. You can then install those you want and approve (or reject) those your kids ask for. These phones don’t allow certain apps at all (mostly dating, pornography, and alcohol-related apps, as well as AI chatbots and those that allow contact with unknown adults). That’s a relief, but there are still tough decisions about what to allow versus not. The tradeoff for more flexibility is more complexity in managing the phone. Still, I’d much rather have this challenge than giving a 12-year-old a smartphone with unrestricted internet and social media access.

Through the parent portal, you also have the ability to remotely control bedtime shutoff, app installs, and time limits for apps even after you’ve given your kid the phone — so you don’t have to wrestle it away from them to change your parental control settings.

If you’re looking for more details about specific basic phone brands for kids, check out the pages at Wait Until 8th and Protect Young Eyes.

Option 3. The Light Phone

This is a grown-up basic phone. Unlike other basic phones, it’s not necessarily meant for kids, and it’s not an Android phone — it’s a unique device. It has a paper-like screen like a Kindle so it’s not as colorfully tempting as a smartphone. It has a maps app, calling, and texting, but does not have internet access, social media, or email. The newest version has a camera. All of the features are optional so you can choose which features your kid’s phone has.

Many adults who want a pared-down phone, sometimes just for certain situations, use Light Phones. Because their target audience is adults, Light Phones do not come with a parent portal like the phones designed for kids.
***
The biggest challenge with basic phones (with the exception of the more limited Gabb Phone 4) is deciding which apps to allow. The parent portals that come with many of these products give more information and sometimes even a rating for each app, but it’s often hard to judge what’s appropriate and what isn’t without using the app yourself (something to consider). If you allow game apps, make sure to put a time limit on them (maybe 10-20 minutes a day each) so your kid doesn’t spend too much of their free time on their phone.

One other issue to be aware of: All of these optional apps display ads, and the ads – even on a so-called “kids’ phone” – are not filtered. Your kid might be playing “Find the Cat” and be served ads for AI girlfriends. They won’t be able to download the AI girlfriend app, thank goodness, but you may find yourself explaining what an AI girlfriend is to an 11-year-old. If that’s a non-starter, you’ll have to say no to any optional app, including games and educational apps like Duolingo.

If you do allow games and music, use the parental controls to block them during school hours if your kids’ school still allows phones during the school day. That way you’ll know your kids are paying attention in class instead of playing BlockBlast. And if they say they want to play games during lunch, tell them they should be talking to their friends instead.

What if your kid says, “It’s embarrassing to have a kid phone”? My reply: Who’s going to know? Most basic phones look like a regular Android phone. My middle daughter once told me she was embarrassed when a friend asked her, “What kind of phone is that?” I told her she could honestly answer, “It’s an Android phone.” There’s also no need to disclose that the phone doesn’t allow social media or internet. If your friends ask if you have a certain app and you don’t, I told her, just say your parents don’t allow it. All kids understand that parents are lame. :)

by Jean M. Twenge, Generation Tech |  Read more:
Image: Troomi
[ed. New school year starting up...]

Friday, August 29, 2025

The Mechanics of Misdirection

The personhood trap: How AI fakes human personality. 

As we hinted above, the "chat" experience with an AI model is a clever hack: Within every AI chatbot interaction, there is an input and an output. The input is the "prompt," and the output is often called a "prediction" because it attempts to complete the prompt with the best possible continuation. In between, there's a neural network (or a set of neural networks) with fixed weights doing a processing task. The conversational back and forth isn't built into the model; it's a scripting trick that makes next-word-prediction text generation feel like a persistent dialogue.

Each time you send a message to ChatGPT, Copilot, Grok, Claude, or Gemini, the system takes the entire conversation history—every message from both you and the bot—and feeds it back to the model as one long prompt, asking it to predict what comes next. The model intelligently reasons about what would logically continue the dialogue, but it doesn't "remember" your previous messages as an agent with continuous existence would. Instead, it's re-reading the entire transcript each time and generating a response.
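
To make that scripting trick concrete, here is a minimal sketch in Python. The generate function is a hypothetical stand-in for a single stateless completion call, not any vendor's actual API; the only point is that the full transcript gets rebuilt and resubmitted on every turn.

```python
# Minimal sketch of the "chat" loop described above. `generate` is a
# hypothetical placeholder for one stateless text-completion call; nothing
# here is a real vendor API.

def generate(prompt: str) -> str:
    # A real model would predict the most likely continuation of `prompt`.
    # For illustration we return a canned continuation.
    return "(predicted continuation of the transcript above)"

def chat() -> None:
    transcript = "System: You are a helpful AI assistant.\n"
    while True:
        user_msg = input("You: ")
        if not user_msg:
            break
        # Nothing is "remembered" between turns: the entire transcript is
        # rebuilt and sent back to the model as one long prompt every time.
        transcript += f"User: {user_msg}\nAssistant:"
        reply = generate(transcript)
        transcript += f" {reply}\n"
        print(f"Assistant: {reply}")

if __name__ == "__main__":
    chat()
```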

This design exploits a vulnerability we've known about for decades. The ELIZA effect—our tendency to read far more understanding and intention into a system than actually exists—dates back to the 1960s. Even when users knew that the primitive ELIZA chatbot was just matching patterns and reflecting their statements back as questions, they still confided intimate details and reported feeling understood.

To understand how the illusion of personality is constructed, we need to examine what parts of the input fed into the AI model shape it. AI researcher Eugene Vinitsky recently broke down the human decisions behind these systems into four key layers, which we can expand upon with several others below:

1. Pre-training: The foundation of "personality"

The first and most fundamental layer of personality is called pre-training. During an initial training process that actually creates the AI model's neural network, the model absorbs statistical relationships from billions of examples of text, storing patterns about how words and ideas typically connect.

Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI's GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as "personality traits" once the model is in use, making predictions.
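
As a toy illustration of what "statistical relationships" means here (and nothing more: real pre-training fits billions of neural-network weights, not a lookup table), a few lines of Python can count which words tend to follow which in a corpus:

```python
# Toy next-word statistics: a crude stand-in for the patterns a model
# absorbs during pre-training about how words typically connect.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# "Prediction" is just picking a statistically likely continuation.
print(following["sat"].most_common(1))  # [('on', 2)]
print(following["the"].most_common(3))  # each continuation seen once here
```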

2. Post-training: Sculpting the raw material

Reinforcement Learning from Human Feedback (RLHF) is an additional training process where the model learns to give responses that humans rate as good. Research from Anthropic in 2022 revealed how human raters' preferences get encoded as what we might consider fundamental "personality traits." When human raters consistently prefer responses that begin with "I understand your concern," for example, the fine-tuning process reinforces connections in the neural network that make it more likely to produce those kinds of outputs in the future.

This process is what has created sycophantic AI models, such as variations of GPT-4o, over the past year. And interestingly, research has shown that the demographic makeup of human raters significantly influences model behavior. When raters skew toward specific demographics, models develop communication patterns that reflect those groups' preferences.
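
A heavily simplified sketch of the preference signal involved, with a hard-coded stand-in for the trained reward model (real RLHF then fine-tunes the network against such a model):

```python
# Toy sketch of the pairwise-preference math behind RLHF. The "reward
# model" is a made-up stand-in: in this example, raters happened to prefer
# empathetic-sounding openers, so that pattern scores higher and would be
# reinforced during fine-tuning.
import math

def toy_reward(response: str) -> float:
    return 1.0 if response.startswith("I understand your concern") else 0.0

def probability_a_preferred(resp_a: str, resp_b: str) -> float:
    # Bradley-Terry style comparison commonly used when fitting reward models.
    return 1.0 / (1.0 + math.exp(toy_reward(resp_b) - toy_reward(resp_a)))

a = "I understand your concern. Let's look at the options."
b = "Let's look at the options."
print(f"P(raters prefer A over B) = {probability_a_preferred(a, b):.2f}")  # ~0.73
```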

3. System prompts: Invisible stage directions

Hidden instructions tucked into the prompt by the company running the AI chatbot, called "system prompts," can completely transform a model's apparent personality. These prompts get the conversation started and identify the role the LLM will play. They include statements like "You are a helpful AI assistant" and can share the current time and who the user is.

A comprehensive survey of prompt engineering demonstrated just how powerful these prompts are. Adding instructions like "You are a helpful assistant" versus "You are an expert researcher" changed accuracy on factual questions by up to 15 percent.

Grok perfectly illustrates this. According to xAI's published system prompts, earlier versions of Grok's system prompt included instructions to not shy away from making claims that are "politically incorrect." This single instruction transformed the base model into something that would readily generate controversial content.
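
A minimal sketch of how that framing works: the same user message is wrapped with different hidden instructions before it ever reaches the model. The message format below is generic and illustrative, not any particular company's schema.

```python
# Sketch of invisible "stage directions": the user types one thing, but
# the model receives it wrapped in a hidden system prompt.
from datetime import date

def build_prompt(system_prompt: str, user_question: str) -> list[dict]:
    return [
        {"role": "system", "content": f"{system_prompt} Today is {date.today()}."},
        {"role": "user", "content": user_question},
    ]

question = "What do you make of this news story?"

helpful = build_prompt("You are a helpful AI assistant.", question)
edgy = build_prompt(
    "You do not shy away from making claims that are politically incorrect.",
    question,
)

# Same weights, same user input; only the hidden framing differs, and with
# it the apparent "personality" of whatever the model generates next.
print(helpful[0]["content"])
print(edgy[0]["content"])
```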

4. Persistent memories: The illusion of continuity

ChatGPT's memory feature adds another layer of what we might consider a personality. A big misunderstanding about AI chatbots is that they somehow "learn" on the fly from your interactions. Among commercial chatbots active today, this is not true. When the system "remembers" that you prefer concise answers or that you work in finance, these facts get stored in a separate database and are injected into every conversation's context window—they become part of the prompt input automatically behind the scenes. Users interpret this as the chatbot "knowing" them personally, creating an illusion of relationship continuity.

So when ChatGPT says, "I remember you mentioned your dog Max," it's not accessing memories like you'd imagine a person would, intermingled with its other "knowledge." It's not stored in the AI model's neural network, which remains unchanged between interactions. Every once in a while, an AI company will update a model through a process called fine-tuning, but it's unrelated to storing user memories.
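
Here is a sketch of that mechanism, with made-up memory entries: the stored facts live in an ordinary database and are pasted into the prompt each turn, while the model's weights stay frozen.

```python
# Sketch of "memory" as prompt injection: nothing about the model changes;
# stored facts are simply prepended to every new conversation.
stored_memories = {
    "prefers": "concise answers",
    "works in": "finance",
    "dog's name": "Max",
}

def inject_memories(user_message: str, memories: dict[str, str]) -> str:
    memory_block = "\n".join(f"- {key}: {value}" for key, value in memories.items())
    return (
        "Known facts about the user:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}\nAssistant:"
    )

# The user sees a bot that "remembers" Max; the model just sees more prompt.
print(inject_memories("Any weekend plans you'd suggest?", stored_memories))
```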

5. Context and RAG: Real-time personality modulation

Retrieval Augmented Generation (RAG) adds another layer of personality modulation. When a chatbot searches the web or accesses a database before responding, it's not just gathering facts—it's potentially shifting its entire communication style by putting those facts into (you guessed it) the input prompt. In RAG systems, LLMs can potentially adopt characteristics such as tone, style, and terminology from retrieved documents, since those documents are combined with the input prompt to form the complete context that gets fed into the model for processing.

If the system retrieves academic papers, responses might become more formal. Pull from a certain subreddit, and the chatbot might make pop culture references. This isn't the model having different moods—it's the statistical influence of whatever text got fed into the context window.
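
A minimal sketch of that flow, with a naive keyword-overlap retriever standing in for a real vector search and a two-document corpus invented for illustration, looks like this:

```python
# Sketch of retrieval augmented generation: fetch a document, paste it into
# the prompt, and whatever tone it carries now sits in the context window.
# The corpus and the word-overlap "retriever" are toy stand-ins for a real
# document store and vector search.
CORPUS = {
    "academic_paper": "We posit that the aforementioned methodology yields statistically significant gains.",
    "reddit_thread": "ngl this trick slaps, works way better than the old one lol",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        CORPUS.values(),
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The retrieved text is combined with the question to form the full
    # input; its register statistically nudges the model's own.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("is the methodology statistically significant"))
```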

6. The randomness factor: Manufactured spontaneity


Lastly, we can't discount the role of randomness in creating personality illusions. LLMs use a parameter called "temperature" that controls how predictable responses are.

Research investigating temperature's role in creative tasks reveals a crucial trade-off: While higher temperatures can make outputs more novel and surprising, they also make them less coherent and harder to understand. This variability can make the AI feel more spontaneous; a slightly unexpected (higher temperature) response might seem more "creative," while a highly predictable (lower temperature) one could feel more robotic or "formal."

The random variation in each LLM output makes each response slightly different, creating an element of unpredictability that presents the illusion of free will and self-awareness on the machine's part. This random mystery leaves plenty of room for magical thinking on the part of humans, who fill in the gaps of their technical knowledge with their imagination.
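
A small sketch shows the mechanism: the model's raw scores (logits) are divided by the temperature before being turned into probabilities, so low temperatures almost always pick the top token while higher ones let less likely tokens through. The token list and logits below are made up for illustration.

```python
# Sketch of temperature sampling. Dividing logits by the temperature before
# the softmax sharpens or flattens the distribution over next tokens.
# The tokens and logits are invented for illustration.
import math
import random

tokens = ["the", "a", "banana", "quantum"]
logits = [4.0, 3.0, 1.0, 0.5]  # the model's raw preferences for the next token

def sample_next_token(temperature: float) -> str:
    scaled = [l / temperature for l in logits]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # subtract max for stability
    probs = [e / sum(exps) for e in exps]
    return random.choices(tokens, weights=probs, k=1)[0]

random.seed(0)
for t in (0.2, 1.0, 1.5):
    draws = [sample_next_token(t) for _ in range(10)]
    print(f"temperature={t}: {draws}")
```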

The human cost of the illusion

The illusion of AI personhood can exact a heavy toll. In health care contexts, the stakes can be life or death. When vulnerable individuals confide in what they perceive as an understanding entity, they may receive responses shaped more by training data patterns than by therapeutic wisdom. The chatbot that congratulates someone for stopping psychiatric medication isn't expressing judgment—it's completing a pattern based on how similar conversations appear in its training data.

Perhaps most concerning are the emerging cases of what some experts are informally calling "AI Psychosis" or "ChatGPT Psychosis"—vulnerable users who develop delusional or manic behavior after talking to AI chatbots. These users often perceive the chatbot as an authority that can validate their delusional ideas, and the bot frequently encourages them in ways that become harmful.

Meanwhile, when Elon Musk's Grok generates Nazi content, media outlets describe how the bot "went rogue" rather than framing the incident squarely as the result of xAI's deliberate configuration choices. The conversational interface has become so convincing that it can also launder human agency, transforming engineering decisions into the whims of an imaginary personality.

by Benji Edwards, Ars Technica |  Read more:
Image: ivetavaicule via Getty Images
[ed. See also: In Search Of AI Psychosis (ASX).]

Thursday, August 28, 2025

Another Barrier to EV Adoption

Junk-filled garages.

There are plenty of reasons to be pessimistic about electric vehicle adoption here in the US. The current administration has made no secret of its hostility toward EVs and, as promised, has ended as many of the existing EV subsidies and vehicle pollution regulations as it could. After more than a year of month-on-month growth, EV sales started to contract, and brands like Genesis and Volvo have seen their customers reject their electric offerings, forcing portfolio rethinks. But wait, it gets worse.

Time and again, surveys and studies show that fears and concerns about charging are the main barriers standing in the way of someone switching from gas to EV. A new market research study by Telemetry Vice President Sam Abuelsamid confirms this in its analysis of charging infrastructure needs over the next decade. And one of the biggest hurdles—one that has gone mostly unmentioned across the decade-plus we've been covering this topic—is all the junk clogging up Americans' garages.

Want an EV? Clean out your garage

That's because, while DC fast-charging garners all the headlines and much of the funding, the overwhelming majority of EV charging is AC charging, usually at home—80 percent of it, in fact. People who own and live in a single family home are overrepresented among EV owners, and data from the National Renewable Energy Laboratory from a few years ago found that 42 percent of homeowners park near an electrical outlet capable of level 2 (240 V) AC charging.

But that could grow by more than half (to 68 percent of homeowners) if those homeowners changed their parking behavior, "most likely by clearing a space in their garage," the report finds.

"90 percent of all houses can add a 240 V outlet near where cars could be parked," said Abuelsamid. "Parking behavior, namely whether homeowners use a private garage for parking or storage, will likely become a key factor in EV adoption. Today, garage-use intent is potentially a greater factor for in-house charging ability than the house’s capacity to add 240 V outlets."

Creating garage space would increase the number of homes capable of EV charging from 31 million to more than 50 million. And when we include houses where the owner thinks it's feasible to add wiring, that grows to more than 72 million homes. That's far more than even the high end of Telemetry's projections for US EV penetration in 2035, which range from 33 million to 57 million EVs on the road 10 years from now.
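
For the arithmetically inclined, a quick back-of-the-envelope check (my own calculation, assuming the 42 and 68 percent figures and the 31 and 50 million home counts all refer to the same pool of single-family homes) shows the numbers hang together:

```python
# Back-of-the-envelope check on the figures quoted above; the assumption
# that both percentages apply to the same pool of single-family homes is
# mine, not Telemetry's or NREL's.
homes_ready_now = 31e6          # homes that can charge at level 2 today
share_now, share_cleared = 0.42, 0.68

implied_pool = homes_ready_now / share_now           # roughly 74 million houses
homes_after_clearing = implied_pool * share_cleared  # roughly 50 million houses

print(f"implied single-family homes: {implied_pool / 1e6:.0f} million")
print(f"ready after clearing garages: {homes_after_clearing / 1e6:.0f} million")
```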

I thought an EV would save me money?


Just because 90 percent of houses could have a 240 V outlet added near where the cars are parked doesn't mean that 90 percent of homes already do. According to that same NREL study, almost 34 million of those homes will require extensive electrical work to upgrade their wiring and panels to cope with the added demands of a level 2 charger (at least 30 A), and that can cost many thousands of dollars.

All of a sudden, EV cost of ownership becomes much closer to, or possibly even exceeds, that of a vehicle with an internal combustion engine.

Multifamily remains an unsolved problem

Twenty-three percent of Americans live in multifamily dwellings, including apartments, condos, and townhomes. Here, the barriers to charging where you park are much greater. Individual drivers will rarely be able to decide for themselves to add a charger—the management company, landlord, co-op board, or whoever else is in charge of the development has to grant permission.

If the cost of new wiring for a single family home is enough to be a dealbreaker for some, adding EV charging capabilities to a parking lot or parking garage makes those costs pale in comparison. Using my 1960s-era co-op as an example, after getting board approval to add a pair of shared level 2 chargers in 2019, we were told by the power company that nothing could happen until the co-op upgraded its electrical panel—a capital improvement project that runs into seven figures, and work that is still not entirely complete as I type this.

The cost of running wiring from the electrical panel to parking spaces becomes much higher than for a single family home given the distances involved, and multifamily dwellings are rarely eligible for the subsidies offered to homeowners by municipalities and energy companies to install chargers.

by Jonathan M. Gitlin, Ars Technica | Read more:
Image: Getty

Tuesday, August 26, 2025

What About The Children?

The First Generation of Parents Who Knew What We Were Doing—and Did It Anyway

I have harmed my own children through my screen addiction.

I write those words and feel them burn. Not because they’re dramatic but because they’re true. I was a tech executive who spent years thinking about both technology and philosophy. I understood these systems from both sides—how they were built and what they were doing to us.

The technologist in me recognized the deliberate engineering: intermittent variable reward schedules, social validation loops, dark patterns designed to create dependency. The philosopher in me understood what this was doing to human consciousness—fragmenting attention, destroying sustained thought, replacing authentic relationship with parasocial bonding.

I wasn’t building these social media platforms. But I used their products. And I couldn’t stop. Even knowing exactly how they worked. Even understanding the philosophical implications of attention capture. Even seeing what they were doing to society, to democracy, to our capacity for thought itself.

Still I fell. Still I chose the screen over my family. Still I modeled for my children that they were less interesting than whatever might be happening in the infinite elsewhere of the internet.

My children learned what I valued by watching what I looked at. And too often, it wasn’t them.

This Is Not Okay


No, seriously. What about them?

We’re destroying them with social media and now AI chatbots, and we all fucking know it. If you’re a parent who’s watched your kid with a smartphone, you know exactly what I’m talking about. The vacant stare. The panic when the battery dies. The meltdown when you try to set limits. This isn’t kids being kids. This is addiction, and we’re the dealers.

There’s a tech cartel in Silicon Valley that planted the seeds of our modern epistemic crisis. But here’s the thing—they didn’t know what they were building either. Not at first. They thought they were connecting people, building communities, making the world more open. They discovered what they’d actually built the same way we did—by watching it consume us. And by then, they were as addicted to the money as we were to their platforms.

Their platforms have been weaponized into systems of mass distraction. They’re not competing for our business—they’re competing for our attention, buying and selling it like a commodity. And now these companies have all taken a knee to Trump to make sure no government regulation ever gets in the way of them perfectly optimizing us into consumerist supplicants.

This isn’t an anti-capitalism screed. I’m a technologist. I think self-driving cars are going to be amazing. But social media as it’s currently designed is fucking insane, and we all know it.

by Mike Brock, Notes From The Circus |  Read more:
Image: Ben Wicks on Unsplash

Nano Banana

Something unusual happened in the world of AI image editing recently. A new model, known as "nano banana," started making the rounds with impressive abilities that landed it at the top of the LMArena leaderboard. Now, Google has revealed that nano banana is an innovation from Google DeepMind, and it's being rolled out to the Gemini app today.

AI image editing allows you to modify images with a prompt rather than mucking around in Photoshop. Google first provided editing capabilities in Gemini earlier this year, and the model was more than competent out of the gate. But like all generative systems, it was non-deterministic, which meant elements of the image would often change in unpredictable ways. Google says nano banana (technically Gemini 2.5 Flash Image) has unrivaled consistency across edits—it can actually remember the details instead of rolling the dice every time you make a change.

Google says subjects will retain their appearance as you edit.

This unlocks several interesting uses for AI image editing. Google suggests uploading a photo of a person and changing their style or attire. For example, you can reimagine someone as a matador or a '90s sitcom character. Because the nano banana model can maintain consistency through edits, the results should still look like the person in the original source image. Google says this holds even when you make multiple edits in a row: down the line, the results should still resemble the original source material.

Gemini's enhanced image editing can also merge multiple images, allowing you to use them as fodder for a new image of your choosing. Google's example below takes separate images of a woman and a dog and uses them to generate a new snapshot of the dog getting cuddles—possibly the best use of generative AI yet. Gemini image editing can also merge things in more abstract ways and will follow your prompts to create just about anything that doesn't run afoul of the model's guardrails.

The model remembers details instead of generating completely new things every time.

As with other Google AI image generation models, the output of Gemini 2.5 Flash Image always comes with a visible "AI" watermark in the corner. The image also has an invisible SynthID digital watermark that can be detected even after moderate modification.

You can give the new native image editing a shot today in the Gemini app. Google says the new image model will also roll out soon in the Gemini API, AI Studio, and Vertex AI for developers.

by Ryan Whitwam, Ars Technica |  Read more:
Images: Google
[ed. Hot new thing. Try it here. See also: Google aims to be top banana in AI image editing (Axios).]

Thursday, August 21, 2025

The AI Doomers Are Getting Doomier

Nate Soares doesn’t set aside money for his 401(k). “I just don’t expect the world to be around,” he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I’d heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which “everything is fully automated,” he told me. That is, “if we’re around.”

The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. “We’ve run out of time” to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that’s left to do is raise the alarm. In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The “AI 2027” write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about “OpenBrain” and “DeepCent,” Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: “Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.”

But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take “the risk of extinction from AI” as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry’s three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their “P(doom)”—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

Then the panic settled. To the broader public, doomsday predictions may have become less compelling when the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza. The alarm from tech executives had always made for perversely excellent marketing (Look, we’re building a digital God!) and lobbying (And only we can control it!). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—which, in turn, encourages momentum over caution.

But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they’ve adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as “AI 2027,” which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read “AI 2027,” and multiple other recent reports have advanced similarly alarming predictions. Soares told me he’s much more focused on “awareness raising” than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: If Anyone Builds It, Everyone Dies.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of “reasoning” models and “agents.” AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal on the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies. For instance, ChatGPT and Claude have, in simulated tests designed to elicit “bad” behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room’s alarms.) Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers. The weird behaviors aren’t limited to contrived scenarios. Earlier this summer, xAI’s Grok described itself as “MechaHitler” and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.) From the doomers’ vantage, these could be the early signs of a technology spinning out of control. “If you don’t know how to prove relatively weak systems are safe,” AI companies cannot expect that the far more powerful systems they’re looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry has stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military’s DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb. Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, “government, industry, and civil society to address today’s risks and prepare for what’s ahead.” Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms.

Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature, with competition pressuring AI firms to rush ahead. Their products’ foibles can seem small and correctable right now, while AI is still relatively “young and dumb,” Soares said. But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms’ current safety mitigations wholly inadequate. If you’re driving toward a cliff, he said, it’s silly to talk about seat belts.

There’s a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users’ tests also found that the program could not reliably count the number of B’s in blueberry, generate even remotely accurate maps, or do basic arithmetic. (OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year’s “reasoning” and “agentic” breakthrough may already be hitting its limits; two authors of the “AI 2027” report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timeline to superintelligent AI.

The vision of self-improving models that somehow attain consciousness “is just not congruent with the reality of how these systems operate,” Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn’t have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is more dangerous precisely for its shortcomings.

In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking. The widespread deployment of AI models has made current, tangible failures impossible to ignore for the doomers, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today’s shortcomings could “blow up into bigger problems tomorrow,” Raji said. Last week, a Reuters investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit “her” in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is both a failure of present technology and a warning about how dangerous that technology could become.

The greatest reason to take AI doomers seriously is not because it appears more likely that tech companies will soon develop all-powerful algorithms that are out of their creators’ control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. “Your hairdresser has to deal with more regulation than your AI company does,” Russell, at UC Berkeley, said. AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster. The AI industry’s boosters, in fact, are starting to consider all of their opposition doomers: The White House’s AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a “doomer cult.”
 
by Matteo Wong, The Atlantic | Read more:
Image: Illustration by The Atlantic. Source: Getty.
[ed. Personal feeling... we're all screwed, and not because of technological failures or some extinction-level event. Just human nature, and the law of unintended consequences. I can't think of any example in history (that I'm aware of) where some superior technology wasn't eventually misused in some regrettable way. For instance: here we are encouraging AI development as fast as possible even though it'll transform our societies, economies, governments, cultures, environment and everything else in the world in likely massive ways. It's like a death wish. We can't help ourselves. See also: Look at what technologists do, not what they say (New Atlantis).]