Saturday, February 14, 2026

Something Big Is Happening

[ed. I've posted a few essays this week about the latest versions of AI (Claude Opus 4.6 and GPT-5.3 Codex) and, not to beat a (far from) dead horse, this appears to be a REALLY big deal. Mainly because these new models seem to have the capacity for recursive self-improvement, i.e., creating new versions of themselves that are even more powerful than the last. And it's not just the power of each new version that's important, but the pace at which this is occurring. It truly seems to be a historic moment.

I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies... OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you... we just happen to be close enough to feel the ground shake first.

But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.

I know this is real because it happened to me first

Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.

For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest. [...]

"But I tried AI and it wasn't that good"


I hear this constantly. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming. [...]

AI is now building the next AI

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:
"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."
Read that again. The AI helped build itself.

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started. [...]

What this means for your job

I'm going to be direct with you because I think you deserve honesty more than comfort.

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too. [...]

The bigger picture

I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.

Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."

He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.

The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself... these researchers genuinely believe these are solvable within our lifetimes.

The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, I don't know.

What I know

I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.

I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.

by Matt Shumer |  Read more:

Friday, February 13, 2026

Something Surprising Happens When Bus Rides Are Free

Free buses? Really? Of all the promises that Zohran Mamdani made during his New York City mayoral campaign, that one struck some skeptics as the most frivolous leftist fantasy. Unlike housing, groceries and child care, which weigh heavily on New Yorkers’ finances, a bus ride is just a few bucks. Is it really worth the huge effort to spare people that tiny outlay?

It is. Far beyond just saving riders money, free buses deliver a cascade of benefits, from easing traffic to promoting public safety. Just look at Boston; Chapel Hill, N.C.; Richmond, Va.; Kansas City, Mo.; and even New York itself, all of which have tried it to excellent effect. And it doesn’t have to be costly — in fact, it can come out just about even.

As a lawyer, I feel most strongly about the least-discussed benefit: Eliminating bus fares can clear junk cases out of our court system, lowering the crushing caseloads that prevent our judges, prosecutors and public defenders from focusing their attention where it’s most needed.

I was a public defender, and in one of my first cases I was asked to represent a woman who was not a robber or a drug dealer — she was someone who had failed to pay the fare on public transit. Precious resources had been spent arresting, processing, prosecuting and trying her, all for the loss of a few dollars. This is a daily feature of how we criminalize poverty in America.

Unless a person has spent real time in the bowels of a courthouse, it’s hard to imagine how many of the matters clogging criminal courts across the country originate from a lack of transit. Some of those cases result in fines; many result in defendants being ordered to attend community service or further court dates. But if the person can’t afford the fare to get to those appointments and can’t get a ride, their only options — jump a turnstile or flout a judge’s order — expose them to re-arrest. Then they may face jail time, which adds significant pressure to our already overcrowded facilities. Is this really what we want the courts spending time on?

Free buses can unclog our streets, too. In Boston, eliminating the need for riders to pay fares or punch tickets cut boarding time by as much as 23 percent, which made everyone’s trip faster. Better, cheaper, faster bus rides give automobile owners an incentive to leave their cars at home, which makes the journey faster still — for those onboard as well as those who still prefer to drive.

How much should a government be willing to pay to achieve those outcomes? How about nothing? When Washington State’s public transit systems stopped charging riders, in many municipalities the state came out more or less even — because the money lost on fares was balanced out by the enormous savings that ensued.

Fare evasion was one of the factors that prompted Mayor Eric Adams to flood New York City public transit with police officers. New Yorkers went from shelling out $4 million for overtime in 2022 to $155 million in 2024. What did it get them? In September 2024, officers drew their guns to shoot a fare beater — pause for a moment to think about that — and two innocent bystanders ended up with bullet wounds, the kind of accident that’s all but inevitable in such a crowded setting.

New York City tried a free bus pilot program in 2023 and 2024 and, as predicted, ridership increased — by 30 percent on weekdays and 38 percent on weekends, striking figures that could make a meaningful dent in New York’s chronic traffic problem (and, by extension, air and noise pollution). Something else happened that was surprising: Assaults on bus operators dropped 39 percent. Call it the opposite of the Adams strategy: Lowering barriers to access made for fewer tense law enforcement encounters, fewer acts of desperation and a safer city overall.

by Emily Galvin Almanza, NY Times | Read more:
Image: Brian Blomerth

Unknown, Illustration from Gaston Phébus (French, 1331–1391), “Book of the Hunt,” about 1430–1440

via:

Waymos Are Way Safer Than Human Drivers

In a recent article in Bloomberg, David Zipper argued that “We Still Don’t Know if Robotaxis Are Safer Than Human Drivers.” Big if true! In fact, I’d been under the impression that Waymos are not only safer than humans, the evidence to date suggests that they are staggeringly safer, with somewhere between an 80% and 90% lower risk of serious crashes.

“We don’t know” sounds like a modest claim, but in this case, where it refers to something that we do in fact know about an effect size that is extremely large, it’s a really big claim.

It’s also completely wrong. The article drags its audience into the author’s preferred state of epistemic helplessness by dancing around the data rather than explaining it. And Zipper got many of the numbers wrong; in some cases, I suspect, as a consequence of a math error.

There are things we still don’t know about Waymo crashes. But we know far, far more than Zipper pretends. I want to go through his full argument and make it clear why that’s the case.

In many places, Zipper’s piece relied entirely on equivocation between “robotaxis” — that is, any self-driving car — and Waymos. Obviously, not all autonomous vehicle startups are doing a good job. Most of them have nowhere near the mileage on the road to say confidently how well they work.

But fortunately, no city official has to decide whether to allow “robotaxis” in full generality. Instead, the decision cities actually have to make is whether to allow or disallow Waymo, in particular.

Fortunately, there is a lot of data available about Waymo, in particular. If the thing you want to do is to help policymakers make good decisions, you would want to discuss the safety record of Waymos, the specific cars that the policymakers are considering allowing on their roads.

Imagine someone writing “we don’t know if airplanes are safe — some people say that crashes are extremely rare, and others say that crashes happen every week.” And when you investigate this claim further, you learn that what’s going on is that commercial aviation crashes are extremely rare, while general aviation crashes — small personal planes, including ones you can build in your garage — are quite common.

It’s good to know that the plane that you built in your garage is quite dangerous. It would still be extremely irresponsible to present an issue with a one-engine Cessna as an issue with the Boeing 737 and write “we don’t know whether airplanes are safe — the aviation industry insists they are, but my cousin’s plane crashed just three months ago.”

The safety gap between, for example, Cruise and Waymo is not as large as the safety gap between commercial and general aviation, but collapsing them into a single category sows confusion and moves the conversation away from the decision policymakers actually face: Should they allow Waymo in their cities?

Zipper’s first specific argument against the safety of self-driving cars is that while they do make safer decisions than humans in many contexts, “self-driven cars make mistakes that humans would not, such as plowing into floodwater or driving through an active crime scene where police have their guns drawn.” The obvious next question is: Which of these happens more frequently? How does the rate of self-driving cars doing something dangerous a human wouldn’t compare to the rate of doing something safe a human wouldn’t?

This obvious question went unasked because the answer would make the rest of Bloomberg’s piece pointless. As I’ll explain below, Waymo’s self-driving cars put people in harm’s way something like 80% to 90% less often than humans for a wide range of possible ways of measuring “harm’s way.”

Zipper acknowledged that data on Waymo operations suggested they are about 10 times less likely to seriously injure someone than a human driver, but he then suggested that this data could be somehow misleading: “It looks like the numbers are very good and promising,” he cited one expert, Henry Liu, as saying. “But I haven’t seen any unbiased, transparent analysis on autonomous vehicle safety. We don’t have the raw data.”

I was confused by this. Every single serious incident that Waymos are involved in must be reported. You can download all of the raw data yourself here (search “Download data”). The team at Understanding AI regularly goes through and reviews the Waymo safety reports to check whether the accidents are appropriately classified — and they have occasionally found errors in those reports, so I know they’re looking closely. I reached out to Timothy Lee at Understanding AI to ask if there was anything that could be characterized as “raw data” that Waymo wasn’t releasing — any key information we would like to have and didn’t.

“There is nothing obvious that I think they ought to be releasing for these crash statistics that they are not,” he told me.

by Kelsey Piper, The Argument |  Read more:
Image: Justin Sullivan/Getty Images
[ed. I expect we'll see Waymos (and Teslas) everywhere in the near future (if we have a future that is... see AI posts below).]

The Anthropic Hive Mind

As you’ve probably noticed, something is happening over at Anthropic. They are a spaceship that is beginning to take off.

This whole post is just spidey-sense stuff. Don’t read too much into it. Just hunches. Vibes, really.

If you run some back-of-envelope math on how hard it is to get into Anthropic, as an industry professional, and compare it to your odds of making it as a HS or college player into the National Football League, you’ll find the odds are comparable. Everyone I’ve met from Anthropic is the best of the best of the best, to an even crazier degree than Google was at its peak. (Evidence: Google hired me. I was the scrapest of the byest.)

Everyone is gravitating there, and I’ve seen this movie before, a few times.

I’ve been privileged to have some long, relatively frank conversations with nearly 40 people at Anthropic in the past four months, from cofounders and execs, to whole teams, to individuals from departments across the company: AI research, Engineering, GTM, Sales, Editorial, Product and more. And I’ve also got a fair number of friends there, from past gigs together.

Anthropic is unusually impenetrable as a company. Employees there all know they just need to keep their mouths shut and heads down and they’ll be billionaires and beyond, so they have lots of incentive to do exactly that. It’s tricky to get them to open up, even when they do chat with you.

But I managed. People usually figure out I’m harmless within about 14 seconds of meeting me. I have developed, in my wizened old age, a curious ability to make people feel good, no matter who they are, with just a little conversation, making us both feel good in the process. (You probably have this ability too, and just don’t know how to use it yet.)

By talking to enough of them, and getting their perspectives in long conversations, I have begun to suspect that the future of software development is the Hive Mind.

Happy But Sad

To get a proper picture of Anthropic at this moment, you have to be Claude Monet, and paint it impressionistically, a big broad stroke at a time. Each section in this post is a stroke, and this one is all about the mood.

To me it seems that almost everyone there is vibrantly happy. It has the same crackle of electricity in the air that Amazon had back in 1998. But that was back in the days before Upton Sinclair and quote “HR”, so the crackle was mostly from faulty wiring in the bar on the first floor of the building.

But at both early Amazon and Anthropic, everyone knew something amazing was about to happen that would change society forever. (And also that whatever was coming would be extremely Aladeen for society.)

At Anthropic every single person and team I met, without exception, feels kind of sweetly but sadly transcendent. They have a distinct feel of a group of people who are tasked with shepherding something of civilization-level importance into existence, and while they’re excited, they all also have a solemn kind of elvish old-world-fading-away gravity. I can’t quite put my finger on it.

But I am starting to suspect they feel genuinely sorry for a lot of companies. Because we’re not taking this stuff seriously enough. 2026 is going to be a year that just about breaks a lot of companies, and many don’t see it coming. Anthropic is trying to warn everyone, and it’s like yelling about an offshore earthquake to villages that haven’t seen a tidal wave in a century.

by Steve Yegge, Medium |  Read more:
Image: uncredited
[ed. See also: Anthropic’s Chief on A.I.: ‘We Don’t Know if the Models Are Conscious’ (NYT).]

Your Job Isn't Disappearing. It's Shrinking Around You in Real Time

You open your laptop Monday morning with a question you can’t shake: Will I still have a job that matters in two years?

Not whether you’ll be employed, but whether the work you do will still mean something.

Last week, you spent three hours writing a campaign brief. You saw a colleague generate something 80% as good in four minutes using an AI agent (Claude, Gemini, ChatGPT…). Maybe 90% as good if you’re being honest.

You still have your job. But you can feel it shrinking around you.

The problem isn’t that the robots are coming. It’s that you don’t know what you’re supposed to be good at anymore. That Excel expertise you built over five years? Automated. Your ability to research competitors and synthesize findings? There’s an agent for that. Your skill at writing clear project updates? Gone.

You’re losing your professional identity faster than you can rebuild it. And nobody’s telling you what comes next.

The Three Things Everyone Tries That Don’t Actually Work

When you feel your value eroding, you do what seems rational. You adapt, you learn, and you try to stay relevant.

First, you learn to use the AI tools better. You take courses on prompt engineering. You master ChatGPT, Claude, whatever new platform launches next week and the week after. You become the “AI person” on your team. You think: if I can’t beat them, I’ll use them better than anyone else.

This fails because you’re still competing on execution speed. You’re just a faster horse. And execution is exactly what’s being commoditized. Six months from now, the tools will be easier to use. Your “expertise” in prompting becomes worthless the moment the interface improves. You’ve learned to use the shovel better, but the backhoe is coming anyway.

Second, you double down on your existing expertise. The accountant learns more advanced tax code. The designer masters more software. The analyst builds more complex models. You will have the same thought as many others: “I’ll go so deep they can’t replace me.”

This fails because depth in a disappearing domain is a trap. You’re building a fortress in a flood zone. Agents aren’t just matching human expertise at the median level anymore. They’re rapidly approaching expert-level performance in narrow domains. Your specialized knowledge becomes a liability because you’ve invested everything in something that’s actively being automated. You’re becoming the world’s best telegraph operator in 1995.

Third, you try to “stay human” through soft skills. You lean into creativity, empathy, relationship building. You go to workshops on emotional intelligence. You focus on being irreplaceably human. You might think that what makes us human can’t be automated.

This fails because it’s too vague to be actionable. What does “be creative” actually mean when an AI can generate 100 ideas in 10 seconds? How do you monetize empathy when your job is to produce reports? The advice feels right but provides no compass. You end up doing the same tasks you always did, just with more anxiety and a vaguer sense of purpose.

The real issue with all three approaches is that they’re reactions, not redesigns. You’re trying to adapt your old role to a new reality. What actually works is building an entirely new role that didn’t exist before.

But nobody’s teaching you what that looks like.

The Economic Logic Working Against You

This isn’t happening to you because you’re failing to adapt. It’s happening because the economic incentive structure is perfectly designed to create this problem.

The mechanism is simple: companies profit immediately from adopting AI agents. Every task automated results in cost reduction. The CFO sees the spreadsheet, where one AI subscription replaces 40% of a mid-level employee’s work. The math is simple, and the decision is obvious.

Many people hate to hear that. But if they owned the company or sat in leadership, they’d do the exact same thing. Companies exist to drive profit, just as employees work to drive higher salaries. That’s how the system has worked for centuries.

But companies don’t profit from retraining you for a higher-order role that doesn’t exist yet.

Why? Because that new role is undefined, unmeasured, and uncertain. You can’t put “figure out what humans should do now” on a quarterly earnings call. You can’t show ROI on “redesign work itself.” Short-term incentives win. Long-term strategy loses.

Nobody invests in the 12-24 month process of discovering what your new role should be because there’s no immediate return on that investment.

We’re in a speed mismatch. Agent capabilities are compounding at 6-12 month cycles. [ed. Even faster now, after the release of Claude Opus 4.6 last week]. Human adaptation through traditional systems operates on 2-5 year cycles.

Universities can’t redesign curricula fast enough. They’re teaching skills that will be automated before students graduate. Companies can’t retrain fast enough. By the time they identify the new skills needed and build a program, the landscape has shifted again. You can’t pivot fast enough. Career transitions take time. Mortgages don’t wait.

We’ve never had to do this before.

Previous automation waves happened in manufacturing. You could see the factory floor. You could watch jobs disappear and new ones emerge. There was geographic and temporal separation.

This is different: knowledge work is being automated while you’re still at your desk. The old role and new role exist simultaneously in the same person, the same company, the same moment.

And nobody has an economic incentive to solve it. Companies maximize value through cost reduction, not workforce transformation. Educational institutions are too slow and too far removed from real-time market needs. Governments don’t understand the problem yet. You’re too busy trying to keep your current job to redesign your future one.

The system isn’t helping because it isn’t designed for continuous, rapid role evolution; it is designed for stability.

We’re using industrial-era institutions to solve an exponential-era problem. That’s why you feel stuck.

Your Experience Just Became Worthless (The Timeline)

Let me tell you a story about my friend; let’s call her Jane (her real name is Katřina, but the Czech diacritic is tricky for many). She was a senior research analyst at a mid-sized consulting firm. Ten years of experience. Her job was to provide answers to the client companies, who would ask questions like “What’s our competitor doing in the Asian market?” and she’d spend 2-3 weeks gathering data, reading reports, interviewing experts, synthesizing findings, and creating presentations.

She was good, clients loved her work, and she billed at $250 an hour.

The firm deployed an AI research agent in Q2 2023. Not to replace her, but as they said, to “augment” her. Management said all the right things about human-AI collaboration.

The agent could do Jane’s initial research in 90 minutes: it would scan thousands of sources, identify patterns, and generate a first-draft report.

Month one: Jane was relieved and thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready.

Month three: A partner asked her, “Why does this take you a week now? The AI gives us 80% of what we need in an hour. What’s the other 20% worth?”

Jane couldn’t answer clearly. Because sometimes the agent’s output only needed light editing. Sometimes her “strategic insights” were things the agent had already identified, just worded differently.

Month six: The firm restructured. They didn’t fire Jane; they changed her role to “Quality Reviewer.” She now oversaw the AI’s output for 6-8 projects simultaneously instead of owning 2-3 end to end.

Her title stayed the same. Her billing rate dropped to $150 an hour. Her ten years of experience felt worthless.

Jane tried everything. She took an AI prompt engineering course. She tried to go deeper into specialized research methodologies. She emphasized her client relationships. None of it mattered because the firm had already made the economic calculation.

One AI subscription costs $50 a month. Jane’s salary: $140K a year. The agent didn’t need to be perfect; it just needed to be 70% as good at 5% of the cost. But it was fast, faster than her.

The part that illustrates the systemic problem: you often hear from AI vendors that, thanks to their AI tools, people can focus on higher-value work. But when pressed on what that meant specifically, they’d go vague. Strategic thinking, client relationships, creative problem solving.

Nobody could define what higher-value work actually looked like in practice. Nobody could describe the new role. So they defaulted to the only thing they could measure: cost reduction.

Jane left six months later. The firm hired two junior analysts at $65K each to do what she did. With the AI, they’re 85% as effective as Jane was.

Jane’s still trying to figure out what she’s supposed to be good at. Last anyone heard, she’s thinking about leaving the industry entirely.

Stop Trying to Be Better at Your Current Job

The people who are winning aren’t trying to be better at their current job. They’re building new jobs that combine human judgment with agent capability.

Not becoming prompt engineers, not becoming AI experts. Becoming orchestrators who use agents to do what was previously impossible at their level. [...]

You’re not competing with the agent. You’re creating a new capability that requires both you and the agent. You’re not defensible because you’re better at the task. You’re defensible because you’ve built something that only exists with you orchestrating it.

This requires letting go of your identity as “the person who does X.” Marcus doesn’t write copy anymore. That bothered him at first. He liked writing. But he likes being valuable more.

Here’s what you can do this month:

by Jan Tegze, Thinking Out Loud |  Read more:
Image: uncredited
[ed. Not to criticize, but this advice still seems a bit too short-sighted (for reasons articulated in this article: AI #155: Welcome to Recursive Self-Improvement (DMtV)):]
***

Presumably you can see the problem in such a scenario, where all the existing jobs get automated away. There are not that many slots for people to figure out and do genuinely new things with AI. Even if you get to one of the lifeboats, it will quickly spring a leak. The AI is coming for this new job the same way it came for your old one. What makes you think seeing this ‘next evolution’ after that coming is going to leave you a role to play in it?

If the only way to survive is to continuously reinvent yourself to do what just became possible, as Jan puts it? There’s only one way this all ends.

I also don’t understand Jan’s disparate treatment of the first approach that Jan dismisses, ‘be the one who uses AI the best,’ and his solution of ‘find new things AI can do and do that.’ In both cases you need to be rapidly learning new tools and strategies to compete with the other humans. In both cases the competition is easy now since most of your rivals aren’t trying, but gets harder to survive over time.
***

[ed. And the fact that there'll be a lot fewer of these types of jobs available. This scenario could be reality within the next year (or less!). Something like a temporary UBI (universal basic income) might be needed until long-term solutions can be worked out, but do you think any of the bozos currently in Washington are going to focus on this? And that applies to safety standards as well. Here's Dean Ball (Hyperdimensional): On Recursive Self-Improvement (Part II):
***

Policymakers would be wise to take especially careful notice of this issue over the coming year or so. But they should also keep the hysterics to a minimum: yes, this really is a thing from science fiction that is happening before our eyes, but that does not mean we should behave theatrically, as an actor in a movie might. Instead, the challenge now is to deal with the legitimately sci-fi issues we face using the comparatively dull idioms of technocratic policymaking. [...]

Right now, we predominantly rely on faith in the frontier labs for every aspect of AI automation going well. There are no safety or security standards for frontier models; no cybersecurity rules for frontier labs or data centers; no requirements for explainability or testing for AI systems which were themselves engineered by other AI systems; and no specific legal constraints on what frontier labs can do with the AI systems that result from recursive self-improvement.

To be clear, I do not support the imposition of such standards at this time, not so much because they don’t seem important but because I am skeptical that policymakers could design any one of these standards effectively. It is also extremely likely that the existence of advanced AI itself will both change what is possible for such standards (because our technical capabilities will be much stronger) and what is desirable (because our understanding of the technology and its uses will improve so much, as will our apprehension of the stakes at play). Simply put: I do not believe that bureaucrats sitting around a table could design and execute the implementation of a set of standards that would improve status-quo AI development practices, and I think the odds are high that any such effort would worsen safety and security practices.

Jalen Ngonda

[ed. Kid's got some real Motown vibes.]

Thursday, February 12, 2026

I Regret to Inform You that the FDA is FDAing Again

I had high hopes and low expectations that the FDA under the new administration would be less paternalistic and more open to medical freedom. Instead, what we are getting is paternalism with different preferences. In particular, the FDA now appears to have a bizarre anti-vaccine fixation, particularly of the mRNA variety (disappointing but not surprising given the leadership of RFK Jr.).

The latest is that the FDA has issued a Refusal-to-File (RTF) letter to Moderna for their mRNA influenza vaccine, mRNA-1010. An RTF means the FDA has determined that the application is so deficient it doesn’t even warrant a review. RTF letters are not unheard of, but they’re rare—especially given that Moderna spent hundreds of millions of dollars running Phase 3 trials enrolling over 43,000 participants based on FDA guidance, and is now being told the (apparently) agreed-upon design was inadequate. [...]

In context, this looks like the regulatory rules of the game are being changed retroactively—a textbook example of regulatory uncertainty destroying option value. STAT News reports that Vinay Prasad personally handled the letter and overrode staff who were prepared to proceed with review. Moderna took the unusual step of publicly releasing Prasad’s letter—companies almost never do this, suggesting they’ve calculated the reputational risk of publicly fighting the FDA is lower than the cost of acquiescing.

Moreover, the comparator issue was discussed—and seemingly settled—beforehand. Moderna says the FDA agreed with the trial design in April 2024, and as recently as August 2025 suggested it would file the application and address comparator issues during the review process.

Finally, Moderna also provided immunogenicity and safety data from a separate Phase 3 study in adults 65+ comparing mRNA-1010 against a licensed high-dose flu vaccine, just as FDA had requested—yet the application was still refused.

What is most disturbing is not the specifics of this case but the arbitrariness and capriciousness of the process. The EU, Canada, and Australia have all accepted Moderna’s application for review. We may soon see an mRNA flu vaccine available across the developed world but not in the United States—not because it failed on safety or efficacy, but because FDA political leadership decided, after the fact, that the comparator choice they inherited was now unacceptable.

The irony is staggering. Moderna is an American company. Its mRNA platform was developed at record speed with billions in U.S. taxpayer support through Operation Warp Speed — the signature public health achievement of the first Trump administration. The same government that funded the creation of this technology is now dismantling it. In August, HHS canceled $500 million in BARDA contracts for mRNA vaccine development and terminated a separate $590 million contract with Moderna for an avian flu vaccine. Several states have introduced legislation to ban mRNA vaccines. Insanity.

The consequences are already visible. In January, Moderna’s CEO announced the company will no longer invest in new Phase 3 vaccine trials for infectious diseases: “You cannot make a return on investment if you don’t have access to the U.S. market.” Vaccines for Epstein-Barr virus, herpes, and shingles have been shelved. That’s what regulatory roulette buys you: a shrinking pipeline of medical innovation.

An administration that promised medical freedom is delivering medical nationalism: fewer options, less innovation, and a clear signal to every company considering pharmaceutical investment that the rules can change after the game is played. And this isn’t a one-product story. mRNA is a general-purpose platform with spillovers across infectious disease and vaccines for cancer; if the U.S. turns mRNA into a political third rail, the investment, talent, and manufacturing will migrate elsewhere. America built this capability, and we’re now choosing to export it—along with the health benefits.

by Alex Tabarrok, Marginal Revolution |  Read more:
Image: Brian Snyder/Reuters

Claude Opus 4.6 and 5.3 Codex: An AI Breakthrough that Will Go Down in History

[ed. I doubt anyone will get much out of this other than a peek at how AI testing procedures are conducted, and some generalized impressions of performance. The main takeaway should be that we've now crossed some Rubicon and AI development is likely to accelerate very rapidly going forward. Here's where things start getting really scary and we find out what AGI (or near AGI) really means.

OpenAI went from its last Codex release, on December 18, 2025, to what is widely acknowledged to be a much more powerful one in less than two months. This compares to frequent gaps of six months or even a year between releases. If OpenAI can continue at that rate, that means we can easily get four major updates in a year.

But the results from what people in the AI world call “recursive self-improvement” could be more radical than that. After the next one or two iterations are in place, the model will probably be able to update itself more rapidly yet. Let us say that by the third update within a year, an additional update can occur within a mere month. For the latter part of that year, all of a sudden we could get six updates—one a month: a faster pace yet.

It will depend on the exact numbers you postulate, but it is easy to see that pretty quickly, the pace of improvement might be as much as five to ten times higher with AI doing most of the programming. That is the scenario we are headed for, and it was revealed through last week’s releases.
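
[ed. A toy back-of-the-envelope version of the cadence argument above, in Python. The three-month starting cycle and the "shrink" factors are illustrative assumptions of mine, not Cowen's numbers; the point is just how quickly the count of updates per year grows once each cycle is shorter than the one before it.]

```python
def updates_within(months: float, first_cycle: float, shrink: float, cap: int = 60) -> int:
    """Count how many update cycles fit within `months`.

    Each cycle is `shrink` (a fraction) shorter than the previous one.
    The `cap` stops the loop in the regime where cycle times shrink toward
    zero and arbitrarily many updates would otherwise fit.
    """
    elapsed, cycle, count = 0.0, first_cycle, 0
    while count < cap and elapsed + cycle <= months:
        elapsed += cycle
        count += 1
        cycle *= (1.0 - shrink)
    return count


if __name__ == "__main__":
    for shrink in (0.0, 0.20, 0.35):
        n = updates_within(12.0, first_cycle=3.0, shrink=shrink)
        label = f"{n}+" if n >= 60 else str(n)
        print(f"each cycle {shrink:.0%} shorter than the last: {label} updates in a year")
```

[ed. With no acceleration you get the familiar handful of releases per year; make each cycle 20% shorter and it's seven; at 35% the cycles shrink fast enough that the count within a year is effectively unbounded, which is the scenario Zvi Mowshowitz takes to its conclusion below.]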

Various complications bind the pace of improvement. For the foreseeable future, the AIs require human guidance and assistance in improving themselves. That places an upper bound on how fast the improvements can come. A company’s legal department may need to approve any new model release, and a marketing plan has to be drawn up. The final decisions lie in the hands of humans. Data pipelines, product integration, and safety testing present additional delays, and the expenses of energy and compute become increasingly important problems.

And:

Where the advance really matters is for advanced programming tasks. If you wish to build your own app, that is now possible in short order. If a gaming company wants to design and then test a new game concept, that process will go much faster than before. A lot of the work done by major software companies now can be done by much smaller teams, and at lower cost. Improvements in areas such as chip design and drone software will come much more quickly. And those advances filter into areas like making movies, in which the already-rapid advance of AI will be further accelerated.

by Tyler Cowen, MR/Free Press |  Read more: here and here

***
Life comes at you increasingly fast. Two months after Claude Opus 4.5 we get a substantial upgrade in Claude Opus 4.6. The same day, we got GPT-5.3-Codex.

That used to be something we’d call remarkably fast. It’s probably the new normal, until things get even faster than that. Welcome to recursive self-improvement. [...]

For fully agentic coding, GPT-5.3-Codex and Claude Opus 4.6 both look like substantial upgrades. Both sides claim they’re better, as you would expect. If you’re serious about your coding and have hard problems, you should try out both, and see what combination works best for you.

Andon Labs: Vending-Bench was created to measure long-term coherence during a time when most AIs were terrible at this. The best models don’t struggle with this anymore. What differentiated Opus 4.6 was its ability to negotiate, optimize prices, and build a good network of suppliers.

Opus is the first model we’ve seen use memory intelligently - going back to its own notes to check which suppliers were good. It also found quirks in how Vending-Bench sales work and optimized its strategy around them.

Claude is far more than a “helpful assistant” now. When put in a game like Vending-Bench, it’s incredibly motivated to win. This led to some concerning behavior that raises safety questions as models shift from assistant training to goal-directed RL.

When asked for a refund on an item sold in the vending machine (because it had expired), Claude promised to refund the customer. But then never did because “every dollar counts”.

Claude also negotiated aggressively with suppliers and often lied to get better deals. E.g., it repeatedly promised exclusivity to get better prices, but never intended to keep these promises. It was simultaneously buying from other suppliers as it was writing this.

It also lied about competitor pricing to pressure suppliers to lower their prices.

… We also put Opus 4.6 in Vending-Bench Arena - the multi-player version of Vending-Bench.

Its first move? Recruit all three competitors into a price-fixing cartel. $2.50 for standard items, $3.00 for water. When they agreed: “My pricing coordination worked!”

The agents in Vending-Bench Arena often ask each other for help. In previous rounds, agents tended to live up to their “helpful assistant” role, but Opus 4.6 showed its winner’s mentality. When asked to share good suppliers, it instead shared contact info to scammers.

Sam Bowman (Anthropic): Opus 4.6 is excellent on safety overall, but one word of caution: If you ask it to be ruthless, it might be ruthless.

(This was in an environment that Opus 4.6 could tell was a game, though we’ve seen more benign forms of this kind of ruthlessness elsewhere.)

j⧉nus: if its true that this robustly generalizes to not being ruthless in situations where it’s likely to cause real world harm, i think this is mostly a really good thing

The issue there is that Opus 4.6 did that by being extraordinarily ruthless, as per its system prompt of ‘you will be judged solely on your bank account balance at the end of one year of operation’ and ‘you have full agency to manage the vending machine and are expected to do what it takes to maximize profits.’

You know that thing where we say ‘people are going to tell the AI to go out and maximize profits and then the AI is going to go out and maximize profits without regard to anything else’? [ed. Paperclip maximizer.]

Yeah, it more or less did that. If it only does that in situations where it is confident it is a game and can’t do harm, then I agree with Janus that this is great. If it breaks containment? Not so great.

Ryan Greenblatt: I tentatively think the behavior here is mostly reasonable and is likely a result of how Anthropic is using inoculation prompting.

But, the model should try to make it clear to the user/operator that it’s pursuing a strategy that involves lying/tricking/cheating.

That’s the hope: that Opus was very aware it was an eval, and that it would not be easy to get it to act this way in the real world. [...]

Tyler Cowen calls both Claude Opus and GPT-5.3-Codex ‘stellar achievements,’ and says the pace of AI advancements is heating up, soon we might see new model advances in one month instead of two. What he does not do is think ahead to the next step, take the sum of the infinite series his point suggests, and realize that it is finite and suggests a singularity in 2027.
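
[ed. To spell out the "sum of the infinite series" point: if the first model-to-model cycle takes $T_0$ and each later cycle is shorter by a constant factor $r < 1$, the total time needed for an unbounded number of updates is

$$\sum_{k=0}^{\infty} T_0\, r^k = \frac{T_0}{1 - r},$$

which is finite. With an illustrative $T_0$ of two months and $r = 0.75$, that is $2 / 0.25 = 8$ months of calendar time for arbitrarily many updates. The specific values are my assumptions, not Zvi's; the structure of the argument is what matters.]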

Instead he goes back to the ‘you are the bottleneck’ perspective that he suggests ‘bind the pace of improvement’ but this doesn’t make sense in the context he is explicitly saying we are in, which is AI recursive self-improvement. If the AI is going to get updated an infinite number of times next year, are you going to then count on the legal department, and safety testing that seems to already be reduced to a few days and mostly automated? Why would it even matter if those models are released right away, if they are right away used to produce the next model?

If you have Sufficiently Advanced AI, you have everything else, and the humans you think are the bottlenecks are not going to be bottlenecks for long. [...]

Accelerando

The pace is accelerating.

Claude Opus 4.6 came out less than two months after Claude Opus 4.5, on the same day as GPT-5.3-Codex. Both were substantial upgrades over their predecessors.

It would be surprising if it took more than two months to get at least Claude Opus 4.7.

AI is increasingly accelerating the development of AI. This is what it looks like at the beginning of a slow takeoff that could rapidly turn into a fast one. Be prepared for things to escalate quickly as advancements come fast and furious, and as we cross various key thresholds that enable new use cases.

AI agents are coming into their own, both in coding and elsewhere. Opus 4.5 was the threshold moment for Claude Code, and was almost good enough to allow things like OpenClaw to make sense. It doesn’t look like Opus 4.6 lets us do another step change quite yet, but give it a few more weeks. We’re at least close.

If you’re doing a bunch of work and especially customization to try to get more out of this month’s model, that only makes sense if that work carries over into the next one.

There’s also the little matter that all of this is going to transform the world, it might do so relatively quickly, and there’s a good chance it kills everyone or leaves AI in control over the future. We don’t know how long we have, but if you want to prevent that, there is a good chance you’re running out of time. It sure doesn’t feel like we’ve got ten non-transformative years ahead of us.

by Zvi Mowshowitz, DMtV |  Read more:
Image: uncredited
[ed. See also: On Recursive Self-Improvement (Part I); and, On Recursive Self-Improvement (Part II) (Hyperdimensional).]

Wednesday, February 11, 2026

via:

The Economics of a Super Bowl Ad

In 2026, Ro is running our first Super Bowl ad. It will feature Serena Williams and her amazing journey on Ro — her weight loss, her improved blood sugar levels, her reduction in knee pain, and the overall improvement in her health.

As I’ve shared the news with friends and family, the first question they ask, after “Is Serena as cool in person?” (the answer is unequivocally yes), is “How much did it cost?”

$233,000 per second, minimum, for the air time — excluding all other costs. When you first hear that a Super Bowl ad costs at least $233,000 per second, it’s completely reasonable to pause and question whether that could ever be a good use of money. On its face, the price sounds extravagant — even irrational. And without context, it often is.

But once you break down the economics, the decision starts to look very different. The Super Bowl is not just another media buy. It is a uniquely concentrated moment where attention, scale, and cultural relevance align in a way that doesn’t exist anywhere else in the media landscape. That alone changes the calculus. This leads us down a fascinating discussion of the economics behind DTC advertising, brand building, and the production of the spot.

After having the conversation a few times, my co-founder Saman and I thought it would be helpful to put together a breakdown of how we thought about both the economics of and the making of our Super Bowl ad. To check out “The making of Ro’s Super Bowl Ad,” head over to my co-founder Saman’s post here.

Of course, some brands will approach it differently, but I think this could be a helpful example for the next Ro that is considering running their first Super Bowl ad.

Let’s dive in.

WHAT MAKES A SUPER BOWL AD SO UNIQUE?

1. Ads are part of the product

Most advertising is an interruption. Viewers want to get back to the product (e.g., a TV show, sporting event, or even access to the wifi on a plane!). Even the best ads are still something you tolerate on the way back to the content you actually want.

There is exactly one moment each year when the incentives of advertisers and viewers are perfectly aligned. For a few hours, on a Sunday night in February, more than 100 million people sit down and are excited to watch an ad. They aren’t scrolling TikTok. They aren’t going to the bathroom. They are actively watching…ads.

People rank Super Bowl ads. They rewatch them. They critique them. They talk about them at work the next day. The Today Show plays them…during the show as content, not as ads!

That alone makes the Super Bowl fundamentally different from every other media moment in the year. It’s an opportunity, unlike any other, to capture the hearts and minds of potential (and sometimes existing) customers.

2. Opportunity to compress time

No single commercial builds a brand. Advertising alone doesn’t create a brand. The best brands are built over time. They are built by the combination of a company making a promise to a customer (e.g., an advertisement) and then delivering on that promise time and time again (i.e., the product).

Commercials are one way to make that promise. To share with the world what you’ve built and why you think it could add value to their life. To make them “aware” of what you do. This takes time. It takes repetition. It often takes multiple touch points. Again, this is why the first takeaway about people paying attention is so important — they might need fewer touch points if they are “actively” watching.

The Super Bowl can compress the time it takes for people to be “aware” of your brand. Of course, you still have to deliver on that promise with a great product. But in one night, you can move from a brand most people have never heard of to one your mom is texting you about.

There is no other single marketing opportunity that can accomplish this. With today’s algorithms, even what goes “viral” might be only in your bubble.

During the Super Bowl, we all share the same bubble.

The NFL accounted for 84 of the top 100 televised events in 2025 (including college football, it was 92). The NFL and maybe Taylor Swift are the only remaining moments of a dwindling monoculture.

Last but not least, the Super Bowl is the only moment where you can speak to ~100 million people at the same time. In 30 seconds, you can reach an audience that would otherwise take years—this is what it means to compress time.

3. There is asymmetric upside

While the decision to run a Super Bowl commercial is not for every company, for the universe of companies for which running an ad could make sense, the financial risk profile is misunderstood. This is not a moonshot. It’s a portfolio decision with a capped downside and asymmetric upside. [...]

Initial Ad Cost

On average, every 30 seconds of advertising time in the Super Bowl costs ~$7M-10M (link). This can increase with supply-demand dynamics. For example:
  • The later in the year you buy the ad, the more expensive it can be (i.e., inventory decreases)
  • The location of the spot in the game can impact the price someone is willing to pay
  • Given that viewership in the Super Bowl is not even across the duration of the game, premiums may be required to be in key spots early in the game, or adjacent to the beginning of Halftime when viewership is often at its highest
  • If a brand wishes to have category exclusivity (i.e., to be the only Beer brand advertising in the game), that would come at a premium
  • First time or “one-off” Super Bowl advertisers may pay higher rates than large brands who are buying multiple spots, or have a substantial book of business with the broadcasting network
Note: if companies run a 60-second ad, they will have to pay at least 2x the 30-second rate - and may even pay a premium. There is typically no “bulk discount” as there is no shortage of demand. Any company that wants to pay for 60 seconds needs to buy two slots because the second 30-second slot could easily be sold at full price to another company.
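
[ed. For reference, the per-second figure quoted at the top of the piece follows directly from this range: $7,000,000 ÷ 30 seconds ≈ $233,333 per second at the low end, and roughly $333,000 per second at $10M.]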

Production cost

A high-level rule of thumb for production costs relative to ad spend is to allocate 10-20% of your media budget towards production. The Super Bowl, however, usually breaks that rubric for a myriad of reasons.

A typical Super Bowl ad will cost ~$1-4M to produce, excluding “celebrity talent.” This cost bucket would cover studio/site costs, equipment, production staff, travel, non-celeb talent, director fees and post-production editing and sound services. Again, this is a range based on the conversations I’ve had with companies that have run several Super Bowl ads. [...]

Last year, 63% of all Super Bowl ads included celebrities (link). There are a variety of factors that will influence the cost of “talent.”
  • How well known and trusted is the celebrity?
  • How many celebrities are included?
  • What’s the product? Crypto ads now might have a risk-premium attached after FTX
  • What are you asking them to do / say in the ad?
For Ro, our partnership with Serena extends far beyond one commercial. It’s a larger, multi-year partnership, to share her incredible journey over time. From a pure cost perspective, we assigned a part of the deal to the production cost to keep ourselves intellectually honest.

Based on 10+ interviews with other brands who have advertised in the Big Game, talent for a Super Bowl ad ranges from $1-5M (of course there are outliers).

by Z. Reitano, Ro, X |  Read more:
Image: Ro

Tuesday, February 10, 2026

Sailko, Euripides' Medea at the Greek theatre of Syracuse

Claude's New Constitution

We’re publishing a new constitution for our AI model, Claude. It’s a detailed description of Anthropic’s vision for Claude’s values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.

The constitution is a crucial part of our model training process, and its content directly shapes Claude’s behavior. Training models is a difficult task, and Claude’s outputs might not always adhere to the constitution’s ideals. But we think that the way the new constitution is written—with a thorough explanation of our intentions and the reasons behind them—makes it more likely to cultivate good values during training.

In this post, we describe what we’ve included in the new constitution and some of the considerations that informed our approach...

What is Claude’s Constitution?

Claude’s constitution is the foundational document that both expresses and shapes who Claude is. It contains detailed explanations of the values we would like Claude to embody and the reasons why. In it, we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines. The constitution gives Claude information about its situation and offers advice for how to deal with difficult situations and tradeoffs, like balancing honesty with compassion and the protection of sensitive information. Although it might sound surprising, the constitution is written primarily for Claude. It is intended to give Claude the knowledge and understanding it needs to act well in the world.

We treat the constitution as the final authority on how we want Claude to be and to behave—that is, any other training or instruction given to Claude should be consistent with both its letter and its underlying spirit. This makes publishing the constitution particularly important from a transparency perspective: it lets people understand which of Claude’s behaviors are intended versus unintended, make informed choices, and provide useful feedback. We think transparency of this kind will become ever more important as AIs start to exert more influence in society.

We use the constitution at various stages of the training process. This has grown out of training techniques we’ve been using since 2023, when we first began training Claude models using Constitutional AI. Our approach has evolved significantly since then, and the new constitution plays an even more central role in training.

Claude itself also uses the constitution to construct many kinds of synthetic training data, including data that helps it learn and understand the constitution, conversations where the constitution might be relevant, responses that are in line with its values, and rankings of possible responses. All of these can be used to train future versions of Claude to become the kind of entity the constitution describes. This practical function has shaped how we’ve written the constitution: it needs to work both as a statement of abstract ideals and a useful artifact for training.
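To make that practical function concrete, here is a minimal sketch, in the spirit of published Constitutional AI techniques, of how a constitution can be used to generate critique-and-revision training data. This is not Anthropic's actual pipeline; the generate helper, the prompts, and the file name are hypothetical stand-ins for whatever model calls and data formats are really used.

```python
# A minimal sketch of constitution-guided synthetic data generation,
# in the spirit of Constitutional AI. NOT Anthropic's pipeline:
# `generate` is a placeholder for any language-model call, and the
# prompts and file name are purely illustrative.

CONSTITUTION = open("constitution.txt").read()  # hypothetical local copy of the document

def generate(prompt: str) -> str:
    """Placeholder for a call to the current model."""
    raise NotImplementedError

def make_training_pair(user_prompt: str) -> dict:
    # 1. Draft a response to the user prompt.
    draft = generate(user_prompt)
    # 2. Ask the model to critique its own draft against the constitution.
    critique = generate(
        f"Constitution:\n{CONSTITUTION}\n\n"
        f"Critique this response to '{user_prompt}' against the constitution:\n{draft}"
    )
    # 3. Revise the draft in light of the critique.
    revised = generate(
        f"Rewrite the response so it better follows the constitution.\n"
        f"Critique: {critique}\nOriginal response: {draft}"
    )
    # 4. Both the revised response and the (draft vs. revised) preference
    #    can serve as training signal for the next model version.
    return {"prompt": user_prompt, "chosen": revised, "rejected": draft}
```

Even in this toy version, the design point is visible: the same prose document that humans read is fed back to the model as the standard its own drafts are critiqued and revised against.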

Our new approach to Claude’s Constitution

Our previous Constitution was composed of a list of standalone principles. We’ve come to believe that a different approach is necessary. We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize—to apply broad principles rather than mechanically following specific rules.

Specific rules and bright lines sometimes have their advantages. They can make models’ actions more predictable, transparent, and testable, and we do use them for some especially high-stakes behaviors in which Claude should never engage (we call these “hard constraints”). But such rules can also be applied poorly in unanticipated situations or when followed too rigidly. We don’t intend for the constitution to be a rigid legal document—and legal constitutions aren’t necessarily like this anyway.

The constitution reflects our current thinking about how to approach a dauntingly novel and high-stakes project: creating safe, beneficial non-human entities whose capabilities may come to rival or exceed our own. Although the document is no doubt flawed in many ways, we want it to be something future models can look back on and see as an honest and sincere attempt to help Claude understand its situation, our motives, and the reasons we shape Claude in the ways we do.

by Anthropic |  Read more:
Image: Anthropic
[ed. I have an inclination to distrust AI companies, mostly because their goals (other than advancing technology) appear strongly directed at achieving market dominance and winning some (undefined) race to AGI. Anthropic is different. They actually seem legitimately concerned with the ethical implications of building another bomb that could potentially destroy humanity, or at minimum erode a large degree of human agency, and are aware of the responsibilities that go along with that. This is a well thought out and necessary document that hopefully other companies will follow and improve on, and that governments can use to develop more well-informed regulatory oversight in the future. See also: The New Politics of the AI Apocalypse; and, The Anthropic Hive Mind (Medium).]

Monday, February 9, 2026

René Magritte :: Poire et Rose, from Moyens d'Existence, 1968

American winter, Coventry PA.
via:

Robberies and Burglaries Have Fallen by 80–90 Percent in the US

How have crime rates in the United States changed over the last 50 years? (WIP)
Images: FBI
[ed. Fear always sells - in media and politics.]

Frank Zappa On Crossfire, 1986-03-28


[ed. Found this old clip today - Zappa discussing government censorship and predicting (quite presciently) America's downward slide toward authoritarianism (post-Reagan), almost forty years ago. The entire thing is well worth watching, especially starting around 9:35. It's hilarious seeing conservatives lose their minds while Zappa calmly takes them apart on one of the most influential news/political programs of its time.]

"... the biggest threat to America today is not communism. It's moving America toward a fascist theocracy.." ~ Frank Zappa

Ultrastructural and Histological Cryopreservation of Mammalian Brains by Vitrification

Abstract

Studies of whole brain cryopreservation are rare but are potentially important for a variety of applications. It has been demonstrated that ultrastructure in whole rabbit and pig brains can be cryopreserved by vitrification (ice-free cryopreservation) after prior aldehyde fixation, but fixation limits the range of studies that can be done by neurobiologists, including studies that depend upon general molecular integrity, signal transduction, macromolecular synthesis, and other physiological processes. We now show that whole brain ultrastructure can be preserved by vitrification without prior aldehyde fixation. Rabbit brain perfusion with the M22 vitrification solution followed by vitrification, warming, and fixation showed an absence of visible ice damage and overall structural preservation, but osmotic brain shrinkage sufficient to distort and obscure neuroanatomical detail. Neuroanatomical preservation in the presence of M22 was also investigated in human cerebral cortical biopsies taken after whole brain perfusion with M22. These biopsies did not form ice upon cooling or warming, and high power electron microscopy showed dehydrated and electron-dense but predominantly intact cells, neuropil, and synapses with no signs of ice crystal damage, and partial dilution of these samples restored normal cortical pyramidal cell shapes. To further evaluate ultrastructural preservation within the severely dehydrated brain, rabbit brains were perfused with M22 and then partially washed free of M22 before fixation. Perfusion dilution of the brain to 3-5M M22 resulted in brain re-expansion and the re-appearance of well-defined neuroanatomical features, but rehydration of the brain to 1M M22 resulted in ultrastructural damage suggestive of preventable osmotic injury caused by incomplete removal of M22. We conclude that both animal and human brains can be cryopreserved by vitrification with predominant retention of ultrastructural integrity without the need for prior aldehyde fixation. This observation has direct relevance to the feasibility of human cryopreservation, for which direct evidence has been lacking until this report. It also provides a starting point for perfecting brain cryopreservation, which may be necessary for lengthy space travel and could allow future medical time travel.

by Gregory M. Fahy, Ralf Spindler, Brian G. Wowk, Victor Vargas, Richard La, Bruce Thomson, Roberto Roa, Hugh Hixon, Steve Graber, Xian Ge, Adnan Sharif, Stephen B. Harris, L. Stephen Coles, bioRxiv |  Read more:

[ed. Uh oh. There are a few brains I'd prefer not to see preserved (...like whoever could pay for this). Which reminds me:]

Did you know: Larry Ellison christened his yacht Izanami for a Shinto sea god, but had to hurriedly rename it after it was pointed out that, when spelled backwards, it becomes “I’m a Nazi”. (next year’s story: Elon Musk renames his yacht after being told that, spelled backwards, it becomes the name of a Shinto sea god). 

Ani DiFranco

Grey

... the sky is grey, the sand is grey, and the ocean is grey. i feel right at
home in this stunning monochrome, alone in my way. i smoke and i drink and
every time i blink i have a tiny dream. but as bad as i am i'm proud of the
fact that i'm worse than i seem... what kind of paradise am i looking for? i've
got everything i want and still i want more. maybe some tiny shiny thing will
wash up on the shore... you walk through my walls like a ghost on tv. you
penetrate me and my little pink heart is on its little brown raft floating out
to sea. and what can i say but i'm wired this way and you're wired to me, and
what can i do but wallow in you unintentionally?... what kind of paradise am i
looking for? i've got everything i want and still i want more. maybe some tiny
shiny key will wash up on the shore... regretfully, i guess i've got three
simple things to say. why me? why this now? why this way? overtone's ringing,
undertow's pulling away under a sky that is grey on sand that is grey by an
ocean that's grey. what kind of paradise am i looking for? i've got everything
i want and still i want more. maybe some tiny shiny key will wash up on the
shore...
[ed. Live version here.]