Friday, February 13, 2026

Your Job Isn't Disappearing. It's Shrinking Around You in Real Time

You open your laptop Monday morning with a question you can’t shake: Will I still have a job that matters in two years?

Not whether you’ll be employed, but whether the work you do will still mean something.

Last week, you spent three hours writing a campaign brief. You saw a colleague generate something 80% as good in four minutes using an AI agent (Claude, Gemini, ChatGPT…). Maybe 90% as good if you’re being honest.

You still have your job. But you can feel it shrinking around you.

The problem isn’t that the robots are coming. It’s that you don’t know what you’re supposed to be good at anymore. That Excel expertise you built over five years? Automated. Your ability to research competitors and synthesize findings? There’s an agent for that. Your skill at writing clear project updates? Gone.

You’re losing your professional identity faster than you can rebuild it. And nobody’s telling you what comes next.

The Three Things Everyone Tries That Don’t Actually Work

When you feel your value eroding, you do what seems rational. You adapt, you learn, and you try to stay relevant.

First, you learn to use the AI tools better. You take courses on prompt engineering. You master ChatGPT, Claude, whatever new platform launches next week and the week after. You become the “AI person” on your team. You tell yourself, if I can’t beat them, I’ll use them better than anyone else.

This fails because you’re still competing on execution speed. You’re just a faster horse. And execution is exactly what’s being commoditized. Six months from now, the tools will be easier to use. Your “expertise” in prompting becomes worthless the moment the interface improves. You’ve learned to use the shovel better, but the backhoe is coming anyway.

Second, you double down on your existing expertise. The accountant learns more advanced tax code. The designer masters more software. The analyst builds more complex models. You have the same thought as so many others: “I’ll go so deep they can’t replace me.”

This fails because depth in a disappearing domain is a trap. You’re building a fortress in a flood zone. Agents aren’t just matching human expertise at the median level anymore. They’re rapidly approaching expert-level performance in narrow domains. Your specialized knowledge becomes a liability because you’ve invested everything in something that’s actively being automated. You’re becoming the world’s best telegraph operator in 1995.

Third, you try to “stay human” through soft skills. You lean into creativity, empathy, relationship building. You go to workshops on emotional intelligence. You focus on being irreplaceably human. You tell yourself that what makes us human can’t be automated.

This fails because it’s too vague to be actionable. What does “be creative” actually mean when an AI can generate 100 ideas in 10 seconds? How do you monetize empathy when your job is to produce reports? The advice feels right but provides no compass. You end up doing the same tasks you always did, just with more anxiety and a vaguer sense of purpose.

The real issue with all three approaches is that they’re reactions, not redesigns. You’re trying to adapt your old role to a new reality. What actually works is building an entirely new role that didn’t exist before.

But nobody’s teaching you what that looks like.

The Economic Logic Working Against You

This isn’t happening to you because you’re failing to adapt. It’s happening because the economic incentive structure is perfectly designed to create this problem.

The mechanism is simple: companies profit immediately from adopting AI agents. Every task automated shows up as a cost reduction. The CFO sees the spreadsheet where one AI subscription replaces 40% of a mid-level employee’s work. The math is simple, and the decision is obvious.

Many people hate to hear that. But if they owned the company or sat in leadership, they’d do the exact same thing. Companies exist to drive profit, just as employees work to drive higher salaries. That’s how the system has worked for centuries.

But companies don’t profit from retraining you for a higher-order role that doesn’t exist yet.

Why? Because that new role is undefined, unmeasured, and uncertain. You can’t put “figure out what humans should do now” on a quarterly earnings call. You can’t show ROI on “redesign work itself.” Short-term incentives win. Long-term strategy loses.

Nobody invests in the 12-24 month process of discovering what your new role should be because there’s no immediate return on that investment.

We’re in a speed mismatch. Agent capabilities are compounding at 6-12 month cycles. [ed. Even faster now, after the release of Claude Opus 4.6 last week]. Human adaptation through traditional systems operates on 2-5 year cycles.

Universities can’t redesign curricula fast enough. They’re teaching skills that will be automated before students graduate. Companies can’t retrain fast enough. By the time they identify the new skills needed and build a program, the landscape has shifted again. You can’t pivot fast enough. Career transitions take time. Mortgages don’t wait.

We’ve never had to do this before.

Previous automation waves happened in manufacturing. You could see the factory floor. You could watch jobs disappear and new ones emerge. There was geographic and temporal separation.

This is different: knowledge work is being automated while you’re still at your desk. The old role and the new role exist simultaneously in the same person, the same company, the same moment.

And nobody has an economic incentive to solve it. Companies maximize value through cost reduction, not workforce transformation. Educational institutions are too slow and too far removed from real-time market needs. Governments don’t understand the problem yet. You’re too busy trying to keep your current job to redesign your future one.

The system isn’t helping because it isn’t designed for continuous, rapid role evolution; it is designed for stability.

We’re using industrial-era institutions to solve an exponential-era problem. That’s why you feel stuck.

Your Experience Just Became Worthless (The Timeline)

Let me tell you a story about my friend; let’s call her Jane (her real name is Kateřina, but the Czech diacritic is tricky for many). She was a senior research analyst at a mid-sized consulting firm. Ten years of experience. Her job was to provide answers for client companies. They would ask questions like “What’s our competitor doing in the Asian market?” and she’d spend 2-3 weeks gathering data, reading reports, interviewing experts, synthesizing findings, and creating presentations.

She was good, clients loved her work, and she billed at $250 an hour.

The firm deployed an AI research agent in Q2 2023. Not to replace her, but as they said, to “augment” her. Management said all the right things about human-AI collaboration.

The agent could do Jane’s initial research in 90 minutes: it would scan thousands of sources, identify patterns, and generate a first-draft report.

Month one: Jane was relieved and thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready.

Month three: A partner asked her, “Why does this take you a week now? The AI gives us 80% of what we need in an hour. What’s the other 20% worth?”

Jane couldn’t answer clearly. Because sometimes the agent’s output only needed light editing. Sometimes her “strategic insights” were things the agent had already identified, just worded differently.

Month six: The firm restructured. They didn’t fire Jane; they changed her role to “Quality Reviewer.” She now oversaw the AI’s output for 6-8 projects simultaneously instead of owning 2-3 end to end.

Her title stayed the same. Her billing rate dropped to $150 an hour. Her ten years of experience felt worthless.

Jane tried everything. She took an AI prompt engineering course. She tried to go deeper into specialized research methodologies. She emphasized her client relationships. None of it mattered because the firm had already made the economic calculation.

One AI subscription costs $50 a month, roughly $600 a year against Jane’s $140K salary. The agent didn’t need to be perfect; it just needed to be 70% as good at a fraction of a percent of the cost. And it was fast. Far faster than she was.
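To make that calculation concrete, here is a back-of-envelope sketch in Python using only the article’s illustrative figures (the $50/month subscription, the $140K salary, the “70% as good” estimate); the idea of a “cost per effective unit of output” is my own framing of the firm’s math, not something the author specifies.

```python
# Back-of-envelope version of the firm's calculation, using the article's
# illustrative numbers. These are story figures, not real pricing data.

ai_subscription_per_month = 50      # USD, from the article
jane_salary_per_year = 140_000      # USD, from the article
agent_quality = 0.70                # agent output ~70% as good, per the article

ai_cost_per_year = ai_subscription_per_month * 12        # $600
cost_ratio = ai_cost_per_year / jane_salary_per_year     # ~0.004, under 0.5%

# Discount the agent's cheaper output for its lower quality and compare
# the cost of one "Jane-equivalent" unit of work from each source.
ai_cost_per_effective_unit = ai_cost_per_year / agent_quality   # ~$857
human_cost_per_effective_unit = jane_salary_per_year / 1.0       # $140,000

print(f"AI annual cost: ${ai_cost_per_year:,.0f} ({cost_ratio:.1%} of salary)")
print(f"Cost per effective unit -- AI: ${ai_cost_per_effective_unit:,.0f}, "
      f"human: ${human_cost_per_effective_unit:,.0f}")
```

However harshly you discount the agent’s output for quality, the gap is still two orders of magnitude. That is the spreadsheet the CFO sees.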

Here’s the part that illustrates the systemic problem: AI vendors keep telling you that, thanks to their tools, people can focus on higher-value work. But when pressed on what that means specifically, they go vague. Strategic thinking, client relationships, creative problem solving.

Nobody could define what higher-value work actually looked like in practice. Nobody could describe the new role. So they defaulted to the only thing they could measure: cost reduction.

Jane left six months later. The firm hired two junior analysts at $65K each to do what she did. With the AI, they’re 85% as effective as Jane was.

Jane’s still trying to figure out what she’s supposed to be good at. Last anyone heard, she’s thinking about leaving the industry entirely.

Stop Trying to Be Better at Your Current Job

The people who are winning aren’t trying to be better at their current job. They’re building new jobs that combine human judgment with agent capability.

Not becoming prompt engineers, not becoming AI experts. Becoming orchestrators who use agents to do what was previously impossible at their level. [...]

You’re not competing with the agent. You’re creating a new capability that requires both you and the agent. You’re not defensible because you’re better at the task. You’re defensible because you’ve built something that only exists with you orchestrating it.

This requires letting go of your identity as “the person who does X.” Marcus doesn’t write copy anymore. That bothered him at first. He liked writing. But he likes being valuable more.

Here’s what you can do this month:

by Jan Tegze, Thinking Out Loud |  Read more:
Image: uncredited
[ed. Not to criticize, but this advice still seems a bit too short-sighted, for reasons articulated in this article: AI #155: Welcome to Recursive Self-Improvement (DMtV):]
***

Presumably you can see the problem in such a scenario, where all the existing jobs get automated away. There are not that many slots for people to figure out and do genuinely new things with AI. Even if you get to one of the lifeboats, it will quickly spring a leak. The AI is coming for this new job the same way it came for your old one. What makes you think the ‘next evolution’ you see coming after that is going to leave you a role to play in it?

If the only way to survive is to continuously reinvent yourself to do what just became possible, as Jan puts it? There’s only one way this all ends.

I also don’t understand Jan’s disparate treatment of the first approach he dismisses, ‘be the one who uses AI the best,’ and his solution of ‘find new things AI can do and do that.’ In both cases you need to be rapidly learning new tools and strategies to compete with the other humans. In both cases the competition is easy now, since most of your rivals aren’t trying, but gets harder to survive over time.
***

[ed. And the fact that there'll be a lot fewer of these types of jobs available. This scenario could be reality within the next year (or less!). Something like a temporary UBI (universal basic income) might be needed until long-term solutions can be worked out, but do you think any of the bozos currently in Washington are going to focus on this? And that applies to safety standards as well. Here's Dean Ball (Hyperdimensional): On Recursive Self-Improvement (Part II):
***

Policymakers would be wise to take especially careful notice of this issue over the coming year or so. But they should also keep the hysterics to a minimum: yes, this really is a thing from science fiction that is happening before our eyes, but that does not mean we should behave theatrically, as an actor in a movie might. Instead, the challenge now is to deal with the legitimately sci-fi issues we face using the comparatively dull idioms of technocratic policymaking. [...]

Right now, we predominantly rely on faith in the frontier labs for every aspect of AI automation going well. There are no safety or security standards for frontier models; no cybersecurity rules for frontier labs or data centers; no requirements for explainability or testing for AI systems which were themselves engineered by other AI systems; and no specific legal constraints on what frontier labs can do with the AI systems that result from recursive self-improvement.

To be clear, I do not support the imposition of such standards at this time, not so much because they don’t seem important but because I am skeptical that policymakers could design any one of these standards effectively. It is also extremely likely that the existence of advanced AI itself will both change what is possible for such standards (because our technical capabilities will be much stronger) and what is desirable (because our understanding of the technology and its uses will improve so much, as will our apprehension of the stakes at play). Simply put: I do not believe that bureaucrats sitting around a table could design and execute the implementation of a set of standards that would improve status-quo AI development practices, and I think the odds are high that any such effort would worsen safety and security practices.