Let’s shift gears a bit and talk about artificial intelligence. You’ve written comprehensively about the subject and present a compelling case about its dangers, the most compelling case I’ve read so far. I’ve been pretty interested in but still largely unconvinced by the perspectives of the most vocal opponents of strong general AI like Eliezer Yudkowsky and Geoffrey Hinton, but you’ve been able to bring a tempered clarity here that’s squared some circles around the matter for me. And so I’ve been wanting to ask something further:
Before we get AGI we’re likely to see more progress in the areas with the most commercial potential, but what this progress could and should look like is a hugely important but still unanswered question. What are your perspectives on the way the AI market will shape up in the near term? Will we see vertical integration, with companies like OpenAI making fully AI-powered phones? Or will sex bots become common? And what kind of products would you expect or like to see as a high-level creator?
Tomas:
I don't have fully formed opinions on the topic, so this might be a good time to think out loud.
It's not clear to me that there will be huge companies like Facebook or Google in AI.
These companies were the result of network effects, where the more users you had, the better the service became. This is true of all marketplaces, but I don't see it in AI. I see a big cost of entry to train the models, but it doesn't look like it's big enough to eliminate competition. There are already half a dozen competitors close to the cutting edge, with OpenAI, Mistral, Anthropic, Google Gemini, Meta... And odds are the training will get cheaper with better algorithms and training techniques. It also looks like Gemini Advanced is close to ChatGPT 4 in terms of performance, which suggests intelligence is an emergent quality of neural networks rather than something unique OpenAI did.
If you think about it, that makes sense. There are very few differences in genetic code between humans and other primates. Odds are the differences are mostly just more layers of neurons, and maybe a few tweaks on how they work. But the basis is the same, so it looks like we live in a universe where intelligence is an emergent property of neural networks. I'm simplifying tremendously here, but all of this seems consistent.
If this is true, it will have lots of consequences we can foresee.
One is that we will reach AGI (Artificial General Intelligence) soon enough. Just looking at computing power, we should reach AGI in one to three decades.
It might be that we already have the necessary components, but we just haven't connected them properly. ChatGPT is extremely powerful, but it's just one module that takes words in and spits words out. It doesn't have modules for things like deciding its own goals or acting on them. So of course the intelligence it shows will appear limited! That's why I believe exploring AI agents is a heavily underestimated approach to reach AGI.
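To make the missing-modules idea concrete, here is a minimal sketch of an agent loop, the kind of scaffolding people bolt onto a bare chat model. The `llm()` helper, the prompts, and the step limit are hypothetical placeholders rather than any particular product's API; the point is only to show where a goal-setting module and an action module would sit around the words-in, words-out core.

```python
# Minimal sketch of an agent loop around a bare chat model.
# `llm` is a hypothetical placeholder for a call to any language model;
# everything here is illustrative, not a specific product's API.

def llm(prompt: str) -> str:
    raise NotImplementedError("wrap the model API of your choice here")

def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # 1. Goal-setting module: decide the next sub-goal.
        plan = llm(
            f"Objective: {objective}\nProgress so far: {history}\n"
            "What is the single next step? Answer DONE if finished."
        )
        if plan.strip() == "DONE":
            break
        # 2. Action module: carry the step out. Here the "action" is just
        #    another model call, but it could be a web search, a file
        #    write, or an API call.
        result = llm(f"Carry out this step and report the result: {plan}")
        history.append(f"{plan} -> {result}")
    return history
```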
If it's true that AGI will be reached in our lifetimes, odds are the singularity will come around that time, and we'll get a superintelligence. We can't see beyond the singularity, so it's impossible to predict anything. But we can speculate about what might happen afterwards, assuming the AGI is aligned and doesn't try to kill us all.
First, nothing else really matters.
The fertility issue? Solved by the singularity, since a superintelligence (let's call it ASI) can understand the problem (and solve it), design devices to make babies, educate them better than humans would do, build robots to take care of their needs...
Wars? Most of them are due to resource scarcity, but most of that scarcity disappears after ASI. Not enough food? Increase productivity. Not enough energy? Build nuclear fission or fusion, or beam solar energy from space. Not enough raw materials? Mine them from space or transmute them.
Humans tend to focus on what mattered in the past, but that is becoming obsolete pretty fast.
Then there's the question of the interim. What will happen between now and ASI? I think productivity will explode, but it might not show up in GDP data, because a lot of the explosion will be deflationary: things that used to take lots of resources to make will suddenly take substantially less. In the short term, it will increase demand, but supply (productivity improvements) will be driven by AI while demand is mostly driven by human decisions, so odds are supply will outstrip demand, prices will fall, and industries will shrink.
The counterbalance is that it will now be much cheaper to create new companies and new markets, but those won't require as many resources to build. We will see the first billionaire solopreneurs and many unemployed people. Of course, this means inequality will increase. But wealth will be geographically spread unlike in the past, and these billionaires will be extremely mobile. The world has never seen this before. Odds are tax bases will crumble, there will be tax competition for these people, and they will be able to coordinate to influence politics in an advantageous way. I wonder if new city-states won't be built on the basis of catering to them.
Another thing I assume will happen is that the fight for attention will increase several notches, so we will need AIs to buffer us. We already have AIs that protect us from spam. Soon, our personal AIs will filter the content we get exposed to, showing only what's most relevant. At the same time, we will be able to reach more people at a scale never seen before, so we will need filters for that too. The cost of litigation might drop, so we might develop AIs that sue, countersue, and protect us from litigation without us even realizing it. Our AIs might crawl the Internet to learn about people we might be interested in meeting, contact them, or filter these contacts. In other words, the information overload will only be manageable with AI buffers between us and the world of information.
All of this assumes AIs can't build great robots. Odds are they will be able to, at which point humans won't be much different from AIs, and we'll get into a Blade Runner world where we won't know whether a person is human or robot. In such a world, most of our social needs will have the option of being met by robots, and human experiences will just be a special version of that: special because they remain scarce, not because they are better.
An interesting analogy might be art. Up until the mid-1800s, paintings became more and more realistic. Then we invented photography, realism became completely devalued, and suddenly we had impressionism, cubism, and the like. A lot of their value lies not so much in the creativity as in the fact that a human, not a machine, made the art. Something similar might happen with relationships, with the added complexity that impressionism might be creative, but odds are AIs will be more creative than humans.
Put another way, we're entering a strange world.
Sotonye: On whether gains in business efficiency mean a loss of creativity
So this is pretty huge. This has clarified a lot of the ambiguity over what AI “is” for me. If I’m understanding this the right way, the simplest way to think about AI is as a tool for adding gains to general efficiency. The past is a good leading indicator here and I think pretty much confirms this: we’ve seen the use case of “dumb AI” follow this exact sort of efficiency-promoting pattern. Interestingly, that pattern rarely gets mapped onto the future when we think about AI, maybe because current AI is so seamlessly diffused throughout the business process that it’s sort of the air we breathe; no one sees it! But this future makes a lot of sense to me.
I’m wondering now whether our future efficiency gains spell boon or bust for creative innovation and progress, and I’m trying to reason about it through analogy: for example, there’s a case to be made that the reason we get 1,000 Batman remakes every day before sunrise and a new Apple tablet mini every day after sunset may be less about a spiritual or other kind of Spenglerian decline, and more about businesses just working better, becoming extremely efficient. 90s Hollywood and 2000s Apple, without big efficient databases, may have left industry executives with only vague insight into the day-to-day of internal operations and the finer details of outward markets, so product ideas may have been greenlit that would otherwise have seemed too risky. Does efficiency create a stagnant culture, or is Spengler right that a dearth of transcendent vision creates such conditions? I am seriously desperate for good new movies and I’m worried that the age of quality is behind us!
Tomas:
I fear your analogy might be misleading, and I'll tell you why in a moment. Instead, I would use the analogy of what you and I are doing now.
30 years ago, it would have been impossible, because creators like us were extremely rare. Why? Because bringing insights to the market had high production, transaction, and distribution costs.
To get distribution, you needed to physically print a paper and distribute it with vans, or broadcast a radio or TV signal. Since that was expensive, only a few did it, and they controlled the content.
The content itself was expensive too, because production values required equipment and crews to support the shows, or research, trips, and phone calls from journalists and producers.
You needed agreements with payment processors, revenue-share agreements with different partners in the stack...
The result was that there was little content. Supply was lower than demand.
But now all these costs have been eliminated. Creating an article just takes one person's time with Substack. Creating a video takes one person's time with TikTok or YouTube. And they can live off of that.
The result has been an explosion of supply. That's what reducing the marginal cost of production does.
With the explosion in supply, a few things have happened:
Now supply outstrips demand, and we're hitting a limiting factor that we had never hit before as a species: our attention. It's now precious. It's scarce. We have to be very cautious about how to use it, and this is not something we've evolved naturally to do.
When you create so much supply, the vast majority will be shit. But some will be amazing. It's the wild west, with lots of bad things happening but also gold rushes. In other words, the distribution of content quality will change, from something narrow but reasonably high quality, to a much broader distribution that includes lots of duds and a few pieces of gold. This is how you get people like Ben Thompson or Veritasium.
Social media fulfills a double function: it crushes distribution costs, but it also acts as a filter for content quality.
AI is going to push this trend further. We are going to drown in supply, and most of it will be bad, but some of it will be exceptional.
This means we will need ways to filter for content quality. Social media already fulfills that role, but it's about to get flooded with this AI-generated content. Will we need other tools?
It also means we're about to enter a world full of weird content, where most of it is trash, but some of it will be the best content ever.
by Sotonye, Neo Narrative | Read more:
Image: uncredited
[ed. I'm adding a bit more from Pathfinding With Tomas Pueyo: An Interview (below) because the topics covered were so wide-ranging.]