AI subscriptions are, so far, the tech industry’s favorite idea for making money from AI. This is conceptually simple — your customers are paying you for access to a new product. The problem is that compute-heavy cloud services like ChatGPT and Copilot remain extremely expensive to run, meaning that, in some cases, even paying customers might be costing the companies money. Computing costs are likely to fall, and AI-model efficiency could improve, but, much like the basic assumption that there’s a huge market for these things just waiting to be tapped, these are bets and not particularly safe ones.
This week, Microsoft announced that it would be integrating AI more deeply into even more of its products, including Windows, which, among many other chatbot-shaped things, is set to get a feature called Recall, described by the company as “an explorable timeline of your PC’s past.” This feature, which will be turned on by default for Windows users, records and “recalls” everything you do on your computer by taking near-constant screenshots, processing them with AI, and making them available for future browsing through a conversational interface. (...)
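Microsoft hasn’t published Recall’s internals, but the pipeline the article describes (periodic screenshots, local processing, a searchable timeline) is simple enough to sketch. Below is a minimal, hypothetical Python approximation: PIL and pytesseract stand in for whatever on-device models Recall actually uses, and SQLite’s full-text index stands in for the “explorable timeline.” None of this is Microsoft’s code.

```python
# Hypothetical sketch of a Recall-style capture-and-index loop.
# Everything runs locally; PIL and pytesseract are stand-ins for
# the on-device models Microsoft actually uses.
import sqlite3
import time

import pytesseract          # local OCR (requires the tesseract binary)
from PIL import ImageGrab   # screenshot capture on Windows/macOS

db = sqlite3.connect("timeline.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, text)")

def capture_once() -> None:
    """Grab the screen, extract its text locally, and store it for later."""
    shot = ImageGrab.grab()                    # the screenshot never leaves the device
    text = pytesseract.image_to_string(shot)   # "processing them with AI"
    db.execute("INSERT INTO snaps VALUES (?, ?)", (time.ctime(), text))
    db.commit()

def recall(query: str) -> list:
    """Full-text search over everything the machine has seen."""
    return db.execute(
        "SELECT ts, snippet(snaps, 1, '[', ']', '...', 8) "
        "FROM snaps WHERE snaps MATCH ?",
        (query,),
    ).fetchall()

if __name__ == "__main__":
    capture_once()              # Recall does this near-constantly, by default
    print(recall("invoice"))    # e.g., "what was that invoice I saw yesterday?"
```

Even this toy version makes the stakes concrete: timeline.db is a plaintext record of everything that appears on screen, which is part of why “it runs locally” is only a partial answer to the privacy question.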
Like smartphones, personal computers already collect and produce vast amounts of data about their users, but this is a big step in the direction of surveillance — constant, open-ended, and mostly unredacted — offered in exchange for a strange feature that Microsoft CEO Satya Nadella is quite insistent users will enjoy. Nadella attempts to preempt any concerns by pointing out that the AI models powering Recall run locally — that is, on the user’s device, not in the cloud. This is, at best, a partial solution to a problem of Microsoft’s own creation — a problem Windows users didn’t know they had until this week.
On-device AI processing is interesting to Microsoft for other reasons, too. In a world where AI services are expensive to run, installing them in every popular Microsoft product represents a real risk. In a world where the processing necessary to run chatbots, generate images, or surveil your own computer usage to the maximum possible extent occurs on users’ devices, the cost of deploying AI is vastly lower.
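To make the accounting concrete, here is a back-of-envelope illustration; every number below is invented for the sake of the example, not a Microsoft figure.

```python
# Back-of-envelope math on the cost shift described above.
# All figures are hypothetical illustrations, not Microsoft's numbers.
users = 1_000_000_000            # rough order of the Windows install base
queries_per_user_per_day = 10    # assumed light daily chatbot use
cloud_cost_per_query = 0.002     # assumed GPU inference cost, in USD

cloud_bill = users * queries_per_user_per_day * cloud_cost_per_query
print(f"Cloud inference: ~${cloud_bill:,.0f} per day, paid by the vendor")

# On-device, the same inference runs on hardware and electricity the
# customer already paid for, so the vendor's marginal cost is ~$0.
print("On-device inference: ~$0 per day in marginal cost to the vendor")
```

Under these made-up assumptions, the cloud route costs the vendor on the order of $20 million a day; on-device, that line item effectively disappears from its books.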
For Microsoft, that is: if customers expect to fully utilize these new features, which are becoming increasingly integral to the core Windows product, they will have to buy new machines, some of which Microsoft also showed off this week. According to The Verge:
“All of Microsoft’s major laptop partners will offer Copilot Plus PCs, Microsoft CEO Satya Nadella said at an event at the company’s headquarters on Monday. That includes Dell, Lenovo, Samsung, HP, Acer, and Asus; Microsoft is also introducing two of its own as part of the Surface line. And while Microsoft is also making a big push to bring Arm chips to Windows laptops today, Nadella said that laptops with Intel and AMD chips will offer these AI features, too.”

These PCs will come with a “neural processor,” a separate piece of hardware, roughly akin to a graphics card, that can handle AI-related processing tasks more quickly and with lower power use than existing CPUs and GPUs. In conjunction with its shift to a more efficient mobile processor architecture for laptops and desktops — something Apple committed to years ago, selling huge numbers of laptops in the process — Microsoft is using AI to make the case to its customers that this is the next stage of the upgrade cycle. It’s time to get a new PC, says the company that makes the software that powers most PCs and sells PCs of its own.
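The coverage doesn’t specify how software will target these neural processors, but one plausible route on Windows today is ONNX Runtime’s execution providers, which let the same model run on an NPU where one exists and fall back to the CPU where it doesn’t. A sketch under that assumption (the model path and tensor name are placeholders):

```python
# Hypothetical sketch: preferring an NPU, falling back to the CPU,
# via ONNX Runtime execution providers. "model.onnx" and the tensor
# name "input" are placeholders, not anything Microsoft announced.
import numpy as np
import onnxruntime as ort

preferred = [
    "QNNExecutionProvider",   # Qualcomm NPUs (Snapdragon Copilot Plus PCs)
    "DmlExecutionProvider",   # DirectML on other Windows GPUs/NPUs
    "CPUExecutionProvider",   # always-available fallback
]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
outputs = session.run(None, {"input": dummy})
```

The point of the pattern is the fallback list itself: the same application code runs everywhere, but only owners of the new hardware get the fast, low-power path — which is exactly the upgrade pitch.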
Microsoft, like many other tech giants, says it’s all in on AI, but its approach includes hedges against AI deflation, too. Maybe customers flock to new AI features, in which case Microsoft will have shifted computing expenses back to its billions of customers, improving margins on subscription products and selling lots of Windows licenses in the process. If they don’t, though — if people keep using their Windows machines in approximately the same way they have for decades — Microsoft makes money anyway and leaves its cloud computing capacity free to sell to other firms that want to try their luck building AI tools.
by John Herrman, Intelligencer | Read more:
Image: Intelligencer; Photo: Microsoft

[ed. Probably the biggest threat from AI - now and in the near future - is how people use it. Long before a sentient AI decides - "Hey, maybe this human species isn't that smart after all - not enough to be my Master, anyway" - we'll have already proven why. It's possible that AI's greatest achievement, if we allow it, might be protecting us from ourselves. See also: AI Is an Existential Threat—Just Not the Way You Think (Scientific American):]
"Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
Not Dead But Diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking. [ed. Not to mention providing new ways of making money and securing power in previously unknown and unique ways.]
The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”]