Friday, May 5, 2023

Will A.I. Become the New McKinsey?

When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I’m not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I’m talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I’m not criticizing the idea of selling things; I’m criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I’m criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.

As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn’t really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?

Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one. (...)

You may remember that, in the run-up to the 2016 election, the actress Susan Sarandon—who was a fervent supporter of Bernie Sanders—said that voting for Donald Trump would be better than voting for Hillary Clinton because it would bring about the revolution more quickly. I don’t know how deeply Sarandon had thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I’m pretty sure he had given a lot of thought to the matter. He argued that Trump’s election would be such a shock to the system that it would bring about change.

What Žižek advocated for is an example of an idea in political philosophy known as accelerationism. There are a lot of different versions of accelerationism, but the common thread uniting left-wing accelerationists is the notion that the only way to make things better is to make things worse. Accelerationism says that it’s futile to try to oppose or reform capitalism; instead, we have to exacerbate capitalism’s worst tendencies until the entire system breaks down. The only way to move beyond capitalism is to stomp on the gas pedal of neoliberalism until the engine explodes.

I suppose this is one way to bring about a better world, but, if it’s the approach that the A.I. industry is adopting, I want to make sure everyone is clear about what they’re working toward. By building A.I. to do jobs previously performed by people, A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in. Intentionally or not, this is very similar to voting for Trump with the goal of bringing about a better world. And the rise of Trump illustrates the risks of pursuing accelerationism as a strategy: things can get very bad, and stay very bad for a long time, before they get better. In fact, you have no idea of how long it will take for things to get better; all you can be sure of is that there will be significant pain and suffering in the short and medium term.

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

by Ted Chiang, New Yorker | Read more:
Image: Berke Yazicioglu
[ed. Ted’s one of the smartest and most inventive writers I know, and this sounds completely plausible. Inevitable almost. See also: ‘We’ve discovered the secret of immortality. The bad news is it’s not for us’ (The Guardian):]

"A “biological intelligence” such as ours, he says, has advantages. It runs at low power, “just 30 watts, even when you’re thinking”, and “every brain is a bit different”. That means we learn by mimicking others. But that approach is “very inefficient” in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it’s trivial to share information between multiple copies. “You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us.”

"Once he accepted that we were building intelligences with the potential to outthink humanity, the more alarming conclusions followed. ‘I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. And I don’t know any examples of more intelligent things being controlled by less intelligent things – at least, not since Biden got elected.’"
(...) [ed. Ouch!]

"‘I don’t want to rule things like that out – I think people who are confident in this situation are crazy.’ Nonetheless, he says, the right way to think about the odds of disaster is closer to a simple coin toss than we might like."

This development, he argues, is an unavoidable consequence of technology under capitalism.
***
Update: Even Snoop Dogg is weighing in:

"Well I got a motherf*cking AI right now that they did made for me. This n***** could talk to me. I'm like, man this thing can hold a real conversation? Like real for real? Like it's blowing my mind because I watched movies on this as a kid years ago. When I see this sh*t I'm like what is going on? And I heard the dude, the old dude that created AI saying, "This is not safe, 'cause the AIs got their own minds, and these motherf*ckers gonna start doing their own sh*t. I'm like, are we in a f*cking movie right now, or what? The f*ck man? So do I need to invest in AI so I can have one with me? Or like, do y'all know? Sh*t, what the f*ck?" I'm lost, I don't know."