Tuesday, September 29, 2020

The Love That Lays the Swale in Rows

As computer systems and software applications come to play an ever-larger role in shaping our lives and the world, we have an obligation to be more, not less, involved in decisions about their design and use—before progress forecloses our options. We should be careful about what we make.

If that sounds naive or hopeless, it’s because we have been misled by a metaphor. We’ve defined our relation with technology not as that of body and limb or even that of sibling and sibling but as that of master and slave. The idea goes way back. It took hold at the dawn of Western philosophical thought, emerging first with the ancient Athenians. Aristotle, in discussing the operation of households at the beginning of his Politics, argued that slaves and tools are essentially equivalent, the former acting as “animate instruments” and the latter as “inanimate instruments” in the service of the master of the house. If tools could somehow become animate, Aristotle posited, they would be able to substitute directly for the labor of slaves. “There is only one condition on which we can imagine managers not needing subordinates, and masters not needing slaves,” he mused, anticipating the arrival of computer automation and even machine learning. “This condition would be that each [inanimate] instrument could do its own work, at the word of command or by intelligent anticipation.” It would be “as if a shuttle should weave itself, and a plectrum should do its own harp-playing.”

The conception of tools as slaves has colored our thinking ever since. It informs society’s recurring dream of emancipation from toil. “All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery,” wrote Oscar Wilde in 1891. “On mechanical slavery, on the slavery of the machine, the future of the world depends.” John Maynard Keynes, in a 1930 essay, predicted that mechanical slaves would free humankind from “the struggle for subsistence” and propel us to “our destination of economic bliss.” In 2013, Mother Jones columnist Kevin Drum declared that “a robotic paradise of leisure and contemplation eventually awaits us.” By 2040, he forecast, our computer slaves—“they never get tired, they’re never ill-tempered, they never make mistakes”—will have rescued us from labor and delivered us into a new Eden. “Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”

With its roles reversed, the metaphor also informs society’s nightmares about technology. As we become dependent on our technological slaves, the thinking goes, we turn into slaves ourselves. From the eighteenth century on, social critics have routinely portrayed factory machinery as forcing workers into bondage. “Masses of labourers,” wrote Marx and Engels in their Communist Manifesto, “are daily and hourly enslaved by the machine.” Today, people complain all the time about feeling like slaves to their appliances and gadgets. “Smart devices are sometimes empowering,” observed The Economist in “Slaves to the Smartphone,” an article published in 2012. “But for most people the servant has become the master.” More dramatically still, the idea of a robot uprising, in which computers with artificial intelligence transform themselves from our slaves to our masters, has for a century been a central theme in dystopian fantasies about the future. The very word “robot,” coined by a science fiction writer in 1920, comes from robota, a Czech term for servitude.

The master-slave metaphor, in addition to being morally fraught, distorts the way we look at technology. It reinforces the sense that our tools are separate from ourselves, that our instruments have an agency independent of our own. We start to judge our technologies not on what they enable us to do but rather on their intrinsic qualities as products—their cleverness, their efficiency, their novelty, their style. We choose a tool because it’s new or it’s cool or it’s fast, not because it brings us more fully into the world and expands the ground of our experiences and perceptions. We become mere consumers of technology.

The metaphor encourages society to take a simplistic and fatalistic view of technology and progress. If we assume that our tools act as slaves on our behalf, always working in our best interest, then any attempt to place limits on technology becomes hard to defend. Each advance grants us greater freedom and takes us a stride closer to, if not utopia, then at least the best of all possible worlds. Any misstep, we tell ourselves, will be quickly corrected by subsequent innovations. If we just let progress do its thing, it will find remedies for the problems it creates. “Technology is not neutral but serves as an overwhelming positive force in human culture,” writes one pundit, expressing the self-serving Silicon Valley ideology that in recent years has gained wide currency. “We have a moral obligation to increase technology because it increases opportunities.” The sense of moral obligation strengthens with the advance of automation, which, after all, provides us with the most animate of instruments, the slaves that, as Aristotle anticipated, are most capable of releasing us from our labors.

The belief in technology as a benevolent, self-healing, autonomous force is seductive. It allows us to feel optimistic about the future while relieving us of responsibility for that future. It particularly suits the interests of those who have become extraordinarily wealthy through the labor-saving, profit-concentrating effects of automated systems and the computers that control them. It provides our new plutocrats with a heroic narrative in which they play starring roles: job losses may be unfortunate, but they’re a necessary evil on the path to the human race’s eventual emancipation by the computerized slaves that our benevolent enterprises are creating. Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley’s most prominent thinkers, grants that “a robotics revolution would basically have the effect of people losing their jobs.” But, he hastens to add, “it would have the benefit of freeing people up to do many other things.” Being freed up sounds a lot more pleasant than being fired.

There’s a callousness to such grandiose futurism. As history reminds us, high-flown rhetoric about using technology to liberate workers often masks a contempt for labor. It strains credulity to imagine today’s technology moguls, with their libertarian leanings and impatience with government, agreeing to the kind of vast wealth-redistribution scheme that would be necessary to fund the self-actualizing leisure-time pursuits of the jobless multitudes. Even if society were to come up with some magic spell, or magic algorithm, for equitably parceling out the spoils of automation, there’s good reason to doubt whether anything resembling the “economic bliss” imagined by Keynes would ensue.

In a prescient passage in The Human Condition, Hannah Arendt observed that if automation’s utopian promise were actually to pan out, the result would probably feel less like paradise than like a cruel practical joke. The whole of modern society, she wrote, has been organized as “a laboring society,” where working for pay, and then spending that pay, is the way people define themselves and measure their worth. Most of the “higher and more meaningful activities” revered in the distant past have been pushed to the margin or forgotten, and “only solitary individuals are left who consider what they are doing in terms of work and not in terms of making a living.” For technology to fulfill humankind’s abiding “wish to be liberated from labor’s ‘toil and trouble’ ” at this point would be perverse. It would cast us deeper into a purgatory of malaise. What automation confronts us with, Arendt concluded, “is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.” Utopianism, she understood, is a form of self-delusion.

by Nicholas Carr, Rough Type
[ed. See also: What is it like to be a smartphone? (Rough Type).]