This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.
Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants lead to further complexity. It is becoming ever harder for companies to distinguish the behavior they want to analyze from their own and others’ manipulations.
This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.
Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).
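The warring-bots anecdote comes down to simple arithmetic: two repricing rules that each look sensible on their own compound against each other. Below is a minimal sketch of that mechanism, assuming the multipliers reported in later analyses of the 2011 incident (one seller reportedly repricing to about 0.9983 of its rival, the other to about 1.27 times its rival); it is an illustration, not the sellers’ actual code.

```python
# A minimal sketch (assumed for illustration, not the sellers' actual code) of
# how two naive repricing bots can escalate a listing geometrically. The
# multipliers are those reported in analyses of the 2011 textbook incident:
# one seller slightly undercut its rival, the other priced itself well above it.

def simulate_price_war(price_a: float, price_b: float, cycles: int) -> tuple[float, float]:
    """Run `cycles` rounds of mutual algorithmic repricing and return both prices."""
    for _ in range(cycles):
        price_a = 0.9983 * price_b      # seller A: undercut B by about 0.17%
        price_b = 1.270589 * price_a    # seller B: price about 27% above A
    return price_a, price_b

if __name__ == "__main__":
    # Each round compounds prices by roughly 0.9983 * 1.270589 ~ 1.268, so a
    # $35 book climbs into the millions within a couple of months of daily runs.
    a, b = simulate_price_war(35.00, 35.00, 60)
    print(f"Seller A: ${a:,.2f}   Seller B: ${b:,.2f}")
```

Neither rule is malicious on its own; the absurd price is an emergent product of two automated systems reacting to each other, which is precisely the kind of quasi-autonomous behavior the ecology metaphor is meant to capture.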
In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s. Dick was no better a prophet of technology than any other science fiction writer, and was arguably worse than most. His imagined worlds jam together odd bits of fifties’ and sixties’ California with rocket ships, drugs, and social speculation. Dick usually wrote in a hurry and for money, and sometimes under the influence of drugs or a recent and urgent personal religious revelation.
Still, what he captured with genius was the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together. As Dick described his work (in the opening essay to his 1985 collection, I Hope I Shall Arrive Soon):
The two basic topics which fascinate me are “What is reality?” and “What constitutes the authentic human being?” Over the twenty-seven years in which I have published novels and stories I have investigated these two interrelated topics over and over again.
These obsessions had some of their roots in Dick’s complex and ever-evolving personal mythology (in which it was perfectly plausible that the “real” world was a fake, and that we were all living in Palestine sometime in the first century AD). Yet they were also based on a keen interest in the processes through which reality is socially constructed. Dick believed that we all live in a world where “spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups—and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader.” (...)
In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is the existential distrust it sows. People simply do not know what, or whom, to believe anymore. Rumors spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and about whether this or that trend is being driven by malign algorithms rather than real human beings.
Such widespread falsehood is especially explosive when combined with our fragmented politics. Liberals’ favorite term for the right-wing propaganda machine, “fake news,” has been turned back on them by conservatives, who treat conventional news as propaganda, and hence ignore it. Conversely, it may be easier for many people on the liberal left to blame Russian propaganda for the last presidential election than to accept that many voters had a very different understanding of America than they do.
by Henry Farrell, Boston Review | Read more:
Image: NikiSublime
[ed. See also: It's the (Democratic-Poisoning) Golden Age of Free Speech (Wired)]