Saturday, February 16, 2019

How Tech Utopia Fostered Tyranny

The digital utopian dream of our age looks something like the 2016 concept video created by a Google R&D lab for a never-released product called the Selfish Ledger. The video was obtained in May 2018 by The Verge, which described it as “an unsettling vision of Silicon Valley social engineering.” Borrowing from Richard Dawkins’s notion of the “selfish gene,” the Selfish Ledger would be a self-help product on steroids, combining Google’s cornucopia of personal data with artificial-intelligence tools whose sole aim was to help you meet your goals.

Want to lose weight? Google Maps might prioritize smoothie shops or salad places when you search for “fast food.” Want to reduce your carbon footprint? Google might help you find vacation options closer to home or prioritize locally grown foods in the groceries that Google Express delivers to your doorstep. When the program needs more information than Google’s data banks can provide, it might suggest you buy a sensor, such as an Internet-connected scale or Google’s new AI-powered wearable camera. Or, if the needed product is not on the market, it might even suggest a design and 3D-print it.

The program is “selfish” in that it stubbornly pursues the self-identified goal the user gives it. But, the video explains, further down the road “suggestions may be converted not by the user but by the ledger itself.” And beyond individual self-help, by surveilling users over space and time Google would develop a “species-level understanding of complex issues such as depression, health, and poverty.”

The idea, according to a lab spokesperson, was meant only as a “thought-experiment ... to explore uncomfortable ideas and concepts in order to provoke discussion and debate.” But the slope from Google’s original product — the seemingly value-neutral search engine — to the social engine of the Selfish Ledger is slipperier than one might think. The video’s vision of a smart Big Brother follows quite naturally from the company’s founding mission “to organize the world’s information and make it universally accessible and useful.” As Adam White recently wrote in these pages (“Google.gov,” Spring 2018), “Google has always understood its ultimate project not as one of rote descriptive recall but of informativeness in the fullest sense.”

After plucking the low-hanging fruit of web search, Google’s engineers began creating predictive search technologies like “autocomplete” and search results tailored to individual users based on their search histories. But what we are searching for — what we desire — is often shaped by what we are exposed to and what we believe others desire. And so predicting what is useful, however value-neutral this may sound, can shade into deciding what is useful, both to individual users and to groups, and thereby shaping what kinds of people we become, for both better and worse.
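
To make the point concrete, here is a minimal sketch of how a personalized ranker might blend relevance with the crowd’s clicks and a user’s own history. The weights and signals are invented for illustration (this is not Google’s actual ranker); the point is that the blend itself is an editorial choice, not a neutral fact about the user.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float          # query-document match, roughly value-neutral
    popularity: float         # what most people click on
    personal_affinity: float  # how much this user's history favors it

def rank(results, w_popularity=0.3, w_personal=0.5):
    # w_popularity and w_personal are the hidden editorial dials:
    # raise w_personal and users see more of what they already are;
    # raise w_popularity and the crowd decides for them.
    def score(r):
        return (r.relevance
                + w_popularity * r.popularity
                + w_personal * r.personal_affinity)
    return sorted(results, key=score, reverse=True)

results = [
    Result("salad-place.example", relevance=0.6, popularity=0.2, personal_affinity=0.9),
    Result("burger-chain.example", relevance=0.6, popularity=0.9, personal_affinity=0.1),
]
print([r.url for r in rank(results)])                  # default dials favor the salad place
print([r.url for r in rank(results, w_personal=0.0)])  # turn one dial, get a different "truth"
```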

The moral nature of usefulness becomes even clearer when we consider that our own desires are often in conflict. Someone may say he wants to have a decent sleep schedule, and yet his desire to watch another YouTube video about “deep state” conspiracy theories may get the better of him. Which of these two conflicting desires is the truer one? What is useful in this case, and what is good for him? Is he searching for conspiracy theories to find the facts of the matter, or to get the informational equivalent of a hit of cocaine? Which is more useful? What we wish for ourselves is often not what we do; the problem, as Walker Percy saw it, is that modern man wants above all to know who he is and who he should be.

YouTube’s recommendation feature has helped to radicalize users through feedback loops — not only by helping clickbait conspiracy videos go viral, but also by enticing users to view more videos like the ones they’ve already looked at, thus encouraging the user merely intrigued by extremist ideas to become a true diehard. Yet this result is not a curious fluke of the preference-maximizing vision, but its inevitable fruition. As long as our desires are unsettled and malleable — as long as we are human — the engineering choices of Google and the rest must be as much acts of persuasion as of prediction.
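
The loop is easy to model. Below is a toy simulation, with dynamics assumed purely for illustration rather than drawn from YouTube’s actual system, in which a recommender greedily maximizes predicted engagement while the user’s taste drifts toward whatever was last recommended.

```python
import random

def engagement(intensity, taste):
    # Assumption for illustration: users engage most with content slightly
    # more intense than what they are already used to.
    return max(0.0, 1.0 - abs(intensity - (taste + 0.1)))

taste = 0.2  # 0 = mainstream, 1 = extreme; the user starts mildly intrigued
for step in range(10):
    candidates = [random.random() for _ in range(20)]           # today's uploads
    pick = max(candidates, key=lambda c: engagement(c, taste))  # greedy recommender
    taste = 0.7 * taste + 0.3 * pick                            # taste adapts to exposure
    print(f"step {step}: recommended {pick:.2f}, taste now {taste:.2f}")
```

Run it and the ratchet is visible: each recommendation lands a notch above the user’s current taste, and the taste follows, step by step, toward the extreme.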

California Streamin’

The digital mindset of precisely measuring, analyzing, and ever more efficiently fulfilling our individual desires is of course not unique to Google. It pervades all of the Big Tech companies whose products give them access to massive amounts of user data: not just Google but Facebook, Microsoft, Amazon, and, to some extent, Apple. Each company was founded on a variation of the premise that providing more people with more information and better tools, and helping them connect with each other, would help them lead better, freer, richer lives.

This vision is best understood as a descendant of the California counterculture, another way of extending decentralized, bottom-up power to the people. The story is told in Fred Turner’s 2006 book From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Turner writes that Stewart Brand, erstwhile editor of the counterculture magazine Whole Earth Catalog, “suggested that computers might become a new LSD, a new small technology that could be used to open minds and reform society.” Indeed, Steve Jobs drew the name “Apple Computer” from his time living in an acid-infused commune at an Oregon apple orchard.

Not coincidentally, the tech giants are now investing heavily in using artificial intelligence to provide customized user experiences — surfacing not the information most useful to people in general but the information most useful to each individual user. The AI assistant is the culmination of utopian aspiration and shareholder value, a kind of techno-savvy guardian angel that perfectly and mysteriously knows how to meet your requests and sort your infinitely scrollable feed of search results, products, and friend updates, just for you. In the process, these companies run headfirst into the impossibility of separating the supposedly value-neutral criterion of usefulness from the moral aims of personal and social transformation.

For at the foundation of the digital revolution there was a hidden tension. First through personal computing and then through the Internet, the revolutionaries offered, as Brand’s Whole Earth Catalog put it, “access to tools.” Precious few users today grasp and take advantage of the full promise of networked computers to build ever more useful applications and tools. Instead, the vast majority spend their time and resources on only a few functions on a few platforms, consuming entertainment, searching for information, connecting with friends, and buying products or services.

And while in theory there are more “choices” and “flexibility” available than ever, in practice these are winner-take-all platforms, with the default choices and settings dominating user behavior. Google can return tens of millions of results for a search, but most users won’t leave the first page. Essentially random suggestions to users can become self-fulfilling prophecies, as Wired reported of the obscure 1988 climbing memoir Touching the Void, which by 2004 had become a hit due to Amazon’s recommendation algorithm.
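
The Touching the Void story is a textbook rich-get-richer process. The following sketch is an illustrative model, not Amazon’s algorithm: whichever title happens to be shown next to a bestseller earns co-purchases, and co-purchases earn more showings.

```python
import random

# Two comparable titles start with (nearly) equal co-purchase counts.
copurchases = {"touching_the_void": 1, "another_memoir": 1}

for _ in range(10_000):
    # The title shown next to the bestseller is chosen in proportion
    # to past co-purchases...
    total = sum(copurchases.values())
    shown = ("touching_the_void"
             if random.uniform(0, total) < copurchases["touching_the_void"]
             else "another_memoir")
    # ...and being shown is itself the main driver of new co-purchases.
    if random.random() < 0.3:  # assumed click-through rate
        copurchases[shown] += 1

print(copurchases)  # whichever title edges ahead early tends to run away with it
```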

Moreover, because algorithms are subject to strategic manipulation, and because they are attempting to provide results unique to you, the choices shaping these powerful defaults are necessarily hidden from view. Ever since its founding, Google has had to keep its search algorithm’s specific preferences secret and constantly readjust them to foil enterprising marketers trying to boost their profits at the expense of what users actually want. Every other Big Tech company has followed suit. And as results have become more personalized, it has become increasingly difficult to specify why, exactly, your newsfeed might differ from a friend’s; the complex math behind it creates a black box that is “optimized” for some indiscernible set of metrics. The platforms demand that you simply trust the choices they make about how they manipulate results.
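
A small illustration of why such explanations are elusive (the features and weights below are invented, not any company’s real model): each user’s feed order is the output of a single opaque score over many interacting signals, so no one factor is ever “the reason.”

```python
# Learned weights over engagement signals: the "indiscernible metrics".
WEIGHTS = [0.7, -0.2, 1.3, 0.4, -0.9]

def feed_score(post_features, user_features):
    # One weighted sum over interacting post and user signals; the "reason"
    # for any ordering is smeared across every term at once.
    return sum(w * p * u for w, p, u in zip(WEIGHTS, post_features, user_features))

posts = {
    "post_a": [0.9, 0.1, 0.3, 0.8, 0.2],
    "post_b": [0.2, 0.7, 0.6, 0.1, 0.9],
}
you    = [0.5, 0.9, 0.1, 0.7, 0.3]
friend = [0.1, 0.2, 0.9, 0.1, 0.0]

for name, user in [("you", you), ("friend", friend)]:
    order = sorted(posts, key=lambda p: feed_score(posts[p], user), reverse=True)
    print(name, order)  # same posts, same model, different feeds
```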

Much of the politics of Silicon Valley is explained by this Promethean exchange: gifts of enlightenment and ease in exchange for some measure of awe, gratitude, and deference to the technocratic elite that manufactures them. Algorithmic utopianism is at once optimistic about human motives and desires and paternalistic about humans’ cognitive ability to achieve their stated preferences in a maximally rational way. Humans, in other words, are mostly good and well-intentioned but dumb and ignorant. We rely on poor intuitions and bad heuristics, but we can overcome them through tech-supplied information and cognitive adjustment. Silicon Valley wants to debug humanity, one default choice at a time. (...)

Big Tech companies have thus married a fundamentally expansionary approach to information-gathering to a woeful naïveté about the likely uses of that technology. Motivated by left-liberal utopian beliefs about human progress, they are building technologies that are easily, naturally put to authoritarian and dystopian ends. While the Mark Zuckerbergs and Sergey Brins of the world claim to be shocked by the “abuse” of their platforms, the softly progressive ambitions of Silicon Valley and the more expansive visions of would-be dictators exist on the same spectrum of invasiveness and manipulation. There’s a sense in which the authoritarians have a better idea of what this technology is for.

Wasn’t it rosy to assume that the main uses of the most comprehensive, pervasive, automated surveillance and behavioral-modification technology in human history would be reducing people’s carbon footprints and helping them make better-informed choices in city council races? It ought to have been obvious that the new panopticon would be as liable to cut with the grain as against it, to become in the wrong hands a tool not for ameliorating but exploiting man’s natural capacity for error. Of the two sides, cheer for Dr. Jekyll, but bet on Mr. Hyde.

by Jon Askonas, The New Atlantis | Read more:
Image: uncredited