At the moment Carr started his blog, the agent of millenarian change was the internet — in particular, what enthusiasts were touting as “Web 2.0,” with its promise of universal collaboration, connectedness, and participation. User-created content like wikis and blogs would displace the old media, and participants would ultimately congeal into a collective intelligence capable of acting on a global scale.
Carr’s blog first came to wide attention on the strength of his critique of an influential article called “We Are the Web,” by Wired’s “Senior Maverick” Kevin Kelly. Kelly wrote that the accumulation of content on the web — from music, videos, and news, to sports scores, guides, and maps — was providing a view of the world that was “spookily godlike.” By 2015, he predicted, the web would have evolved into “a megacomputer that encompasses the Internet […] and the billions of human minds entangled in this global network.” With chiliastic zeal, he announced, “There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine […] You and I are alive at this moment.” Future generations, he said, will “look back on those pivotal eras and wonder what it would have been like to be alive then.” Or, as Wordsworth might have put it, “Bliss was it in that dawn to be online.”
In a post called “The Amorality of Web 2.0,” Carr taxed Kelly with using a “language of rapture” that made objectivity impossible: “All the things that Web 2.0 represents — participation, collectivism, virtual communities, amateurism — become unarguably good things, things to be nurtured and applauded, emblems of progress toward a more enlightened state.” On the contrary, he countered, those features are invariably mixed blessings. As a manifestation of the age of participation, Wikipedia is certainly useful, but it’s also slipshod, factually unreliable, and appallingly written. “It seems fair to ask,” he said, “when the intelligence in ‘collective intelligence’ will begin to manifest itself.”
Similarly with blogs: Kelly described them as part of “a vast and growing gift economy, a visible underground of valuable creations” that turns consumers into producers. Carr, himself a blogger, pointed to the limits of the blogosphere: “its superficiality, its emphasis on opinion over reporting, its echolalia, its tendency to reinforce rather than challenge ideological polarization and extremism.” In short, “Web 2.0, like Web 1.0, is amoral. It’s a set of technologies — a machine, not a Machine — that alters the forms and economics of production and consumption.” (...)
Looking back over the history of technological enthusiasms in his American Technological Sublime, the historian David Nye notes that, in each generation, “the radically new disappears into ordinary experience.” By now, the internet is ubiquitous, and for just that reason no longer a Thing. There are between 50 and 100 processors in a modern luxury car, about as many as there are electric motors (think power steering, seats, wipers, windows, mirrors, CD players, fans, etc.). But you wouldn’t describe the automobile as an application of either technology.
So the futurists have to keep moving the horizon. One feature that makes this era truly different is the number of labels that we’ve assigned to it. Carr himself lists “the digital age, the information age, the internet age, the computer age, the connected age, the Google age, the emoji age, the cloud age, the smartphone age, the data age, the Facebook age, the robot age”; he could have added the gamification age, the social age, the wearable age, and plenty of others. Whatever you call it, he notes, this age is tailored to the talents of the brand manager.
In his more recent posts, Carr is reacting to these varying visions of a new millennium, where the internet is taken for granted and the transformative forces are innovations like wearables, biosensors, and data analytics. The 2011 post from which he draws his title, “Utopia is creepy,” was inspired by a Microsoft “envisionment scenario.” Direct digital descendants of the World’s Fair pavilions, these are the videos that companies produce to depict a future in which their products have become ubiquitous and essential, similar to the worlds pervaded by self-driving cars or synthetics described above. The Microsoft video portrays “a not-too-distant future populated by exceedingly well-groomed people who spend their hyperproductive days going from one computer display to another.” A black-clad businesswoman walks through an airport, touches her computerized eyeglasses, and a digitized voice lights up to define a personal “pick up” zone:
As soon as she settles into the backseat the car’s windows turn into computer monitors, displaying her upcoming schedule […] [h]er phone, meanwhile, transmits her estimated time of arrival to a hotel bellhop, who tracks her approach through a screen the size of a business card.

One thing that makes these scenarios disquieting, Carr suggests, is the robotic affectlessness of the humans — who bring to mind the “uncanny valley” that unsettles us when we watch their digital replicas. These figures are the direct descendants of those audio-animatronic families that Disney designed for the 1964 World’s Fair. As technologies become the protagonists of the drama, people become props. The machines do the work — observing us, anticipating our needs or desires, and acting on what they take to be our behalf.
It’s that sense of ubiquitous presence that has made “creepy” our reflexive aesthetic reaction to the intrusiveness of new technologies — there is already a whole body of scholarly literature on the subject, with journal articles titled “On the Nature of Creepiness” and “Leakiness and Creepiness in App Space,” etc. Creepy is a more elusive notion than scary. Scary things set our imaginations racing with dire thoughts of cyberstalkers, identity thieves, or government surveillance. With creepy things, our imagination doesn’t really know where to start — there is only the unease that comes from sensing that we are the object of someone or something’s unbidden gaze. (...)
What’s most striking about these pictures of the sensor-saturated world isn’t just their creepiness, but how trivial and pedestrian they can be. The chief of Google Android touts interconnected technology that can “assist people in a meaningful way,” and then offers as an example automatically changing the music in your car to an age-appropriate selection when you pick up your kids. Microsoft’s prototype “Nudge Bra” monitors heart rate, respiration, and body movements to detect stress and, via a smartphone app, triggers “just-in-time interventions to support behavior modification for emotional eating.” (A similar application for men was judged unfeasible since their underwear was too far from the heart — “That has always been the problem,” Carr deadpans.) They’re symptomatic of Silicon Valley’s reigning assumption, writes Carr, that anything that can be automated should be automated. But automatic music programming and diet encouragement — really, is that all?
by Geoff Nunberg, LARB | Read more:
Images: Utopia is Creepy