The tech backlash can be framed, in part, as a reaction to the technological accident. “When you invent the ship,” the French tech theorist Paul Virilio wrote, “you also invent the shipwreck; when you invent the plane, you invent the plane crash.... Every technology carries its own negativity, which is invented at the same time as technical progress.” This negativity, the ever-looming accident, is the potential for harm that every new technology inevitably brings into existence.
Along these lines, one type of critique of the Cambridge Analytica scandal described it as the event that should awaken the field of computer science to the ethical ramifications of its work, in the same way that other disciplines have had their own moral wake-up calls, some of them deliberate outcomes and others accidents. For chemistry, perhaps it was the invention of dynamite and later poison gas, for physics the atomic bomb, for civil engineering bridge and dam failures, for biology eugenics, and for medicine the infamous Tuskegee syphilis study. Now computer science has had its own moment of reckoning, should it choose to perceive it as such, one that should spur the development of a professional code of ethics and institutional safeguards against unethical design practices.
The tech backlash can also be understood as a backlash against corporations and bad actors rather than technology per se. The problem, on this view, does not lie with the nature of digital technology’s progress, but rather with the corporations that have designed, developed, and deployed digital tech for the sake of their bottom line, or else with malevolent users who have used it to unethical ends. In their own often specious defense, companies or bad actors may then talk about “accidents” and “unintended consequences” in order to deflect and diffuse responsibility for their actions.
These interlocking framings of the tech backlash are not altogether wrong, but they are incomplete and sometimes misleading. Focusing on the technological accident or intentionally malicious use can obscure what matters most: how a technology, used well and as intended, ultimately settles into the taken-for-granted material infrastructure of our daily lives.
How the Tech Backlash Fails
Social media platforms are the most prominent focal point of the tech backlash. Critics have understandably centered their attention on the related issues of data collection, privacy, and the political weaponization of targeted ads. But if we were to imagine a world in which each of these issues were resolved justly and equitably to the satisfaction of most critics, further questions would still remain about the moral and political consequences of social media. For example: If social media platforms become our default public square, what sort of discourse do they encourage or discourage? What kind of political subjectivity emerges from the habitual use of social media? What understanding of community and political action do they foster? These questions and many others — and the understanding they might yield — have not been a meaningful part of the conversation about the tech backlash.
We fail to ask, on a more fundamental level, if there are limits appropriate to the human condition, a scale conducive to our flourishing as the sorts of creatures we are. Modern technology tends to encourage users to assume that such limits do not exist; indeed, it is often marketed as a means to transcend such limits. We find it hard to accept limits to what can or ought to be known, to the scale of the communities that will sustain abiding and satisfying relationships, or to the power that we can harness and wield over nature. We rely upon ever more complex networks that, in their totality, elude our understanding, and that increasingly require either human conformity or the elimination of certain human elements altogether. But we have convinced ourselves that prosperity and happiness lie in the direction of limitlessness. “On the contrary,” wrote Wendell Berry in a 2008 Harper’s article, “our human and earthly limits, properly understood, are not confinements but rather inducements to formal elaboration and elegance, to fullness of relationship and meaning. Perhaps our most serious cultural loss in recent centuries is the knowledge that some things, though limited, are inexhaustible.”
We also often fail to question our commitment to the power of tools and technique. The Cambridge Analytica scandal revolved around the unethical manner in which data was collected from unsuspecting Facebook users by exploiting Facebook’s terms of service, as well as around Facebook’s complicity and failure to acknowledge responsibility for its role in the affair. When Zuckerberg appeared before Congress, the few pointed questions he was asked also centered on Facebook’s responsibility to protect user data. While privacy is clearly important, the questions offered little concern about the legitimacy or advisability of data-driven politics — about the acquisition and exploitation of voter data and the use of increasingly sophisticated means of precision advertising to manipulate voters. No one seemed to worry that the political process is being reduced to this type of data sophistry. While Congress rightly condemned a particularly nefarious method of data acquisition, the capture of political life by technique remained unchallenged.
This line of questioning opens up a broader set of concerns about the project to manage human life through the combined power of big data and artificial intelligence. In an earlier age, people turned to their machines to outsource physical labor. In the digital age, we can also outsource our cognitive, emotional, and ethical labor to our devices and apps. Our digital tools promise to monitor and manage, among other things, our relationships, our health, our moods, and our finances. When we allow their monitoring and submit to their management, we outsource our volition and our judgment. We seem incapable, however, of raising any deeper concerns than whether the terms of service are intelligible and our data secure.
The tech backlash, in other words, leaves untouched the consequences of technologies that are successfully integrated into our social milieu. From this perspective, the tech backlash is not so much a rejection of the machine, to borrow an older, more foreboding formulation, but, at best, a desire to see the machine more humanely calibrated. It reveals, in fact, how deeply committed we are to our technologies. It reveals as well how thoroughly our thinking and our public debates unfold within parameters determined by a logic that may justly be called technological.
by L. M. Sacasas, New Atlantis | Read more:
Image: Tom Williams/CQ Roll Call via Getty Images
[ed. See also: Collective Awareness (Edge)]