Friday, May 2, 2025

Medusa: Don’t Be a Stranger

The Israeli military and tech industry collaborate on user-friendly software tools that automate war and occupation

I met Isaac, an intelligence veteran, in a West Jerusalem café on a quiet Saturday morning in late May. We sipped iced coffee under an awning shading us from the heat wave. It was seven months into Israel’s war on Gaza. Upward of 30,000 Palestinians had been killed and millions more displaced in a protracted and bloody military offensive that had failed to achieve the military’s stated goals of decimating Hamas and bringing the remaining hostages home. Next to us, a table of reservists back from Gaza for the weekend rolled tobacco and knocked back pint glasses of draft beer. M-16 rifles were nestled between their knees or propped up against the graffitied table legs. An unremarkable scene.

Isaac is also a reservist in the Israeli military—one of the more than 350,000 mobilized after Hamas militants massacred more than 1,000 people in a historic security failure. Like many veterans I have interviewed for my academic work and reporting, Isaac spent the first few months of the war sitting in an intelligence base, encouraged to use algorithmically generated targeting lists to help coordinate where and when bombs fell. A program called Lavender displayed lists of civilians who—because of the contacts in their phones, the content of their WhatsApp inboxes, or their social media activity—had been greenlighted for assassination. Another, called Where’s Daddy, displayed alerts when those targets entered their family homes, helping to determine when and where the Air Force should strike. Over the next few months, “dumb bombs” dropped from the sky and explosives detonated by troops on the ground replaced universities, mosques, and apartment complexes with 500-foot-wide craters. The fabric of Palestinian life was reduced to scorched earth.

Isaac, who spoke on the condition of anonymity, said the AI-powered targeting systems felt like any other search engine: type a name into a search bar and scroll through mountains of data seamlessly integrated into a user-friendly interface. The formal similarities are hardly a coincidence. Instead, they lay bare long-standing collaborations between civilian technology conglomerates and Israel’s military. Google provides some of the facial recognition algorithms powering classified surveillance databases that soldiers toggle between. Microsoft supplies speech-to-text software that expedites the work of surveilling and killing. The army uses Amazon cloud services to store troves of data used in lethal operations. These collaborations mean classified surveillance and targeting databases are even nicknamed after tech giants: “Google Gaza” or “Facebook for Palestinians.” “Like looking up a friend on social media,” Isaac admitted, “they are familiar.”

How did we get to a place where seeking a target to kill remotely feels little different than scrolling through profiles of high school friends on Facebook? The AI systems deployed in Gaza are the apotheosis of a process set in motion in the mid-20th century, when early cyberneticians built up surveillance databases and rudimentary targeting systems for the United States Department of Defense (DoD). They hit the battlefield in the second half of the 20th century, when US troops scorched Vietnam to the ground, and were refined over four more decades of counterinsurgency warfare abroad and swelling surveillance at home. The Cold War made networked surveillance and killing a big business, largely bankrolled by the DoD. Slowly, innovations seeped into civilian markets—powering a revolution in personal computing, e-commerce, and dot-com booms and busts, all predicated upon the expropriation of users’ information for corporate gain. In turn, civilian technology firms staked out new monopolies over mass surveillance and data analysis, which they sold back to governments. Underneath the user-friendly interfaces engineered by Google and Facebook employees was an enduring politics of death.

In the early 2000s, PayPal’s CEO, Peter Thiel, refashioned himself as an apostle of the military-industrial complex 2.0. His gospel was simple: re-engineer the algorithms powering platform capitalism for warfare. Along with others, including businessman Alex Karp, who serves as its CEO, Thiel founded Palantir, a start-up that ran troves of personal data through the same algorithms used to pinpoint credit fraudsters, this time to hunt down terrorists on Middle Eastern battlefields.

Palantir promised to do what no technology firm had done before: leverage the civilian technology sector’s new monopoly over data analysis, pattern detection, and machine learning to revolutionize warfare, making military operations bloodless and precise. The product came at the cost of the privacy protections liberal democracies are supposed to enshrine. But Palantir’s early investors—namely the CIA—didn’t care; the power afforded by expansive surveillance databases was thrilling. Security states scrambled to pour cash into an increasingly automated arms industry. For Thiel, Palantir was a realization of the “in-between space,” a vision of collaboration between militaries and Silicon Valley he had been boosting since 9/11. As the United States’ “war on terror” went global, Thiel promised that Silicon Valley firms could develop and sell lethal systems back to governments and militaries struggling to keep up with the technology sector’s breakneck pace of innovation. The alliance was a return to Silicon Valley’s origins in a Cold War military-industrial complex, and Thiel said it would give the US and its allies an advantage over adversaries, so long as governments cultivated a welcoming climate for such operations. Features of that conviviality included minimal regulation of data extraction, the categorical denial of privacy protections to civilians, and relaxed oversight of AI development. Overpoliced cities in the United States, border zones in Europe, securitized regions of Northwest China, and the occupied Palestinian territories—spaces of exception, where civil liberties are non-existent—would be particularly hospitable.

Long a hub for military and security industries, Israel would make the “in-between space” a national brand by the late 2000s. Billions pumped into expanding military technology trained the next generation of start-up founders, well versed in military demands. Many secured lucrative contracts with an army eager to prototype and refine surveillance systems and weaponry across the occupied Palestinian territories. Politicians and military heads celebrated a revolving door between Israel’s booming start-up ecosystem and the army as the key to military prowess. Scandals surrounding boutique Israeli surveillance and weapons tech firms peddling their wares to foreign dictators, or eroding the rights of Palestinians, only boosted the country’s aspirational image as the World’s Ultimate Security State. (...)

The Israeli army couldn’t do it alone. Ben, a veteran who served in an Israeli intelligence unit devoted to big data and machine learning in 2014, told me his military base hosted many private contractors. When we spoke in June, he said some of these technologists worked for international firms while others were paid by domestic boutique surveillance start-ups founded by veterans of elite Israeli intelligence units. From 9 am to 5 pm, the contractors waltzed around in jeans and t-shirts, building up predictive targeting systems and surveillance interfaces between lunch breaks and trips to the gym. “You could be sitting there in your uniform, and next to you is a civilian making six times your salary, commuting from Tel Aviv.” Ben said the “civilian tech vibe” made it easy to view the military as a networking opportunity for those eager to land a job in the country’s burgeoning technology sector. Sometimes his team would tour the Tel Aviv offices of the tech firms supplying services. (...)

The industrial scale of automated warfare today implicates many in the violence unfolding in Gaza: not only Israeli soldiers and civilian technology workers but also everyday users scattered across the world. Some of us sit in Silicon Valley technology complexes, engineering the cloud servers or databases informing lethal operations. More of us offer up the data and supply the free labor that trains and refines the algorithms driving bombing campaigns abroad each time we go online, even if we caption our selfies with the words “Free Palestine.” Selfies and search engine queries feed the surveillance databases and predictive models undergirding lethal weapons systems. Broad swaths of the world’s population are, in some way or another, what the media scholar Tung-Hui Hu has called “freelancers for the state’s security apparatus.”

by Sophia Goodfriend, Document | Read more:
Image: Robin Broadbent
[ed. See also: Welcome to the Future (Noahpinion):]

Neural interface technologies are proliferating, as are new generations of wearable computer interfaces. Bionic eyes are getting better and better. 

It’s not just that cyberpunk predicted the ways we’d use technology. It did an amazing job at anticipating the aesthetics and the feel of a world in which, in William Gibson’s famous phrase, “The future is already here, it’s just not evenly distributed.”

Police can now shoot GPS trackers that attach themselves to suspects’ cars. Homeless people stole so many electrical boxes in Oakland that the city started switching traffic lights to stop signs. The app Protector will let you order an armed security team to wherever you are — basically, Uber for street samurai. Chinese government officials and contractors are stealing and reselling the surveillance state’s data at a profit.