Sunday, March 24, 2024

Shrimp Jesus

Who among us will cast the first stone at shrimp Jesus? I hesitate to talk about him because I believe that AI-generated content is categorically no better or worse than other clickbait, and the best way to reckon with clickbait is to deny it what it seeks: attention. Writing to marvel at or deride AI clickbait seems to invite more of it, which in turn will entice us to write more critiques about it, which will only further feed the downward spiral.

The bot that invented shrimp Jesus has no doubt procedurally generated thousands of other equally zany would-be memes, but it takes scholarly attention, like that of Stanford University researchers Renee DiResta and Josh A. Goldstein, and media attention, like that of Jason Koebler of 404 Media, to make shrimp Jesus into something culturally relevant. DiResta and Goldstein write:
The magnificent surrealism of Shrimp Jesus—or, relatedly, Crab Jesus, Watermelon Jesus, Fanta Jesus, and Spaghetti Jesus—is captivating. What is that? Why does that exist? You perhaps feel motivated to share it with your friends, so that they can share in your WTF moment. (We encourage you to share this post, of course.)
And I encourage you to share this post too. Anyone who wants to circulate content on social media has a touch of shrimp Jesus and the purity of his cynicism in their heart.

The Stanford researchers want to use shrimp Jesus to examine, in the words of their post's title, “How Spammers, Scammers and Creators Leverage AI-Generated Images on Facebook for Audience Growth.” Of course, spammers and scammers would basically leverage anything for growth on Facebook, so the stakes of this analysis are in the composition of Facebook’s recommendation algorithms: Shouldn’t Facebook shadow-ban AI images (since they are a kind of “inauthentic behavior”), especially given the company’s recent announcement that it would seek to label the generated images it hosts as “imagined by AI”?

Facebook claims that it is “working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads.”

But Facebook is apparently already able to detect AI-generated images well enough to boost them in the feeds of users who show any interest in other AI-generated material, as Koebler and the Stanford researchers point out. “We don't know why this is happening exactly,” Koebler writes, “but something is happening where, when you interact with one AI-generated image, you will be recommended other ones regardless of what type of content is being shown.”
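
The behavior Koebler describes doesn't actually require any secret AI detector; it falls out of ordinary item-to-item collaborative filtering. If the same users tend to engage with AI-generated images, those images cluster together in co-engagement data, and each one becomes a recommendation for the others even though no “AI-generated” feature exists anywhere in the system. A minimal sketch in Python, with invented users, items, and engagements (an illustration of the general technique, not a claim about Facebook's actual system):

from collections import defaultdict

# Toy item-to-item collaborative filtering. The users, items, and
# engagements below are invented; the point is that "AI-generated"
# never appears as a label, yet the AI images end up recommending
# one another because the same users engage with them.
engagements = {
    "alice": {"shrimp_jesus", "crab_jesus"},
    "bob":   {"shrimp_jesus", "crab_jesus", "spaghetti_jesus"},
    "carol": {"shrimp_jesus", "spaghetti_jesus"},
    "dan":   {"cat_video", "news_article"},
    "erin":  {"cat_video", "shrimp_jesus"},  # one crossover user
}

# Count how often each ordered pair of items is engaged by the same user.
co_counts = defaultdict(int)
for items in engagements.values():
    for a in items:
        for b in items:
            if a != b:
                co_counts[(a, b)] += 1

def recommend(seed_item, k=2):
    """Return the k items most often co-engaged with seed_item."""
    scores = {b: n for (a, b), n in co_counts.items() if a == seed_item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Interacting with one AI image surfaces the others, even though no
# "AI-generated" feature exists anywhere in the pipeline.
print(recommend("shrimp_jesus"))  # ['crab_jesus', 'spaghetti_jesus'] (order may vary)

Here the AI images recommend one another purely because the same users touched them; the category is emergent, not declared.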

That is not surprising, since that is how algorithmic recommendation is designed to work. The algorithms generate the spammers who make the shrimp Jesus images, and the spammers use AI because they are incentivized to make the most content with the least effort. They can use AI to churn out images arbitrarily and then optimize for the ones that gain traction, much as Facebook itself does with everything on its platform.
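
The spammer's side of the loop is just as mechanical: generate variants at near-zero cost, let the feed measure them, keep the winners, repeat. A toy simulation of that churn-and-optimize loop, in which every name and number is invented (this models neither Facebook's feed nor any real spam operation):

import random

# Toy model of the churn-and-optimize loop described above.
random.seed(42)

def generate_image():
    """A "generated image" here is just a hidden appeal score in [0, 1]."""
    return {"appeal": random.random(), "engagement": 0}

def feed_round(pool, impressions=1000):
    """Feed side: show items in proportion to past engagement, with a
    floor of 1 so brand-new items still get some exposure."""
    weights = [1 + item["engagement"] for item in pool]
    for _ in range(impressions):
        item = random.choices(pool, weights=weights)[0]
        if random.random() < item["appeal"]:  # user engages
            item["engagement"] += 1

def spammer_round(pool, keep=10, batch=20):
    """Spammer side: drop the losers, keep the winners, and churn out
    a fresh batch of near-zero-cost variants."""
    pool.sort(key=lambda item: item["engagement"], reverse=True)
    return pool[:keep] + [generate_image() for _ in range(batch)]

pool = [generate_image() for _ in range(30)]
for _ in range(10):
    feed_round(pool)
    pool = spammer_round(pool)

best = max(pool, key=lambda item: item["engagement"])
print(f"top survivor: appeal={best['appeal']:.2f}, engagement={best['engagement']}")

In this toy version, the surviving items are simply the ones the simulated feed rewarded most, which is the whole of the optimization the spammers need to perform.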

But why should it be more concerning that Facebook treats “AI-generated” as a formal category to guide algorithmic recommendations, given the innumerable other undisclosed, nonintuitive correlations it identifies to classify and condition its users? The algorithms predict what users are supposed to like, and spammers/“creators” find ways of providing fodder for fulfilling the predictions. When I use Facebook, Facebook effectively makes a fantastical and bizarre AI-generated image of me that I can’t see directly but is refracted in everything it chooses to feed to me. In other words, I don’t just look at shrimp Jesus; I am shrimp Jesus.

The AI-generated identities that platforms make for us seem more problematic than anything that might appear in a given AI-generated image. Weird images like shrimp Jesus seem to reflect the underlying weirdness of submitting to algorithmic control. It doesn’t seem useful to act as though there is some form of “authentic” image that is appropriate for algorithmic circulation or virality, or that algorithmic recommendation is justified as long as the content is “real.”

by Rob Horning, Internal Exile | Read more:
Image: uncredited