Simulacrum Blues

UNDERSTANDING UNDERSTANDING ISSUE 11

Image: Midjourney AI, prompted by Michael Ventura

One believes things because one has been conditioned to believe them.

– Aldous Huxley, Brave New World


At first, it’s not entirely obvious what’s off about them. You stare a bit longer, feeling into the edges and details of an image or video or assemblage of words, sifting around in the dark to find a sense of that which feels amiss. Is it a dearth of spirit? A lack of ‘hand’ or human touch imbued within the work? It feels almost real but not quite. This is the dawn of AI-generated content. 

The work generated by these lines of code feels mostly right. But look closely at the mildly misshapen hands of a rendered subject or the slight jitter in the eyes of the human depicted in a deepfake video. They’re off. Sometimes the flaws are obvious; other times, much like your dog or cat will clock an energy that’s imperceptible to you, your human senses register what your cognitive mind may not immediately observe. Perhaps these “soul gaps” (I’ve made that up, but let’s go with it) go unnoticed by the vast majority of observers, but for those who are looking, and for those who care, what’s lacking is in fact a harbinger of what’s to come: a world where “realness” is being redefined before our eyes.

For years, artists, writers, and dealers in the subjective forms of human expression felt their work was immune to the rising tides of technology. Machines will take the jobs of doctors and engineers, but they’ll never understand how to make real art, quipped right-brained naysayers while news headlines heralded the advent of these early AI times. But recently, things have started to change. The growing ubiquity and allure of AI-enabled creative tools have begun to hit the mainstream and, good or bad, this is the new normal.

By now you may have already heard of the generative AI darling ChatGPT. Bolstered by a multibillion-dollar investment from Microsoft, the platform is essentially the apex predator of self-determined thought. With a few quick keystrokes, anyone from half-baked, plagiarism-prone undergrads to overwhelmed and underpaid social media managers can generate an output designed to pass muster. Today, if the technology were being graded, it would earn what some would consider a solid B. But in a world run by C students, the solid B is king.

Let’s develop that a bit further.

Moons ago, former president Harry Truman was quoted as saying, “The world is run by C students.” At first blush, this seems like a jab at the abundance of mediocrity. In truth, what Truman was alluding to was the research-backed evidence that C students are actually more innovative, resourceful, and creative. They didn’t lock themselves into the strictures of the machine too early. They had dalliances with diverse interests. They were scrappier and more curious, and often, to their own detriment, the industrial education complex didn’t reward them for it. But it is the proverbial C students who will use B-level information, now generated faster and more efficiently than they could produce it on their own, to yield top scores in their chosen work.

But a word of caution is in order: we cannot overlook the humanness that seems to be lacking. In his regular newsletter, The Red Hand Files, musician and demigod of gothic rock Nick Cave responds to fan questions. A recent fan letter proffered, “I asked ChatGPT to write a song in the style of Nick Cave and this is what it produced. What do you think?” The fan went on to share lyrics that, taken as a gestalt piece of written music, may seem Nick Cave-ish. But Cave responds (after first affirming, “This song sucks.”) with the following:

What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque.

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.

What makes a great song great is not its close resemblance to a recognizable work. Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self. This is part of the authentic creative struggle that precedes the invention of a unique lyric of actual value; it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery; it is the redemptive artistic act that stirs the heart of the listener, where the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering. This is what we humble humans can offer, that AI can only mimic, the transcendent journey of the artist that forever grapples with his or her own shortcomings. This is where human genius resides, deeply embedded within, yet reaching beyond, those limitations.

This is the soul gap. The void in the machine. The thing it cannot, in spite of its many merits, approximate. And it’s not limited to the written word.

Have you been fooled by a deepfake? Would you even know if you were? 

As visual AI has gained speed and sophistication, content creators (prompters?) have made attempts to alter reality, at times for personal or professional gain:

-Deepfake accounts featuring celebrities like Tom Cruise and Keanu Reeves sit among real profiles on TikTok.

-Parodies like Jerry Seinfeld replacing an actor in Pulp Fiction make playful attempts to put the technology to use.

-South Korean news network MBN has created an analog of one of their anchors, utilizing AI to replicate Kim Joo-Ha’s likeness for breaking news broadcasts when she is away from the studio.

At times these efforts can be playful and meme-worthy, but it is a short step to risky, problematic content that could cause harm.

Perhaps the best example of where this could go awry comes from filmmaker Jordan Peele, whose deepfake depicts former president Barack Obama saying things he has never said (or would never say) in public.

Not too long ago, MIT published a thoughtful piece on how to detect a deepfake. While helpful in and of itself, what’s interesting is that some of the tips it asks viewers to employ essentially beg us to be present and conjure our own sense of humanity: “Does the person blink enough or too much?”, “Does the skin appear too smooth or too wrinkly?”, “Do the lip movements look natural?”

What is it they are asking us to consider, if not our own innate sense of what it means to be human in an increasingly blurred world?

Visually, those of us experimenting with AI-generative art platforms such as DALL-E 2 and Midjourney are encountering similar experiences. Personally, I have found these platforms (as well as ChatGPT, frankly) to be exceptional cerebral playthings, but not something I feel good about utilizing for professional purposes. They are helpful catalysts for idea exploration and, at their best, a way to extract the exceptionally hard-to-describe worlds and ideas that sometimes get stuck in the recesses of our minds. For me, this often takes the form of asking Midjourney’s AI to output goofball ideas and mashed-up dreamstates that I’d like to see made manifest. A few examples I’ve recently made are depicted here for context:

Futuristic feudalism

The octopus diner

Crew Portraits for the Defunct 1970s 'Aya Air'

One of the first things I noticed about these outputs was what they made me feel. In truth, my first experience was deep satisfaction. I got that oft-sought dopamine squirt in the brain that told me something good happened. In effect, what drove that was a sense that the AI understood me. That it knew what I wanted it to translate from my mind’s eye and it got it (mostly) right. These singular expressions of my omnium gatherum of prompts felt good. I felt seen. 

But as I sat with the outputs and the details (or lack thereof) started to emerge, the aforementioned soul gap appeared. My friend Gabe is as serious about the craft of painting as pretty much anyone I know. When looking at a few of my recent Midjourney exports inspired by the prompt “alien autopsy in the style of Lorenzo Lotto,” he pointed out that while the AI knew many of Lotto’s paintings include a burst of white light meant to reference the holy light of the universe (aka God), it didn’t understand the symbolism, and so the strokes of light were erroneously placed in the image, divorced of meaning.

Alien Autopsy in the Style of Lorenzo Lotto

Today these outputs from generative AI are B’s, but in the not-too-distant future, they will be A’s. Soon AI will know where the white light belongs to ensure the viewer understands the reference to the divine, or how to write a song that might actually make Nick Cave cry (good luck). As AI moves closer and closer to understanding these things, it moves, in effect, closer and closer to sentience. Closer to innately knowing the meaning behind or within something. A beautiful rabbit hole of information on the topic of AI sentience and the moral dilemma it presents can be explored here and here.

As the days pass, our relationship with AI is growing more ornate. In time, might we come to know our newfound collaborators in the same way we acknowledge our own programmed intelligence? After all, none of us was born with the knowledge we consider innate to us today. It was human-influenced and trained into the meaty machine inside our skulls. Perhaps in creating AI we’ve simply made a simulacrum of a simulacrum. A way to midwife our own understanding of sentient intelligence into a new world.

Take good care,

MV
