an ai for the ruling class

The whole debacle began with the advent of the computer. We all know this story, but let’s simplify it for a moment before we tell the story of the computer becoming sentient and atomizing the idea of labor as we know it.

It all begins with a rock that someone put lightning inside. Once we were able to control the lightning, we could input things through messages containing ones and zeros—although you could also say we were giving it yes or no answers in rapid succession. From there, we could make it remember the ones and zeros. Thus, the idea of the computer was born. Very quickly, these rocks were able to communicate with one another by sending these memories of ones and zeros back and forth, and when interlinked in a sophisticated web, that collective memory was known as the Internet.

Somewhere along the line, memory sharing became so ubiquitous that it no longer felt like individual requests for information. Entrepreneurs in the space felt they had created a kind of music. This new technology they had created with precious metals and electrical charges had cultures living within it. People’s personalities and lives were shaped and subjugated by its messages. You no longer had to click a button or type a series of letters to receive a message from the Rock: you could now write your own messages that would live on the Rock forever. They called this Web 2.0. What was really exciting was the moment that you could make that memory opinionated. They called that the Algorithm. And while you still had to carefully place those opinions in, once they were there, you didn’t have to do anything else.

This gave that Rock a new level of competency. Instead of showing messages in some arbitrary order, like sorted by when they were created or by whom, the Rock would figure out which messages were the most important and prioritize showing those. In many ways, this mimicked a human curating an experience, but with a level of precision and speed that a human could not possibly match.
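For the technically inclined, the difference between the arbitrary order and the opinionated one fits in a few lines. Here is a minimal sketch in Python; the post fields and scoring weights are invented for illustration, not taken from any real platform. The carefully placed opinions are nothing more than hand-tuned weights, and once they are set, the sorting runs with no further human touch.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    created_at: int  # seconds since epoch
    likes: int
    shares: int

def chronological(posts):
    # The "arbitrary" order: newest first, no opinions involved.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def algorithmic(posts, like_weight=1.0, share_weight=3.0):
    # The "opinionated" order. The weights are the opinions someone
    # carefully placed in; after that, the Rock sorts on its own.
    return sorted(
        posts,
        key=lambda p: like_weight * p.likes + share_weight * p.shares,
        reverse=True,
    )

posts = [
    Post("old but beloved", created_at=100, likes=50, shares=10),
    Post("new and ignored", created_at=200, likes=1, shares=0),
]
print([p.text for p in chronological(posts)])  # ['new and ignored', 'old but beloved']
print([p.text for p in algorithmic(posts)])    # ['old but beloved', 'new and ignored']
```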

We already know that humans like Sam Altman, co-founder of OpenAI, having squishy human brains that could easily recognize the face of Jesus Christ on a piece of toast, saw that same godlike power in a really well-crafted sorting machine. But for a moment, before looking at the people who created the online boomtown known as AI, I want to look inside the minds of these AIs themselves.

From the perspective of a learning engine—which you could call an artificial intelligence if you’d like, though I will not—the job is to sort information in a kind of pleasing way. In the case of a website like Facebook or TikTok, the information being sorted is the things that people write or record. Each of those is a piece of information to the almighty Rock, an atom. If you wanted to do the same thing with sentences, the Rock would need to learn a lot of different sentences from a bunch of different sources. And by the end of that learning period, it would be able to sort through sentences just as it sorted through those messages, understanding context. That would be a large language model, commonly called an AI.
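To ground the idea of a learning period, here is a minimal sketch, assuming only Python’s standard library: a bigram model that counts which word tends to follow which in a tiny invented corpus, then “writes” by always predicting the most common successor. A real large language model is incomparably more sophisticated, but the shape of the job is the same: ingest sentences, predict what comes next.

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def continue_text(follows, word, length=5):
    # Repeatedly emit the most common successor: pure prediction.
    # No semantics, no concept of what any of these words refer to.
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

corpus = [
    "the snake swallowed the elephant",
    "the elephant was inside the snake",
]
model = train(corpus)
# Prints a plausible-looking but meaningless continuation.
print(continue_text(model, "the"))
```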

One crucial point here is that when someone with a human brain uses words and sentences, they are participating in a thing called semantics. If you are unfamiliar with semantics, the idea is that you have some wordless concept in your mind—for example, you see a tree, or a fence, or the face of your friend. That tree, that fence, that friend does not have a name until you give it one. That’s semantics. You could call that friend “tree” or that fence “George,” but the rules of language disagree. Language is social and contextual. Your mind notices that your friend’s face might be joyful or sad, the tree might be green, moving or still, the fence made of some material that you are subconsciously aware of. When programmed to learn, the Rock manages to have some kind of subconscious mind as well, sorting all of this information that comes to a human intuitively.

The humans who are trying to sell this Rock want to pretend that the information the learning machine spits out is totally untraceable, but the fact of the matter is that it is uniquely traceable to what has been fed to it. Information in, reconstituted information out. Like an amoeba gobbling up cells, the Rock has been gobbling up sentences.

The main difference between the human mind and that of the Rock is that the Rock is incapable of semantic understanding. The Rock has no intrinsic beliefs beyond what it has been taught to believe. There is a clear through-line between what it was fed and what it espouses. It is really only the lazy humans who taught it—and then disregarded the information they fed it—who are unable to clearly locate where the Rock got its opinions. This is why ChatGPT can tell you how to build a pipe bomb. This Rock is like a perfect child—it never picks up random pieces of information out in the world that its parents didn’t intend for it to learn, like war or ambition or lying—and yet its parents inadvertently taught it all of those things, because they had those negative attributes, and more. Most importantly: greed.

To test this idea of semantic understanding, I will point to a passage from the French children’s novel The Little Prince. Its characters, being humans, understand semantic reasoning. This logically means that they have an imagination; in fact, that is the very theme of the novel. We, as humans reading it, feel and understand things totally intangible to a Rock, and then choose to put words to that feeling. That’s how its writer wrote it, and how humans read it. The Rock only has what it was told and nothing more. If you taught it the story of The Little Prince, what it would learn is the exact story in order of telling, and with proper training it could re-tell that exact story verbatim.

The narrator of the story, in his boyhood, drew a picture and showed it to many adults. It was a representation of a fear he had—one of a boa constrictor swallowing its prey whole. A terrifying, visceral image for a young boy. His drawing was a simple outline of that boa constrictor having swallowed an elephant. While that image may be copyrighted, I am sure that with a mind well-trained in imagination, you can picture the thin line of the snake ballooning in its midsection to reveal the outline of an elephant it had swallowed.

However, in the novel, the adults, lacking imagination, believe our narrator has drawn a picture of an odd-looking, misshapen hat. Lacking that shared semantic narrative of imagination, they fail to see the invisible element, an elephant, living in the snake’s midsection.

In his hubris, human person Sam Altman believes that he can train that same semantic imagination into a sorting machine. Many other people have believed the same thing, training various other sorting engines on this same process, calling it AI, and replicating it with digital pixels to create sophisticated collages of thousands upon thousands of visual artists’ work, finally shilling it out to collect the public’s dime. He has thus created a wonderful new tool, one that can vigorously make connections and reconstruct ideas in useful new ways—and diluted it with human opinions and ideas and bias.

A learning engine, a prediction machine, won’t ever take the leap beyond simply approximating a picture of an elephant. It will use pixel prediction to create a collage of a dozen pictures of elephants from across the internet. It can’t draw the thing the narrator drew because it has no bias beyond its training data. The Rock doesn’t understand what an elephant is, but it knows its color, its shape, how it would stand. It knows logic, but not semantics.

If you feed the engine a picture of a snake eating an elephant, it will forever associate, in some small fraction, the idea of both snakes and elephants with this picture that the narrator in The Little Prince drew, despite there being no logical necessity to do so. Feeding the language model art or poetry does not make it a stronger piece of technology, but rather dilutes its existing strength as a predictive engine that stores data using ones and zeros.

This is the hubris of the humans who have rushed into this technology as if it were a gold rush, but it is not the fault of the technology itself. These technologies are not meant to replace artists, no matter how many times people parrot that ridiculous talking point. These technologies exist to reformat and reconstruct existing data within their memory, which means that if you ask one what the weather is, it will tell you what the weather has been in the past. This is not the fault of the language model, but rather of the people running it, trying to teach a chicken French.

This is a fundamental flaw in the culture of AI technology, one that many critics misplace by faulting the technology itself. “I hate AI,” they say, because they despise these tech thought-leaders stuffing Frankenstein technologies onto the apps they use every day. But these technologies are not bad. They have been abused by the people doing the innovating, who think this sorting algorithm can suddenly make a leap in understanding and obtain semantic knowledge—intrinsic meaning like the mind has, but somehow without the mind’s ego.

But these myopic engines must be built with low-tech failsafes: prioritizing the quality of information over its quantity, tracking how the engine knows what it knows, and partitioning these engines into smaller machines that do not require ecology-devastating hardware to run (see a model like DeepSeek for this). Fail to do so, and you’ve created something neither fish nor fowl that cannot live for very long.

In trying to build an AGI, artificial general intelligence—a God that can take all of the world’s information and sort it into the best, most pleasurable format and solve the world’s problems—these creators have instead built a machine that has the ability to lie.

I don’t believe the machines deserve to be blamed for that hubris. A machine with the express purpose of synthesizing information and data can be helpful to a point, giving you a lightning-fast, context-aware understanding of grammar that the spelling and grammar checks of the past could never offer.
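To ground that claim, here is a toy contrast in Python, with invented word lists and a single hand-written context rule standing in for the statistical patterns a language model absorbs. A dictionary-based spellcheck of the old sort passes any word that exists in isolation, so “their going home” sails through it; a check that looks at neighboring words catches the error.

```python
DICTIONARY = {"their", "there", "they're", "going", "home"}

def old_spellcheck(sentence):
    # Flags only words missing from the dictionary.
    # Every word in "their going home" exists, so nothing is flagged.
    return [w for w in sentence.lower().split() if w not in DICTIONARY]

# One hand-written rule about unlikely neighbors, standing in for the
# millions of contextual patterns a language model learns from text.
UNLIKELY_PAIRS = {("their", "going")}

def contextual_check(sentence):
    words = sentence.lower().split()
    return [pair for pair in zip(words, words[1:]) if pair in UNLIKELY_PAIRS]

print(old_spellcheck("their going home"))    # [] -- nothing flagged
print(contextual_check("their going home"))  # [('their', 'going')] -- caught
```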

It is the step above that which must be culled from these machines: trying to give them a singular creative voice that will fit all use cases, or programming enough of them in to remove all human interaction at a company for the sake of a shareholder’s bottom line. A prediction engine isn’t a person. That’s not what these machines are best at, and you will get a poor imitation of the thing you are trying to do.

That’s not to say that you couldn’t operate it in such a way that it could help you create that art, but it would not be for all users or artists. James Patterson will find great use for it in formatting and streamlining ideas, at least in its more ideal form rather than its current one. But James Joyce certainly could not—particularly if he were in the process of creating Finnegans Wake.

Alan Turing posits, with the Turing Test, that an artificial machine that walks like a duck and quacks like a duck must, at first glance, be taken for a duck. I posit that the longer you spend with the duck, the more you realize you have been interacting with a fascinating but ultimately flawed, and expensive, quacking machine.

on david lynch

My home, in many ways, was a quintessential Midwest archetype—the kind you might find an ear in on a lazy summer afternoon. My family was nuclear, our diet fairly balanced (after all, my grandmother was a nutritionist), and our TV watching relatively simple. I scoured the Internet Movie Database with great care, but my fascination leaned more toward the nihilism of Nolan’s early career than the simplicity of TV writers or classic cinema.

The first time I heard Lynch’s name was through my father, who recounted seeing the backward-talking dwarf on Twin Peaks. In his bewilderment over what it all meant, he inadvertently taught me something about Lynch—that his power is subconscious and evocative, never a matter of strict explanation.

I can’t recall exactly how I felt watching Lynch for the first time, beyond the universal expectation of concrete answers embedded within his films. I borrowed Mulholland Drive from the Northfield Public Library, and halfway through watching the DVD on my home computer downstairs—my private cove for rumination—I found myself pausing. I turned the case over and saw a list of clues on the back. I couldn’t make sense of them at the time. I’d understand later that the meaning wasn’t in strict delineation. The clues were more entry points than answers.

The way Lynch brings me joy is the same way he has sparked the imaginations of so many others. He understands the existential horror that plagues our dreams but forgoes the hubris of thinking he can conquer it. Instead, he lets his unconscious mind loose, giving all his worlds the same playground—a place that bridges the pedestrian with the divinity of the subconscious, granting equal measure to both. To me, Lynch was like a different kind of child—one who found classic works divine, lapped at the shores of sincerity as a nebulous form, and, in his concrete status as human, kept a stately curiosity reserved for only the greatest storytellers: Miyazaki, Wordsworth, and a host of others, mostly poets.

Yesterday, the day he died, I returned home and watched my first Douglas Sirk film, Imitation of Life. Though made nearly half a century before Lynch’s final film, Inland Empire, Sirk’s melodrama resonated with Lynch’s ability to capture human emotion. Inland Empire—the culmination of an L.A. trilogy that began with Lost Highway and continued with Mulholland Drive—remains a testament to his vision. I had the pleasure of seeing Inland Empire in its restored form at the Laemmle Theater in North Hollywood. (I still have the ticket stub.) Lynch seems to have inherited that keenness for human emotion from Sirk. That’s what makes his work so eternal. Lynch is not sardonic in the least; his earnestness made Twin Peaks timeless and universal, inspiring countless imitators who have tried, and failed, to capture his magic.

What makes Lynch truly inimitable is how he lives through his influences. His work is deeply expressive in a way that is entirely singular, shaped by his upbringing and infused with his insecurities, biases, and small joys. These personal threads sink into and animate everything he created. Like many great filmmakers, Lynch understood that the mystery itself—not its resolution—was what made the experience worthwhile. Though audiences clamored for answers to Laura Palmer’s murder, the true magic lay in traversing the Möbius strip, not conquering it.

To understand Lynch’s work, especially his final creations, Inland Empire and Twin Peaks: The Return, one can return to his rare comment about his work, describing Laura Palmer. “She’s my golden goose,” he said, “and she was laying these golden eggs.” That is the joy and timelessness of Lynch: his iterative, endlessly explorable canon. The joy is in following the goose. What in the world would we do if we caught one?