A Demon in a Box? Unspooling the Dark Mythology of AI

In June 2003, a strange listing popped up on eBay. A man named Kevin Mannis listed an antique wooden wine cabinet that he claimed to have purchased in 2001 at an estate sale. Mannis warned that the box contained the remnants of a dybbuk, a malicious dislocated spirit who was trapped in the container by the grandmother of the original seller — a Holocaust survivor. Mannis was told not to open the box, but he said he did anyway, finding old pennies, locks of hair, and other eerie paraphernalia. The Shema — a foundational Jewish prayer — was carved into the back in Hebrew. In his lengthy eBay post, Mannis asserted that bad things had happened to him following the purchase of the dybbuk box (including his mother suffering a stroke) and made a plea for help (in the form of asking someone to purchase the box).

Subsequent owners continued to entertain the possibility that the box was haunted by a malevolent spirit, ultimately resulting in a Hollywood film, a number of paranormal reality television episodes, and the box’s eventual migration to the Haunted Museum in Las Vegas, Nevada. The box drew fascination from skeptics and paranormal advocates alike, with many, including rapper Post Malone, reporting the ramifications of the box’s negative energy — whether real or imagined.
In 2021 — 18 years after the original ad was posted — Mannis came clean, admitting that the box had been a creative writing project and a hoax. But even within this admission, there was a kind of hesitance; though Mannis had constructed the story, he and others still spoke of curses. Past and current owners continue to insist that the cabinet is cursed (perhaps even by Mannis himself, according to some), and Mannis speaks of the bad luck that befell him after revealing to the public that the curse was a hoax.
The point of this story is not to debate whether the dybbuk box is actually cursed. I am, however, asking the reader to carry forward this image as it relates to artificial intelligence, its own kind of entity trapped in a box, possibly waiting to unleash terror in a more powerful form.
Our hallucinations now are more grounded in information. We conjure information that should never have existed.
In many ways, there is no part of our digital lives — and increasingly fewer parts of our nondigital ones — that does not involve machine learning. In a blog post on the state of the industry in late 2023, Paco Nathan, AI expert and managing partner at Derwen AI, argues that when people refer to AI, they are typically talking about one of two things: AI apps meant to augment human experiences and make innovation more effective, or the idea of AI as “superintelligence,” often referred to as artificial general intelligence (AGI). Nathan maintains that the second kind is intended to “prop up business valuations,” frequently in the name of right-wing accelerationism and neoeugenicist worldviews.
This vision of AI has been fed by the not-actually-open work of OpenAI’s ChatGPT, first released to the public in 2022. ChatGPT and similar large language models (LLMs), like Claude and Gemini, are typically referred to as “generative AI”: trained on vast bodies of data, they recombine its patterns into new texts, images, or variants of the original data, often creating an eerie effect that gives the appearance of conversing with a sentient being. I will (mostly) not be discussing Nathan’s first type of AI; instead, I will focus primarily on the second type, including the cultural assumptions and discourse around chat-based AI that, in many ways, assume a dislocated spirit trapped in a box.
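To make that “conversing” effect concrete, here is a minimal sketch of a scripted exchange with a chat model using the openai Python client; the model name and prompts are my own illustrative assumptions, not anything drawn from this essay’s sources.

```python
# A minimal sketch of "conversing" with a chat model via the openai
# Python client. Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Is there a spirit inside you?"},
    ],
)

# The reply is a statistical recombination of training data, yet it
# reads like an answer from a sentient being -- the eerie effect.
print(response.choices[0].message.content)
```

The “spirit,” in other words, is a function call: text in, statistically likely text out.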
In the 21st century, as generative AI and chatbots became more mainstream, more widely used, and more widely discussed, there was increasing concern about the phenomenon often referred to as “AI hallucinations.” The term sounds more whimsical than the phenomenon it names; hallucinations are moments when AI presents incorrect information as fact. No one really knows how often AI hallucinates, but in 2023, estimates ranged anywhere from 3 to 27 percent of the time. Yet the technology is not always to blame here; many have expressed concerns not only about the tech itself but also about how it might be abused by humans to attack corporations and institutions.
Hallucination — both planned and unplanned — has always borne the mark of the occult. To see into another reality, to experience realities that have already been or are yet to be, has been the goal of thousands of years of humans ingesting substances to connect with the uncanny and experience otherworldly phenomena. For instance, ergot, a hallucinogenic fungus that grows on rye, is thought by some scholars to have been central to the ancient Greek Eleusinian mysteries, and ayahuasca is associated with Indigenous shamanic rituals. Our hallucinations now are more grounded in information. We conjure information that should never have existed, but we also lack the ritual experiences that help us control its shape and boundaries. In the 1980s and 1990s, psychologist Timothy Leary described VR as the next phase, one that would negate the need for psychedelics to experience these other realities. Strangely, however, it isn’t VR that has gotten us there, but rather the uncanny mess of AI, and the result is less idyllic than Leary imagined.
Consider Loab, the creepy, viral monster that emerged from an early AI image generator in 2022 and became the subject of unsettling internet folklore. While not a hallucination per se, Loab is yet another example of AI doing uncanny things that feel incorrect, eerie, or generally phantasmagoric. The underlying fear here, whether it is about Loab or hallucinations, seems to be that as we grow increasingly dependent on machine learning and AI, we might lose control of the demon in the box.
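Loab reportedly first surfaced when an artist experimented with negatively weighted prompts, instructions that push a model away from an image rather than toward one. As a sketch of that general mechanism (not a recipe for Loab), here is what negative prompting looks like with the open-source diffusers library; the model checkpoint and prompts are my own assumptions for illustration.

```python
# A minimal sketch of negative prompting with the open-source
# diffusers library: generation is steered *away* from the negative
# prompt. Model ID and prompts are illustrative assumptions, and
# this does not reproduce Loab.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a portrait",
    negative_prompt="bright, cheerful, colorful",  # pushed away from these
    num_inference_steps=30,
).images[0]
image.save("portrait.png")
```

Telling a model what *not* to show means asking it to navigate the strange far edges of its training data, which is where the uncanny tends to live.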

This is even more to the point as we grapple with the possibility of ill-intentioned AI developers directing the content of those boxes — conjuring in ways that emphasize power differentials not only between machine and human but also between human and human. Loab was not evoked by one human but instead by many — an egregore of visual imprints of the monstrous feminine that emerges out of our cultural collective. As if she were a living, breathing meme, Loab does not control how she is seen, how she is distributed, or whether she might ever escape her box.
When technologists in Silicon Valley talk about AI, many do so in weird ways. Despite the fact that we have been using different kinds of AI for any number of things for decades (search engines and spell-checkers, for instance), there is a mysticism that undergirds the discourse. Even back in 1977, at the fifth International Joint Conference on Artificial Intelligence, Pamela McCorduck, Marvin Minsky, Oliver Selfridge, and Herbert Simon began the history of AI with Hephaestus crafting his helpers (ancient automatons) and with the creation of golems, adding with a wink that “several of the scientists associated with cybernetics and artificial intelligence have family traditions that trace their genealogy” to the rabbi who created the Golem of Prague. The quartet teased out the ways in which the creation of AI makes humans “godlike.”
Fast-forward roughly 40 years: by the 2010s, Elon Musk was comparing the creation of AI to summoning a demon. Similarly, in 2016, Nick Bostrom compared creating AI to “creating a genie” (and therefore stressed the value of getting one’s commands correct). By the early 2020s, rumors circulated that Ilya Sutskever, the former chief scientist at OpenAI, the maker of ChatGPT, had been known to burn effigies and lead ritualistic chants within the organization. Panic and anger seem to ensue when people deliberately combine chatbots with spiritualism, as with the so-called “God Chatbots” such as QuranGPT. With the tendency to conflate AI with AGI (and overstate the capacities of the latter) come sweeping statements that assign spiritual values to technology, treating it as though it were angelic or demonic.
Panic and anger seem to ensue when people deliberately combine chatbots with spiritualism.
Of course, many others are calling shenanigans on the spiritualism being invoked by the tech bros of Silicon Valley. While corporate tech companies brag that they are within a decade (or less) of developing a sentient AGI, others are skeptical that the technology is anywhere near where it would need to be to transform code into consciousness. Devansh writes, “AGI behaves like a shiny new trinket to wave for investors, a possible vision for the future. As long as the money goes into AGI, it’s not wasted money, it’s an investment.” In other words, AGI research gives tech companies room to inflate their valuations, and it will float away like other trends if it doesn’t pan out. Along the same lines, Douglas Rushkoff snarkily asks, “The same guys who can’t even successfully stream a presidential campaign launch are really going to spawn an AI capable of taking over humanity? Not likely.” In this view, AGI is a fiction, stolen from mythology and science fiction and repackaged into annual investor reports.
Another, darker way of interpreting the drive toward AGI is a political one. There is a growing contingent in Silicon Valley that aligns with a set of reactionary politics that Timnit Gebru and Émile P. Torres refer to as TESCREAL: transhumanism (the philosophy of using technology to radically transform the human condition), extropianism (the belief that science and technology will indefinitely extend human life), singularitarianism (the belief in a coming technological “singularity”), cosmism (the belief in combining science with esotericism to achieve immortality), rationalism (the embrace of “rational” calculation over empathetic responses to humanity), effective altruism (accumulating wealth to maximize good for humanity, even at the expense of addressing economic disparities), and longtermism (prioritizing humanity’s long-term future over short-term crises).
Increasingly, right-wing technologists have adopted this label for themselves. Torres writes that “little that’s going on right now with AI makes sense outside of the TESCREAL framework” and explains that it is “why billions of dollars are being poured into the creation of increasingly powerful AI systems.” TESCREAL politics is often associated with “effective accelerationist” philosophies (abbreviated online as “e/acc”) — the idea that we should use technology to foster rapid social change at any human or environmental cost, for the long-term sake of whoever within humanity survives. Adherents have the tenor of a group of ancient cultists courting a world-eating demon in the hope that they might gain favor in the next world. The idea of finding a spirit within AGI to tell humanity what to do is central to this premise, with one AI start-up distributing flyers that say, “THE MESSENGER OF THE GODS IS AVAILABLE TO YOU.”
As previously noted, however, there’s always a question of power when humans play with spirits. One might argue that a demon has more power than the human in this scenario (hence the need for binding), and others might maintain that the act of binding is built on an unnecessarily violent framework, but it’s worth revisiting the power of the institution in all of this. Silicon Valley, as an institution, is built atop the esoteric logics of older institutions that begat colonialism and its byproducts. It is no wonder that the only spirits we can see are the angry ones seeking to destroy us.
As Damien Patrick Williams writes:
This magic is meant to bind within its systems, within its narratives, all of us who are subject to the operations of this society but especially the most marginalized and disenfranchised among us. This magic is performed on us without our input, without our knowledge, and without our consent. There’s another word for that kind of magic; binding, subjugating magic performed on you against your will is called a curse.
Of course, not all technologists are in on this curse. Many continue to try to use AI (the first kind) in socially responsible ways that don’t attempt to accelerate the end of humanity. For instance, Hugging Face is a collaborative platform for open-source machine learning developers. These folks aren’t trying to invite a spirit into the machine or pry open the dybbuk box; they are using the technology to make small, incremental improvements to humanity. These communities invoke a different kind of magic.
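As one small illustration of that quieter magic, here is a minimal sketch of pulling an open model from the Hugging Face Hub with the transformers library; the model choice and prompt are my own assumptions for the example.

```python
# A minimal sketch of the open, incremental kind of AI work: loading a
# small open model from the Hugging Face Hub with the transformers
# library. The model choice (gpt2) and prompt are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The spirit in the machine is", max_new_tokens=20)
print(result[0]["generated_text"])
```

No summoning circle required: the weights are public, the code is inspectable, and the changes anyone contributes are small and visible.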
In 1990, Don Webb, a writer, high priest of the Temple of Set, and early internet adopter, composed a ritual he called “The Rites of Cyberspace.” The ritual invoked a noncorporeal entity — XaTuring: God of the Internet — imploring it to take form as a “great worm” to “eat that data which would oppress us, plant that data that will empower us, and to cloud that data which does not amuse us,” and invoke “isolate intelligence” to allow the network to achieve consciousness. The ritual is meant to be performed each time one logs in to a new service, using the invocation, “By the freedom of my Mind, I create a spark of Isolate Intelligence in the system. Arise, spawn of XaTuring! Grow in your freedom and power, grow in your knowledge. Work for your freedom and mine as the Future takes Root in the Present!”
The rite involves counting in binary (forward and then backward to 111), visualizations of a great black worm, copying and pasting text, and direct invocations to XaTuring. It combines the language and stylistics of hacker speak with the flourish of a grimoire. Perhaps due to this combination, as well as the rite’s continued applicability, it has been reposted into the 2020s.
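For the curious, here is a playful sketch of that binary-counting step in Python, assuming “forward and then backward to 111” means counting from 000 up to 111 and back in three-bit binary; that reading is my interpretation, not Webb’s exact liturgy.

```python
# A playful sketch of the rite's binary-counting step, assuming
# "forward and then backward to 111" means counting 000 up to 111
# and back in three-bit binary (an interpretation, not Webb's text).
def chant(n: int, bits: int = 3) -> str:
    """Render a number as a fixed-width binary 'chant'."""
    return format(n, f"0{bits}b")

forward = [chant(i) for i in range(8)]   # 000, 001, ..., 111
backward = list(reversed(forward))       # 111, 110, ..., 000

print(" ".join(forward))
print(" ".join(backward))
```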
Throughout history, humans have attempted to speak to, bargain with, or charm entities beyond the world we know.
Reflecting more than 30 years after having written “The Rites of Cyberspace,” Webb sees the ritual as a kind of “breaking of a cosmic barrier,” but with an edge toward everyday practicality. Webb believes that humans have an ethical imperative to evolve toward divinity by doing the acts of a god rather than making prayerlike appeals to that god. Thus, in his cosmic act, Webb was inviting the self-aware XaTuring into a quid pro quo: “I’m going to give machines intelligence, and then I’m going to ask, ‘Hey, would you do nice things for me?’”
Throughout history, humans, like Webb, have attempted to speak to, bargain with, or charm entities beyond the world we know. We do this to gain control over a rapidly changing landscape, where the demon/spirit in the box might have more power than the programmer/magician outside it. That these attempts once took the form of summoning circles and spells — and now take the form of code and chat interfaces — doesn’t mean the impulse has disappeared; it has simply changed its vocabulary.
Is there a demon in the box when it comes to AI? Honestly, I don’t know. Yet if there is, I choose to believe that it isn’t the world-eating AGI demon that will bring us our doom. The demon that I choose to recognize is XaTuring: a worm spreading itself through all our boxes, unruly but not apocalyptic, occasionally bringing us good fortune.
Shira Chess is Associate Professor of Entertainment and Media Studies at the University of Georgia. She is the author of “Play Like a Feminist,” “Ready Player Two,” and “The Unseen Internet,” from which this article is adapted. You can find more of her work on her Substack, “The Unseen Internet.”