The AI prompt as secular prayer

Around 2019 I wrote a 12-thousand-word outline of a book entitled “Encoding gods”. The central theme was that we increasingly engage with technology as sacred.

Then the pandemic happened and I never got back to it.

Abstract painting of a land / waterscape, by David White

I’m glad I didn’t complete the project back then as I would not have had mainstream AI as an exemplar (although there was plenty of reference to less obvious forms of AI such as Google Translate). The book would have immediately appeared dated even though the main line of argument can be transposed to include AI. In recent talks I’ve started to fold in thinking from the book as chatbot-style AI amplifies a tech-as-sacred framing.

It’s almost gratifying to see research and reports appearing over the last couple of years which claim that the top use for chatbots is as a kind of therapist. Add to this the occasional article, such as this one from the Guardian, which suggests that AI, in its chatbot form, is taking the place formerly reserved for religion.

These pieces tend to make the point that the sycophancy of a chatbot is more of a bolster to individualism than a connection with the universal, but the notion that we are attempting to fill the ‘God-void’ with technology does hold some water. The AI prompt is rapidly becoming a form of prayer for many. A plea for comfort to an ineffable other.

The magic inversion

This can be seen as a failure of secularism in that, on the surface, our dominant ideology is the rational while our actions and beliefs tell a different story. We reach for certainty via that which we don’t understand; mystery has always been a haven. Once this was the numinous, now it is neural networks. Our desire for comfort-within-complexity cannot be met by the rational alone.

What fascinates me is how a mis-mapping of the rational and the extra-rational underpins an audacious inversion. Namely, that we engage with technology as sacred while assuming we are no more than complex computers. The ineffable has been cut-and-pasted from the human to the machine.

Pages, Place, Person – the shift in the metaphor

Supporting this inversion is a shift in, or layering of, the central metaphor employed to conceptualise the digital in the mainstream networked era. My simple history of this is as follows:

1993 – : The digital as Pages

Early Web browsers provide relatively easy access to the World Wide Web, which is presented as a collection of interconnected ‘Pages’. The metaphor is skeuomorphic in that it extends the understanding of a dominant paradigm into a new technology, in this case, the paradigm of information-as-paper. This is a surprisingly resilient metaphor as the notion of files and folders is alive and well, as is the ‘desktop’ etc.

2006 – : The digital as Place

Social media arrives and brings with it the new metaphor of Place: the idea that ‘online’ is a ‘town square’ and the digital environment is a gathering place for community. As discussed in my Digital Visitors and Residents framing, a common motivation to go online became the desire to connect with others in some form. While the internet had facilitated communication and connection for many years, the arrival of social media opened up this Place-based mode of digital interaction to millions. The Web suddenly became as much about people as about information.

2022 – : The digital as Person

In education, as in other fields, many metaphors for AI have been proposed. For example, this editorial curates proposed AI metaphors from 14 academic papers. They include ‘autotune’, ‘a parrot’, ‘a demon’, ‘an alien’ and ‘a kind of magic’. What is omitted is the metaphor of Person. The authors of these papers have missed something that is so obvious it’s hiding in plain sight.

This, I suggest, is because the ability of chatbots to operate in natural language is so refined that we have disintermediated the metaphor. ‘Person’ has ceased to operate as a model-of-understanding for chatbots and become a reality (or hyperreality). It is extremely difficult to encounter anything which is so ‘articulate’ and not interact with it as if it were a person. We all know what a person is and how to interact with one. This is much easier than trying to ‘talk to a computer’.

This collapsing or erasing of the metaphor is a beguiling idea and a simple value proposition which can be easily promoted by those selling the technology. It is a form of techno-enchantment.

It’s also a convenient business-model move away from the difficult-to-control approach of facilitating fellowship-through-Place towards selling one-to-one connections on a digital-as-Person basis. It offers refuge from the noise and complexity of our hyper-connected lives while also being woven into the very network we are trying to find respite from. It creates the conditions for a state of perpetual dissonance; the feeling of profound isolation coupled with the disquiet of being collectively manipulated.

Unquestioned, personification becomes the mechanism that allows us to confer the ineffable on a technology which appears tantalisingly close to all-knowing. When asked directly, most would claim to be able to distinguish between technology and magic. However, if the machine is a person, if the metaphor has collapsed, then we encounter it as mystical and possibly sacred. It becomes guru, mystic, confessional and plays a deity-like role.

Person-as-digital

In parallel to digital-as-person we are also being sold person-as-digital. The second most dangerous metaphor in circulation is that the brain is like a computer. This line of thinking is bolstered each time we successfully engineer our technology to simulate a practice we previously thought of as particular to humans. For example, the ability of a computer to create ‘art’ with a simple prompt is used to imply that humans must be no more than a technology-made-flesh. We are led to believe that if it can be simulated it contains no mystery, while we quietly repress our fear that it is never more than hollow performance.

So, with each new simulation we become less: less human, less ineffable and more ‘known’ as inconveniently chaotic humanity-machines. We see this in contemporary business models which claim that the system would work brilliantly, if only the messy humans could fall in line and operate ‘rationally’. More specifically, the case is made that AI would be even more amazing if only the humans were clever enough to figure out what it’s for.

Even though we know that these impressive simulations are only possible because the technology has consumed inconceivable amounts of human labour and creative work, we are still strangely amenable to conferring our mystery on the machine and reducing ourselves to that which is yet to be simulated. AI is then understood as both a powerful technology and as a more-effective-human while we become a less effective, disordered machine with each passing day.

The temptation of certainty

An understanding of theology 101 is a useful lens to avoid getting caught up in yet another technological hype cycle. However, I’m not going to go down that path directly. Instead, I suggest that our need for certainty and comfort is always at risk of being co-opted. In 2026 we could say that the digital has generated the complexity and anxiety which AI, also the digital, is now offering itself up as a haven from. As ever, technology can be read as both the problem and the solution.

Another, better, reading is that, given we invented all of this stuff, we are just doing it to ourselves. There will always be those who look to gain power through subjugation; and certainty, whether real or simulated, can often feel like a fair trade for freedom.

My concern is that in reaching for digital comfort we are imbuing the inert with powers it does not possess and impoverishing our own being. This digital animism is a misplaced hope in our own invention, at our own expense. Technology is not other enough, but rather an oblique narcissism which cannot heal and will always be abused.

Please don’t misunderstand me, I am not against the technology in and of itself but rather the way it is being presented. AI, for example, is a spectacular example of human invention which has been packaged in a dangerous and disingenuous manner. There are many other forms this technology could take which would not erode our agency or steal our humanity. It’s the business model, not the machine, which is at fault. I don’t want my humanity stolen, then sold back to me, by Silicon Valley.

My response is not directly spiritual even though it could be understood in those terms. What I suggest is that we must learn to navigate, not simplify, complexity. We require the literacy, patience and strength to sit-with-unknowing and to understand that simple answers are useful for simple questions but that they will never erase the infinite; and why would we want them to?