The Increasing Role of Faith in Developing Artificial Intelligence
Another week, another surge of AI news in the video and animation industry. The release of Seedance 2 and the latest version of ChatGPT, ‘Spud’, suggests a noticeable leap forward. Early impressions point to sharper outputs, faster responses, and a general sense that the tools are becoming more capable – even if we’re all still testing their limits.
At the same time, I found myself in conversation with Jamie Bartlett about his book Talking to AI. What began as a discussion about human-machine interaction drifted, as these things tend to, into the philosophical territory of Neil Postman and Marshall McLuhan. Not just how we talk to AI, but what it means that we do.
From Enlightenment to Uncertainty
There’s a strange tension sitting underneath all of this. Since the Enlightenment, the project has been relatively clear – move away from superstition, lean into reason, and use technology to make life more predictable and, ideally, more comfortable.
In many ways, that’s worked. Large parts of the world are safer, more stable, and more controllable than ever before. Even something as mundane as the hot shower – a relatively recent luxury – speaks to that progress.
But there’s a creeping sense that we may be drifting in the opposite direction. Not away from uncertainty, but back into it. Only this time, it’s dressed up in code.
The Return of Faith in AI Systems
The ongoing noise around “Mythos” – and the suggestion that versions of it are already circulating in unexpected places – highlights something uncomfortable. The people building these systems don’t fully understand how they work.
That’s not a criticism as much as an observation. Large language models are complex, emergent systems. Even those closest to them often describe their behaviour in probabilistic terms. We are, in many respects, relying on something we cannot fully explain.
It begins to feel less like engineering and more like faith.
Tech Optimism and Its Cracks
There’s no shortage of techno-optimism. Leaders in the space continue to promise a future where AI enhances human life in profound ways. They speak confidently about alignment, safety, and control.
And yet, there are caveats. A 20 percent risk of human extinction tends to sit awkwardly alongside the sales pitch.
When figures like Alex Karp enter the conversation with sweeping manifestos, it raises a more uncomfortable question – what are the real incentives driving this race? Are we looking at genuine belief in a better future, or something more self-serving?
It’s difficult to ignore the possibility that both are true at once.
A Familiar Pattern of Overconfidence
History doesn’t offer much reassurance. Engineering has always involved risk, and we’ve repeatedly seen what happens when confidence outpaces understanding. The Chernobyl disaster and the sinking of the Titanic remain stark reminders of how systems can fail, often in ways their creators didn’t anticipate.
Economic history tells a similar story. Entire industries – tobacco, leaded petrol, addictive pharmaceuticals – have carried on with full knowledge of the harm they cause.
The difference now is scale. AI doesn’t sit neatly within one sector. Its reach is systemic, and so are its risks.
Alignment, Incentives, and ‘Folk Theory’
There’s a growing sense that much of AI development still rests on what Eliezer Yudkowsky has described as “folk theory” – intuitive, incomplete models of how these systems behave.
That would be less concerning if the incentives were aligned with caution. But as Upton Sinclair once noted, it’s hard to get someone to understand something when their salary depends on them not understanding it.
Whether that’s consciously true or not, it lingers in the background.
A Global Race Built on Uncertainty
The broader context only amplifies the unease. US AI companies are competing aggressively, not just with each other but with China. The stakes are geopolitical, economic, and, potentially, existential.
This isn’t a slow, cautious rollout. It’s a race.
And like many races, it’s being run on incomplete information, imperfect models, and a surprising amount of belief.
God in a Box?
There’s a phrase that keeps coming back – “God in a box.” The idea that we are building something immensely powerful, not entirely understood, and hoping it behaves in ways that benefit us.
It’s a compelling idea. It’s also a risky one.
For all our talk of progress, control, and optimisation, we may be entering a phase where faith plays a larger role than we’d like to admit. Not faith in the traditional sense, but faith in systems, in people, and in incentives that we don’t fully see.
Enjoy yourself. It’s later than you think.
by Quint Boa, AI Video Executive & Producer
Quint is an Executive Producer specialising in AI video production for the healthcare sector. Quint has worked for over 40 years in the film, radio, and television industries. Twenty-five years ago, he founded Synima, a global video production company. Quint has embraced artificial intelligence in the creative process. Working with trusted colleagues, he’s developed a hybrid approach to AI within video production that expedites workflows and reduces costs. Quint believes ‘your health is your wealth’ and is enthusiastic about every aspect of healthcare. As a UKCP-qualified psychologist, Quint feels uniquely equipped to support the communication challenges the healthcare sector faces by combining his experience with AI video production techniques, psychological insight, and practical solutions.
