
The following article is a summary of a talk given by Prof. John Lennox at the OCCA's Confident Faith Conference, 2024.

Can we live with AI?

The truth is, we already are. We live with what is known as narrow AI: everyday systems driven by huge databases and algorithms that simulate intelligence without possessing consciousness or understanding. We encounter it in digital assistants, online shopping, healthcare systems, autonomous vehicles, and facial recognition technologies.

Some of these developments sound very interesting. Sam Altman, CEO of OpenAI, has spoken about transforming healthcare through AI-driven personal chatbots. There is enormous potential for developing systems for the future. But alongside the benefits come some very serious dangers.

While many of these applications are adopted voluntarily, not everyone has a choice about them. Facial recognition technology designed for security is already being used to monitor and suppress minority populations, particularly Uyghur Muslims in China. Deepfake technology threatens democracy itself. Ken McCallum, Director General of MI5, warns that AI-generated impersonation could make it impossible to distinguish truth from falsehood:

The fabric of society could be undermined by AI’s impersonating real people so that it would no longer be possible to distinguish truth from falsehood. Deep fake technology is a threat to democracy and could be harnessed by hostile states to sow confusion and disinformation at the next general election.[1]

And that raises the question:

What can we actually live with?

One of the leading voices in AI, Yoshua Bengio, has warned that foreseeable AI systems will never possess a true moral sense:

People need to understand that current AI and the AI we can foresee in the reasonable future – does not, and will not, have a moral sense or moral understanding of what is right and what is wrong.[2]

They can process information, but they cannot understand right from wrong. That’s deeply significant, because we are creating increasingly powerful systems without any built-in moral framework.

When we think about the question, 'what can we live with?', my mind immediately goes back to two famous dystopias: George Orwell's 1984 and Aldous Huxley's Brave New World. A brilliant analysis of those two books was given by Neil Postman, who said:

Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no big brother is required to deprive people of their autonomy, maturity and history… people will come to love their oppression and to adore the technologies that undo their capacities to think…

Orwell feared that what we hate will ruin us.

Huxley feared that what we love will ruin us.[3]

And perhaps both are happening at once. We fear technology, yet we are also increasingly attached to it.

Some time ago, E.O. Wilson, the brilliant entomologist, wrote this:

The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions and godlike technology. And it’s terrifically dangerous… until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago.

Where do we come from? Who are we? Where are we going? – rationally, we are on very thin ground.[4]

These three questions echo the philosopher Immanuel Kant. Wilson is wrong, of course: there are many serious philosophers dealing with precisely these questions. Where do we come from? Who are we? Where are we going? And where we are going, in the opinion of some, is towards Artificial General Intelligence.

AGI and the Fear of Superintelligence

Much of the discussion revolves not just around narrow AI, but around Artificial General Intelligence (AGI): the attempt to build machines that equal or surpass human intelligence.

Some of the world's leading thinkers believe this could transform civilisation. The former Astronomer Royal, Martin Rees, warned:

We can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us – even though they may have an algorithmic understanding of the way we behaved.[5]

The late Cambridge physicist Stephen Hawking said:

The real risk with AI isn’t malice, but competence. A superintelligent AI will be extremely good at accomplishing its goals, and, if those goals aren’t aligned with ours, we’re in trouble.[6]

And Eliezer Yudkowsky of the Machine Intelligence Research Institute has gone further still, warning,

If somebody builds a too-powerful AI under present conditions, I expect that every single member of the human species and all biological life on earth dies shortly thereafter.[7]

That kind of concern has led organisations such as the Center for AI Safety (CAIS) to place AI extinction risk alongside pandemics and nuclear war.

But not everyone agrees that AGI is even possible. Jobst Landgrebe and Barry Smith argue that the mathematics of complex systems shows AGI can never truly be engineered, rather like the impossibility of building a perpetual motion machine in physics. As a mathematician, I am sympathetic to this view, which is also supported by thinkers such as the Physics Nobel laureate Roger Penrose.

Perhaps the most important point is this: even if AGI never arrives, AI already poses enormous societal dangers. The scientific journal Nature argued in 2023 that we should focus less on speculative machine doomsday scenarios and more on the very real harm AI is already causing.[8]

That is exactly right. The immediate danger is not necessarily a superintelligent machine taking over the world. It is the gradual erosion of human judgement, critical thinking, and freedom.

As Nir Eisikovits put it in Scientific American:

AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters, and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process.[9]

The Omega Project and Digital Totalitarianism

MIT physicist Max Tegmark explores several future AI scenarios in his book Life 3.0, but the one that particularly stands out is the so-called 'Omega Project'. In this scenario, a single leader gradually consolidates global power, eventually establishing a world government. That idea, of course, is not new; many leaders throughout history have aspired to some form of global control or unified world order, and it remains an aspiration in certain quarters today.

In Tegmark’s scenario, this centralised authority gains control of the world economy under the banner of fighting crime, combating terrorism, and protecting people during medical emergencies. The methods sound disturbingly plausible because they are framed as safeguards for public security and wellbeing. For example, everyone could be required to wear a security bracelet – something like an advanced Apple Watch – that continuously uploads a person’s location, health data, and even conversations. And here comes the chilling part: if the system detected an unauthorised attempt to remove or disable the device, it could respond by injecting a lethal toxin into the wearer’s forearm.[10]

Part of what makes the scenario so compelling and unsettling is that it doesn't begin with any obvious tyranny. It begins with convenience, safety and promises of protection.

When I read that, I could not help thinking of the imagery in Revelation 13. The Bible speaks about a future system of global economic and political control:

And it was allowed to give breath to the image of the beast so that the image might even speak and might cause those who would not worship the image of the beast to be slain… also it causes all… to be marked on the right hand, so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name. – Revelation 13:15-17

We should not dismiss biblical passages like these too quickly as mere symbolism. C.S. Lewis rightly pointed out that symbols represent realities.

Paul speaks plainly in the New Testament of a future 'man of lawlessness' who exalts himself against God. The convergence between some modern technological projections and these biblical warnings is striking.

And now we are even beginning to see AI discussed in almost religious terms.

Some ethicists argue that we may soon witness the birth of a new AI religion. Certain AI systems are increasingly treated as if they possess godlike attributes: omniscience, immortality, limitless knowledge, constant availability.

It is already happening.

The Real Answer to Human Death

At the heart of transhumanism is the belief that humanity can become godlike through technology. We are told that AI may one day solve the problem of death itself.

But Christianity presents the exact opposite movement.

Not humans becoming gods, but God becoming human.

God has already shown the value and dignity of humanity by becoming one of us in the person of Jesus Christ. So when people tell me they hope AI will eventually conquer death, I often smile and say, 'You're too late.' The problem of human death was dealt with twenty centuries ago when God raised Jesus Christ from the dead.

The Christian hope is not uploading consciousness into silicon. It is resurrection.

I sometimes describe it as the true ‘divine upgrade’, unfolding in two stages:

  • “To as many as received him, to them he gave the right to become children of God.” John 1:12

And then ultimately:

  • “The dead will be raised imperishable, and we shall be changed.” 1 Corinthians 15:52

That is a far greater hope than technological immortality.

A Choice Before Us

So where does that leave us?

It leaves us facing a choice.

Will we place our ultimate trust in technology and the dream of transhumanism? Or will we trust the God who entered human history in Jesus Christ?

The Bible warns repeatedly that deception will increase as history moves forward. Jesus himself warned his disciples to be alert to deceit. In our age of deepfakes, manipulation and digital control, those warnings feel remarkably contemporary.

But despite all the fears surrounding AI, the Christian message ultimately ends in hope, not in despair.

Whatever developments come in AI during our lifetime, or that of our children, Christians can lift up their heads with confidence that this world has not heard the last of Jesus Christ.

Technology will not have the final word. Christ will.

 

References

  1. Ken McCallum (Director General of MI5) on AI and Truth
  2. Yoshua Bengio, on AI and Morality
  3. Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business, Viking Penguin, 1985
  4. James Watson and Edward O. Wilson: An Intellectual Entente, Harvard Magazine, September 10th, 2009 https://www.harvardmagazine.com/2009/09/james-watson-edward-o-wilson-intellectual-entente
  5. Lord Rees, UK Astronomer Royal from 1995 to 2025
  6. Stephen Hawking, Reddit ‘Ask Me Anything’ Series, 2015 https://www.reddit.com/r/science/comments/3nyn5i/science_ama_series_stephen_hawking_ama_answers/?rdt=46083
  7. Eliezer Yudkowsky, Machine Intelligence Research Institute, https://intelligence.org/
  8. Nature, Vol. 618, 29 June 2023, p. 885
  9. Nir Eisikovits, “AI is an Existential Threat – Just not the way you think”, Scientific American, July 12, 2023 https://www.scientificamerican.com/article/ai-is-an-existential-threat-just-not-the-way-you-think/
  10. Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, Allen Lane, 2017

 

Learning More

OCCA The Oxford Centre for Christian Apologetics was established in 2004 to raise up the next generation of evangelist-apologists. By 2021, around 350 emerging evangelists from around the world had studied on the OCCA one-year programme. This course equipped each of them to share and defend the gospel message and to come alongside others to help them with their intellectual objections and heartfelt concerns about the Christian faith. Subscribe to our weekly newsletters to see our latest articles from our team of speakers.
