When you read or hear about artificial intelligence (AI), you probably have one of two reactions: fear of the unknown or some level of disregard because other issues seem more pressing. Both reactions are understandable. AI seems like a far-off, futuristic technology that doesn’t yet affect daily life. In reality, though, AI is all around us, and for all its tangible benefits, major promises have been made about how this technology will revolutionize our lives.
Alongside these overhyped predictions, there’s a great need for ethical reflection, because these technologies already drive our communication tools, medical innovations, weapons of war, economy, office work, and even the smart devices in our homes. There are many moral concerns about how these technologies will be developed and deployed in our local communities, as well as major philosophical debates over the role of theistic faith in the sciences. But a quick survey of popular AI literature reveals too few thinkers engaging with these issues from a distinctly Christian worldview.
This is exactly where John Lennox’s new book, 2084: Artificial Intelligence and the Future of Humanity, enters the conversation. Lennox serves as emeritus professor of mathematics at Oxford University and is a prolific writer on the interface of science, philosophy, and religion. In this book he engages a wide swath of AI literature, highlights the promises and perils of this technology, and ultimately shows how the Christian faith is the most coherent worldview for engaging the pressing issues of AI.
2084: Artificial Intelligence and the Future of Humanity
John Lennox
In 2084, scientist and philosopher John Lennox will introduce you to a kaleidoscope of ideas: the key developments in technological enhancement, bioengineering, and, in particular, artificial intelligence. You will discover the current capacity of AI, its advantages and disadvantages, the facts and the fiction, as well as potential future implications.
The questions posed by AI are open to all of us. And they demand answers. A book that is written to challenge all readers, no matter your worldview, 2084 shows how the Christian worldview, properly understood, can provide evidence-based, credible answers that will bring you real hope for the future of humanity.
Faith in What?
The book’s two main strengths are Lennox’s engagement with the secular and naturalistic worldviews that drive much of the conversation around AI, and his defense of the Christian faith against challenges from modern scientific worldviews. Lennox engages many well-known thinkers and AI experts—Yuval Noah Harari, Ray Kurzweil, Nick Bostrom, Rosalind Picard, Max Tegmark, and more—while addressing many of the philosophical challenges presented to the Christian worldview.
The book is organized around a few existential questions: Where do we come from? Where are we going? What does it mean to be human? Such questions help expose the varying worldviews behind much of this work in AI. Lennox spends considerable time diving into many of the basic questions of narrow AI and general AI, as well as the concepts of transhumanism and the dream of a superintelligence.
Lennox then shifts to discussing the philosophy of science and AI. Much of our scientific heritage is due to the pursuit of a knowable universe by men and women who believed in a rational God. Debates over faith and science are often framed as if religious believers rely on blind faith while only the sciences are based on reason and knowledge (223).
Atheism and naturalism, Lennox notes, are actually working off borrowed capital from a theistic worldview; indeed, without reason they couldn’t construct or grasp anything about this world in the first place (115). Modern secular worldviews have attempted to drive a wedge between theism and science, where no gap really exists.
Need for Ethical Reflection
After a survey of the popular literature on AI, Lennox shifts to a philosophical reflection on the nature of humanity. He offers a rich intellectual defense of the image of God and shows the need for deep moral reflection on the nature of AI.
Ethical formulations don’t “evolve horizontally through social evolutionary processes, as many naturalists claim.” Rather, they’re transcendent by nature, since we must have a standard on which to base our decisions. Our moral convictions are, to a certain extent, “hardwired” because God, who defines good and evil, created each of us in his likeness. Moral relativism is simply “not liveable” (148).
Lennox lacks the space to address many of the specific, pressing moral concerns surrounding emerging technologies like AI. While he does mention ethical questions about work, privacy, and even weaponry, the book’s main focus is the philosophical grounding for ethics and how God’s image is humanity’s defining characteristic. These specific ethical reflections are the one area I wish Lennox had expanded on. Ultimately, though, Lennox serves us well by providing a sturdy foundation for engaging debates over faith and science, particularly in light of the unique challenges posed by AI.
Ultimate Priority
Lennox reminds readers, playing off Yuval Noah Harari’s famous work Homo Deus, that Christians can enter these debates with confidence because there’s already a God-man who took on human flesh to bring us to the Father.
While we can embrace technologies like AI, we must first think biblically and ethically about how we use these technologies in order to love God and neighbor, in light of how God has first loved us.