OPINION - Artificial intelligence - what is it good for?

Rev Dr Peter Phillips argues that AI is a morally neutral tool, whilst pointing out its potential dangers and asking what we want to achieve with it.


Both words in the phrase “Artificial Intelligence” are misleading, at least for now. One day, and perhaps one day soon, AGI (artificial general intelligence – think “the Terminator”) will come knocking on the door. But until then, AI is better thought of as a series of helpful subroutines that make us more productive – remember the paperclip in Microsoft Word? Think of those “chat with the computer” help routines. Like the self-service tills in Tesco, they offer the consumer a choice: queue for a human or use the automated checkout. Often, humans opt for the shorter queue and the automated assistant. We want to do things quickly, and so we turn to computer assistance to save us time – a tool to catch up with life?

While tools aren’t theological, God gave us the intelligence and wisdom to use tools to a very high standard. Some theologians talk of humans as co-creators with God. We take what God has made and turn it into something we fashion, shape and beautify. We use tools in many ways to make our daily lives easier, and for those who live with a disability or disease, tools often help them simply to live. Moreover, we can use tools to generate more income, more food, fewer poisonous chemicals in the air and less danger on the roads. Tools are part and parcel of the lives of humans and of many other parts of God’s creation.

But AI as a tool to improve our productivity is becoming more and more sophisticated – perhaps more sophisticated than the user! Many will be familiar with the likes of ChatGPT, DALL-E and Midjourney (among many others). These AI engines (more precisely, large language models – so called because of the vast amount of material they are trained on) mimic intelligence by performing tasks which seem to require it: taking a human prompt and producing the requested output. Ask ChatGPT to write an essay on Mondrian, and it will; ask it to devise an act of worship for a Christian church, and it will; ask it to write a Pascal script for sending tweets, and it will. It does this by learning from vast amounts of similar material already on the web and then producing its output by predicting, word by word, which word would normally follow the words it has already chosen. It’s a tool doing the job it has been trained to do: an algorithm analysing data in order to solve a specific problem. Of course, you might want to ask whether humans jump to the easy computerised option too quickly when God gave us so much creativity within our own brains!
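To make that “next word” idea concrete, here is a minimal sketch in Python. It is emphatically not how ChatGPT works – real models are neural networks trained on billions of words, not simple counts – but it illustrates the principle of picking the word that normally follows the one before. The training text and the greedy word choice here are illustrative assumptions only.

```python
# Toy next-word prediction: count which word follows which in some
# training text, then generate by always picking the most common
# follower. Real large language models use neural networks rather
# than counts, but the underlying idea is the same.
from collections import Counter, defaultdict

training_text = (
    "the lord is my shepherd i shall not want "
    "the lord is my light and my salvation"
).split()

# For each word, tally the words that follow it in the training data.
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily append the most common next word at each step."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # no known follower: stop generating
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the lord is my shepherd i shall not want"
```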

Sometimes machine algorithms (or rather, those who code the algorithms) get things terribly wrong. The data sets which train algorithms can be plagued with human bias – as Cathy O’Neil points out in Weapons of Math Destruction: computers read historical data with no account taken of its social context, and then project future actions from a flawed analysis of the past or present. Algorithms can, of course, be fed more complex data sets, but already the internet is full of facial recognition systems that perform poorly on black faces, and of Google algorithms which return results for “cute babies” that are overwhelmingly white. I just checked this: of the first 50 images which Google sent in response to my request for “cute baby images”, 45 were white babies and five were from a non-Caucasian background. You see?
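As a rough sketch (with entirely invented numbers), consider an algorithm that ranks images purely by historical popularity. The skew already present in the data is faithfully reproduced in the results – exactly the pattern O’Neil describes:

```python
# Toy illustration (hypothetical data) of bias propagation: ranking
# by past popularity simply repeats whatever imbalance the training
# data contains, rather than correcting it.
from collections import Counter

# Hypothetical historical click counts - skewed, not neutral.
historical_clicks = Counter({
    "white_baby_photo": 900,
    "black_baby_photo": 60,
    "asian_baby_photo": 40,
})

def rank_results(pool, clicks, top_n=3):
    """Rank purely by past popularity: the bias in the data
    becomes the bias in the output."""
    return sorted(pool, key=lambda item: clicks[item], reverse=True)[:top_n]

print(rank_results(list(historical_clicks), historical_clicks))
# ['white_baby_photo', 'black_baby_photo', 'asian_baby_photo']
```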

So algorithms aren’t theological – but the way they shape our understanding of the world has ethical ramifications and so algorithms matter. Where AI impedes the development of social justice, inclusion and the well-being of all creation, God and the Christian Church in general will be very interested indeed – see the Bible!

In her book Atlas of AI, Kate Crawford argues that Artificial Intelligence is neither artificial nor intelligent. It is embodied in expensive computer systems, in concrete buildings and server farms, in technologies which consume vast quantities of extremely rare minerals, often mined in deplorable conditions. AI is presented as a replication of human intelligence. But no one knows how the brain works, no one knows how we gain a sense of self or how human intelligence arises. Even if we could create a neural network which replicated the brain, would we ever create an intelligent being, given that the brain only works because of body memory, muscle memory and automated systems operating throughout our organs? The AI industry wants to focus on our brains, but Christian and Jewish faith knows the importance of every part of the body, not least the heart.

Crawford talks of AI as a registry of power – the latest form of capitalism, which seeks to extract wealth from the earth and turn it into money for the rich at the expense of everyone else. Aral Balkan talks of humans being farmed for their data – if anything is free, we are the product. Cory Doctorow reminds us that governments want our data so they can sell it to AI developers – to make money, but also to gain power (read his novel Attack Surface).

Those are powerful statements.

If it’s really that bad, should Christians be involved in AI at all?

Steven Croft, the Bishop of Oxford, was an important member of the House of Lords’ select committee on AI, which produced the report AI in the UK: Ready, Willing and Able? This brief paper follows the same kind of arguments given in that much larger and more detailed report about how we define AI, what its benefits are and what its problems may be. As part of the process, Bishop Croft developed and disseminated what became known as the Ten Commandments of AI. The key element of these commandments is that they are all pro-human – all focussed on the benefit of humanity rather than that of the AI industry or of technology firms:

1. AI should be designed for all, and benefit humanity.
2. AI should operate on principles of transparency and fairness, and be well signposted.
3. AI should not be used to transgress the data rights and privacy of individuals, families, or communities.
4. The application of AI should be to reduce inequality of wealth, health, and opportunity.
5. AI should not be used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse.
6. The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity.
7. All citizens have the right to be adequately educated to flourish mentally, emotionally, and economically in a digital and artificially intelligent world.
8. AI should never be developed or deployed separately from consideration of the ethical consequences of its applications.
9. The autonomous power to hurt or destroy should never be vested in AI.
10. Governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity.

Jason Thacker, amongst many other theologians in the States, is wary of AI. In his book The Age of AI, and in regular posts in the Christian media, Thacker outlines the trends around AI. In a recent Baptist Press article, he argued that the following four trends would dominate:

1. Content moderation – Jason thinks that this will become an increasingly contentious issue, as politicians, particularly within the democratic Global North, seek to protect free speech while at the same time calling on social media companies to censor hate speech, racism and bullying. The question is whether you can do both at once in increasingly divided societies. How will religious freedom fare in such an environment?

2. Misinformation/fake news – exploring the problem through information overload, Jason argues that we often cannot determine whether news is fake because of the sheer mass of alternatives on offer – what is truth right now? He argues that one of the most countercultural things we can say is “I don’t know”.

3. Pervasive surveillance – how far will AI be used to surveil public space? Already we hear of new cameras on UK roads able to identify 15,000 drivers using their mobile phones while driving; already we know of Chinese surveillance which scores citizens according to their compliance with government directives. Will governments in the US and EU seek to legislate in this area – a digital bill of rights, perhaps similar to Steven Croft’s Ten Commandments of AI?

4. Digital authoritarianism – using the example of the Chinese Communist Party, which uses digital tech to reward compliant citizens and punish unruly ones, Jason reminds us of the Party’s genocide of the Uyghur Muslims. But China is not the only example, and states are not the only users of such methods. As digital technology is used to suppress, perhaps we also need to think of ways in which it might be used to set people free from such oppression?

In the end, AI is only as good, or bad, as its coder, or as the material from which it learns (a further issue when that material consists of texts scraped from the web without payment of royalties). There is nothing inherently bad about AI. Some would argue that AI should always be pro-human – in other words, always supporting human life – not least because sci-fi has already proposed ways in which AI decides that human life is a cancer the planet needs to expel (see, for example, Interstellar). But perhaps AI should be programmed to protect all life, or all intelligence? Should AI be programmed to avoid fossil fuels in its own working processes, or to replicate itself in ways which avoid the use of carbon or plastics? But then we are into the whole field of AI ethics, and that’s another story…
