Recently three of our chaplains, Fiona, Ibrahim and our consultant Jewish Chaplain Jeremy, shared some thoughts about Artificial Intelligence and what it means to be uniquely human. The result was this blog.

The first Terminator film was released way back in 1984. Before that, in 1968, 2001: A Space Odyssey was released. Why do I mention these films? Because both rely heavily on Artificial Intelligence to drive their plots. There are many other examples of AI in fiction – going back further in history than we might think. In fact, finding out about these makes for an interesting research project.

Today, AI is very much fact rather than fiction, and it plays a role in many of our lives from banking to healthcare, education to the creation of visual art. That’s not likely to change. The genie is well and truly out of the bottle in that respect.

Here at the Chaplaincy we’ve been thinking and talking about AI quite a bit: what it can do for us as humans, where it differs from us, and how we can ensure the right checks and balances are always in play. This matters hugely.

An AI analysing medical data at speed to flag worrisome symptoms to a clinician is positive and helpful. But the AI sector itself realises there are areas where AI could be problematic. One of the leading companies in the field, OpenAI, has been testing Voice Engine, a synthetic voice creation tool. It can be made to resemble an existing person’s voice, and all it needs to do so is a 15-second audio clip.

A blog post on the OpenAI website gives some examples of its use, and it is an interesting read. However, the post notes that the technology won’t yet be released widely, recognising that generating speech that resembles people’s voices carries serious risks, especially in an election year. (The US election is the focus.)

So, back to our own discussions at the Chaplaincy.

We think that humans have some unique characteristics that an AI, which is at its root made of computer code, can’t have. We can sum these up as duty, wisdom, compassion and gut feeling or intuition. It is these in combination that make up our warts-and-all nature. We call on these facets of ourselves in every decision we make, from what to have for breakfast to which of a shortlist of candidates to employ.

These facets exist in us whether we are people of faith or not. They are formed from our own, unique life experiences. We use them to make decisions which might feel right to us even if they are not logical, and even when we know they are not logical. (I’d like chocolate cake with my afternoon tea, please, even though I know I shouldn’t have it.)

We’ve been talking about these human characteristics, their role in the decisions we make, and about accountability for those decisions in the context of how AI could be used across all its many possibilities.

Humans are accountable for their decisions (and consequent actions) in many ways. In different situations we might be accountable to our peers, work colleagues, bosses, customers, service users, elders, friends, relatives, next door neighbours, and others, including people we know and people we don’t. Some accountability is codified in law.

The need for accountability in AI is being recognised at a high level. OpenAI’s action regarding Voice Engine is an example of the sector making a decision about itself. Governments are also realising their role. Late last year the UK hosted the AI Safety Summit, which resulted in the Bletchley Declaration on AI Safety, agreed by 28 countries. It recognises the need to work collaboratively to identify safety risks and build risk-based policies.

We need, as societies, as governments, as international organisations and as private individuals, to think carefully about how we use AI. We need to be sure we understand how it makes decisions, that there is a good dose of human involvement, bringing our humanity to bear, and that the accountability regime is always robust. If we don’t do these things, we risk ending up in the service of computer code, rather than it working in our service.