Andrew Basden takes a look at some of the big questions around artificial intelligence.
“The debate around AI”, wrote Toby Payne, “is desperate for a perceptive Christian contribution”. In an email from the Good News for the University initiative, he went on to ask: How might the fact of our being made in the image of God contribute to the debate over artificial intelligence, and what is at risk when that fact is lost or ignored?
Often this becomes a debate about innate properties that God and humans might share – things like intelligence, consciousness or freedom – and arguments over whether computers can ever have these properties. However, any attempt to ascribe a property X to computers is countered by: “But that is not real X!” End of discussion; the debate becomes sterile.
Richard Gunton’s recent post took a more fruitful approach: If being made in the image of God means that humans are to do what God wants doing, then AI is made in the image of humanity: “The majority of AI systems do something that humans can do, in a machine-like way, … [sometimes] faster or more efficiently, or both, without a human needing to be present.” Then Richard emphasised the need for humility.
I will take Richard’s approach further by addressing actual questions that are being asked about AI, from the perspective of someone who has been involved in AI since the early 1980s. The worldwide debate over AI has gone beyond the traditional questions, “Could computers ever become like humans?” (Q1), and, “Will AI take over the world, making humans extinct?” (Q2), to rather specific ones. “Will AI write essays for students so that they can cheat?” (Q3, referring to apps based on Large Language Models like the now-famous ChatGPT). “Will self-driving cars kill cyclists who are pushing their cycles?” (Q4; one did). People also ask, “Surely AI is better at detecting cancers in X-rays / finding new chemicals / etc.?” (Q5) and “Will AI recognise my face and put me at risk?” (Q6).
There being too many questions above for one blog post, I will discuss only Q1 here, leaving the others for later.
The AI Question: Could computers ever become like humans?
This is a philosophical question that has been debated for 70 years but remains unresolved. Why? Because the question has generally been posed within the context of a dualistic ground-motive. Ground-motives propel a society’s thinking and beliefs over centuries. Three dualistic ground-motives have driven Western thought for 2500 years, each setting two poles in irreconcilable opposition. A fourth, the Biblical ground-motive, is non-dualistic (in fact pluralistic). These ground-motives yield different versions of the AI question [with some proponents’ names in brackets]:
(a) The classical Greek ground-motive of Mind-Matter: Computers are matter, humans are partly mind; can matter generate mind? e.g. Could a dump of my mind into Cyberspace be the real me? Could I live forever that way? [John Perry Barlow]
(b) The Scholastic ground-motive of Nature-Supernature: Computers are natural; humans are partly supernatural; can computers gain such supernatural characteristics? e.g. Is the biological causality by which humans operate a kind of spark that computers can never have? [John Searle]
(c) The Humanistic ground-motive of Nature-Freedom: Computers are machines, determined; humans are partly free; can freedom arise from determined causality? e.g. Could an Emergence Theory explain it? [Allen Newell, Systems theory] Or Quantum Mechanics?
Posing the AI question in any of the dualistic ways is ultimately fruitless because each presupposes the very dualism that it is trying to overcome. The philosopher Dooyeweerd argued that the three dualistic ground-motives have misled philosophy and science into many dead-ends, just as is happening with the AI question.
(d) The Biblical ground-motive of Creation-Fall-Redemption recognises, and encourages us to explore, multiple ways in which the Creation – involving humans, animals, plants and inanimate things (including machines such as computers) – is Meaningful and Good and works well.
The Christian philosophers Dirk Vollenhoven and Herman Dooyeweerd identified fifteen such ways in which things exist and operate meaningfully – physical for inanimate things, biotic for plants, psychical for animals, and various other ways for humans, such as language, art, morals and faith. They are not things, but rather aspects of reality. The following table gives the aspects and their kernel meanings in columns 1 and 2.
We can now rephrase the AI question as: “Is it meaningful to say that computers, like humans, function in aspect X?” When we do this, however, we find two ways of asking the question, corresponding to columns 3 and 4 of the table. They depend on what we mean by ‘computer’.
(a) In everyday language, when we say “computer” we usually assume a system in which some humans are or have been involved, such as its designers, fabricators, programmers and users. “ChatGPT writes essays” expresses this way of thinking.
(b) In a narrower, theoretical way, especially used when philosophers ask the AI question, we restrict “computer” to “computer as such”, without any reference to those humans. We treat it as merely a mass of silicon, various doping elements, copper, plastic, etc., all arranged in particular spatial distributions and subjected to certain electromagnetic forces. This way, though much discussed, is impoverished compared with (a) because it obscures the reason why a computer differs from a rock in which the elements are distributed by purely physical processes. To take humans into account is crucially important. [1]
In the first four aspects, the answer is “Yes” under both (a) and (b), for computers as for humans. For example, computers and humans consume energy (and thus emit greenhouse gases), occupy space, and so on. In these four aspects, computers are like humans. In the later aspects, however, the answer is “Yes” if we take humans into account (version (a)), and “No” if we do not (version (b)). The answer is “Yes” in (a) because we assign meaning from later aspects to the physical operation of the computer. It is the fabricators’ intention to build a computer that is the reason why the various chemical elements are arranged spatially as they are. It is the designers’ and programmers’ intention to produce an application, such as ChatGPT, that is the reason for the initial arrangement of electromagnetic forces (in what fabricators would call the computer memory). And it is users entering text into ChatGPT that is the reason why those forces vary through time.
The answer is “No” in (b) because, in that view, the intentions to build a computer, to create ChatGPT, to seek meaningful answers, and so on, are irrelevant to the computer’s functioning.
Thus the Biblical ground-motive enables us to answer the AI Question in richer ways than the three dualistic ones do. The debate becomes more fruitful.
In the next post we will see how this view allows good answers to questions Q2 to Q6.
Andrew Basden is emeritus professor of human factors in information systems at the University of Salford. His books Foundations of Information Systems and Foundations and Practice of Research: Adventures with Dooyeweerd’s Philosophy are used in universities around the world.
________________________
[1] In the philosophical terminology used by Dooyeweerd, (b) is ‘subject-functioning’, whereas (a) is any meaningful functioning, whether as subject and/or object. Much traditional philosophy treats (b) as superior to (a), but to Dooyeweerd, as in much recent philosophy, (b) has limitations; like them, I treat (a) as more real.