What is artificial intelligence? The rapid spread of ChatGPT and its competitors is accompanied by a widespread view that AI is an artificial form of intelligence. This phrase might seem innocuous, but I believe it’s both misguided and dangerous. We commonly take intelligence as an indicator of sentience, because until recently it was considered a characteristic of humans and some other ‘higher’ animals. So the spread of, and people’s growing familiarity with, artificial intelligence very easily feeds the narrative that computers are now close to having autonomy and/or sentience. My suspicion about this is reinforced by comments I’ve seen below a video recently posted to YouTube featuring a talk about “AI, Ethics and Faith”, part of TFN’s Iron Sharpens Iron series.
I argued in a recent post that the ‘artificial’ in AI should be understood in the sense in which it occurs in ‘artificial flowers’: that is, ‘unreal’. This is distinct from the sense in which it occurs in, say, ‘artificial flavours’. Artificial flavours are genuine flavours created by artifice (i.e. with human agency), but artificial flowers are not really flowers at all. They’re merely imitations.
Confusion is exacerbated by how well the latest chatbots simulate a human dialogue partner, combined with the obscurity of how they work. So one way to help shatter the illusion that AI is actually a form of intelligence might be for us to gain some skeletal understanding of its inner workings. Text-generating AI tools like ChatGPT are complex feats of engineering, and there is a growing number of free resources that shed light on their design.1 But in the rest of this post, I want to share an intriguing way that the underlying large language models (LLMs) exploit a feature of God’s created order – a feature I encountered before I realised its connection to AI. This can help us understand how LLMs work – and how they are not genuinely intelligent.
In 2021, I published a paper with Dick Stafleu about the concept of objectivity. We presented a striking view of this concept, which Stafleu had developed from the Reformational philosophy tradition by applying it to the science of physics. In a nutshell, we argued that objectivity means representing something complex by projecting it into simpler terms, such as numbers or diagrams. We provided a range of illustrations from different sciences, and went so far as to suggest that using language to describe social, economic or legal phenomena counts as objectivity precisely because language systems are simpler than those kinds of realities. (By the same token, describing something from maths, physics or biology in language doesn’t count as objectivity.)
What I didn’t know back then was that exciting progress in the science of natural language processing was being made by assigning numbers to words. ‘Word embedding’ is a set of techniques that encode words, or parts of words, as vectors (sequences of decimal numbers) – so that they can be represented as rows in a spreadsheet and even plotted as arrows on a graph. Important progress in this area was made by the Czech computer scientist Tomáš Mikolov and colleagues in the 2010s, who demonstrated that artificial neural networks2 can learn ‘positions’ of words in this ‘space’ that actually capture meaning relations. This has a couple of surprising consequences: (i) words with similar meanings have similar vectors, and (ii) if each vector is drawn as an arrow, placing two or more word ‘arrows’ head to tail leads us to a position corresponding to a concept that combines the meanings of those words. A nice illustration comes from the 2013 paper of Mikolov et al.3: the vector that separates “sushi” from “Japan”, if added to “Germany”, gives a position whose closest word vector represents “Bratwurst”. We can illustrate this in a cartoon graph:
[Cartoon graph: word vectors drawn as arrows, with the sushi–Japan offset added to Germany, landing near Bratwurst]
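To make the arrow arithmetic concrete, here’s a minimal sketch in Python. The vectors are invented for illustration – real embeddings like Mikolov’s word2vec have hundreds of dimensions and are learned from huge text corpora – but the head-to-tail logic is exactly what the cartoon shows:

```python
import numpy as np

# Toy 3-dimensional 'embeddings'. These numbers are invented for
# illustration; real word2vec vectors have hundreds of dimensions
# and are learned from large text corpora.
embeddings = {
    "Japan":     np.array([0.9, 0.1, 0.2]),
    "sushi":     np.array([0.8, 0.7, 0.3]),
    "Germany":   np.array([0.1, 0.2, 0.9]),
    "Bratwurst": np.array([0.0, 0.8, 1.0]),
    "flower":    np.array([0.5, 0.5, 0.5]),
}

def nearest_word(vector, exclude=()):
    """Return the vocabulary word whose arrow points most nearly
    in the same direction as `vector` (cosine similarity)."""
    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = {w: v for w, v in embeddings.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(vector, candidates[w]))

# Head-to-tail arithmetic: the arrow from "Japan" to "sushi",
# placed at the tip of "Germany", lands near "Bratwurst".
target = embeddings["sushi"] - embeddings["Japan"] + embeddings["Germany"]
print(nearest_word(target, exclude={"sushi", "Japan", "Germany"}))  # Bratwurst
```

Cosine similarity – the angle between two arrows – is the standard way of measuring ‘closeness’ in embedding spaces; the toy vocabulary above is rigged so that the offset lands exactly on “Bratwurst”.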
How does this relate to objectivity? It’s a perfect example of the concept we laid out in our paper, which Stafleu had written about as early as 19804. Word embeddings can be objectifications of meaning in that they represent meanings numerically, and addition or subtraction performed on the numbers corresponds to combining meanings – such that the results of arithmetic can be converted back into meanings. This is analogous to how appropriate arithmetic on physical quantities (e.g. adding and subtracting, but never multiplying, energy values) produces new quantities with physical meaning (amounts of energy). Stafleu speaks of projecting relationships from one relation-frame into an earlier relation-frame and discovering that equivalent relationships pertain there.
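The physics analogy can also be made concrete. Here’s a toy sketch (my illustration, not from the paper): a quantity type that permits only the arithmetic that preserves its physical meaning.

```python
from dataclasses import dataclass

# A toy illustration: a quantity type that defines only the arithmetic
# that keeps its physical meaning. Adding two energies yields an energy;
# multiplying them would yield joules-squared, which is no longer an
# energy, so no * is defined.
@dataclass(frozen=True)
class Energy:
    joules: float

    def __add__(self, other: "Energy") -> "Energy":
        return Energy(self.joules + other.joules)

    def __sub__(self, other: "Energy") -> "Energy":
        return Energy(self.joules - other.joules)

print(Energy(3.0) + Energy(2.0))   # Energy(joules=5.0)
# Energy(3.0) * Energy(2.0)        # TypeError: meaningless for energies
```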
This is only the beginning of an explanation of how ChatGPT and similar tools work, but in my view it’s the most important part. Essentially, these LLMs are pre-trained to represent the meanings of chunks of text by calibrating an enormous vector space (e.g. 12,288 dimensions in GPT-3) using a next-word prediction task over vast numbers of documents (such as all the English Wikipedia articles). Then, when the model is used, the user’s question (prompt) is encoded into a precise location in this enormous space. From there, a plausible response is generated, word by word, by moving around the space in ways that are highly likely according to the texts used for training, while also accounting for the syntax of both the question and the progressively generated response.
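We can watch this word-by-word generation happen with a small open model. The sketch below uses GPT-2 via Hugging Face’s transformers library – not ChatGPT itself, whose weights aren’t public – but the loop is the same in principle: score every possible next token, pick a likely one, append it, repeat.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2: a small, public ancestor of the models behind ChatGPT.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial flowers are"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):                              # generate ten tokens
    with torch.no_grad():
        logits = model(ids).logits[0, -1]        # a score for every token in the vocabulary
    next_id = torch.argmax(logits)               # greedy choice: the single most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and go round again

print(tokenizer.decode(ids[0]))
```

(Real chatbots sample from the probability distribution rather than always taking the top token – which is part of why their answers vary from run to run.)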
Machine learning systems like this are so complex that no-one can have a detailed understanding of why a model produces the outputs it does – and so it’s perhaps understandable that even many AI experts and technicians imagine we’re blurring the lines between humans and computers, on the verge of creating autonomous thinking beings. But another perspective, as I’ve tried to sketch here, portrays AI as the fruit of ingenious exploration of the coherence and potential of the created order. It can be good fruit if we use it as a tool for unfolding the cultural mandate, and bad fruit if we end up idolising it. Nuanced religious factors are surely at play – and that’s something to explore in a future post.
____________________
The featured image for this post was generated for me by the AI image generator at deepai.org, from the prompt: “Array of artificial daisies in different colors”.
- I’ve found the following resources to be helpful starting points:
– 3Blue1Brown (2024) Large Language Models explained briefly. YouTube. At: https://youtu.be/LPZh9BOjkQs (and others from this YouTube channel)
– Lee, T. & Trott, S. (2023) ‘A jargon-free explanation of how AI large language models work’. Ars Technica. At: https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/ ↩︎
- ‘Artificial’ in this phrase is clearly the kind that goes with flowers, not flavours. ↩︎
- Mikolov, T. et al. (2013) Efficient Estimation of Word Representations in Vector Space. arXiv: 1301.3781v3 ↩︎
- Stafleu, M.D. (1980) Time and Again: A Systematic Analysis of the Foundations of Physics. Toronto: Wedge. ↩︎