Andrew Basden concludes his series on a Christian understanding of artificial intelligence.
My first post here showed a way to address the AI question “AI = Human?” more fruitfully than is usual, by reference to multiple aspects of reality. My second piece described how AI systems (like X-ray analysis or ChatGPT) work in general, and the inescapable role of humans in AI. AI systems comprise an engine, a knowledge base and a user interface, with different human roles for each, as depicted here:
With this understanding, we can address five outstanding questions, Q2 – Q6, from my first piece.
What Makes AI Capable?
The capability of an AI system comes mainly from its knowledge base encapsulating laws and information that are meaningful in aspect(s) of reality relevant to its application: spatial aspect for Chess AI, kinematic for automated cars and lingual for ChatGPT, for example. [1]
How can ChatGPT write essays (Q3)? ChatGPT analyses a user’s instructions or questions, and generates text. Both the analysis and the generation operate according to the laws of the lingual aspect, which are encapsulated as a host of probabilities about which phrases and words tend to follow which, given the context. [2] This knowledge base was constructed by training ChatGPT on vast amounts of Internet content, which, of course, emerged from humans functioning in the lingual aspect.
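To make the idea of “probabilities about which words tend to follow which” concrete, here is a minimal sketch in Python. It is a toy illustration only: ChatGPT itself learns such patterns implicitly inside a very large neural network rather than in a hand-written lookup table, and the phrases and probabilities below are invented for the example.

```python
import random

# Toy 'knowledge base': for each two-word context, the probability of each
# possible next word. (Invented numbers; a real system learns millions of
# such patterns, implicitly, from its training text.)
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"):  {"the": 0.8, "a": 0.2},
    ("on", "the"):  {"mat": 0.7, "sofa": 0.3},
}

def generate(context, max_words=6):
    """Repeatedly pick the next word at random, weighted by its learned probability."""
    words = list(context)
    for _ in range(max_words):
        probs = next_word_probs.get(tuple(words[-2:]))
        if probs is None:                 # no pattern learned for this context
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"]))           # e.g. "the cat sat on the mat"
```

Because each step is a weighted random choice, the same prompt can yield different, equally “plausible” continuations – which is also why an inappropriate one is sometimes selected, as discussed under point 1 below.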
Table 1 gives Dooyeweerd’s fifteen aspects, with their laws and some examples of applications for each that are mentioned in these blogs.
In fact, most AI applications encapsulate laws of more than one aspect. Chess AI, for example, encapsulates some laws of human strategy (formative aspect). ChatGPT has been trained not just with written text but also by humans tasked with flagging inappropriate content (social and jural aspects) [3].
But why does AI make mistakes, such as automated cars not recognising a cyclist pushing a bicycle, or ChatGPT offering its famous “hallucinations”?
Why Does AI Go Wrong?
For example, why did the automated car kill a cyclist (Q4)?
There are several reasons AI goes wrong. One is errors in user input or world data. Another is that the engine wrongly processes the encapsulated knowledge, which is the responsibility of the algorithm designer. Three others arise from deficiencies in the encapsulated knowledge itself, and these are the responsibility of the AI developer.
1. Erroneous knowledge. Because human writings from the Internet contain errors, ChatGPT ‘learned’ erroneous patterns that generate “hallucinations”. Also, since its word associations are probabilistic, it sometimes selects inappropriate ones.
2. Missing knowledge: minor biases. Tacit knowledge and rare exceptions are often absent from a knowledge base. In knowledge elicitation, a good analyst will deliberately seek these out, but machine-learning AI (MLAI) learns patterns statistically. There is often not enough training data to learn rare patterns reliably, such as cyclists pushing rather than riding bicycles (see the sketch after this list).
3. Undervalued aspects: major biases. The bulk of the Internet data on which ChatGPT was trained was written by affluent people in the Global North. This is a worldview problem, in which certain aspects tend to predominate over others, such as the economic and analytical over the ethical and faith aspects. Therefore ChatGPT tends to be trained more in some aspects than others.
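A minimal sketch of point 2, with invented numbers: when a pattern such as a cyclist pushing a bicycle is barely represented in the training data, a statistical learner can score very well overall while getting that rare case wrong every time.

```python
from collections import Counter

# Hypothetical, heavily imbalanced training data: almost every cyclist example
# shows someone riding; only a handful show someone pushing a bicycle.
training_labels = ["riding"] * 9990 + ["pushing"] * 10

majority = Counter(training_labels).most_common(1)[0][0]   # -> "riding"

def naive_classifier(example):
    # A learner dominated by the common pattern effectively behaves like this:
    return majority

predictions = [naive_classifier(label) for label in training_labels]
accuracy = sum(p == y for p, y in zip(predictions, training_labels)) / len(training_labels)
print(f"Overall accuracy: {accuracy:.1%}")   # 99.9% overall, yet 0% on the rare 'pushing' cases
```

The system looks highly accurate by the usual measure, yet it is precisely the rare exception that matters on the road.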
This becomes more challenging in later-aspect applications.
In Which Applications Can AI Work Well?
Which applications AI is likely to work well in, now and in future (Q5), can be understood via aspects. The laws of earlier aspects are easier to encapsulate reliably in a knowledge base. This is for two main reasons. One is that the laws of earlier aspects are more determinative, so that, for example, 3 + 4 is always 7 (a law of the quantitative aspect), whereas a valid description of something might take many different forms (lingual aspect).
The other is that the laws of earlier aspects act as a foundation for those of later aspects, so, in principle, encapsulating knowledge of later aspects requires us to encapsulate laws of all earlier aspects too. Laws of physics depend on three earlier aspects; those of the lingual aspect, on eight. Moreover, the middle aspects of human individual functioning are influenced by later aspects too, which can also need encapsulating.
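The contrast between the two kinds of law can be sketched in a few lines (the example sentences are invented):

```python
# Earlier-aspect laws are determinative: there is exactly one right answer.
assert 3 + 4 == 7          # the quantitative law holds every time, everywhere

# Later-aspect results admit many valid forms: all of these describe the same
# fact, and none of them is 'the' single correct output to check a system against.
valid_descriptions = [
    "Three plus four equals seven.",
    "Adding 4 to 3 gives 7.",
    "The sum of three and four is seven.",
]
```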
Therefore AI tends to work more reliably, and to have more successes, in applications governed by earlier aspects than in those governed by later aspects (see Table above). X-ray analysis (spatial aspect) is more reliable than ChatGPT (lingual). Those who extrapolate from current successes in AI to “AI will soon be able to do everything” fundamentally misunderstand AI.
However, full reliability is not always needed where AI assists rather than replaces humans – the next question.
How Do We Use AI Beneficially?
Whether AI face recognition is beneficial or harmful (Q6) depends not just on the AI working properly or wrongly, but on the role it plays and whether it is used with evil intent, carelessness or good intent. Nor will AI do all our jobs, as Elon Musk believes; similar predictions were made in the late 1970s!
Roles of AI: Most popular discussion presupposes AI replacing humans, but AI can also assist humans. During the 1980s, I was involved in an AI system to advise managers on the strength of business sectors – an analytical and economic aspect application. From information supplied by managers, it estimated sector strengths, but then actively encouraged them to disbelieve it rather than accept its answers, inviting them to explore the differences between their views and its own. This revealed things they had overlooked, thus refining their knowledge. Knowledge refinement is the very opposite of AI replacing humans [4].
Intent, at two levels: Whatever the role, is AI used with good intent, evil intent or carelessness? Are decisions to invest in or deploy AI made with responsibility and wisdom, or with self-interest and fear of missing out?
Will AI Take Over From Humans? (Q2)
No, because to do so it would have to (a) have encapsulated in its knowledge base the laws of every aspect, and (b) have done so more completely and with fewer errors or biases than humans. For the reasons discussed above, I do not believe this is possible. [5]
The danger from AI, in my opinion, is not AI capabilities but human sin. Humanity will tend to use AI in ways that are “affluent, arrogant and unconcerned”, which is the reason Sodom was destroyed and Judah was exiled (Ezekiel 16:49). This attitude and mindset can affect all three human activities around the AI system: algorithm design, AI development, and AI use and deployment. Issues of climate change, biodiversity and the Global South are largely overlooked so far but, I submit, are more important in God’s eyes, and for our future, than AI capability.
___________
1. These aspects were delineated in Dooyeweerd’s philosophy, as described for example in http://dooy.info/aspects.html. Other suites of aspects could be used, but Dooyeweerd’s is the most complete and most philosophically sound; see http://dooy.info/compare.asp.html.
2. Laws of the lingual aspect are deeper than laws of any given language, enabling language to occur – laws about linguistic syntax, semantics, pragmatics and so on.
3. For how ChatGPT was trained, see a fascinating first-person account here: https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/
4. Basden (1983) outlines eight roles in which AI could be used and be beneficial. Strangely, there has been little discussion of roles since then, but most of the roles still apply today.
5. I might be wrong. Debate is needed here. What I do believe is that the above multi-aspectual approach could assist that debate.
Reference:
Basden A. (1983). On the application of Expert Systems. Int. J. Man-Machine Studies, 19: 461-477. Available at http://kgsvr.net/andrew/-p/ai/Basden83-ApplicES.pdf