AI (artificial intelligence) can beat us at Go and chess. AI let an automated car kill a cyclist. AI can analyse X-ray scans very well. ChatGPT can write essays for students, but they are bland and full of errors (hallucinations).
A few weeks ago I outlined how a Biblical perspective that celebrates the diversity of meaning in Creation makes debate on whether AI can be like humans more nuanced and fruitful by comparing humans and AI in each aspect. I also asked five other questions about prospects for AI. In order to answer these, we will need some understanding of how AI technology works and the importance of humans in AI systems. That is the topic of today’s post, allowing us to return to those intriguing questions next time.
How AI works
The following figure shows roughly how AI works. The AI system is a software engine operating with a knowledge base, interacting with users via a user interface (UI) and sometimes with data from the world via sensors or databases. The knowledge base encapsulates knowledge about how it should operate in its intended application, based on various technologies, like inference nets, sets of logical statements, sets of associations, or so-called neural networks. The engine is designed and written to process the encapsulated knowledge according to the technology employed so as to respond to users (or the world).
For example, at the core of ChatGPT is a huge array of probabilistic associations between phrases and words found in billions of statements taken off the Internet (with a lot more around this, such as images). Its engine uses this both to ‘understand’ user questions or instructions and to generate replies or even essays.
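To make the idea of "probabilistic associations between words" concrete, here is a deliberately tiny sketch in Python. It counts which word tends to follow which in a toy corpus and predicts the most likely next word. Real systems like ChatGPT use vast neural networks over far richer representations, not a lookup table like this; the corpus and function names here are mine, purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "billions of statements taken off the Internet".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram associations).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequently observed next word for a given word."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # → "on"
print(most_likely_next("on"))   # → "the"
```

Even at this toy scale, the knowledge base (the `follows` table) and the engine (the prediction function) are separate, which mirrors the architecture described above.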
Two kinds of AI
There are two kinds of AI, two ways in which the knowledge base can be constructed, each with a different role for the AI developer: human knowledge elicitation and machine learning. In my early work as an AI developer in the 1980s, we would manually build the knowledge base by interviewing human experts and expressing the elicited knowledge in an appropriate computer language. Knowledge engineering, as it was called, was a labour-intensive process, in which good knowledge engineers would winkle out tacit knowledge and rare exceptions and incorporate them into the knowledge base.
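A hand-built knowledge base of this kind might look like the following sketch: "if-then" rules written down after interviewing an expert, including one of those rare exceptions a good knowledge engineer would winkle out. The domain, rule contents, and names are invented for illustration; real systems of the era used dedicated languages and inference engines rather than plain Python.

```python
# Elicited rules: (conditions, conclusion). All conditions must hold.
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    # A rare exception the expert mentioned only when pressed:
    ({"suspect_flu", "recently_travelled"}, "recommend_specialist"),
]

def infer(facts):
    """Forward-chain over the rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "recently_travelled"}))
```

Note that every rule here was typed in by a human; the quality of the system depends entirely on how carefully the knowledge was elicited and expressed.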
Today’s machine learning AI (MLAI) bypasses the human processes of eliciting and expressing knowledge, by detecting patterns in masses of training data supplied to it by AI developers, such as from Reddit in the case of ChatGPT. I like the explanation given by Paul McCartney of how they used MLAI to extract John Lennon’s voice from a poor-quality recording; they told the AI system, “That’s voice. That’s guitar. In this recording, lose the guitar.”
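The contrast with the hand-built approach can be sketched in a few lines: instead of writing rules, the developer supplies labelled examples ("That's voice. That's guitar.") and the program finds the pattern itself. Here each "sound" is reduced to a pair of made-up numbers, and the learned "pattern" is just the average point per label; this is a toy stand-in, not how audio separation actually works.

```python
def train(examples):
    """Learn one average point (centroid) per label from labelled examples."""
    centroids = {}
    for label, points in examples.items():
        n = len(points)
        centroids[label] = tuple(sum(p[i] for p in points) / n for i in range(2))
    return centroids

def classify(centroids, point):
    """Label a new point by whichever learned centroid it lies nearest to."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], point))

# Hypothetical labelled training data supplied by the developer.
training_data = {
    "voice": [(1.0, 0.2), (0.9, 0.3)],
    "guitar": [(0.1, 0.9), (0.2, 1.0)],
}
model = train(training_data)
print(classify(model, (0.95, 0.25)))  # → "voice"
```

Notice that no rule was ever written down; the pattern lives in the learned centroids, which is why the choice of training data matters so much.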
Why humans – and faith – are important
How well AI works depends on the quality of the knowledge in its knowledge base and, of course, on the engine processing it correctly. Since human beings design both the engine (the algorithm designer) and the knowledge base (the AI developer), and also use the AI system, even if indirectly, AI cannot be properly understood without taking human intention and interpretation into account. The quality of early AI depended on sensitive elicitation and close relationships of trust with experts. Sadly, as AI became fashionable, less careful people became knowledge engineers, so that many AI systems did not work well. The quality of MLAI depends on careful selection of training data and of the parameters by which patterns are learned.
For example, several human errors were responsible for the automated car killing the cyclist. The AI developers had not trained the AI system to recognise cyclists pushing rather than riding bicycles. This was partly because management had reduced funding for training. The safety driver in the car was not paying attention. And there was a lax safety culture in the company.
In both kinds of AI, the quality of the knowledge base is a human responsibility. This is where faith can make a difference. Being committed to Christ led me to be honest and careful, taking trouble to seek tacit and exceptional knowledge, rather than merely doing a job to a deadline. I tried to make the way the system worked serve the users and dignify them – and urged potential users to use it responsibly. The equivalent in MLAI today would be care in selecting data and pattern parameters.
Of course, many who are not Christians also take this care. Where I think faith might matter more is at a deeper level. Presuppositions and worldviews affect the way questions such as those I listed before are debated, and these have a religious quality. Often each question is addressed separately and from a utilitarian or purely academic viewpoint, but my Christian faith urges me towards an overall, integrative viewpoint. In my next post, I will show how to address the remaining five questions with one integrated framework.
- In automated AI the UI might be only a start/stop button, a few controls and data from sensors, but in most AI, like ChatGPT, there is more “dialogue” between users and AI systems.
- To say ChatGPT ‘understands’ is an example of attributing humanlike behaviour to a computer, the validity of which I discussed in my previous post.
- An excellent account of how ChatGPT works is available here at Ars Technica.
- MLAI knowledge bases are usually based on neural net technology or associations.
- Most philosophical and scientific thinking, especially in AI, takes what Dooyeweerd called an immanence standpoint, which lies at the root of ancient Greek and Humanistic thinking. This presupposes that it is valid to try to understand the essence of AI systems (e.g. “Can AI do X?”) with no reference to human meaning or actions. We shall see in Blog 3 that such questions need reference to spheres of meaning in which humans function. The validity of such spheres of meaning (aspects) presupposes a meaning-giver, i.e. a Creator.