The revolution being brought about by generative artificial intelligence (“genAI”) is a profound one. The advent of writing tools like ChatGPT is having far-reaching impacts across many areas of the economy and our societies. Workers can now generate the text of emails, internal reports and even publications in seconds by instructing a virtual assistant to write for them. GenAI tools are particularly adept at computer programming, and one of the ironies of this revolution is that software engineers’ use of such tools to speed up coding could shrink the number of specialist jobs in that very field even before other kinds of role become obsolete.
Of course, it’s not that AI is autonomously steering our economies and societies. We don’t have the sci-fi scenario of robots taking over the world (which is, depending on whom you listen to, a far-fetched fantasy, a remote dystopia, or a brave new world just around the corner). But what is happening is that technical innovation fuelled by a mix of public funding (as in the case of OpenAI and its ChatGPT bot) and entrepreneurial speculation is rapidly changing labour relations. A fascinating long view of AI, taking in Marxist insights as well as current machine learning paradigms, is provided by Matteo Pasquinelli in The Eye of the Master: A Social History of Artificial Intelligence (2023), which I recommend.
Something else that I believe this revolution is not doing is growing our intelligence. The “A” in AI, I suggest, should be understood in the terms of the question we sometimes ask when looking at a floral display in a restaurant or hotel foyer: “Are those real or artificial?” Having studied plant sciences at university, I like to think I’m better than average at detecting artificial flowers. I do admit that the stakes in this challenge are usually low. Beyond impressing my companions when closer inspection leads them to agree with me, I don’t stand to gain much from spotting fake flowers at fifteen feet. Nonetheless, experience has helped me hone my skills. And this now also applies to reading texts that might have been produced by genAI. When assessing students’ assignments over the last year (and indeed, sometimes even when reading scholarly publications), I’ve found myself asking: is this the product of real or artificial intelligence? It is a similar question to the one about the flowers, but in this case I believe the answer matters far more.
Centres of higher education are an important focus for the socioeconomic turmoil being wrought by genAI. The traditional student went up to university to ‘read’ a subject and was assessed largely by their fluency in writing about it. Another irony of our time, however, is that the arrival of online tools that can write fluent essays tailored to any brief coincides with an increasing drive for ‘authentic assessment’ – i.e. setting assignments that mimic tasks students might be expected to perform in the workplace. This has meant the demise of traditional exams in favour of coursework – which the time-pressed or demotivated student might now all too easily delegate to a bot. But this might well happen in the workplace too – so why does it matter?
In a previous post, I suggested that AI, like other technologies, can be seen as a form of delegation that echoes God’s delegation to us. We are like artificial deities (I assume this is something of Jesus’ meaning in quoting Psalm 82:6), and in a similar manner, genAI tools are like artificial intelligences. But everyone knows that we aren’t really gods, and likewise it should be known that AI tools are not really intelligent. If we delegate tasks to them, this is not a kind of partnership, but rather a way to help us fulfil our responsibilities. As Christians, I assume we live by the ethic of the Kingdom of God, but my students and colleagues also need to reckon with the big picture in which they understand their lives. And a virtue ethic is surely a wise approach for all of us to take,1 because how we handle technology shapes us. If someone believes that genAI tools are genuinely intelligent and tries to delegate their thinking, understanding and learning to them – perhaps by relying on such tools to craft essays, draft reports and prepare for job interviews – there is a real risk that their abilities to write, to organise ideas and even to speak cogently will atrophy. Indeed, I see a real risk that over-reliance on AI will diminish one’s own intelligence – a serious concern for those of us in education. Another psalm passage comes to mind: “The idols of the nations are silver and gold, made by human hands… Those who make them will be like them, and so will all who trust in them.”2
But perhaps I’m getting ahead of myself. In a future post I will consider more carefully the ways in which uses of genAI may help or hinder us in our scholarly abilities and responsibilities.
_____________________
- I’ve been helped here by Tom Wright’s Virtue Reborn (SPCK, 2010). ↩︎
- Psalm 135: 15,18. The intervening verses refer to mouths that cannot speak and ears that cannot hear, but presumably even such abilities would not redeem these entities! ↩︎
Image: Generated by deepai.org, from the prompt “A robot writing an essay”.