What Happens When People Don’t Understand How AI Works:
Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
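The quoted passage's description of LLMs as "making statistically informed guesses about which lexical item is likely to follow another" can be illustrated with a toy sketch. This is not how a real LLM is built (real models use neural networks trained on vast corpora, not word counts), but it shows the basic idea of next-word prediction by frequency; the tiny corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: count which word follows which in a tiny
# made-up corpus, then "generate" by sampling the next word in
# proportion to how often it followed the previous one.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word, rng=random):
    # Sample a continuation weighted by observed frequency --
    # a statistically informed guess, with no understanding involved.
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("the", random.Random(0)))  # one of "cat", "mat", "fish"
```

The point of the sketch is that nothing in it "knows" what a cat is; it only tracks which words tend to follow which, which is the quoted article's point scaled down to a few lines.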
This point, from a very helpful article, captures my greatest alarm about the social impact of AI chatbots. A paper published over the weekend by researchers at Apple demonstrates that large language models (LLMs) aren’t “thinking” in the way many people imagine. They perform useful operations on a specific class of problems. They are not general intelligence models, and they are likely not getting there anytime soon.
But a model’s facility with language has a consequence: human beings, who are hard-wired with an imagination keyed toward relationship, are almost guaranteed to fall into the trap of thinking that chatting with an LLM is equivalent to having a relationship with a sentient being.
I’m seeing that now in conversations with people. AI is a pressing concern for the Church, and I think the immediate need is pastoral, not theological. That isn’t to say there is no need for ethical and theological thinking about the developing field of Artificial Intelligence, but to my mind the priority needs to be the pastoral.
Agreed absolutely, Bishop Nick. I have already had two people who developed “relationships” online, which almost certainly have been with AI. A great concern.
Is there any hope/movement/interest in replacing the word “intelligence” in AI, since this article seems to say that there is not really intelligence in AI? I get that people think they are talking with a person, and it is easy to fall prey: for example, getting frustrated while trying to get an online answer to a problem from a company. You think a person is really typing an answer to you, but they are not! There’s a lot to learn.
I’ve seen people use the term “thinking machine” – maybe that’s better than implying that there’s an intelligence behind its output?
Thank you. Well said. I can’t imagine AI putting words in my mouth or on my paper. I want my words to be mine.