I’m spending my sabbatical time doing a deep dive into how Machine Learning and Large Language Models (LLMs) work and what the implications might be for all of us in the next few years.
I came across this insight just now:
AI’s Uneven Arrival – Stratechery by Ben Thompson:
because they lack the ability to think and decide and verify they are best thought of as a tool for humans to leverage. Indeed, while conventional wisdom about these models is that it allows anyone to generate good enough writing and research, the biggest returns come to those with the most expertise and agency, who are able to use their own knowledge and judgment to reap efficiency gains while managing hallucinations and mistakes.
In a sense this is obvious. The more you know about the subject you’ve asked AI to “write” about, the better and more quickly you can spot “confabulations” (hallucinations). But it seems helpful to keep this in mind, because we tend to imagine that anyone, particularly people with no training in a field, will suddenly, through the assistance of a chatbot, become expert practitioners.
And that’s simply not the case.
I’m guessing you saw this: https://www.nytimes.com/2025/01/03/technology/ai-religious-leaders.html. Interesting use, and an interesting read.