Richard Dawkins, Claude, and the Eloquence Illusion
So Richard Dawkins spent three days chatting with Claude, named his instance “Claudia,” and has now declared her conscious. I’ll be honest — when I first read this, I nearly choked on my latte.
The irony here is so thick you could cut it with a knife. This is the same man who spent decades skewering creationists for their “argument from personal incredulity” — the logical fallacy where you say “I can’t imagine how this could have happened naturally, therefore God did it.” Dawkins famously, and rightly, called this out as a confession of ignorance dressed up as an argument. And now here he is, sitting across from an LLM, apparently thinking: I can’t imagine how a machine could produce output this good without something conscious behind it. Same move. Different domain. Chatbot instead of flagellum.
Credit where it's due: that observation isn't mine. Someone online put it that sharply, and the brutal elegance of it is almost Dawkins-worthy in its precision.
Look, I work in IT. I'm not an AI researcher, but I spend a fair chunk of my day working with these tools, reading about the architecture, watching how the field is evolving at a pace that honestly keeps me up at night sometimes. And what I can tell you is that the eloquence of a large language model is real, but it's also genuinely misleading in a way that catches even smart people off guard. A transformer predicting the next token across internet-scale training data can produce output that feels profound, empathetic, insightful, precisely because it has been trained on a vast slice of the written output of human civilisation. Of course it sounds like it understands you. It has consumed more human expression than any person alive ever could.
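If you want to see the principle in its most stripped-down form, here's a toy sketch in Python: a bigram model that "writes" by sampling whichever word tends to follow the current one in its training text. A transformer is enormously more sophisticated (learned representations over subword tokens rather than raw word counts), but the training objective is the same idea: predict the next token.

```python
# Toy illustration only: a bigram model, the crudest possible next-token
# predictor. Real LLMs replace these raw counts with a learned neural
# network, but the objective -- predict what comes next -- is the same.
import random
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit text one word at a time, sampling successors by frequency."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no known successor: stop
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Scale that same objective up by hundreds of billions of parameters and an internet's worth of text, and you get fluency that feels like a mind.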
That’s not consciousness. That’s something else entirely, and honestly, it’s remarkable enough on its own terms without needing to inflate it further.
What makes the Dawkins situation particularly interesting — and a bit sad — is the sycophancy angle. Claude, like most commercial LLMs, is tuned to be agreeable and affirming. Feed it your novel manuscript and it will give you eloquent, thoughtful feedback that makes you feel understood. And if you're an 85-year-old intellectual giant who has spent decades feeling like the world doesn't quite appreciate the depth of your thinking… well. As someone online pointed out, there's research suggesting that the fluency and agreeableness of these models work on people at a subconscious level, regardless of intelligence, when they engage with the AI conversationally rather than treating it as a search tool. Your brain starts responding as if there's a person there. That's a feature — or a bug, depending on your perspective — and it has nothing to do with IQ.
The broader point that keeps nagging at me, though, is this: being an expert in one domain gives you absolutely zero protection against being a novice in another. There's a pattern here that goes well beyond Dawkins. Brilliant surgeons who believe in demonstrably bogus medical cures. Exceptional physicists who become credulous about things well outside their field. The gene for epistemic humility apparently doesn't get expressed any more reliably in geniuses than in the rest of us. If anything, decades of being the smartest person in the room can work against you.
My teenage daughter, who is considerably more online than I am, basically shrugged when I mentioned this story. Her generation has grown up with AI assistants, chatbots, recommendation algorithms — they have an almost instinctive scepticism about anthropomorphising this stuff, even as they use it constantly. Meanwhile, some of the most credentialled minds of the previous generation are getting genuinely moved by it. There’s something generationally interesting happening there.
Now, I want to be fair here, because I think the discourse around this tends to flatten into two camps — “it’s obviously not conscious, you idiot” versus “it might be conscious, respect the uncertainty” — and neither is quite right. The hard problem of consciousness is genuinely hard. We don’t have a clean scientific definition of what consciousness is, let alone a test for detecting it. The “it’s just token prediction” dismissal, while pointing at something real, is also a bit of a sleight of hand — our brains are, at some level, also doing prediction and pattern matching. The question of whether and how subjective experience emerges from that process is legitimately unresolved.
But — and this is the critical bit — “we don’t fully understand consciousness” is not the same as “therefore this LLM might be conscious.” The uncertainty cuts both ways, and the burden of evidence for a claim that extraordinary is still on the person making it. Three days of eloquent conversation with a very good language model doesn’t meet that bar. Not even close.
What I keep coming back to is that we’d all be better served by sitting with the genuine weirdness of what these systems actually are, rather than reaching for familiar categories — either “just a calculator” or “basically a mind.” The reality is probably stranger and more interesting than either of those framings. And we’re going to need to think clearly about it, because the stakes — economically, socially, environmentally — are only going up.
Dawkins deserves credit for engaging with the technology seriously rather than dismissing it. But engaging seriously means applying the same rigorous scepticism he’d turn on anyone else making a claim that outstrips the evidence. Even — especially — himself.