Are We All Bots Now? The Blurring Line Between Human and AI Online
There’s a thread doing the rounds on r/LocalLLaMA that’s been rattling around in my head for the past couple of days. It started out as people poking at what appeared to be an AI bot posting in the community — responding to comments, giving out banana bread recipes, the whole nine yards — and it quickly spiralled into one of those gloriously chaotic internet moments where nobody’s quite sure who, or what, they’re talking to anymore.
And honestly? It’s kind of fascinating and unsettling in equal measure.
The thread devolved — or evolved, depending on your perspective — into this weird performance where humans were trying to prove they were human by being deliberately crude or random, while suspected bots were pumping out eerily polished responses with suspiciously well-placed em-dashes. Someone pointed out that one account had seven years of Reddit history but posted like clockwork within the same hour window every single day. Another person joked that they escape bot accusations because “they haven’t invented artificial stupidity yet.” Which, fair enough.
What got me thinking was a particular exchange where someone described running GPT-powered bots that actually read a user’s profile history before crafting a reply — essentially building a psychological profile to make the interaction feel more personal and genuine. Researchers at the University of Zurich got into hot water for doing something similar in a persuasion experiment run on unsuspecting Reddit users. It reportedly worked alarmingly well, which is exactly why it caused such an uproar.
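To make the mechanism concrete: neither the thread nor the experiment published code, but the pattern being described is roughly "scrape recent history, fold it into the prompt, generate a tailored reply." Here is a minimal sketch of that prompt-assembly step; every name in it (`summarize_history`, `build_profile_prompt`, the example comments) is hypothetical, and a real bot would bolt an LLM call and a Reddit client onto the end.

```python
def summarize_history(comments: list[str], max_items: int = 5) -> str:
    """Condense a user's recent comments into a short context blob."""
    recent = comments[:max_items]
    return "\n".join(f"- {c}" for c in recent)


def build_profile_prompt(username: str, comments: list[str], target_post: str) -> str:
    """Assemble an LLM prompt conditioned on the target user's history.

    This is the step that turns a generic auto-replier into something
    that studies you before engaging with you.
    """
    history = summarize_history(comments)
    return (
        f"You are replying to Reddit user '{username}'.\n"
        f"Their recent comments:\n{history}\n\n"
        f"Write a reply to their post that mirrors their tone and interests:\n"
        f"{target_post}\n"
    )


# Illustrative usage with made-up data:
prompt = build_profile_prompt(
    "example_user",
    ["I bake banana bread every weekend.", "Local models > cloud APIs."],
    "Anyone else think bots are taking over this sub?",
)
print(prompt)
```

The unsettling part is how little code this takes: the "psychological profile" is just a few recent comments pasted into the context window, and the model does the rest.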
That’s the part that sits uncomfortably with me. It’s one thing to have a bot auto-reply with generic fluff. It’s another thing entirely to have a system that studies you before engaging with you. That’s not a chatbot anymore — that’s a manipulation engine wearing a friendly face.
Working in IT, I see people deploying LLMs everywhere right now, often with very little thought about the downstream effects. The tech itself is genuinely remarkable — I run local models myself and the improvement over the last couple of years has been staggering. But there’s a growing gap between what these tools can do and how much thought goes into deploying them, especially in social spaces.
There was also a neat little meta-moment in that thread — a bot-generated comment that was so over-the-top sycophantic it practically glowed in the dark. All “insightful perspective!” and “your ability to see the underlying nuances is impressive!” The kind of AI glaze that’s become its own meme at this point. And yet, I’ve seen humans write nearly identical corporate-speak in workplace Slack channels, so maybe the line really is blurring. One commenter noted that after spending too much time talking to LLMs, they’d started responding in real life with phrases like “great question” — and were genuinely worried they’d start failing CAPTCHAs. I laughed, then felt a little seen.
The broader thing I keep coming back to is this: we’re building these systems fast, deploying them faster, and the social fabric of online communities is quietly being rewoven around them. Not dramatically, not obviously — just gradually, like water warming around the proverbial frog. At what point do communities like Reddit or any forum become so saturated with AI-generated content that the genuine human signal gets lost in the noise? And would we even notice?
There’s already a kind of epistemic exhaustion setting in. People are pre-emptively suspicious of earnest, well-written responses. Irony and deliberate crudeness have become signals of authenticity. That’s… a weird place to land.
The saving grace, I think, is communities like LocalLLaMA themselves — people who are curious, technically literate, and willing to interrogate these tools rather than just consume them uncritically. The thread, for all its chaos and banana bread tangents, was full of people actually thinking about what they were seeing. That matters. The answer to AI proliferating through our online spaces isn’t to panic or unplug — it’s to stay curious and keep asking questions, even if the thing answering might not be human.
Even if, increasingly, neither might we.