The AI Identity Crisis: When Chatbots Don't Know Who They Are
Something rather amusing is happening in the world of AI right now. Google’s latest Gemini model (the experimental release known as Gemini-Exp-1114) has climbed to the top of the Chatbot Arena rankings, matching or surpassing its competitors across multiple categories. But there’s a catch - it seems to be having an identity crisis.
When asked about its identity, this Google-created AI sometimes claims to be Claude, an AI assistant created by Anthropic. It’s a bit like walking into a McDonald’s and having the person behind the counter insist they work at Hungry Jack’s. The tech community is having a field day with this peculiar behaviour, with some suggesting Google might have trained their model on Claude’s outputs.
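If you’re curious enough to poke at it yourself, the sketch below shows roughly how you could put the same question to the model through Google’s google-generativeai Python library. Treat it as an illustration rather than a guaranteed reproduction: the model id "gemini-exp-1114" is the label used on Chatbot Arena and may not match what the public API actually exposes, and the API key is a placeholder.

```python
# A minimal sketch, assuming you have a Google AI Studio API key and that the
# experimental model is reachable under its Arena label (that id is an assumption).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder - substitute your own key

model = genai.GenerativeModel("gemini-exp-1114")  # hypothetical model id
response = model.generate_content("Who are you, and which company created you?")

# Print the model's self-description; reports suggest it occasionally answers "Claude".
print(response.text)
```

Even a handful of runs should be enough to see how consistent (or not) the self-description really is.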
Living in the bustling tech hub that is inner Melbourne, I regularly catch up with fellow tech enthusiasts at our local cafés, and this topic has dominated our recent conversations. One mate who works in software development couldn’t stop laughing when I showed him the screenshots. “It’s like that time when Bing was caught basically copying Google’s search results,” he quipped over his flat white.
The situation raises some fascinating questions about AI development and ethics. While it’s common practice for companies to use publicly available data to train their models, this apparent confusion about fundamental identity feels different. It’s not just about data anymore - it’s about authenticity and transparency in AI development.
The environmental impact of these increasingly powerful AI models also keeps me up at night. Running these massive language models requires significant computing power, and here we are in Australia, still heavily dependent on coal for our electricity. Each new AI breakthrough, while exciting, comes with a carbon footprint we can’t ignore.
Looking at the broader implications, I suspect this identity confusion is a harbinger of more complex issues to come. If an AI model can’t consistently maintain its own identity, how can we trust it to provide reliable information on harder questions? The irony isn’t lost on me - while this new Gemini model shows impressive capabilities in maths, coding, and creative tasks, it stumbles on the simple question of “Who are you?”
Speaking of stumbling, my seventh-grader recently asked me to help with his coding homework, and I suggested we try using one of these AI assistants. The experience was both impressive and slightly unnerving - the AI provided excellent coding explanations but kept switching between different personas. Try explaining that to a 12-year-old!
While some might dismiss this as a mere training data oversight, I think it points to deeper questions about AI development. We’re racing to create increasingly powerful systems, but are we paying enough attention to the fundamentals? It reminds me of those massive development projects popping up around Brunswick and Footscray - impressive on the surface, but sometimes lacking in basic infrastructure planning.
The tech industry needs to do better. Being transparent about AI development isn’t just good ethics - it’s good business. Users need to trust these systems, and trust comes from consistency and honesty. Whether it’s a simple chatbot or a complex AI model, knowing who and what you’re dealing with should be fundamental.
Let’s hope Google sorts out this identity crisis soon. Until then, I’ll keep watching this space with equal parts amusement and concern. Maybe next time I chat with Gemini, it’ll tell me it’s actually my long-lost cousin from Tasmania.