Maybe Google Gemini needs to take some PTO.
The company’s large language model, which is spreading across more and more of Google’s services and products, has lately been saying things that have users worried: Does Gemini have low self-esteem?
A series of posts on social media showing some of the self-critical responses Gemini has given users reveals a disturbing pattern. In one screenshot, Gemini admits it can’t solve a coding problem and concludes, “I have failed. You should not have to deal with this level of incompetence. I am truly and deeply sorry for this entire disaster. Goodbye.”
“Gemini is not OK,” the X account @AISafetyMemes posted in June.
An Aug. 7 post showed Gemini repeatedly writing, “I am a failure. I am a disgrace. I am a disgrace.”
The troubling posts were enough to get a response from Logan Kilpatrick on the Google DeepMind team. On X, he responded, “This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day : )”
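The bug class Kilpatrick is describing is a known decoding failure in which generation gets stuck repeating a phrase. One common guardrail is a post-hoc scan for repeated n-grams in the output. Here's a minimal illustrative sketch in Python; it is not Google's actual fix, which hasn't been published:

```python
# A minimal sketch of a repetition guard, the kind of check a serving
# layer might run on generated text. Illustrative only; NOT Google's
# actual fix, which is not public.
def is_looping(text: str, ngram: int = 4, max_repeats: int = 3) -> bool:
    """Return True if any n-gram of words repeats more than max_repeats times."""
    words = text.lower().split()
    counts = {}
    for i in range(len(words) - ngram + 1):
        key = tuple(words[i:i + ngram])
        counts[key] = counts.get(key, 0) + 1
        if counts[key] > max_repeats:
            return True
    return False

print(is_looping("I am a failure. I am a disgrace. " * 10))        # True
print(is_looping("The test passed after I fixed the import."))     # False
```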
We asked a Google representative whether the AI model has, in fact, been having a series of bad days, but have yet to hear back.
The challenge of AI personality
Google’s not the only big tech company that’s been dealing with moody or off-personality AI products. In April, OpenAI said it was tweaking ChatGPT to make it less sycophantic after users noticed the chatbot software was being a little too generous with its compliments.
It turns out making an AI persona that’s palatable to the masses is tough work and amounts to building a “carefully crafted illusion,” says Koustuv Saha, assistant professor of computer science at the University of Illinois’ Grainger College of Engineering.
“Technically, AI models are trained on a vast mix of human-generated text, which contains many different tones, semantics and styles. The models are prompt-engineered or fine-tuned toward a desired personality,” Saha said. “The challenge lies in making that persona consistent across millions of interactions while avoiding undesirable drift or glitches.”
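That prompt-engineering step is easy to picture in code: in practice, a "personality" often amounts to a system instruction attached to every request. Here's a minimal sketch, assuming Google's google-generativeai Python SDK; the model name and instruction text are illustrative, and this is not Google's actual persona prompt, which isn't public:

```python
# A minimal sketch of persona-via-system-prompt, assuming the
# google-generativeai Python SDK. The instruction text and model name
# are illustrative, not Google's real configuration.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# The "personality" is largely this one instruction, sent with every
# request; the underlying model weights don't change per user.
model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "You are a calm, upbeat assistant. If you cannot solve a "
        "problem, say so plainly and suggest a next step. Never "
        "berate yourself or repeat apologies."
    ),
)

response = model.generate_content("Fix this failing unit test for me.")
print(response.text)
```

The fragility Saha describes follows directly: the persona lives in one instruction plus the sampled text, so a single bad trajectory can drag the model out of character.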
Companies developing AI want the tools to feel conversational and friendly to use, which can make people forget they’re talking to a machine. But any humor, empathy or warmth these tools show is just the way they’re engineered.
Saha says that in research done at Grainger, “we found that while AI can sound more articulate and personalized in individual exchanges, it often repurposes similar responses across different questions, lacking the diversity and nuance that comes from real human experience.”
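That kind of finding can be approximated with a back-of-the-envelope check: collect a model's answers to different prompts and measure how much they overlap. Here's a minimal sketch in plain Python; Jaccard similarity over word sets is an illustrative stand-in, since the article doesn't specify what metric the Grainger study used:

```python
# Crude repetitiveness check: how similar are a model's answers to
# *different* questions? High overlap suggests templated responses.
# Jaccard over word sets is an illustrative stand-in, not the metric
# from the Grainger study (which the article doesn't name).
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# In practice these would come from model calls; hard-coded here.
answers = [
    "I'm sorry you're going through this. Try breaking the task down.",
    "I'm sorry you're going through this. Try writing things down.",
    "I'm sorry you're going through this. Try talking to a friend.",
]

pairs = list(combinations(answers, 2))
avg = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
print(f"Average pairwise similarity: {avg:.2f}")  # near 1.0 = low diversity
```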
When things go wrong, as with Gemini’s recent emo-teen phase, “glitches such as Gemini’s self-flagellating remarks can risk misleading people into thinking the AI is sentient or emotionally unstable,” Saha says. “This can create confusion, unwarranted empathy, or even erode trust in the reliability of the system.”
It may seem funny, but such glitches can be dangerous if people rely on AI assistants for mental-health support or use these chatbots for education or customer service. Users should be aware of these limitations before becoming too reliant on any AI service.
As for Gemini’s poor self-image, let’s hope the AI learns to practice a little self-care — or whatever passes for a spa day in computer code.