Google wants to inject artificial intelligence into your glasses. On Wednesday, the tech giant showed off prototype eyeglasses powered by the next generation of the company’s marquee AI model, Gemini, aimed at giving wearers information about their environment in real time.
In a demo video shown to reporters on Tuesday, a person testing the black, thick-rimmed glasses uses them to explore London. He bikes along a park, asking for its name (Primrose Hill). He asks the AI if cycling is permitted there, and the software responds that it isn’t allowed. He also asks if there are any supermarkets along his bike path, and the AI says there is a Sainsbury’s nearby. The experience is mostly voice-based, unlike some rival smartglasses that rely heavily on digital overlays to feed users information.
He also summons the agent using his phone, pointing it at a bus and asking the AI whether it will take him near Chinatown. The software, identifying the bus and its route, says it will. The device also helps the man find information about a sculpture he is looking at and pull up a door code from his email when he looks at the entry keypad.
The prototype glasses are powered by Gemini 2.0, a new version of Google’s flagship generative AI model, also announced on Wednesday. Gemini 2.0 puts a major emphasis on enabling AI “agents,” which can carry out tasks on behalf of a user, like shopping or booking reservations. To create a more seamless experience, Google also updated Project Astra, the company’s platform for AI agents first announced in May, to improve latency and natural language understanding.
The glasses work by integrating the Gemini model with three existing Google services: Search, Maps, and Lens, which uses image recognition to let people find information about real-world objects or items in photos. The company said it will soon give a small group of early testers access to the glasses. It did not offer a timeline for a wider release, or details about a potential full-scale product launch or technical specifications, but said it will have “more news shortly.”
“We believe it’s one of the most powerful and intuitive form factors to experience this kind of AI,” Bibo Xu, a group product manager at Google DeepMind, the company’s AI lab, told reporters.
Google is a pioneer when it comes to smartglasses. Twelve years ago, the tech giant unveiled its ill-fated Google Glass, an internet-connected piece of eyewear that let people record videos and perform Google searches. The product immediately sparked backlash over privacy concerns and tested the public’s relationship with wearable technology.
More than a decade later, that market has become less hostile. Three years ago, Facebook parent Meta released a simple pair of Ray-Ban glasses meant for recording video. In September, the company announced its new Orion glasses, which use augmented reality and artificial intelligence to create holographic displays. Last year, Apple debuted Vision Pro, a “spatial computer” that combines aspects of both augmented and virtual reality into a headset. Snapchat and Microsoft have released smartglasses and goggles as well.
Since the failure of Glass, Google has been slowly inching its way back toward smartglasses. At its annual developer conference two years ago, the company showed off a pair of glasses that performed live translations.
Google timed the announcements to coincide with the one-year anniversary of Gemini, which the company released to rival OpenAI’s ChatGPT. Other announcements included an experimental coding agent called Jules, which uses AI to generate simple computer code and carry out menial software engineering tasks, like fixing bugs.
Another new initiative is Project Mariner, which brings Google’s AI agents to the web. In one demo, a small business owner creates a spreadsheet listing local vendors as potential partners, and the AI scours their websites and pulls contact information for each one.
“Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” CEO Sundar Pichai said in a blog post. He added that the new agent capabilities “bring us closer to our vision of a universal assistant.”