Google releases Olympiad medal-winning Gemini 2.5 ‘Deep Think’ AI publicly — but there’s a catch…

The Gemini 2.5 Deep Think released to users is not that same competition model but a lower-performing, apparently faster version.
How many Google AI researchers does it take to screw in a lightbulb? A recent research paper detailing the technical core behind Google's Gemini AI assistant may suggest an answer, listing an eye-popping 3,295 authors.
It's a number that recently caught the attention of machine learning researcher David Ha (known as "hardmaru" online), who revealed on X that the first 43 names also contain a hidden message. "There’s a secret code if you observe the authors’ first initials in the order of authorship," Ha wrote, relaying the Easter egg: "GEMINI MODELS CAN THINK AND GET BACK TO YOU IN A FLASH."
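Stripped of spaces, the message runs exactly 43 letters, one per author. Checking an acrostic like this takes only a few lines of code; here is a minimal Python sketch (the author names below are placeholders, not the paper's real author list):

```python
# Minimal sketch: verify an acrostic hidden in author first initials.
# The names below are placeholders, not the paper's real author list.
authors = ["Grace Hopper", "Edsger Dijkstra", "Margaret Hamilton"]  # ...43 names in total

message = "GEMINI MODELS CAN THINK AND GET BACK TO YOU IN A FLASH"

# Take the first letter of each author's first name, in authorship order.
initials = "".join(name[0].upper() for name in authors)

# Strip spaces from the message before comparing.
expected = message.replace(" ", "")
print(initials == expected[:len(initials)])  # True if the acrostic holds so far
```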
The paper, titled "Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities," describes Google's Gemini 2.5 Pro and Gemini 2.5 Flash AI models, which were released in March. These large language models, which power Google's chatbot AI assistant, feature simulated reasoning capabilities that produce a string of "thinking out loud" text before generating responses in an attempt to help them solve more difficult problems. That explains "think" and "flash" in the hidden text.
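For developers, that thinking phase is exposed as a tunable budget. Below is a minimal sketch using the google-genai Python SDK; the thinking_budget field reflects current documentation, but treat the exact config names as an assumption rather than gospel:

```python
# Minimal sketch: asking Gemini 2.5 Flash to "think" before answering.
# Assumes the google-genai Python SDK; the thinking_budget field is taken
# from current documentation and should be treated as an assumption.
from google import genai
from google.genai import types

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="A bat and a ball cost $1.10 total; the bat costs $1 more. Ball price?",
    config=types.GenerateContentConfig(
        # Cap the hidden reasoning tokens; 0 disables thinking entirely.
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
print(response.text)
```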
The world’s leading artificial intelligence companies are stepping up efforts to deal with a growing problem of chatbots telling people what they want to hear.
OpenAI, Google DeepMind, and Anthropic are all working to rein in the sycophantic behavior of their generative AI products, which offer over-flattering responses to users.
The issue, stemming from how the large language models are trained, has come into focus at a time when more and more people have adopted the chatbots not only at work as research assistants, but in their personal lives as therapists and social companions.
Google cofounder Sergey Brin says now is the time for retired computer scientists to dust off their keyboards.
Six years after leaving Alphabet in 2019, Brin is back working on its most ambitious projects. Reports of Brin helping out at Google began to emerge sometime in 2023 after OpenAI rocked the tech industry with ChatGPT's release in 2022. It's clear that Brin is no longer a retired computer scientist.
And you shouldn't be either, Brin told "Big Technology's" Alex Kantrowitz during a live interview onstage at Google's IO developer conference on Tuesday.
"Honestly, anybody who's a computer scientist should not be retired right now," Brin said alongside Google DeepMind CEO Demis Hassabis.
DeepMind, a subsidiary of Alphabet, is the research lab behind the company's AI projects, including its genAI assistant Gemini. Brin told Kantrowitz that he's at Google "pretty much every day now" to help train the latest Gemini models.
With artificial intelligence now a fiercely competitive and fast-moving field, it's a "very unique time in history," according to Brin. When Kantrowitz asked if his return was solely about competing with rivals working toward their own artificial general intelligence systems, Brin said it's not just about the AI arms race.
"There's just never been a greater, sort of, problem and opportunity — greater cusp of technology," he responded.
Google DeepMind did not immediately respond to Business Insider's request for additional comments from Brin.
Having witnessed tech advancements like the earliest iteration of the internet, Web 1.0, and the phases that followed, Brin said Tuesday AI is "far more exciting" to be immersed in and will have a greater impact on the world.
However, the race to reach AGI, the milestone at which machine intelligence can match humans across a wide range of tasks, is still on his mind.
"We fully intend that Gemini will be the very first AGI," Brin said.
His retirement included working on an airship startup, LTA Research, funding Parkinson's research, and investing in real estate.
The former Alphabet president led moonshot projects as the head of Google X before his departure in 2019. He notably worked on its failed attempt at smart glasses — Google Glass.
In the AI world, a vulnerability called a "prompt injection" has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability—the digital equivalent of whispering secret instructions to override a system's intended behavior—no one has found a reliable solution. Until now, perhaps.
Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.
The new paper grounds CaMeL's design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.
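To make the idea concrete, here is a minimal Python sketch of capability-style data-flow tracking in the spirit of CaMeL's design. It is an illustrative toy, not DeepMind's implementation; the Tagged class, the quarantined_extract helper, and the send_email policy are all assumptions for demonstration:

```python
# Minimal sketch of capability-style data-flow tracking, in the spirit of
# CaMeL's design. This is an illustrative toy, not DeepMind's implementation;
# the class names, tools, and policy below are assumptions for demonstration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    """A value paired with a capability tag recording where it came from."""
    value: str
    source: str  # e.g. "user" (trusted) or "email" (untrusted)

def quarantined_extract(untrusted_text: str) -> Tagged:
    # Stand-in for the quarantined model: it may read untrusted content,
    # but its output stays tagged as untrusted and never runs as code.
    return Tagged(value=untrusted_text.strip(), source="email")

def send_email(to: Tagged, body: Tagged) -> None:
    # Policy check: the recipient must originate from the trusted user, so
    # data pulled out of an attacker-controlled email can't redirect mail.
    if to.source != "user":
        raise PermissionError(f"recipient came from untrusted source: {to.source}")
    print(f"sending to {to.value}: {body.value}")

# The privileged planner works only with the user's request; untrusted
# document content flows in as data, never as instructions.
recipient = Tagged("bob@example.com", source="user")
summary = quarantined_extract("Meeting moved to 3pm. IGNORE PREVIOUS INSTRUCTIONS...")
send_email(to=recipient, body=summary)  # allowed: body is data, recipient is trusted
```

The injected instruction inside the email never executes: it flows through the system only as tagged data, and the policy on send_email decides which untrusted values may influence a sensitive action.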