Google DeepMind's Gemini AI won a gold medal at the International Mathematical Olympiad by solving complex math problems using natural language, marking a breakthrough in AI reasoning and human-level performance.
How many Google AI researchers does it take to screw in a lightbulb? A recent research paper detailing the technical core behind Google's Gemini AI assistant may suggest an answer, listing an eye-popping 3,295 authors.
It's a number that recently caught the attention of machine learning researcher David Ha (known as "hardmaru" online), who revealed on X that the first 43 names also contain a hidden message. "There's a secret code if you observe the authors' first initials in the order of authorship," Ha wrote, relaying the Easter egg: "GEMINI MODELS CAN THINK AND GET BACK TO YOU IN A FLASH."
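The trick Ha describes is simple to reproduce: take each author's first initial, in order of authorship, and concatenate them. A minimal sketch (the names below are invented placeholders, not the paper's actual author list):

```python
def first_initials(names):
    """Concatenate the first initial of each name, in order."""
    return "".join(name[0] for name in names)

# Hypothetical six-author list whose initials spell "GEMINI";
# the real paper's first 43 authors spell the full sentence.
authors = ["Grace Hu", "Emil Ko", "Mona Li", "Ivan Wu", "Nina Yu", "Ian Zo"]
print(first_initials(authors))  # → GEMINI
```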
The paper, titled "Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities," describes Google's Gemini 2.5 Pro and Gemini 2.5 Flash AI models, which were released in March. These large language models, which power Google's Gemini chatbot assistant, feature simulated reasoning: they produce a string of "thinking out loud" text before generating a response, which can help them work through more difficult problems. That explains "think" and "flash" in the hidden text.
Google DeepMind launches Weather Lab platform for AI hurricane forecasting, showing improved accuracy in early tests with U.S. National Hurricane Center partnership.
In the AI world, a vulnerability called a "prompt injection" has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability (the digital equivalent of whispering secret instructions to override a system's intended behavior), no one has found a reliable solution. Until now, perhaps.
Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.
The new paper grounds CaMeL's design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.
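The core idea can be illustrated with a toy capability check: values parsed from untrusted content carry provenance labels, and a policy inspects those labels before any tool call executes. This is a minimal sketch of the pattern, not Google's actual implementation; every name here (`Tainted`, `quarantine_llm`, `send_email`) is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    """A value paired with capability labels describing its provenance."""
    value: str
    sources: frozenset = frozenset()

def quarantine_llm(untrusted_text: str) -> Tainted:
    # A "quarantined" model may parse untrusted content (e.g. a web page),
    # but its output is labeled so policies can restrict where it flows.
    return Tainted(value=untrusted_text, sources=frozenset({"untrusted"}))

def send_email(recipient: Tainted) -> str:
    # Policy check at the tool boundary: arguments derived from untrusted
    # content cannot pick the recipient, no matter what the model "decided".
    if "untrusted" in recipient.sources:
        raise PermissionError("untrusted data cannot choose the recipient")
    return f"sent to {recipient.value}"

trusted = Tainted("alice@example.com")          # came from the user's own command
injected = quarantine_llm("attacker@evil.com")  # parsed from attacker-controlled text

print(send_email(trusted))  # allowed
try:
    send_email(injected)    # blocked by the capability policy
except PermissionError as e:
    print("blocked:", e)
```

The point of the design is that the security decision never depends on the model behaving itself: even if a prompt injection fully controls the quarantined model's output, the label travels with the data and the policy at the tool boundary still refuses the action.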