❌

Normal view

Received before yesterday

Google hides secret message in name list of 3,295 AI researchers

17 July 2025 at 17:12

How many Google AI researchers does it take to screw in a lightbulb? A recent research paper detailing the technical core behind Google's Gemini AI assistant may suggest an answer, listing an eye-popping 3,295 authors.

It's a number that recently caught the attention of machine learning researcher David Ha (known as "hardmaru" online), who revealed on X that the first 43 names also contain a hidden message. "There’s a secret code if you observe the authors’ first initials in the order of authorship," Ha wrote, relaying the Easter egg: "GEMINI MODELS CAN THINK AND GET BACK TO YOU IN A FLASH."

The paper, titled "Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities," describes Google's Gemini 2.5 Pro and Gemini 2.5 Flash AI models, which were released in March. These large language models, which power Google's chatbot AI assistant, feature simulated reasoning capabilities that produce a string of "thinking out loud" text before generating responses in an attempt to help them solve more difficult problems. That explains "think" and "flash" in the hidden text.

Read full article

Comments

Β© PeterPencil via Getty Images

Researchers claim breakthrough in fight against AI’s frustrating security hole

16 April 2025 at 11:15

In the AI world, a vulnerability called a "prompt injection" has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerabilityβ€”the digital equivalent of whispering secret instructions to override a system's intended behaviorβ€”no one has found a reliable solution. Until now, perhaps.

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

The new paper grounds CaMeL's design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.

Read full article

Comments

Β© Aman Verma via Getty Images

❌