❌

Normal view

Received before yesterday

Key fair use ruling clarifies when books can be used for AI training

24 June 2025 at 19:56

Artificial intelligence companies don't need permission from authors to train their large language models (LLMs) on legally acquired books, US District Judge William Alsup ruled Monday.

The first-of-its-kind ruling that condones AI training as fair use will likely be viewed as a big win for AI companies, but it also notably put on notice all the AI companies that expect the same reasoning will apply to training on pirated copies of booksβ€”a question that remains unsettled.

In the specific case that Alsup is weighingβ€”which pits book authors against Anthropicβ€”Alsup found that "the purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative" and "necessary" to build world-class AI models.

Read full article

Comments

Β© CookiesForDevo | iStock / Getty Images Plus

Researchers claim breakthrough in fight against AI’s frustrating security hole

16 April 2025 at 11:15

In the AI world, a vulnerability called a "prompt injection" has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerabilityβ€”the digital equivalent of whispering secret instructions to override a system's intended behaviorβ€”no one has found a reliable solution. Until now, perhaps.

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

The new paper grounds CaMeL's design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.

Read full article

Comments

Β© Aman Verma via Getty Images

❌