The United Kingdom will no longer force Apple to provide backdoor access to user data protected by the company's iCloud encryption service, according to US Director of National Intelligence Tulsi Gabbard.
"Over the past few months, I've been working closely with our partners in the UK, alongside @POTUS and @VP, to ensure Americans' private data remains private and our Constitutional rights and civil liberties are protected," Gabbard posted to X on Monday. "As a result, the UK has agreed to drop its mandate for Apple to provide a 'back door' that would have enabled access to the protected encrypted data of American citizens and encroached on our civil liberties."
The announcement follows a secret order the UK issued in January this year demanding that Apple provide it with backdoor access to encrypted files uploaded by users worldwide. In response, Apple pulled the ability for new users in the UK to sign up for its Advanced Data Protection (ADP) encrypted iCloud storage offering, and challenged the order, winning the right to publicly discuss the case in April. Earlier this year, US officials began examining whether the UK order violated the bilateral CLOUD Act agreement, which bars the UK and US from issuing demands for each other's data.
This pressure from the US sparked reports last month that Britain would walk back the demands it issued to Apple, with one unnamed UK official telling the Financial Times that the UK "had its back against the wall," and was looking for a way out. While it's unclear if the UK would negotiate new terms with Apple that avoid implicating the data of US citizens, an unnamed US official told the Financial Times that such negotiations would not be faithful to the new agreement.
With the order now reportedly removed, it's unclear if Apple will restore access to its ADP service in the UK. We have reached out to Apple for comment. The UK Home Office has refused to comment on the situation.
In June, headlines read like science fiction: AI models "blackmailing" engineers and "sabotaging" shutdown commands. Simulations of these events did occur in highly contrived testing scenarios designed to elicit such responses: OpenAI's o3 model edited shutdown scripts to stay online, and Anthropic's Claude Opus 4 "threatened" to expose an engineer's affair. But the sensational framing obscures what's really happening: design flaws dressed up as intentional guile. Even so, AI doesn't have to be "evil" to do harmful things.
These aren't signs of AI awakening or rebellion. They're symptoms of poorly understood systems and human engineering failures we'd recognize as premature deployment in any other context. Yet companies are racing to integrate these systems into critical applications.
Consider a self-propelled lawnmower that follows its programming: If it fails to detect an obstacle and runs over someone's foot, we don't say the lawnmower "decided" to cause injury or "refused" to stop. We recognize it as faulty engineering or defective sensors. The same principle applies to AI models, which are software tools, but their internal complexity and use of language make it tempting to assign human-like intentions where none actually exist.