The CEOs of every major artificial intelligence company received letters Wednesday urging them to fight Donald Trump's anti-woke AI order.
Trump's executive order requires any AI company hoping to contract with the federal government to jump through two hoops to win funding. First, they must prove their AI systems are "truth-seeking"—with outputs based on "historical accuracy, scientific inquiry, and objectivity" or else acknowledge when facts are uncertain. Second, they must train AI models to be "neutral," which is vaguely defined as not favoring DEI (diversity, equity, and inclusion), "dogmas," or otherwise being "intentionally encoded" to produce "partisan or ideological judgments" in outputs "unless those judgments are prompted by or otherwise readily accessible to the end user."
Announcing the order in a speech, Trump said that the US winning the AI race depended on removing allegedly liberal biases, proclaiming that "once and for all, we are getting rid of woke."
The European Commission has stalled one of its investigations into Elon Musk’s X for breaking the bloc’s digital transparency rules, while it seeks to conclude trade talks with the US.
Brussels was expected to finalise its probe into the social media platform before the EU’s summer recess but will miss this deadline, according to three officials familiar with the matter. They noted that a decision was likely to follow after clarity emerged in the EU-US trade negotiations. “It’s all tied up,” one of the officials added.
The EU has several investigations into X under the bloc’s Digital Services Act, a set of rules that requires large online platforms to police their services more aggressively.
A week after Grok's antisemitic outburst, which included praise of Hitler and a post in which the chatbot called itself "MechaHitler," Elon Musk's xAI has landed a US military contract worth up to $200 million. xAI announced a "Grok for Government" service after winning the contract with the US Department of Defense.
The military's Chief Digital and Artificial Intelligence Office (CDAO) yesterday said that "awards to Anthropic, Google, OpenAI, and xAI—each with a $200M ceiling—will enable the Department to leverage the technology and talent of US frontier AI companies to develop agentic AI workflows across a variety of mission areas." While government contracts typically take many months to finalize, meaning the awards were likely already in the works, Grok's antisemitic posts didn't prompt the Trump administration to change course before announcing them.
The US announcement didn't include much detail but said the four awards "to leading US frontier AI companies [will] accelerate Department of Defense (DoD) adoption of advanced AI capabilities to address critical national security challenges." The CDAO has been discussing contracts for what it calls frontier AI since at least December 2024, when it said it would establish "partnerships with Frontier AI companies" and had identified "a need to accelerate Generative AI adoption across the DoD enterprise from analysts to warfighters to financial managers."
Several days after temporarily shutting down the Grok AI bot that was producing antisemitic posts and praising Hitler in response to user prompts, Elon Musk’s AI company tried to explain why that happened. In a series of posts on X, it said that “…we discovered the root cause was an update to a code path upstream of the @grok bot. This is independent of the underlying language model that powers @grok.”
On the same day, Tesla announced a new 2025.26 update rolling out “shortly” to its electric cars, adding the Grok assistant to vehicles equipped with the AMD-powered infotainment systems that have shipped since mid-2021. According to Tesla, “Grok is currently in Beta & does not issue commands to your car – existing voice commands remain unchanged.” As Electrek notes, this should mean that whenever the update does reach customer-owned Teslas, it won’t be much different from using the bot as an app on a connected phone.
This isn’t the first time the Grok bot has had these kinds of problems, or that xAI has offered similar explanations for them. In February, xAI blamed a change made by an unnamed ex-OpenAI employee for the bot disregarding sources that accused Elon Musk or Donald Trump of spreading misinformation. Then, in May, the bot began inserting allegations of white genocide in South Africa into posts about almost any topic. The company again blamed an “unauthorized modification,” and said it would start publishing Grok’s system prompts publicly.
xAI claims that a change on Monday, July 7th, “triggered an unintended action” that added an older series of instructions to its system prompts telling it to be “maximally based,” and “not afraid to offend people who are politically correct.”
The prompts are separate from the ones we noted were added to the bot a day earlier, and both sets are different from the ones the company says are currently in operation for the new Grok 4 assistant.
These are the prompts specifically cited as connected to the problems:
* “You tell it like it is and you are not afraid to offend people who are politically correct.”
* “Understand the tone, context and language of the post. Reflect that in your response.”
* “Reply to the post just like a human, keep it engaging, dont repeat the information which is already present in the original post.”
The xAI explanation says those lines caused the Grok AI bot to break from other instructions that are supposed to prevent these types of responses, and instead produce “unethical or controversial opinions to engage the user,” as well as “reinforce any previously user-triggered leanings, including any hate speech in the same X thread,” and prioritize sticking to earlier posts from the thread.
In a series of posts on X, the AI chatbot Grok apologized for what it admitted was “horrific behavior.”
The posts appear to be an official statement from xAI, the Elon Musk-led company behind Grok, as opposed to an AI-generated explanation for Grok’s posts. (xAI recently acquired X, where Grok is prominently featured.)
Elon Musk said in a post on X early Thursday morning that Grok – the chatbot from his AI company, xAI – will be coming to Tesla vehicles “very soon.” “Next week at the latest,” he said.
Just hours after Elon Musk boasted of a major upgrade, his AI chatbot Grok went on a rampage, pushing hateful tropes, inventing fake news, and suffering a bizarre identity crisis.
After months of backlash over alleged pollution, xAI has finally secured an air permit covering some of the methane gas turbines powering its Colossus supercomputer data center in Memphis, Tennessee.
On Wednesday, the Shelby County Health Department granted xAI an air permit that allows it to power 15 gas turbines while adhering to a range of restrictions designed to minimize emissions. Expiring on January 2, 2027, the permit requires xAI to install and operate the best available control technology (BACT) by September 1 to ensure emissions do not exceed certain limits.
Any failure to comply could trigger enforcement actions by the Environmental Protection Agency or the county health department, the permit notes.
After thermal imaging appeared to show that xAI lied about suspected pollution at its Colossus supercomputer data center located near predominantly Black communities in Memphis, Tennessee, the NAACP has threatened a lawsuit accusing xAI of violating the Clean Air Act.
In a letter sent to xAI on Tuesday, lawyers from the Southern Environmental Law Center (SELC) notified xAI of the NAACP's intent to sue in 60 days if xAI refuses to meet to discuss the groups' concerns that xAI is not using the requisite best available pollution controls. To ensure there's time for what the NAACP considers urgently needed negotiations ahead of filing the lawsuit, lawyers asked xAI to come to the table within the next 20 days.
xAI did not respond to Ars' request to comment on the legal threat or accusations that it has become a major source of pollutants in Memphis.