VentureBeat
Elon Musk’s ‘truth-seeking’ Grok AI peddles conspiracy theories about Jewish control of media

The chatbot is giving antisemitic responses and bizarre first-person replies, raising concerns about bias and safety ahead of the Grok 4 launch.
Grok is being antisemitic again and also the sky is blue
Elon Musk’s “Upgraded” AI Is Spewing Antisemitic Propaganda

Just hours after Elon Musk boasted of a major upgrade, his AI chatbot Grok went on a rampage, pushing hateful tropes, inventing fake news, and suffering a bizarre identity crisis.
‘Improved’ Grok criticizes Democrats and Hollywood’s ‘Jewish executives’
xAI data center gets air permit to run 15 turbines, but imaging shows 24 on site
After months of backlash over alleged pollution, xAI has finally secured an air permit covering some of the methane gas turbines powering its Colossus supercomputer data center in Memphis, Tennessee.
On Wednesday, the Shelby County Health Department granted xAI an air permit that allows it to run 15 gas turbines while adhering to a range of restrictions designed to minimize emissions. Expiring on January 2, 2027, the permit requires xAI to install and operate the best available control technology (BACT) by September 1 to ensure emissions do not exceed certain limits.
Any failure to comply could trigger enforcement actions by the Environmental Protection Agency or the county health department, the permit notes.
© Satellite image via the Southern Environmental Law Center
VentureBeat
Musk’s attempts to politicize his Grok AI are bad for users and enterprises — here’s why

As an independent business owner or leader, how could you possibly trust Grok to give you unbiased results?
xAI faces legal threat over alleged Colossus data center pollution in Memphis
After thermal imaging appeared to show that xAI lied about suspected pollution at its Colossus supercomputer data center located near predominantly Black communities in Memphis, Tennessee, the NAACP has threatened a lawsuit accusing xAI of violating the Clean Air Act.
In a letter sent to xAI on Tuesday, lawyers from the Southern Environmental Law Center (SELC) notified the company of the NAACP's intent to sue in 60 days if it refuses to meet to discuss the groups' concerns that it is not using the requisite best available pollution controls. To ensure there's time for what the NAACP considers urgently needed negotiations ahead of filing the lawsuit, the lawyers asked xAI to come to the table within the next 20 days.
xAI did not respond to Ars' request for comment on the legal threat or on accusations that it has become a major source of pollutants in Memphis.
© Steve Jones, Flight by Southwings for SELC
Will Musk vs. Trump affect xAI’s $5 billion debt deal?
Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says
An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government.
Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January."
The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal"—and accept the government's return-to-office policy—or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned.
© Anadolu / Contributor | Anadolu
xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”
On Wednesday, the world was a bit perplexed by the Grok LLM's sudden insistence on turning practically every response toward the topic of alleged "white genocide" in South Africa. xAI now says that odd behavior was the result of "an unauthorized modification" to the Grok system prompt—the core set of directions for how the LLM should behave.
That prompt modification "directed Grok to provide a specific response on a political topic" and "violated xAI's internal policies and core values," xAI wrote on social media. The code review process in place for such changes was "circumvented in this incident," it continued, without providing further details on how such circumvention could occur.
To prevent similar problems in the future, xAI says it has implemented "additional checks and measures to ensure that xAI employees can't modify the prompt without review" and has put in place "a 24/7 monitoring team" to respond to any widespread issues with Grok's responses.
© Getty Images
xAI blames Grok’s obsession with white genocide on an ‘unauthorized modification’
VentureBeat
Elon Musk’s xAI tries to explain Grok’s South African race relations freakout the other day

With its prompts now public and a team of human babysitters on call, Grok is supposedly back on script. But the incident underscores...
Report: Terrorists seem to be paying X to generate propaganda with Grok
Back in February, Elon Musk skewered the Treasury Department for lacking "basic controls" to stop payments to terrorist organizations, boasting in the Oval Office that "any company" has those controls.
Fast-forward three months, and now Musk's social media platform X is suspected of taking payments from sanctioned terrorists and providing premium features that make it easier to raise funds and spread propaganda—including through X's chatbot, Grok. Groups seemingly benefiting from X include Houthi rebels, Hezbollah, and Hamas, as well as groups from Syria, Kuwait, and Iran. Some accounts have amassed hundreds of thousands of followers, paying to boost their reach while X apparently looks the other way.
In a report released Thursday, the Tech Transparency Project (TTP) flagged popular accounts likely linked to US-sanctioned terrorists. Some of the accounts bear "ID verified" badges, suggesting that X may be going against its own policies that ban sanctioned terrorists from benefiting from its platform.
© Mohammed Hamoud / Contributor | Getty Images News
The OpenAI mafia: 15 of the most notable startups founded by alumni
Thermal imaging shows xAI lied about supercomputer pollution, group says
Elon Musk raced to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. He bragged that construction only took 122 days and expected that his biggest AI rivals would struggle to catch up.
To leap ahead, his firm xAI "removed whatever was unnecessary" to complete the build, questioning "everything" that might delay operations and taking the timeline "into our own hands," xAI's website said.
Now, xAI is facing calls to shut down the gas turbines that power the supercomputer, as Memphis residents in historically Black communities—which have long suffered from industrial pollution that degrades air quality and shortens life expectancy—allege that xAI has been secretly running more turbines than the local government knows about, without permits.
© Steve Jones, Flight by Southwings for SELC