
FDA employees say the agency's Elsa generative AI hallucinates entire studies

24 July 2025 at 22:35

Current and former members of the FDA told CNN about issues with the Elsa generative AI tool unveiled by the federal agency last month. Three employees said that in practice, Elsa has hallucinated nonexistent studies or misrepresented real research. "Anything that you don't have time to double-check is unreliable," one source told the publication. "It hallucinates confidently." Which isn't exactly ideal for a tool that's supposed to speed up the clinical review process and help the agency make efficient, informed decisions to benefit patients.

Leadership at the FDA appeared unfazed by the potential problems posed by Elsa. "I have not heard those specific concerns," FDA Commissioner Marty Makary told CNN. He also emphasized that using Elsa and participating in the training to use it are currently voluntary at the agency.

A spokesperson for the Department of Health and Human Services told Engadget that "the information provided by FDA to CNN was mischaracterized and taken out of context." The spokesperson also claimed that CNN led its story with "disgruntled former employees and sources who have never even used the current version of Elsa." The agency claims to have guardrails and guidance for how its employees can use the tool, but its statement doesn’t address that Elsa, like any AI platform, can and will deliver incorrect or incomplete information at times. We have not yet received a response to our request for additional details.

The CNN investigation highlighting these flaws with the FDA's artificial intelligence arrived on the same day the White House introduced an "AI Action Plan." The program presented AI development as a technological arms race that the US should win at all costs, and it laid out plans to remove "red tape and onerous regulation" in the sector. It also demanded that AI be free of "ideological bias," which in practice means conforming to the current administration's own biases by removing mentions of climate change, misinformation, and diversity, equity and inclusion efforts. Considering each of those three topics has a documented impact on public health, the ability of tools like Elsa to provide genuine benefits to both the FDA and to US patients looks increasingly doubtful.

Update, July 24, 2025, 6:35PM ET: Added a statement from the Department of Health and Human Services.

This article originally appeared on Engadget at https://www.engadget.com/ai/fda-employees-say-the-agencys-elsa-generative-ai-hallucinates-entire-studies-203547157.html?src=rss

© Reuters / Reuters

FILE PHOTO: Signage is seen outside of the Food and Drug Administration (FDA) headquarters in White Oak, Maryland, U.S., August 29, 2020. REUTERS/Andrew Kelly/File Photo

Trump's AI Action Plan targets state regulation and 'ideological bias'

23 July 2025 at 16:32

At the start of the year, President Trump announced his AI Action Plan, an initiative he said would eventually enact policy that would "enhance America's position as an AI powerhouse." Now, after months of consultation with industry players like Google and OpenAI, the administration has finally shared the specific actions it plans to take.

Notably, the framework seeks to limit state regulation of AI companies by instructing the Office of Science and Technology Policy (OSTP) and other federal agencies to consider a state's existing AI laws before awarding AI-related funding. "The Federal government should not allow AI-related Federal funding to be directed to those states with burdensome AI regulations that waste these funds," the document states. As you may recall, Trump's "Big Beautiful Bill" was supposed to include a 10-year qualified moratorium on state AI regulation before that amendment was ultimately removed in a 99-1 vote by the US Senate.

Elsewhere, the AI Action Plan targets AI systems the White House says promote "social engineering agendas." To that end, Trump plans to direct the National Institute of Standards and Technology, through the Department of Commerce, to revise its AI Risk Management Framework to remove any mentions of "misinformation, Diversity, Equity, and Inclusion, and climate change." Furthermore, he's calling for an update to the federal government's procurement guidelines to ensure the government only contracts model providers that can definitively say their AI systems are "free from top-down ideological bias." Just how companies like OpenAI, Google and others are expected to do this is unclear from the document.

Separately, Trump says he plans to remove regulatory hurdles that slow the construction of AI data centers. "America's environmental permitting system and other regulations make it almost impossible to build this infrastructure in the United States with the speed that is required," the document states. Specifically, the president plans to make federal lands available for the construction of data centers and power generation facilities. Under the Action Plan, the federal government will also expand efforts to use AI to carry out environmental reviews.

The president plans to sign a handful of executive orders today to start the wheels turning on his action plan. Trump began his second term by rescinding President Biden's October 2023 AI guidelines. Biden's executive order outlined a plan to establish protections for the general public with regard to artificial intelligence. Specifically, the EO sought new standards for safety and security in addition to protocols for AI watermarking and both civil rights and consumer protections.

This article originally appeared on Engadget at https://www.engadget.com/ai/trumps-ai-action-plan-targets-state-regulation-and-ideological-bias-163247225.html?src=rss

© Reuters / Reuters

U.S. President Donald Trump stands after delivering remarks on AI infrastructure at the Roosevelt room at White House in Washington, U.S., January 21, 2025. REUTERS/Carlos Barria/File Photo