Meta’s new world model lets robots manipulate objects in environments they’ve never encountered before

A robot powered by V-JEPA 2 can be deployed in a new environment and successfully manipulate objects it has never encountered before.
The ongoing war between the Trump administration and Harvard University has taken a new twist, with the government sending Harvard a letter that, amid what appears to be a stream-of-consciousness culture war rant, announces that the university will not be receiving any further research grants. The letter potentially suggests that Harvard could see funding restored by "complying with long-settled Federal Law," but earlier demands from the administration included conditions that went well beyond those required by law.
The letter, sent by Secretary of Education Linda McMahon, makes it somewhat difficult to tell exactly what the government wants, because most of the text is a borderline deranged rant written in florid MAGA-ese. You don't have to go beyond the first paragraph to get a sense that this is less a setting of funding conditions than an airing of grievances:
Instead of using these funds to advance the education of its students, Harvard is engaging in a systemic pattern of violating federal law. Where do many of these "students" come from, who are they, how do they get into Harvard, or even into our country—and why is there so much HATE? These are questions that must be answered, among many more, but the biggest question of all is, why will Harvard not give straightforward answers to the American public?
Does Harvard have to answer these questions to get funding restored? It's unclear.
There's a curious contradiction at the heart of today's most capable AI models that purport to "reason": They can solve routine math problems accurately, yet when asked to formulate the deeper mathematical proofs found in competition-level challenges, they often fail.
That's the finding of eye-opening preprint research into simulated reasoning (SR) models, initially posted in March and updated in April, that mostly flew under the news radar. The research serves as an instructive case study on the mathematical limitations of SR models, despite sometimes grandiose marketing claims from AI vendors.
What sets simulated reasoning models apart from traditional large language models (LLMs) is that they have been trained to output a step-by-step "thinking" process (often called "chain-of-thought") to solve problems. Note that "simulated" in this case doesn't mean that the models do not reason at all but rather that they do not necessarily reason using the same techniques as humans. That distinction is important because human reasoning itself is difficult to define.
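The distinction is easiest to see in the output format. As a purely illustrative sketch (no real model API involved; the functions and the wording of the trace are hypothetical), compare a bare answer with a step-by-step trace of the kind SR models are trained to emit:

```python
def direct_answer(a: int, b: int, c: int) -> str:
    """A plain LLM-style response: just the result of solving a*x + b = c."""
    x = (c - b) / a
    return f"x = {x:g}"


def chain_of_thought(a: int, b: int, c: int) -> str:
    """An SR-style response: intermediate "thinking" steps precede the answer."""
    steps = [
        f"We need to solve {a}x + {b} = {c}.",
        f"Subtract {b} from both sides: {a}x = {c - b}.",
        f"Divide both sides by {a}: x = {(c - b) / a:g}.",
    ]
    return "\n".join(steps)


print(direct_answer(3, 4, 10))      # x = 2
print(chain_of_thought(3, 4, 10))   # prints the three steps above
```

This shows only the surface form of a chain-of-thought trace; it says nothing about how the models actually arrive at their steps internally, which is precisely the question the research examines.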
The semiconductor industry is bracing to potentially lose more than $1 billion once Donald Trump announces chip tariffs.
Two sources familiar with discussions between chipmakers and lawmakers last week told Reuters that Applied Materials, Lam Research, and KLA—three of the largest US chip equipment makers—could each lose about "$350 million over a year related to the tariffs." That adds up to likely more than $1 billion in combined losses for the three, and smaller firms will likely face similarly spiked costs, with estimated losses in the tens of millions.
Some chipmakers are already feeling the pain of Trump's trade war, despite a 90-day pause on reciprocal tariffs and a tenuous exception for semiconductors and other electronics.
Remember when teachers demanded that you "show your work" in school? Some new types of AI models promise to do exactly that, but new research suggests that the "work" they show can sometimes be misleading or disconnected from the actual process used to reach the answer.
New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models such as DeepSeek's R1 and Anthropic's own Claude series. In a research paper posted last week, Anthropic's Alignment Science team demonstrated that these SR models frequently fail to disclose when they've used external help or taken shortcuts, despite features designed to show their "reasoning" process.
(It's worth noting that OpenAI's o1 and o3 series SR models were excluded from this study.)
Since its inauguration, the Trump administration has made no secret that it isn't especially interested in funding research. Before January's end, major science agencies had instituted pauses on research funding, and grant funding has not been restored to previous levels since. Many individual grants have been targeted on ideological grounds, and agencies like the National Science Foundation are expected to see significant cuts. Since then, individual universities have been targeted, starting with an ongoing fight with Columbia University over $400 million in research funding.
This week, however, the targeting of university research appears to have gone into overdrive, with multiple announcements of funding freezes aimed at several universities. Should these freezes last for any considerable time, they will likely cripple research at the targeted universities.
On Wednesday, Science learned that the National Institutes of Health has frozen all of its research funding to Columbia, despite the university agreeing to steps previously demanded by the administration and the resignation of its acting president. In 2024, Columbia had received nearly $700 million in grants from the NIH, with the money largely going to the university's prestigious medical and public health schools.