The Federal Communications Commission has approved Skydance's $8 billion acquisition of Paramount, which owns CBS.
But the approval drew a fiery dissent from the agency's only Democratic commissioner, Anna Gomez, because it required written commitments from Skydance that allow the government to influence editorial decisions at CBS. Gomez accused the FCC of "imposing never-before-seen controls over newsroom decisions and editorial judgment, in direct violation of the First Amendment and the law."
FCC Chairman Brendan Carr explained that, under the agreement, Skydance has given assurances that all of the new company’s programming will embody "a diversity of viewpoints from across the political and ideological spectrum." Carr claimed the requirements were necessary to restore Americans' trust in mainstream media, echoing conservatives' claims that the media is biased against Trump. The commitments include an ombudsman, appointed for two years, to ensure that CBS's reporting "will be fair, unbiased, and fact-based." Any complaints of bias that the ombudsman receives will be reviewed by the president of New Paramount, the FCC confirmed.
Washington, DC—From a distance, the gathering looked like a standard poster session at an academic conference, with researchers standing next to large displays of the work they were doing. Except in this case, it was taking place in the Rayburn House Office Building on Capitol Hill, and the researchers were describing work that they weren’t doing. Called "The things we’ll never know," the event was meant to highlight the work of researchers whose grants had been canceled by the Trump administration.
Many court cases have dealt with these cancellations as a group, highlighting the lack of scientific (or seemingly rational) input into the decisions to cut funding for entire categories of research. Here, there was a much tighter focus on the individual pieces of research that became casualties in that larger fight.
Seeing even a small sampling of the terminated grants provides a much better perspective on the damage these cuts are doing to the US public, and on the utter mindlessness of the process causing that damage.
In mid-June, a federal judge issued a stinging rebuke to the Trump administration, declaring that its decision to cancel the funding for many grants issued by the National Institutes of Health was illegal, and suggesting that the policy was likely animated by racism. But the detailed reasoning behind his decision wasn't released at the time. The written portion of the decision was finally issued on Wednesday, and it has a number of notable features.
For starters, it's more limited in scope due to a pair of Supreme Court decisions issued in the intervening weeks; as a consequence, far fewer grants will see their funding restored. Regardless, the court continues to find that the government's actions were arbitrary and capricious, in part because the government never bothered to define the problems that would get a grant canceled. As a result, officials within the NIH simply canceled lists of grants they received from DOGE without examining their scientific merit, then struggled to retroactively construct a policy that justified those actions, a process that led several of them to resign.
A more limited verdict
The issue before Judge William Young of the District of Massachusetts was whether the government had followed the law in terminating grants funded by the National Institutes of Health. After a short trial, Young issued a verbal ruling that the government hadn't, concluding that its actions were the product of "racial discrimination and discrimination against America’s LGBTQ community." But the details of his decision and the evidence that motivated it had to wait for a written ruling, which is now available.
With the federal hiring freeze lifting in mid-July, the Trump administration has rolled out a controversial federal hiring plan that critics warn will politicize and likely slow down the process rather than increase government efficiency.
The plan de-emphasizes degree requirements and bans DEI initiatives, along with any census-style tracking of gender, race, ethnicity, or religion to assess the composition of the government. It also requires every new hire to submit essays explaining which executive orders or policy initiatives they will help advance.
These essays must be limited to 200 words and cannot be generated by a chatbot, the guidance noted. While the essay prompts seemingly allow applicants to point to policies enacted by prior presidents, the president appears to be seeking to ensure that only Trump supporters are hired and that anyone who becomes disillusioned with Trump is weeded out over time. In addition to asking for a show of loyalty during the interview process, the guidance, which repeatedly references required patriotism, notes that all federal workers will be continuously vetted and must agree to submit to "checks for post-appointment conduct that may impact their continued trustworthiness."
Anthropic CEO Dario Amodei warned that AI's rise could result in a spike in unemployment within the next five years.
Anthropic CEO Dario Amodei said AI could soon eliminate 50% of entry-level office jobs.
The AI CEO said that companies and the government are "sugarcoating" the risks of AI.
Recent data shows Big Tech hiring of new grads has dropped 50% since pre-pandemic, partly due to AI.
After spending the day promoting his company's AI technology at a developer conference, Anthropic's CEO issued a warning: AI may eliminate 50% of entry-level white-collar jobs within the next five years.
"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Dario Amodei told Axios in an interview published Wednesday. "I don't think this is on people's radar."
The 42-year-old CEO added that unemployment could spike to between 10% and 20% within the next five years. He told Axios he wanted to share his concerns to get the government and other AI companies to prepare the country for what's to come.
"Most of them are unaware that this is about to happen," Amodei said. "It sounds crazy, and people just don't believe it."
Amodei said the development of large language models is advancing rapidly, and they're becoming capable of matching and exceeding human performance. He said the US government has remained quiet about the issue, fearing workers would panic or the country could fall behind China in the AI race.
Meanwhile, business leaders are seeing savings from AI while most workers remain unaware of the changes already underway, Amodei said.
He added that AI companies and the government need to stop "sugarcoating" the risks of mass job elimination in fields including technology, finance, law, and consulting. He said entry-level jobs are especially at risk.
Amodei's comments come as Big Tech firms' hiring of new grads dropped about 50% from pre-pandemic levels, according to a new report by the venture capital firm SignalFire. The report said that's due in part to AI adoption.
A round of brutal layoffs swept the tech industry in 2023, with hundreds of thousands of jobs eliminated as companies looked to slash costs. While SignalFire's report said hiring for mid- and senior-level roles ticked up in 2024, entry-level positions never quite bounced back.
In 2024, early-career candidates accounted for 7% of total hires at Big Tech firms, down 25% from 2023, the report said. At startups, that number was just 6%, down 11% from the year prior.
SignalFire's findings suggest that tech companies are prioritizing hiring more seasoned professionals and often filling posted junior roles with senior candidates.
Heather Doshay, a partner who leads people and recruiting programs at SignalFire, told Business Insider that "AI is doing what interns and new grads used to do."
"Now, you can hire one experienced worker, equip them with AI tooling, and they can produce the output of the junior worker on top of their own — without the overhead," Doshay said.
AI can't entirely account for the sudden shrinkage in early-career prospects. The report also said that negative perceptions of Gen Z employees and tighter budgets across the industry are contributing to tech's apparent reluctance to hire new grads.
"AI isn't stealing job categories outright — it's absorbing the lowest-skill tasks," Doshay said. "That shifts the burden to universities, boot camps, and candidates to level up faster."
To adapt to the rapidly changing times, she suggests new grads think of AI as a collaborator, rather than a competitor.
"Level up your capabilities to operate like someone more experienced by embracing a resourceful ownership mindset and delegating to AI," Doshay said. "There's so much available on the internet to be self-taught, and you should be sponging it up."
Amodei's chilling message comes after Anthropic recently revealed that its chatbot Claude Opus 4 exhibited "extreme blackmail behavior" in testing after gaining access to fictional emails that said it would be shut down. While the company was transparent with the public about the results, it still released the model.
It's not the first time Amodei has warned the public about the risks of AI. On an episode of The New York Times' "Hard Fork" podcast in February, the CEO said the possibility of "misuse" by bad actors could threaten millions of lives. He said the risk could come as early as "2025 or 2026," though he didn't know exactly when it would present "real risk."
Anthropic has emphasized the importance of third-party safety assessments and regularly shares the risks uncovered by its red-teaming efforts. Other companies have taken similar steps, relying on third-party evaluations to test their AI systems. OpenAI, for example, says on its website that its API and ChatGPT business products undergo routine third-party testing to "identify security weaknesses before they can be exploited by malicious actors."
Amodei acknowledged to Axios the irony of the situation — as he shares the risks of AI, he's simultaneously building and selling the products he's warning about. But he said the people who are most involved in building AI have an obligation to be up front about its direction.
"It's a very strange set of dynamics, where we're saying: 'You should be worried about where the technology we're building is going,'" he said.
Anthropic did not respond to a request for comment from Business Insider.