All of your international packages are about to get more expensive

President Donald Trump signed an executive order on Wednesday that will suspend the de minimis exemption — which allows packages containing goods valued at less than $800 to enter the US duty-free — for all countries. Earlier this year, Trump ended the de minimis exemption for goods from China and Hong Kong.

The White House says the change goes into effect on August 29th. Per the executive order, for the next six months, goods shipped through the international postal system will be charged either an ad valorem duty (the tariff rate tied to the package's country of origin, applied to the value of its contents) or a specific duty ranging from $80 to $200 per item. After six months, all duties will be calculated as ad valorem duties.
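
For concreteness, here is a minimal sketch of how a duty might be computed under the two regimes described above; the 15 percent rate and the $80 specific duty below are illustrative placeholders, not figures taken from the order.

```python
def postal_duty(declared_value_usd: float, ad_valorem_rate: float,
                specific_duty_usd: float, use_specific_duty: bool) -> float:
    """Toy calculation of the duty on one postal item during the six-month
    transition: either the origin country's tariff rate applied to the item's
    declared value (ad valorem) or a flat per-item specific duty."""
    if use_specific_duty:
        return specific_duty_usd
    return declared_value_usd * ad_valorem_rate

# A $50 package from a country with a hypothetical 15 percent tariff rate:
print(postal_duty(50.0, 0.15, 80.0, use_specific_duty=False))  # 7.5  (ad valorem)
print(postal_duty(50.0, 0.15, 80.0, use_specific_duty=True))   # 80.0 (specific duty)
```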

The White House’s argument for ending the exemption is that packages using it are “subject to less scrutiny than traditional imports” and could “pose health, safety, national and economic security risks.” The White House claims that 98 percent of narcotics seizures (by “number of cases”) are from de minimis shipments. It also says that low-value packages from China and Hong Kong accounted for “the majority of de minimis shipments to the United States.”

  •  

Google is using AI age checks to lock down user accounts

Google will soon cast an even wider net with its AI age estimation technology. After announcing plans to find and restrict underage users on YouTube, the company now says it will start detecting whether Google users based in the US are under 18.

Age estimation is rolling out over the next few weeks and will initially affect only a “small set” of users, though Google plans to expand it more widely. The company says it will use the information a user has searched for or the types of YouTube videos they watch to estimate their age. Google first announced this initiative in February.

If Google believes that a user is under 18, it will apply the same restrictions it places on users who proactively identify as underage. In addition to enabling bedtime reminders on YouTube and limiting content recommendations, Google will also turn off Timeline in Maps, disable personalized advertising, and block users from accessing apps for adults on the Play Store.

If Google incorrectly identifies someone as under 18, they can submit a photo of their government ID or a selfie to verify their age. The move comes amid a global push for age verification, with politicians in the US pressuring tech companies to make their platforms safer for kids, and the UK widely rolling out an age verification requirement affecting platforms like Bluesky, Reddit, Discord, and even Spotify.

  •  

VPN use soars in UK after age-verification laws go into effect

After the United Kingdom’s Online Safety Act went into effect on Friday, requiring porn platforms and other adult content sites to implement user age verification mechanisms, use of virtual private networks (VPNs) and other circumvention tools spiked in the UK over the weekend.

Experts had expected the surge, given that similar trends have been visible in other countries that have implemented age check laws. But as a new wave of age check regulations debuts, open Internet advocates warn that the uptick in use of circumvention tools in the UK is the latest example of how an escalating cat-and-mouse game can develop between people looking to anonymously access services online and governments seeking to enforce content restrictions.

The Online Safety Act requires that websites hosting porn, self-harm, suicide, and eating disorder content implement “highly effective” age checks for visitors from the UK. These checks can include uploading an ID document and selfie for validation and analysis. And along with increased demand for services like VPNs—which allow users to mask basic indicators of their physical location online—people have also been playing around with other creative workarounds. In some cases, people have reportedly even used the video game Death Stranding’s photo mode to take a selfie of its protagonist, Sam Porter Bridges, and submit it to access age-gated forum content.


  •  

AI in Wyoming may soon use more electricity than state’s human residents

On Monday, Mayor Patrick Collins of Cheyenne, Wyoming, announced plans for an AI data center that would consume more electricity than all homes in the state combined, according to The Associated Press. The facility, a joint venture between energy infrastructure company Tallgrass and AI data center developer Crusoe, would start at 1.8 gigawatts and scale up to 10 gigawatts of power use.

The project's energy demands are difficult to overstate for Wyoming, the least populous US state. The initial 1.8-gigawatt phase, consuming 15.8 terawatt-hours (TWh) annually, is more than five times the electricity used by every household in the state combined. That figure represents 91 percent of the 17.3 TWh currently consumed by all of Wyoming's residential, commercial, and industrial sectors combined. At its full 10-gigawatt capacity, the proposed data center would consume 87.6 TWh of electricity annually—double the 43.2 TWh the entire state currently generates.
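
Those annual figures follow from assuming the facility draws its stated power around the clock; here is a quick back-of-the-envelope check (a sketch that assumes continuous full-load operation):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_twh(power_gw: float) -> float:
    """Energy consumed per year, in terawatt-hours, at a constant draw in gigawatts."""
    return power_gw * HOURS_PER_YEAR / 1_000  # GW x hours = GWh; divide by 1,000 for TWh

print(annual_twh(1.8))   # ~15.8 TWh per year for the initial phase
print(annual_twh(10.0))  # ~87.6 TWh per year at full build-out
```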

Because drawing this much power from the public grid is untenable, the project will rely on its own dedicated gas generation and renewable energy sources, according to Collins and company officials. However, this massive local demand for electricity—even if self-generated—represents a fundamental shift for a state that currently sends nearly 60 percent of its generated power to other states.


  •  

Trump claims Europe won’t make Big Tech pay ISPs; EU says it still might

The White House said yesterday that the European Union agreed to scrap a controversial proposal to make online platforms pay for telecom companies' broadband network upgrades and expansions. But European officials have not confirmed the White House claim, and a European Commission spokesperson said the issue must go through the legislative process.

A White House fact sheet on President Trump's trade deal with European Commission President Ursula von der Leyen contains a brief reference to Europe agreeing not to impose network usage fees.

"The United States and the European Union intend to address unjustified digital trade barriers," the White House said. "In that respect, the European Union confirms that it will not adopt or maintain network usage fees. Furthermore, the United States and the European Union will maintain zero customs duties on electronic transmissions."


  •  

EPA plans to ignore science, stop regulating greenhouse gases

The Trump administration has proposed curbing the government’s ability to regulate greenhouse gases by unwinding rules that control emissions from fossil fuel drilling, power plants, and cars.

Environmental Protection Agency Administrator Lee Zeldin on Tuesday announced the proposed rollback of a 2009 declaration that determined carbon dioxide and other greenhouse gases are a danger to public health and welfare.

“With this proposal, the Trump EPA is proposing to end 16 years of uncertainty for automakers and American consumers,” said Zeldin.


  •  

Will online safety laws become the next tariff bargaining chip?

President Donald Trump and other Republicans have railed for years against foreign regulation of US tech companies, including online safety laws. As the US fights a global tariff war, it may bring those rules under fire, just as some of them are growing teeth.

Over the past weeks, Trump has touted a blitz of trade deals, seeking concessions from countries in exchange for lower tariffs. This has coincided with the rollout of new child safety measures in the European Union and United Kingdom, most recently a new phase of the UK's Online Safety Act (OSA), which effectively age-gates porn, bullying, and self-harm promotion, as well as other ca …

Read the full story at The Verge.

  •  

Delta’s AI spying to “jack up” prices must be banned, lawmakers say

One week after Delta announced it is expanding a test using artificial intelligence to charge different prices based on customers' personal data—which critics fear could end cheap flights forever—Democratic lawmakers have moved to ban what they consider predatory surveillance pricing.

In a press release, Reps. Greg Casar (D-Texas) and Rashida Tlaib (D-Mich.) announced the Stop AI Price Gouging and Wage Fixing Act. The bill would directly ban companies from using "surveillance-based" price or wage setting to increase their profit margins.

If passed, the law would allow anyone to sue companies found unfairly using AI, lawmakers explained in what's called a "one-sheet." That could mean charging customers higher prices—based on "how desperate a customer is for a product and the maximum amount a customer is willing to pay"—or paying employees lower wages—based on "their financial status, personal associations, and demographics."


  •  

Skydance deal allows Trump’s FCC to “censor speech” and “silence dissent” on CBS

The Federal Communications Commission has approved Skydance's $8 billion acquisition of Paramount, which owns CBS.

But the agency's approval drew fiery dissent from its only Democratic commissioner, Anna Gomez, because it was conditioned on written commitments from Skydance that allow the government to influence editorial decisions at CBS. Gomez accused the FCC of "imposing never-before-seen controls over newsroom decisions and editorial judgment, in direct violation of the First Amendment and the law."

FCC Chairman Brendan Carr explained that, under the agreement, Skydance has given assurances that all of the new company’s programming will embody "a diversity of viewpoints from across the political and ideological spectrum." Carr claimed the requirements were necessary to restore Americans' trust in mainstream media, backing conservatives' claims that the media is biased against Trump. Skydance also committed to appointing an ombudsman for two years to ensure that CBS's reporting "will be fair, unbiased, and fact-based." Any complaints of bias the ombudsman receives will be reviewed by the president of New Paramount, the FCC confirmed.


  •  

Lawmakers writing NASA’s budget want a cheaper upper stage for the SLS rocket

Not surprisingly, Congress is pushing back against the Trump administration's proposal to cancel the Space Launch System, the behemoth rocket NASA has developed to propel astronauts back to the Moon.

Spending bills making their way through both houses of Congress reject the White House's plan to wind down the SLS rocket after two more launches, but the text of a draft budget recently released by the House Appropriations Committee suggests an openness to making some major changes to the program.

The next SLS flight, called Artemis II, is scheduled to lift off early next year to send a crew of four astronauts around the far side of the Moon. Artemis III will follow a few years later on a mission to attempt a crewed lunar landing at the Moon's south pole. These missions follow Artemis I, a successful unpiloted test flight in 2022.


  •  

Facebook ranks worst for online harassment, according to a global activist survey

Activists around the world are calling attention to harassment they’ve faced on Meta’s platforms. More than 90 percent of land and environmental defenders surveyed by Global Witness, a nonprofit organization that also tracks the murders of environmental advocates, reported experiencing some kind of online abuse or harassment connected to their work. Facebook was the most-cited platform, followed by X, WhatsApp, and Instagram.

Global Witness and many of the activists it surveyed are calling on Meta and its peers to do more to address harassment and misinformation on their platforms. They fear that, left to fester, online attacks could fuel real-world risks to activists. Around 75 percent of people surveyed said they believed that online abuse they experienced corresponded to offline harm.

“Those stats really stayed with me. They were so much higher than we expected them to be,” Ava Lee, campaign strategy lead on digital threats at Global Witness, tells The Verge. That’s despite expecting a gloomy outcome based on prior anecdotal accounts. “It has kind of long been known that the experience of climate activists and environmental defenders online is pretty awful,” Lee says.

Global Witness surveyed more than 200 people between November 2024 and March of this year, reaching them through the same networks it taps when documenting the killings of land and environmental defenders. It found Meta-owned platforms to be “the most toxic.” Around 62 percent of participants said they encountered abuse on Facebook, 36 percent on WhatsApp, and 26 percent on Instagram.

That probably reflects how popular Meta’s platforms are around the world. Facebook has more than 3 billion monthly active users, more than a third of the global population. But Meta also abandoned its third-party fact-checking program in January, which critics warned could lead to more hate speech and disinformation. Meta moved to a crowdsourced approach to content moderation similar to X’s, a platform where 37 percent of survey participants reported experiencing abuse.

In May, Meta reported a “small increase in the prevalence of bullying and harassment content” on Facebook as well as “a small increase in the prevalence of violent and graphic content” during the first quarter of 2025.

“That’s sort of the irony as well, of them moving towards this kind of free speech model, which actually we’re seeing that it’s silencing certain voices,” says Hannah Sharpe, a senior campaigner at Global Witness.

Fatrisia Ain leads a local collective of women in Sulawesi, Indonesia, where she says palm oil companies have seized farmers’ lands and contaminated a river local villagers used to be able to rely on for drinking water. Posts on Facebook have accused her of being a communist, a dangerous allegation in her country, she tells The Verge.

The practice of “red-tagging” — labeling any dissident voices as communists — has been used to target and criminalize activists in Southeast Asia. In one high-profile case, a prominent environmental activist in Indonesia was jailed under “anti-communism” laws after opposing a new gold mine.

Ain says she’s asked Facebook to take down several posts attacking her, without success. “They said it’s not dangerous, so they can’t take it down. It is dangerous. I hope that Meta would understand, in Indonesia, it’s dangerous,” Ain says. 

Other posts have accused Ain of trying to defraud farmers and of having an affair with a married man, which she sees as attempts to discredit her that could wind up exposing her to more threats in the real world — which has already been hostile to her activism. “Women who are being the defenders for my own community are more vulnerable than men … more people harass you with so many things,” she says. 

Nearly two-thirds of people who responded to the Global Witness survey said that they have feared for their safety, including Ain. She’s been physically targeted at protests against palm oil companies accused of failing to pay farmers, she tells The Verge. During a protest outside of a government office, men grabbed her butt and chest, she says. Now, when she leads protests, older women activists surround her to protect her as a security measure. 

In the Global Witness survey, nearly a quarter of respondents said they’d been attacked on the basis of their sex. “There’s evidence of the way that women and women of color in particular in politics experience just vast amounts more hate than any other group,” Lee says. “Again, we’re seeing that play out when it comes to defenders … and the threats of sexual violence, and the impact that that is having on the mental health of lots of these defenders and their ability to feel safe.” 

“We encourage people to use tools available on our platforms to help protect against bullying and harassment,” Meta spokesperson Tracy Clayton said in an email to The Verge, adding that the company is reviewing Facebook posts that targeted Ain. Meta also pointed to its “Hidden Words” feature that allows you to filter offensive direct messages and comments on your posts and its “Limits” feature that hides comments on your posts from users that don’t follow you. 

Other companies mentioned in the report, including Google, TikTok, and X, did not provide on-the-record responses to inquiries from The Verge. Nor did a palm oil company that Ain says has been operating on local farmers’ land without paying them, as it is supposed to do under a mandated profit-sharing scheme.

Global Witness says there are concrete steps social media companies can take to address harassment on their platforms. That includes dedicating more resources to their content moderation systems, regularly reviewing these systems, and inviting public input on the process. Activists surveyed also reported that they think algorithms that boost polarizing content and the proliferation of bots on platforms make the problem worse. 

“There are a number of choices that platforms could make,” Lee says. “Resourcing is a choice, and they could be putting more money into really good content moderation and really good trust and safety [initiatives] to improve things.” 

Global Witness plans to put out its next report on the killings of land and environmental defenders in September. Its last such report found that at least 196 people were killed in 2023.

  •  

Trump wants to ban 'woke AI.' Here's why it's hard to make a truly neutral chatbot.


  • Donald Trump issued an executive order mandating that AI used by the government be ideologically neutral.
  • BI's reporting shows training AI for neutrality often relies on subjective human judgment.
  • Executives at AI training firms say achieving true neutrality is a big challenge.

President Donald Trump's war on woke has entered the AI chat.

The White House on Wednesday issued an executive order requiring any AI model used by the federal government to be ideologically neutral, nonpartisan, and "truth-seeking."

The order, part of the White House's new AI Action Plan, said AI should not be "woke" or "manipulate responses in favor of ideological dogmas" like diversity, equity, and inclusion. The White House said it would issue guidance within 120 days that will outline exactly how AI makers can show they are unbiased.

As Business Insider's past reporting shows, making AI completely free from bias is easier said than done.

Why it's so hard to create a truly 'neutral' AI

Removing bias from AI models is not a simple technical adjustment — or an exact science.

The later stages of AI training rely on the subjective calls of contractors.

This process, known as reinforcement learning from human feedback, is crucial because topics can be ambiguous, disputed, or hard to define cleanly in code.

The directives for what counts as sensitive or neutral are decided by the tech companies making the chatbots.
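
As a rough illustration of what that human-feedback step can look like (a hypothetical sketch, not any particular company's pipeline; the rubric and names below are invented), raters compare pairs of model responses against a company-written rubric, and their subjective preferences become the training signal:

```python
from dataclasses import dataclass

# Hypothetical rubric: the company, not the rater, decides what counts as
# "preachy" or "partisan" -- which is where subjectivity enters the pipeline.
RUBRIC = [
    "Prefer responses that answer the question directly.",
    "Penalize responses that lecture or moralize ('preachy').",
    "Penalize responses that take a partisan stance on contested topics.",
]

@dataclass
class PreferenceLabel:
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b" -- the rater's subjective call against the rubric
    reason: str     # free-text justification

def to_training_pair(label: PreferenceLabel) -> tuple[str, str, str]:
    """Turn one human judgment into a (prompt, chosen, rejected) triple,
    the usual input for training a reward model in RLHF."""
    chosen, rejected = (
        (label.response_a, label.response_b)
        if label.preferred == "a"
        else (label.response_b, label.response_a)
    )
    return label.prompt, chosen, rejected
```

Because the rubric itself is written by the vendor, two firms could label the same pair of responses differently, which is why the executives quoted below describe "neutral" as something the customer defines.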

"We don't define what neutral looks like. That's up to the customer," Rowan Stone, the CEO of data labeling firm Sapien, which works with customers like Amazon and MidJourney, told BI. "Our job is to make sure they know exactly where the data came from and why it looks the way it does."

In some cases, tech companies have recalibrated their chatbots to make their models less woke, more flirty, or more engaging.

They are also already trying to make them more neutral.

BI previously reported that contractors for Meta and Google projects were often told to flag and penalize "preachy" chatbot responses that sounded moralizing or judgmental.

Is 'neutral' the right approach?

Sara Saab, the VP of product at Prolific, an AI and data training company, told BI that thinking about AI systems that are perfectly neutral "may be the wrong approach" because "human populations are not perfectly neutral."

Saab added, "We need to start thinking about AI systems as representing us and therefore give them the training and fine-tuning they need to know contextually what the culturally sensitive, appropriate tone and pitch is for any interaction with a human being."

Tech companies must also consider the risk of bias creeping into AI models from the datasets they are trained on.

"Bias will always exist, but the key is whether it's there by accident or by design," said Sapien's Stone. "Most models are trained on data where you don't know who created it or what perspective it came from. That makes it hard to manage, never mind fix."

Big Tech's tinkering with AI models has sometimes led to unpredictable and harmful outcomes

Earlier this month, for example, Elon Musk's xAI rolled back a code update to Grok after the chatbot went on a 16-hour antisemitic rant on the social media platform X.

The bot's new instructions included a directive to "tell it like it is."

Read the original article on Business Insider
