‘Cheapfake’ AI Celeb Videos Are Rage-Baiting People on YouTube
By now, you’ve likely heard of fraudulent calls that use AI to clone the voices of people the call recipient knows. Often, the result sounds like a grandchild, CEO, or longtime work colleague reporting an urgent matter that requires immediate action, urging the recipient to wire money, divulge login credentials, or visit a malicious website.
Researchers and government officials have been warning of the threat for years, with the Cybersecurity and Infrastructure Security Agency saying in 2023 that threats from deepfakes and other forms of synthetic media have increased “exponentially.” Last year, Google’s Mandiant security division reported that such attacks are being executed with “uncanny precision, creating far more realistic phishing schemes.”
On Wednesday, security firm Group-IB outlined the basic steps involved in executing these sorts of attacks. The takeaway is that they’re easy to reproduce at scale and can be challenging to detect or repel.
On Wednesday, the White House released "Winning the Race: America's AI Action Plan," a 25-page document that outlines the Trump administration's strategy to "maintain unquestioned and unchallenged global technological dominance" in AI through deregulation, infrastructure investment, and international partnerships. But critics are already taking aim at the plan, saying it's doing Big Tech a big favor.
Assistant to the President for Science and Technology Michael Kratsios and Special Advisor for AI and Crypto David Sacks crafted the plan, which frames AI development as a race the US must win against global competitors, particularly China.
The document describes AI as the catalyst for "an industrial revolution, an information revolution, and a renaissance—all at once." It calls for removing regulatory barriers that the administration says hamper private sector innovation. The plan explicitly reverses several Biden-era policies, including Executive Order 14110 on AI model safety measures, which President Trump rescinded on his first day in office during his second term.
An anti-deepfake bill is on the verge of becoming US law despite concerns from civil liberties groups that it could be used by President Trump and others to censor speech that has nothing to do with the intent of the bill.
The bill is called the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, or Take It Down Act. The Senate version co-sponsored by Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) was approved in the Senate by unanimous consent in February and is nearing passage in the House. The House Committee on Energy and Commerce approved the bill in a 49-1 vote yesterday, sending it to the House floor.
The bill pertains to "nonconsensual intimate visual depictions," including both authentic photos shared without consent and forgeries produced by artificial intelligence or other technological means. Publishing intimate images of adults without consent could be punished by a fine and up to two years in prison. Publishing intimate images of minors under 18 could be punished with a fine and up to three years in prison.
When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than a single boy's one- or two-day suspension.
Mani refused to take adults' advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she's won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims "take a stand against deceptive and dangerous deepfakes" by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals' or corporations' reputations.
Under the law, victims like Mani who are targeted by nudify apps can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these "more severe consequences" will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax response to fake nudes has been heavily criticized—that AI-generated nude images depicting minors are illegal, must be taken seriously, and must be reported to police. The law imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.