Here's how deepfake vishing attacks work, and why they can be hard to detect
By now, you've likely heard of fraudulent calls that use AI to clone the voices of people the call recipient knows. Often, the result is what sounds like a grandchild, CEO, or longtime work colleague reporting an urgent matter requiring immediate action: wiring money, divulging login credentials, or visiting a malicious website.
Researchers and government officials have been warning of the threat for years, with the Cybersecurity and Infrastructure Security Agency saying in 2023 that threats from deepfakes and other forms of synthetic media have increased "exponentially." Last year, Google's Mandiant security division reported that such attacks are being executed with "uncanny precision, creating for more realistic phishing schemes."
Anatomy of a deepfake scam call
On Wednesday, security firm Group-IB outlined the basic steps involved in executing these sorts of attacks. The takeaway is that they're easy to reproduce at scale and can be challenging to detect or repel.