According to an alert released by the FBI on May 15, 2025, cybercriminals have been using AI-generated content to impersonate high-ranking government officials and gather sensitive information in a new threat campaign.
About the scam
Using “text messages and AI-generated voice messages” that appear to come from legitimate government officials, the threat actors attempt to trick lower-level state and federal officials into handing over personal account details. Once they have gained a victim’s trust through conversation, they send a “malicious link under the guise of transitioning to a separate messaging platform” to infect the victim’s device with malware. The FBI alert reads:
“Traditionally, malicious actors have leveraged smishing, vishing, and spear phishing to transition to a secondary messaging platform where the actor may present malware or introduce hyperlinks that direct intended targets to an actor-controlled site that steals log-in information, like user names and passwords.”
The fact that the malware campaign specifically targets government employees is especially concerning: the individuals behind it aren’t just looking to steal the average American’s credit card information; they’re seeking to infiltrate the inner workings of the U.S. government. Cybersecurity Dive reports that it is not clear who, if anyone, has actually “been compromised on their personal or government devices.”
The rising threat of AI in cyber-scams
Impersonation scams are nothing new, but the rise of AI technology has made them easier for threat actors to pull off, and they are becoming more common. According to the FBI memo,
“[M]alicious actors are more frequently exploiting AI-generated audio to impersonate well-known, public figures or personal relations to increase the believability of their schemes.”
Leah Siskind, an AI research fellow and director of impact at the Foundation for Defense of Democracies, recently commented on the potential impact of AI in such schemes:
“Criminals can use [AI] to social engineer a situation, usually for financial gain. An example: malicious actors clone a boss’s voice and use [it] to request that the CFO pay off an unexpected invoice.”
In addition to the government’s latest findings, CrowdStrike’s 2025 Global Threat Report indicated that AI impersonation tactics drove a 442% increase in voice phishing, or “vishing” (fraudulent phone calls or voice messages), between the first and second halves of last year.
The post A new malware campaign is using AI deepfakes to impersonate government officials appeared first on OPUSfidelis.