Deepfake Scams
Navigating cybersecurity is tricky. AI continues to evolve, giving attackers brand-new ways to trick targets into falling for their schemes. Attackers can use AI to create deepfakes, which are then used to impersonate a target’s loved ones, business associates, or representatives of legitimate companies. The use of deepfakes often makes spotting a phishing attempt difficult, as deepfakes can be very convincing. Fear not, though: there are simple ways to protect yourself from becoming an attacker’s next payday!
Common Deepfake Scams
The FBI and the Internet Crime Complaint Center (IC3) warn that attackers can use AI to generate convincing audio impersonating public figures or a victim’s loved ones in fabricated crisis situations, demanding immediate financial assistance or a ransom. Criminals can also use AI-generated video in a similar manner.
Commonly, cybercriminals will call someone using an AI-generated voice clip, pretending to be the victim’s grandchild in an emergency and needing immediate financial assistance. For example, the caller may claim the victim’s grandchild is in jail and needs their grandparent to pay bail, or that they are stranded on the side of the road and need money for emergency car repairs.
The intention of these messages is to put the victim in a crisis-response mode where rational thinking is not front of mind. Involving a victim’s loved one raises the perceived stakes of ignoring or refusing the request. When you receive a call like this, staying calm and acting rationally is imperative. Do not wire anyone money, give out bank or credit card details, share gift card information, or provide any personal information to the requestor; the call may very well be a scam.
AI audio deepfakes can be used for reasons other than financial gain, too. In January 2024, ahead of the presidential primary in New Hampshire, a robocall impersonating President Joe Biden went out to Democratic voters urging them not to vote in the primary. The voice message urged voters to “save your vote for the November election” instead. The intent was undoubtedly to disrupt the election process. While this is not a financially motivated example, it still demonstrates the damage that impersonating public figures can do to the public.
In the same vein, if you get a call from a supposed public figure asking for money (say, a “presidential candidate” personally calling to request financial support), do not give any financial or personal details to the caller!
In a famous example of AI-generated video being used this way, a finance worker at a large multinational company was tricked into giving scammers the equivalent of $25 million. The scammers used deepfakes in a video call to impersonate the company’s CFO and several other prominent figures in the organization, claiming they needed a secret transaction authorized.
In a similar way, attackers can impersonate a person’s loved ones via AI-generated video. Social media is rife with videos and pictures of people that can be used to impersonate them with AI. Once again, an attacker could FaceTime a victim while impersonating their child, grandchild, parent, spouse, friend, or other loved one, using the same playbook as the voice-call scams.
How to Tell When It’s a Deepfake
The FBI and the Internet Crime Complaint Center recommend looking for subtle imperfections in video or audio. Does the person on the phone have a strange way of pronouncing words, or do their sentences sound off? Does something about the image on the screen look wrong to you? Are their movements and facial expressions unnatural? These can be signs that the person on the phone isn’t a person at all, but an AI-generated imitation of one.
Agreeing on a password with your family members is another safeguard against becoming a victim of deepfake attacks. A simple, pre-agreed phrase like “banana bread and cheese” is one way to authenticate the person you are talking to. In an emergency, if you are unsure whether a situation is legitimate, you can ask for the password before continuing the conversation.
When using a family password, though, it is important that the password stays secret: be careful not to post it on social media or type it out anywhere unless you are actually using it. Joking about the family password can be fun, but it loses its effect if others can discover it.
What to Do When a Deepfake is Calling
First and foremost, stay calm! If you receive a call from a loved one that seems fishy, hang up and contact the supposed caller at a known phone number. If you cannot reach them directly, try reaching them through friends, family members, or other means.
SKB Cyber recommends having an agreed-upon family password, as described in the section above, that can be used to verify the identity of the person you are talking to. Make sure the password stays secret and is known only to your family and anyone else who is supposed to have access to it.
If you receive a call from a notable figure or from a financial institution, hang up as soon as anything about the call seems suspicious. Find a known number for the institution, or a known contact if there is one, and place the call yourself. In the case of a suspicious political message, call a campaign office or your county clerk to verify the information in the call. In any case, do not provide any personal or financial details to an unconfirmed, unidentified caller!
For more information and security tips, contact us at www.skbcyber.com for a free consultation!
