Digital Puppets and Million-Dollar Scams: The Rise of AI-Powered Fraud

October 8, 2025
This thought leadership article on deepfakes and AI-powered phishing was co-authored by Jonathan Green, founder of Fraction AIO and host of The Artificial Intelligence Podcast, and Jason McKinley, founder and CEO here at Arc Technologies Group.
Jason and Jonathan are collaborating on a series of articles on the intersection of cybersecurity and artificial intelligence this #CybersecurityAwarenessMonth. You can also check out the recent podcast episode, Is AI Security Getting Better or Worse With Jason McKinley, which launched on September 29!
By Jonathan Green – Part 1:
That’s Not Your CEO on the Video Call Asking for a Wire Transfer
You get an urgent request. It’s a video call from your CEO. She’s traveling, the connection is a little choppy, but it’s clearly her. She looks like her, and she sounds like her. She needs you to process an emergency wire transfer for a top-secret acquisition, and it has to happen now.
So you do it. And you’ve just been scammed out of millions. Because that wasn’t your CEO. It was a real-time deepfake, a digital puppet created by a criminal who just walked away with your company’s money.
The age of “seeing is believing” is over. The most sophisticated financial fraud is now targeting your C-suite with deepfake technology. And the only way to fight this high-tech threat is with a surprisingly old-school defense.
Your Eyes and Ears Can No Longer Be Trusted
For years, the gold standard for verifying a strange financial request was to “get them on the phone” or “do a quick video call.” That security protocol is now completely and utterly useless. As I’ve warned before, that’s not your CFO on the Zoom call. Scammers can scrape a few minutes of audio from a podcast and a few photos from your company website to create a convincing deepfake of any executive. Our brains are wired to trust what we see and hear, and that instinct has now become your single greatest vulnerability.
The New Corporate Password is a Shared Memory
So how do you defend against a threat you can’t see or hear? You go analog. You need to implement a verification system that an AI can’t possibly know because the information doesn’t exist on the public internet. The new corporate password is a shared memory.
Establish a set of simple, non-public challenge questions among your leadership and finance teams.
“What was the name of our first major client?”
“What city was our 2023 annual retreat in?”
“What’s the name of the awful restaurant we went to after the trade show?”
If you get an urgent, out-of-the-ordinary financial request via video call, your job is to ask the challenge question. Your real boss will know the answer instantly. A deepfake puppet will stall or give a generic, AI-generated response. It’s a simple test that a machine cannot pass.
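The questions themselves should live only in human memory, but if a finance desk wants a shared record for checking answers, one reasonable approach is to store only salted hashes of the agreed answers, never the plaintext, so they can't leak from a document or ticketing system. Here's a minimal sketch in Python; the function names and the sample answer are hypothetical illustration, not a prescribed tool:

```python
import hashlib
import hmac
import secrets

def enroll_answer(answer: str) -> tuple[bytes, bytes]:
    """Store only a salted hash of the agreed answer, never the plaintext."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", answer.strip().lower().encode(), salt, 200_000
    )
    return salt, digest

def verify_answer(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Hash the spoken answer the same way and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac(
        "sha256", candidate.strip().lower().encode(), salt, 200_000
    )
    return hmac.compare_digest(attempt, digest)

# Enroll once, in person; verify during a suspicious call.
salt, digest = enroll_answer("The Rusty Anchor")            # hypothetical answer
print(verify_answer("the rusty anchor", salt, digest))      # True
print(verify_answer("a plausible AI guess", salt, digest))  # False
```

The trim-and-lowercase normalization keeps the check forgiving of how a human says the answer out loud, while the salted hash means even someone who steals the record still can't answer the question.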
When in Doubt, Walk Down the Hall
The ultimate defense is the simplest one: physical verification. If your CEO is supposedly on a video call from another country asking for an emergency wire transfer, but you think they’re in the building… get up and walk down the hall. A five-second, in-person conversation can prevent a multi-million-dollar mistake.
We’re so used to digital communication that we’ve forgotten the power of the analog world. A digital channel can be spoofed, but a simple, human-to-human interaction is much harder to fake.
The tools that scammers use to weaponize AI are evolving at a terrifying speed. Your security protocols must evolve faster. It’s time to train your team for this new reality and implement a system of simple, human-centric verification.
But surely this doesn’t happen in real life, does it?
By Jason McKinley – Part 2:
Yes, deepfakes really are a problem, and they are more common than you’d think, particularly for business executives who have some form of public presence, myself included.
AI has been put to numerous innovative uses in the business world over the last two years. Still, it has also introduced new risk vectors and, unfortunately, made many existing ones more effective. AI-powered phishing becomes particularly nasty in the form of voice or video deepfakes of company executives.
In fact, last year, a video-based deepfake duped an employee at a UK engineering firm into transferring over $25 million from company accounts into accounts controlled by a cyber threat actor.
This is a high-risk attack, and it becomes easier to mount the more public a personality you are: the more of your audio and video that exists online, the better the raw material for a deepfake. But that doesn’t mean small businesses or the SME market are exempt. Even at smaller companies, we have seen AI voice deepfakes in particular attempt the same thing.
Thankfully, none of our customers, enterprise or otherwise, have experienced a loss. However, training and awareness on this particular vector should be an integral part of your ongoing cybersecurity program.
Plan and train now, and reduce risk later.
ATG also strongly advises the use of non-public safe phrases among executive suite members and their teams. Handled correctly, this is an almost impossible control to beat in the current risk landscape for this kind of attack!
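One implementation note, offered as an assumption rather than a prescribed ATG procedure: a safe phrase is only as strong as its unguessability, so generate it randomly instead of choosing something memorable from company history that a scraper might surface. A minimal sketch (the word list and function name are hypothetical):

```python
import secrets

# Tiny illustrative word list; a real deployment would draw from a much
# larger one (e.g., the EFF diceware list). Everything here is hypothetical.
WORDS = ["copper", "lantern", "orchid", "gravel", "monsoon", "velvet",
         "falcon", "ember", "quartz", "harbor", "tundra", "cobalt"]

def make_safe_phrase(n_words: int = 4) -> str:
    """Pick words at random so the phrase can't be scraped or guessed."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_safe_phrase())  # e.g., "ember harbor cobalt lantern"
```

Rotate the phrase on a schedule, and share it only in person or over an already-verified channel.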
#AICybersecurity101 #AIThreats #CybersecurityAwarenessMonth #ThoughtLeadership #AIRiskAwareness