Boards aren’t ready for the AI age: What happens when your CEO gets deepfaked?

Deepfake fraud drained $1.1 billion from U.S. corporate accounts in 2025, tripling from $360 million the year before. By mid-2025, documented incidents had already quadrupled the 2024 total. And most corporate communications and brand teams remain dangerously unprepared.

Executives now face synthetic threats from two directions: their own likenesses, cloned to authorize fraudulent transfers or inflict reputational harm; and AI-generated voices that impersonate government officials, board members, and business partners in order to manipulate them.

In 2019, an unnamed British energy executive received a phone call from someone they believed was their chief executive. The accent and subtle consonant shifts were right; even the cadence was familiar. Only after wiring $243,000 did they learn the voice on the other end of the line was synthetic. Last year, scammers cloned the voice of Italy’s defense minister and called the country’s business elite. At least one executive transferred nearly €1 million before learning of the scam.

Even these companies were, in a sense, fortunate: the damage was financial and contained. Consider the impact if a synthetic video of your CEO making inappropriate remarks, announcing a false merger, or criticizing a regulator spread rapidly on social media before your team could respond. Deepfakes are no longer a cybersecurity curiosity. They now represent a security threat, a financial risk, and a significant reputational hazard.

The communications gap is wider than the security gap

Most coverage of deepfake threats centers on detection algorithms and verification protocols. Cybersecurity vendors offer solutions, and IT departments update policies. However, few address a critical question for CMOs and CCOs: What happens to your brand if your CEO’s likeness is used for fraud, disinformation, or character attacks?

I have spent two decades advising executives through reputational crises, including regulatory investigations and hostile media campaigns. Established playbooks exist for these situations. However, there is no established protocol for incidents such as a synthetic likeness of a CEO authorizing a fraudulent acquisition or a fabricated video of a founder going viral.

Executive visibility now cuts both ways

Each social media post, keynote address, podcast appearance, and earnings call involving your CEO provides potential training data for attackers. The visibility that builds executive brands and humanizes leadership also supplies the voice samples and facial mapping needed for synthetic media.

Not every attack succeeds. Last year, scammers targeted the CEO of a global advertising company. They created a fake WhatsApp account using his photo, staged a Microsoft Teams call with an AI-cloned voice trained on YouTube footage, and asked a senior executive to fund a new business venture. The executive refused and the firm lost nothing, but the sophistication of the attempt revealed how far the technology has advanced.

The number of deepfakes grew from 500,000 in 2023 to more than 8 million in 2025. Voice-cloning fraud rose by 680 percent in a single year. Projected losses from AI-enabled fraud are expected to reach $40 billion by 2027. Yet only 32 percent of corporate executives believe their organizations are prepared to handle a deepfake incident.

Three questions every communications team should answer now

First, do you have a disclosure protocol for synthetic media attacks? If an AI-generated replica of your CEO is used for fraud or disinformation, who communicates, when, and through which channels?

Second, have you conducted a deepfake tabletop exercise? Crisis simulations should now include scenarios where an executive’s likeness is used for internal fraud, external disinformation, or both.

Third, have you coordinated response sequencing with legal, cybersecurity, and investor relations? A deepfake crisis is a fraud event, a potential disclosure obligation, and a brand emergency all at once. Siloed responses will fail.

Act before the attack

The companies that will weather this era are building crisis protocols now, before their executives’ faces show up in videos they never recorded, saying things they never said, authorizing transactions they never approved. Your CEO’s likeness is a brand asset. It is also an attack vector.

Communications and brand teams that treat deepfakes as someone else’s problem—a cybersecurity issue, an IT concern, a fraud matter for finance—will find themselves drafting apologies instead of strategies.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
