What are Deepfakes?
Deepfakes are synthetic media—primarily video or audio—that have been manipulated or entirely generated by deep learning, a branch of artificial intelligence, to convincingly portray a person saying or doing something they never actually did.
The term “deepfake” is a portmanteau of “deep learning” and “fake.” These highly realistic fabrications use powerful generative AI models to map the facial expressions, speech patterns, and mannerisms of a target person onto new footage or audio with near-perfect fidelity. While deepfakes have legitimate creative applications (such as animating realistic characters or restoring old film), they pose significant risks to public discourse by enabling financial fraud, electoral disinformation, and severe reputational damage. The growing accessibility and realism of deepfake technology make careful media vetting a necessity for every organization.
Think of it this way: a deepfake is the digital equivalent of an expert Hollywood visual effects studio, except the effects are now cheap, fast, and accessible to anyone with a laptop. If someone can use an AI tool to make it look and sound exactly like your Chamber of Commerce CEO is announcing a fake event date or making an inappropriate comment, that’s a deepfake. The result is a nightmare for trust, because people no longer know what to believe is real.
Why Deepfakes Matter for Your Organization
For a leader managing external communications and brand safety, deepfakes are a critical and immediate threat to your credibility and operational security.
Your organization’s voice—whether it’s the CEO’s keynote address or a public safety announcement—must be unimpeachable. A malicious actor could easily create a deepfake video of a Business Improvement Area (BIA) manager issuing a false emergency alert, or of a Destination Marketing Organization (DMO) leader making a politically charged statement that sparks controversy. Deepfakes spread far faster than you can issue a retraction. Your risk management strategy must include protocols for verifying the authenticity of high-stakes communications, especially video and audio content, before publishing or reacting to them.
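One simple form such a protocol can take is checksum comparison: before acting on a high-stakes file, confirm that its cryptographic hash matches a value the sender shared through a separate trusted channel (a phone call, a verified website). Below is a minimal sketch in Python; the filename and published checksum are hypothetical placeholders, not real assets.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: the checksum would come from the sender via a
# separate, trusted channel, never alongside the file itself.
published_checksum = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
received_file = Path("ceo_announcement.mp4")

if sha256_of_file(received_file) == published_checksum:
    print("Checksum matches; the file is byte-for-byte what was published.")
else:
    print("Checksum mismatch; treat the file as potentially manipulated.")
```

A matching checksum proves the file was not altered in transit; it does not prove who created it, which is where the signed-asset approach in the example below comes in.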
Example
A local Economic Development Officer (EDO) wants to promote a new investment opportunity for the region, complete with a testimonial video from a prominent local CEO.
Weak Approach (Vulnerable to Deepfakes): The EDO posts a video testimonial sourced from an unverified third party. Later, the video is exposed as a deepfake in which the CEO appears to promote a competitor’s city instead. The EDO’s team must spend days controlling the damage from a misinformation crisis and repairing the relationship with the local CEO.
Strong Approach (Security Protocol): The EDO follows a strict internal AI Policy that mandates a digital watermarking or cryptographic verification check on all key external video and audio assets. They proactively inform the public that official videos will always carry this digital signature, so any asset without it can be flagged immediately as a likely fake.
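As a concrete illustration of the cryptographic verification check described above, the sketch below signs an asset with an Ed25519 key and verifies the signature later. It is a minimal sketch, assuming the pyca/cryptography library (`pip install cryptography`); the video bytes are a placeholder, and a real deployment would keep the private key in a secure key-management system rather than in the script.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At publication time: the organization signs the raw bytes of the asset.
private_key = Ed25519PrivateKey.generate()  # in practice, stored securely
public_key = private_key.public_key()       # shared publicly, e.g. on the website

video_bytes = b"...raw bytes of the official video file..."  # placeholder
signature = private_key.sign(video_bytes)

# At verification time: anyone holding the public key can confirm that the
# file they received is byte-for-byte what the organization signed.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: this asset was published by the key holder.")
except InvalidSignature:
    print("Signature invalid: the asset was altered or is not official.")
```

Publishing the public key prominently (for example, on the organization’s official website) lets journalists, partners, and the public verify assets themselves, which is what makes the proactive disclosure in the Strong Approach credible.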
Key Takeaways
- Synthetic Media: Deepfakes are highly realistic, AI-generated or manipulated video and audio.
- Reputational Risk: They pose a massive threat to public trust, corporate credibility, and brand safety.
- Fraud Potential: They are increasingly used for sophisticated financial and political fraud.
- Verification is Key: Organizations must implement protocols for checking the authenticity of media before engaging or publishing.
Go Deeper
- The Technology: Understand the powerful underlying system in our definition of Generative AI.
- The Defense: See the organizational rules required to manage this risk in our guide on AI Policy.
- The Contrast: See how AI can produce false information unintentionally in our entry on Hallucinations (a different form of factual failure).