When most UAE residents first heard the term “deepfake,” they assumed it was a foreign problem, one for Hollywood actresses or American politicians, not their neighbors or coworkers in Sharjah or Abu Dhabi. That distance has vanished. Walk into any cybersecurity briefing in Dubai these days and you’ll hear the same unsettling refrain: the technology has progressed from novelty to annoyance to, increasingly, weapon. For months, the UAE Cybersecurity Council has been warning residents to treat any unexpected video call or voicemail from a “boss” or “family member” with the suspicion once reserved for emails from Nigerian princes.
The nation’s response is novel and deserves attention. As part of a larger initiative linked to the National Programme for Artificial Intelligence, the UAE has begun rolling out AI-powered detection tools built specifically to catch synthetic media before it spreads, alongside a manual that walks the public through systematic deepfake identification. That the tools are updated frequently matters more than it may seem: detection models deteriorate rapidly. Because the generators keep getting better, cheaper, and faster, a system trained on last year’s deepfakes is frequently blind to this year’s. Researchers in the field describe this as a treadmill rather than a war.

**UAE’s AI-Powered Deepfake Defense: Key Information**

| Item | Details |
|---|---|
| Lead Authority | UAE Cybersecurity Council |
| Strategic Framework | UAE National Programme for Artificial Intelligence (launched under Vision 2031) |
| Governing Law on Image Misuse | Federal Decree-Law No. 34 of 2021 (digital blackmail, image manipulation) |
| Child-Specific Protection | Wadeema’s Law (heightened privacy for minors) |
| Detection Approach | AI-based deepfake detection tools, regularly updated, with public awareness guides |
| Reported Threat Growth | Deepfake-driven fraud attempts up roughly 2,137% over three years (Deloitte Middle East) |
| Reporting Channels for Victims | Dubai Police eCrime portal, Abu Dhabi’s Aman service, MySafe Society |
| Key Academic Contributor | Al Ain University — research on OpenCV-based fake video detection |
| Industry Partners Cited | KPMG Lower Gulf, PwC Middle East, Deloitte Middle East, Proofpoint EMEA |
| Common Misuse Patterns | Sextortion, voice cloning, fabricated executive videos, romance scams |
In a recent interview with Gulf News, Talal Shaikh, an associate professor of AI and robotics at Heriot-Watt University Dubai, stated bluntly that a single clear photograph can now yield thousands of altered images in seconds. In the UAE’s close-knit communities, where reputation spreads faster than proof, fake images can ruin a marriage or a career before anyone has a chance to verify what they’re seeing. Sextortion cases have risen. Fabricated corporate portraits have been used to drain payroll accounts. Voice clones of executives have secured approval for wire transfers that would have been questioned a year ago.

A legal framework is already in place. Federal Decree-Law No. 34 of 2021 criminalizes digital blackmail and malicious image manipulation, and Wadeema’s Law provides heightened privacy protection for children. Victims can lodge complaints through the Dubai Police eCrime portal, Abu Dhabi’s Aman service, or MySafe Society. Laws, however, move at a different pace than generative models. Eliza Lozan of Deloitte Middle East reported a roughly 2,137% increase in deepfake-driven fraud attempts over a three-year period. A number like that deserves a moment of silence.
Researchers at Al Ain University have been quietly publishing work on OpenCV-based fake-video detection, concentrating on the tiny, nearly imperceptible artifacts that even highly advanced deepfakes leave behind: a flicker around the eyes, an irregularity in the jawline’s shadows, audio that fits the lips but not the room. Watching this develop, it’s hard not to feel that detection has become a forensic art form, closer to reading the digital equivalent of brushstrokes than to spotting blatant fakes.
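The temporal-artifact idea behind that kind of research can be sketched in a few lines. The snippet below is an illustrative toy, not the Al Ain University pipeline: it uses plain NumPy arrays in place of decoded video frames and a fixed rectangle in place of tracked facial landmarks, and it simply flags frames whose region-level change is statistically anomalous — roughly the kind of flicker a generator can introduce around the eyes. Function names and thresholds are invented for the example.

```python
import numpy as np

def flicker_score(frames, region):
    """Mean absolute change between consecutive frames inside a region.

    frames: list of 2-D grayscale arrays; region: (y0, y1, x0, x1).
    Real detectors track face landmarks with OpenCV; the fixed
    rectangle here is a deliberate simplification.
    """
    y0, y1, x0, x1 = region
    crops = [f[y0:y1, x0:x1].astype(float) for f in frames]
    return [float(np.abs(b - a).mean()) for a, b in zip(crops, crops[1:])]

def suspicious_frames(scores, z=1.5):
    """Indices whose score deviates more than z standard deviations."""
    s = np.array(scores)
    mu, sigma = s.mean(), s.std()
    if sigma == 0:
        return []
    return [i for i, v in enumerate(s) if abs(v - mu) > z * sigma]
```

A sudden intensity jump in the eye region of one synthetic frame produces two outlier scores (entering and leaving the anomalous frame), which `suspicious_frames` picks out; a production system would replace the z-score rule with a trained classifier over many such artifact features.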
Whether any of this scales is the harder question. What once required specialized hardware now lives inside a phone app, according to Moussa Beidas of PwC. Microsoft had to sue developers who were reselling altered Azure OpenAI services. YouTube quietly applied AI “enhancement” to user uploads without telling creators. No government tool, however well designed, will rebuild trust in the visible world on its own.
Still, the UAE’s strategy looks deliberate rather than reflexive. Pairing detection technology with public education, watermarking guidelines, child-protection law, and accessible reporting channels signals a recognition that this cannot be resolved at the model level alone. Whether the rest of the region adopts that model or waits for its own crisis to force the issue is genuinely unclear. What does seem clear is that the Gulf’s next major deepfake scandal won’t come as a surprise; the only question will be which nation was ready for it.