Iranian state-backed hackers disrupted TV streaming services in the United Arab Emirates to broadcast a deepfake newsreader delivering a report on the war in Gaza, according to analysts at Microsoft. The company said the operation, run by the Islamic Revolutionary Guard Corps, a branch of the Iranian armed forces, compromised streaming platforms in the UAE with an AI-generated news broadcast branded "For Humanity."
The fabricated news anchor presented unverified images purportedly showing Palestinians injured and killed by Israeli military operations in Gaza. Microsoft analysts identified the hacking group responsible as Cotton Sandstorm, which published videos on the Telegram messaging platform demonstrating their infiltration of three online streaming services and the subsequent interruption of news channels with the fake newscaster.
The Khaleej Times, a UAE-based news outlet, reported that Dubai residents using HK1RBOXX set-top boxes experienced interruptions in December with a message stating: “We have no choice but to hack to deliver this message to you.” This was followed by the AI-generated anchor introducing “graphic” footage, alongside a ticker displaying the number of casualties in Gaza.
Microsoft noted additional disruptions in Canada and the UK, affecting channels such as the BBC, although the BBC itself was not directly hacked. In a blog post accompanying a report on Iranian cyber-espionage, Microsoft stated: “This marked the first Iranian influence operation Microsoft has detected where AI played a key component in its messaging and is one example of the fast and significant expansion in the scope of Iranian operations since the start of the Israel-Hamas conflict.”
Advances in generative AI, technology capable of quickly producing realistic text, voice, and images from simple prompts, have fueled a rise in deepfakes online: AI-generated deceptive media, most commonly fake videos of people. Recent examples include fabricated images of celebrities such as Taylor Swift and AI-generated robocalls imitating the voices of public figures such as Joe Biden.
Experts are concerned that AI-generated content could be used extensively to disrupt elections, including the upcoming 2024 US presidential election. In the 2020 US election, Iran conducted a cyber-campaign that involved sending threatening emails to voters, pretending to be from the far-right Proud Boys group, creating a website inciting violence against FBI director Christopher Wray and others, and spreading disinformation about voting infrastructure.
Microsoft warned: “As we look forward to the 2024 US presidential election, Iranian activities could build on what happened in 2020 when they impersonated American extremists and incited violence against US government officials.”
Since the 7 October Hamas attacks, Iranian state-backed actors have launched a series of cyber-attacks and online attempts to sway public opinion. Their tactics include exaggerating the effects of alleged cyber-attacks, leaking personal data from an Israeli university, and attacking targets in pro-Israel countries such as Albania and Bahrain—a signatory to the Abraham Accords formalizing relations with Israel—as well as the US.
These incidents underscore the growing role of AI in state-backed disinformation campaigns, and the corresponding need for robust cybersecurity measures and public awareness to blunt their impact.