The article examines the nature and impact of disinformation associated with the Israel-Iran conflict, highlighting the involvement of AI technology and various social media platforms.
**Disinformation Surge in the Israel-Iran Conflict Fueled by AI Technologies**

Recent military actions between Israel and Iran have led to an alarming increase in AI-generated disinformation spread on social media.
A dramatic rise in disinformation has accompanied Israel's military strikes on Iran, particularly since June 13, when Israel launched operations targeting Iranian facilities. This wave of false information has been analyzed by various organizations, including BBC Verify, and reflects a troubling trend of AI-generated content exacerbating conflicts online.
In the immediate aftermath of the strikes, numerous fabricated videos purporting to show Iranian military successes began circulating, garnering massive attention. According to BBC Verify, these videos have collectively racked up over 100 million views on various platforms, showcasing exaggerated or entirely false portrayals of the conflict. These include AI-created footage that claims to depict missile strikes on Israeli territories, along with misleading images suggesting Iranian military triumphs.
Pro-Israel disinformation is circulating as well, notably the resurgence of outdated clips that misrepresent public sentiment in Iran by falsely suggesting widespread opposition to the regime and support for Israel's military actions. These narratives have gained traction through accounts that repurpose old footage to mislead viewers about conditions on the ground in Iran.
Geoconfirmed, a group focused on validating online content, described the volume of misinformation as unprecedented, referring to a proliferation of recycled video material, footage unrelated to the conflict, and AI-generated images. This new phenomenon has raised concerns about the power of generative AI in disseminating false narratives during times of international tension.
Observers note that certain social media accounts have emerged as significant "super-spreaders" of misleading content. One notable example is the pro-Iranian account Daily Iran Military, which gained 85% more followers in a matter of days, illustrating how quickly disinformation can amplify narratives favoring one side over the other.
Experts like Emmanuelle Saliba of the investigative group Get Real pointed out that generative AI tools are now being employed on an unprecedented scale in the context of war and geopolitical discord. The AI-generated images often depict implausible scenarios, such as bombardments of major cities and the downing of advanced fighter jets, further complicated by the challenging verification of night-time strikes.
The implications extend beyond individual false posts: fabricated content can lend perceived legitimacy to military actions on either side and erode public understanding of the facts. Advanced technologies, while potent tools for information dissemination, also pose significant risks of distorting narratives and swaying public opinion.
As the conflict unfolds, the spread of disinformation on social media remains a critical concern. TikTok and other platforms have asserted their commitment to combating misleading content, yet the actual impact of these measures remains in question as users continue to share AI-generated images and videos, often without verification.
This evolving landscape underscores an urgent need for individuals and organizations to engage critically with information presented during conflicts and to distinguish fact from fiction. It also highlights the ethical responsibility of technology companies to curb the spread of disinformation.