
When Google unveiled Veo 3 in May 2025, the tech press hailed it as a breakthrough in cinematic AI. But make no mistake: we didn’t just get better video generation—we handed extremists, propagandists and governments a machine that fabricates reality on demand. Veo 3 is not neutral. It’s a truth war machine—and we’re already losing.
As Connor Leahy, CEO of the AI safety company Conjecture, put it: “They are not responsible enough to handle even more dangerous, uncontrolled AI and AGI.”
Here’s why: Veo 3 produces visually disarming clips with lifelike physics, lip-sync and ambient audio so convincing that they short-circuit our fact-checking reflex. I watched a raw clip labelled as a protest in Gaza and wondered: Was the fire really burning? Was the crowd even real? We’ve entered an era in which eight-second videos are indistinguishable from authentic footage, and our scepticism itself is weaponised.

I’ve seen users on Reddit react to an AI clip of an “American soldier” overlooking a Gaza crowd on Instagram as if it were live footage. One commenter remarked, “99% believed it was real.” That’s not just misperception; it’s a trust crisis.
The second problem: scale and opacity. Veo 3 is prompt-driven and embedded in everyday platforms. In minutes, anyone with a prompt can generate believable visuals of bombed buildings, protesters being gassed, or militant scenes that never happened. During the Iran–Israel escalation, AI-generated videos depicted entirely fabricated protests: one showed Israelis supposedly shouting peace slogans in Hebrew gibberish, another showed Iranians praising Israel.
Remember Trump’s AI clip of Gaza turned into his own personal resort, complete with belly dancers, yachts and a golden statue of himself? It began life as satire, but once Trump reposted it stripped of that context, millions were fooled. The viral shock value made it propaganda.

Sensitive content around Gaza that is already suppressed by algorithmic moderation gets buried further. Deepfake propaganda can amplify Israeli narratives or muddy footage of real Palestinian suffering. It’s an information war fought in pixel noise.
If authentic documentation of Gaza becomes blurred or disbelieved, silencing real suffering in favour of simulated spectacle, who speaks for the voiceless?
So here’s my hot take: Veo 3 isn’t a creative tool; it’s a weapon. And without urgent action (mandatory disclosure of synthetic content, enforced digital watermarking such as Google’s SynthID, and interoperable provenance systems), we are letting reality itself be undermined.
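The provenance systems mentioned above rest on one core idea: cryptographically binding a record to the exact bytes of a clip at publication time, so that any later edit is detectable. The sketch below is a deliberately minimal Python illustration of that idea using an HMAC tag; it is not how SynthID actually works (SynthID embeds an imperceptible watermark in the media itself), and the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret held by the publisher or provenance authority.
SIGNING_KEY = b"publisher-secret-key"

def make_provenance_tag(video_bytes: bytes) -> str:
    """Produce a tamper-evident tag bound to the exact bytes of a clip."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_provenance(video_bytes: bytes, tag: str) -> bool:
    """True only if the clip is byte-identical to what was originally tagged."""
    expected = make_provenance_tag(video_bytes)
    return hmac.compare_digest(expected, tag)

clip = b"...raw video bytes..."
tag = make_provenance_tag(clip)

print(verify_provenance(clip, tag))         # True: untouched clip verifies
print(verify_provenance(clip + b"x", tag))  # False: any edit breaks the tag
```

Real provenance standards add signed metadata (who made the clip, with what tool, when) on top of this binding, and they only help if platforms actually check and surface the result: a tag nobody verifies is no defence at all.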
The visual commons is now contested ground—and Gaza is at the epicentre. When someone says, “I saw it online,” we must ask: Is this truth, or just a prompt?
Because if we can’t tell, AI wins first by creating images—and second by creating doubt. And when truth disappears, what’s left is silence.
References:
Abu Toha, M. (2025, June). Mosab Abu Toha on Instagram: “concentration camp! humiliating people starved for 86 days. shame on this world.” Instagram. https://www.instagram.com/reel/DKKoZ1CStaW/?igsh=cmZ0M2VpMmJ3YnE%3D
AFP via Getty Images. (2025). Largest anti-Hamas protests since start of war break out in northern Gaza. CNN World. Retrieved August 13, 2025, from https://media.cnn.com/api/v1/images/stellar/prod/anti-hamas-protests-0326-screengrab-dig.jpg?c=16×9&q=w_1280,c_fill
Chow, A. R., & Perrigo, B. (2025, June 3). Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud. Time. https://time.com/7290050/veo-3-google-misinformation-deepfake
ChrisWagy. (2025, June). VEO 3 (presumably) used for real war propaganda by popular YouTube channel, showing fake “bombed-out streets of Tel Aviv”: Do you still believe what you see? Reddit. https://images.app.goo.gl/XY2ek7gvXjVxrtWx9
EDMO Publications. (2025, July 14). The first AI War: How the Iran-Israel Conflict Became a Battlefield for Generative Misinformation. EDMO. https://edmo.eu/publications/the-first-ai-war-how-the-iran-israel-conflict-became-a-battlefield-for-generative-misinformation
Hall, R. (2025, March 6). “Trump Gaza” ai video intended as political satire, says creator. The Guardian. https://www.theguardian.com/technology/2025/mar/06/trump-gaza-ai-video-intended-as-political-satire-says-creator
Lucente, A. (2025). Trump’s AI Gaza video elicits mockery from Middle East social media users. Al-Monitor. Retrieved August 13, 2025, from https://www.al-monitor.com/originals/2025/02/trumps-ai-gaza-video-elicits-mockery-middle-east-social-media-users
Masood, A. (2025, March 29). Toward Reliable Provenance in AI-Generated Content: Text, Images, and Code. Medium. https://medium.com/@adnanmasood/toward-reliable-provenance-in-ai-generated-content-text-images-and-code-9ebe8c57ceae
Mehvar, A. (2025, July 25). Q&A: Twelve days that shook the region: Inside the Iran-Israel war. ACLED. https://acleddata.com/qa/qa-twelve-days-shook-region-inside-iran-israel-war
Olteanu, A. (2025, May 22). Google’s Veo 3: A Guide With Practical Examples. DataCamp. https://www.datacamp.com/tutorial/veo-3
Shtaya, M. (2024, August 14). How AI systems are dehumanising Palestinians. Digital Action. https://yearofdemocracy.org/how-ai-systems-are-dehumanising-palestinians/