In the midst of escalating tensions between the US, Israel, and Iran, social media giant X has rolled out a stringent policy targeting AI-generated war videos. Creators who fail to clearly disclose that their content is artificially produced will face immediate suspension from the platform’s revenue-sharing program.
The move comes as deepfake videos and images mimicking real battlefield footage flood online spaces. Such hyper-realistic visuals risk misleading the public, distorting perceptions of events on the ground, and amplifying chaos during global crises.
X’s Head of Product, Nikita Bier, announced the policy in a post on the platform, emphasizing the urgency of authentic information during wartime. ‘It’s now easier than ever to create AI content that deceives audiences,’ Bier wrote. Offenders will be barred from monetization for 90 days on a first violation and removed permanently for repeat infractions.
The rule specifically targets AI-generated videos depicting armed conflict that are posted without disclosure labels. X plans to pair automated detection tools with its Community Notes feature, which lets users flag and fact-check suspicious posts. This hybrid approach underscores X’s shift toward decentralized content moderation.
The monetization program, which shares ad revenue with creators based on engagement with their posts, has drawn criticism for potentially incentivizing sensationalism. Detractors argue that its lax eligibility checks encourage viral but misleading content.
For now, the policy zeroes in on war-related AI media, leaving broader AI misuse in politics and advertising unaddressed. As the technology advances, platforms like X face mounting pressure to safeguard truth amid an onslaught of synthetic media.