Photo: NICOLAS TUCAT / AFP / Getty Images
X is taking aim at creators who post videos of armed conflicts generated by artificial intelligence (AI) without disclosing that the content is fake. The social media platform announced Tuesday (March 3) that creators who break the new rule will lose access to its revenue-sharing program.
Nikita Bier, head of product at X, announced the updated Creator Revenue Sharing policies in a post on the platform. The move comes after social media feeds were flooded with fake battle scenes following the start of the conflict in Iran.
"During times of war, it is critical that people have access to authentic information on the ground," Bier wrote. "With today's AI technologies, it is trivial to create content that can mislead people."
Under the new policy, any creator who posts an AI-generated video of an armed conflict without clearly labeling it as AI-made will be suspended from X's Creator Revenue Sharing Program for 90 days. A second violation will result in a permanent ban from the program.
According to The Guardian, the policy update follows a wave of fake war footage that spread rapidly across X, Instagram, and Facebook. One fabricated clip showed Iranian rockets shooting down a US jet and was viewed 70 million times. Another used AI to replace smoke from a real missile strike with a much larger fake fireball.
X says it will catch rule-breakers by scanning posts for metadata and other signals tied to generative AI tools, as well as through its crowdsourced fact-checking feature, Community Notes.
The platform's Creator Revenue Sharing Program lets popular accounts earn a share of advertising revenue. Critics have long argued that the program rewards sensational or outrage-driven content and that its content controls are too lax.
The new rule applies only to war-related AI content. AI-generated misinformation outside of conflict zones — including political deepfakes and deceptive influencer content — is not currently covered by the policy. Critics say this leaves a significant gap in X's content moderation.
Bier said X plans to keep improving its approach. "We will continue to refine our policies and product to ensure X can be trusted during these critical moments," he wrote.