In a bid to curb misinformation, Elon Musk’s social media platform X (formerly Twitter) has announced changes to its creator policy targeting content generated with artificial intelligence. The update focuses on creators who upload AI-generated videos depicting armed conflicts without clearly stating that the footage is artificially created.
Under the revised rules, creators who fail to disclose that war-related videos are AI-generated risk losing their ability to earn through the platform. In some cases, they could also face a permanent ban from X’s Creator Revenue Sharing programme.
What the new rule states
Effective immediately, users who post AI-generated videos of armed conflicts without labelling them as AI-made will be suspended from the platform’s revenue-sharing programme for 90 days for their first violation. If the same creator repeats the offence, they could be permanently removed from the monetisation programme.
The policy update was announced by Nikita Bier, X’s head of product, in a post on the platform. Bier said the move is intended to protect the authenticity of information shared online, especially during wartime when misleading content can spread rapidly.
How X will detect AI-generated war videos
According to the platform, undisclosed AI-generated content will be identified using multiple detection methods:
Community Notes: X’s crowdsourced fact-checking feature will help flag misleading or synthetic content.
Metadata analysis: Technical information embedded within media files will be examined to identify AI-generated material.
AI detection signals: Additional technical indicators commonly present in generative AI videos will also be used.
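As a rough illustration of the metadata-analysis approach, the sketch below scans a media file’s raw bytes for provenance markers that generative-AI tools commonly embed, such as C2PA content credentials or XMP fields. The marker list and function name here are illustrative assumptions for this article, not X’s actual detection logic, which the platform has not published.

```python
# Illustrative sketch only: X has not disclosed its detection pipeline.
# Generative-AI tools that follow the C2PA standard embed provenance data
# (content credentials) in the media file itself; a naive scanner can look
# for the byte signatures of those structures.

AI_PROVENANCE_MARKERS = [
    b"c2pa",               # C2PA content-credentials manifest label
    b"jumb",               # JUMBF box type that carries C2PA data in JPEG/MP4
    b"DigitalSourceType",  # IPTC/XMP property that can mark synthetic media
]

def has_ai_provenance_marker(data: bytes) -> bool:
    """Return True if any known provenance marker appears in the raw bytes."""
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

# Example: a buffer containing an embedded C2PA manifest label is flagged,
# while plain video data with no markers is not.
print(has_ai_provenance_marker(b"\x00\x00jumbc2pa\xff"))  # True
print(has_ai_provenance_marker(b"ordinary camera footage"))  # False
```

Real-world detection would be considerably more involved (parsing container formats, validating cryptographic signatures on the manifest, and combining these signals with model-based classifiers), but the principle is the same: the file itself can carry evidence of how it was made.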
Creator warning
The update sends a clear message to the millions of creators who earn through X’s monetisation system: any AI-generated footage related to armed conflicts must be clearly labelled. Failure to disclose that such content is AI-made could lead to suspension from the platform’s revenue-sharing programme or even permanent removal.