As AI-generated images and video become more prominent on Twitter, the company is testing a new feature that could make it easier for people to identify potentially “misleading media.” Twitter is experimenting with Community Notes for media, which will apply the site’s crowd-sourced fact checks to specific photos and video clips.

The feature allows Community Notes contributors with sufficiently high ratings to attach notes to images shared within tweets. Like notes on tweets, the labels can add additional "context" to images, such as indicating whether a photo was created using generative AI or is otherwise manipulated.

“From AI-generated images to manipulated videos, it’s common to come across misleading media. Today we’re piloting a feature that puts a superpower into contributors’ hands: Notes on Media. Notes attached to an image will automatically appear on recent & future matching images. pic.twitter.com/89mxYU2Kir”
— Community Notes (@CommunityNotes), May 30, 2023