Meta is making a change to how it labels AI content. The parent company of Facebook and Instagram announced in an updated blog post that it’s changing its labeling from “Made with AI” to “AI info,” noting that its AI-detection tools sometimes flagged images that were edited with AI but not made with it exclusively.
“We want people to know when they see posts that have been made with AI,” Meta wrote. “While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information.”
Though the new label acknowledges the messy moment artists and creators find themselves in, “AI info” is a vague term that doesn’t really tell users much of anything. Does it mean something was created wholesale with AI? What if a photographer merely edited an image with Adobe Firefly tools? “AI info” might flag that some kind of AI was involved, but it still lacks the context and specificity that would make the label truly useful.
Meta says the new label better matches its intention, which is to convey that there are nuances to how AI is used. “Like others across the industry, we’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context,” Meta wrote. “For example, some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled ‘Made with AI.’”
The company says it will begin applying the “AI info” label to video, audio, and image content that either is detected via “industry-standard AI image indicators” or that users themselves disclose as AI-generated.
The change comes as other platforms develop their own tools and language to detect and label AI. TikTok launched AI labeling earlier this year, as did Google’s YouTube, which added a “How this content was made” section to video descriptions noting when “sound or visuals were significantly edited or digitally generated.”
But labels buried in descriptions or information that’s accessible only by tapping unclear tags, like Meta’s “AI info” solution, might not be the best way to keep users informed. Artefact, a Seattle-based studio, designed a concept for labeling AI-generated content that could show how much of a piece of content was created by humans versus AI. A photo taken by a human and lightly retouched with AI wouldn’t be shown as a completely AI-generated image; instead it would be represented as a hybrid piece of content.
This level of nuance is hard to convey in a clean and simple graphic that must be integrated into an existing app interface. That’s likely one of the reasons Meta opted for brevity over giving users complete information. But in these early days of AI, when people are still navigating what authorship means, a better label from Meta could have gone a long way toward creating some much-needed standards.