Branded is a weekly column devoted to the intersection of marketing, business, design, and culture.
Plenty of brands seem eager to signal their artificial intelligence chops these days—maybe too eager. Consider Toys “R” Us. It set out to grab attention at the recent Cannes Lions festival, and beyond, with a bold example of AI as a creative tool. And what it touted as the first brand video generated with AI certainly got a strong reaction. In short, many found it creepy and off-putting, as well as a slight to human ad creatives. Jeff Beer, Fast Company’s senior staff editor covering advertising and branding, pronounced it an “abomination.”
In fact, the spot’s dreamy depiction of the chain’s origin story, made with OpenAI’s text-to-video tool, Sora, became just the latest example of a brand scrambling to embrace—and be seen to embrace—AI’s potential, and basically stepping on a rake. It should be (yet another) reminder of what brands have to lose in the rush to do something, anything, involving AI. Whatever the ambition, the spot ended up as a high-profile entry on the roster of the biggest brand mistakes of the AI era. So far.
But it certainly has a wide variety of company. Just a few weeks ago, McDonald’s pulled the plug on an experiment with AI handling drive-through orders. The system’s botched interpretations of certain orders—mistakenly registering requests for hundreds of McNuggets or for ice cream with bacon on it—went viral on social media. The burger giant announced it would “explore voice-ordering solutions more broadly,” essentially conceding that the technology’s not ready for prime time just yet. (McDonald’s wasn’t the only brand burned; the episode was also a bad look for IBM, its tech partner on the effort.)
Earlier this year, a Canadian tribunal ruled that Air Canada would have to compensate a customer who had received erroneous information about its bereavement policy from the airline’s chatbot. Air Canada’s defense involved an argument that the chatbot was in effect a separate legal entity “responsible for its own actions.” The amount in dispute was around $600 (plus tribunal fees)—which only makes the cost of the brand mistake seem even more ridiculous.
In one of the most high-profile AI debacles to date, Sports Illustrated was found to have published AI-generated articles attributed to fake “authors.” The scandal wreaked havoc on an already struggling but storied sports journalism brand; the CEO of its operating entity was fired in the aftermath. (Authentic Brands Group, owner of SI’s intellectual property rights, later signed a licensing agreement with a different operator.) Much of the automated content was dubious and strange, and the episode became an object lesson for brands on the need to be honest and transparent about AI experiments.
And of course the companies actually fueling the AI tech boom have hardly been immune to brand mistakes as they’ve battled each other for customers and attention. Quite the contrary. The much-ballyhooed OpenAI has practically become a household name—and its notorious gaffes have been part of that story. Its technology infamously dreamed up imaginary case law that was cited (and exposed as fake) in actual legal proceedings. The company was also accused of generating an unauthorized imitation of Scarlett Johansson’s voice for its ChatGPT product, stepping on a creative-community nerve about generative AI copying without permission. Its denial was undercut when CEO Sam Altman tweeted “her” to promote the release, a seemingly direct reference to the movie Her, in which Johansson voiced a fictional AI assistant.
Anxious not to be left behind, Google has scrambled to add AI to its search arsenal, and its AI Overviews product has definitely gotten attention—particularly for doling out dubious (and soon viral) advice involving eating rocks and adding glue to a pizza recipe.
But Microsoft, another participant in the AI scrum, arguably gets the first-mover advantage nod in the brief history of AI gaffes. Way back in 2016, it debuted Tay, a social media chatbot powered by AI and supposedly designed to converse with humans and learn from those interactions. Unfortunately, a number of those humans promptly trained Tay to spew racist and antisemitic views; it was shuttered the next day. (Microsoft has more recently looked like a winner in the AI race, but its Bing search engine has produced its share of attention-grabbing “hallucinations.”)
In fairness, AI has come a long way in a short period of time, and will presumably continue to improve. But that doesn’t change the reality that today’s touted feature can become tomorrow’s glitch. Smaller-scale examples keep piling up, too, from Snapchat’s AI help bot alarming users by seeming to quit its job, to Adobe accidentally ticking off some of its photographer customers by noting Photoshop users could “skip the photo shoot” thanks to AI, to Figma disabling an AI design tool that apparently copied the design of Apple’s Weather app. It also won’t change the underlying risk for brands—the rush to brag about incorporating the latest AI bells and whistles can end up making them look not just clueless but untrustworthy when things go sideways. That’s a problem for the brand, not the technology. After all, each of these gaffes resulting from the current AI scramble can be attributed partly, if not mostly, to poor human judgment. And fixing that might take a while.