In April, Charlie Kirk and his conservative youth organization Turning Point spent about $5,000—chump change for their $80 million operation—on a familiar product for them: another round of Facebook ads.
The four ads used the Kirk formula for going viral, outrage politics, this time attacking LGBTQ people and their allies as “groomers,” a term for child predators who manipulate victims into trusting them in order to sexually abuse them. One ad shows Kirk arguing transgender people have “deep-seated mental problems to begin with” that make them easy prey for “a groomer.” Another argues Disney’s last three animated films included subtle attempts to groom kids (Lightyear featured smooching lesbians, Strange World had a gay teen, Elemental starred Pixar’s first nonbinary character), and that this ulterior motive is why they tanked at the box office—“Hold this L groomers,” reads the ad’s text.
Like other content posted by Kirk’s and Turning Point’s accounts, these ads appear to use “groomer” in a manner that violates Meta’s Community Standards and its Advertising Standards. Both sets of standards ban generalizing groups of people as criminals, calling them sexual predators, and targeting them with slurs. For users still wondering whether that ban covers calling LGBTQ people “groomers,” Meta set the record straight in 2022 by confirming to the Daily Dot that “baselessly calling LGBTQ people or the community ‘groomers’ or accusing them of ‘grooming’” does indeed qualify as hate speech.
And according to data that LGBTQ advocacy group GLAAD shared with Fast Company, Kirk and Turning Point seem personally acquainted with that ban: In the past year, at least two posts published by Kirk’s Facebook page have been removed—one that put “Groomer endgame” above a video of a schoolteacher acknowledging their gender was hard for students to guess, and another that reacted to a kids’ Pride summer camp with “Groomer alert!”
Yet the Turning Point crew’s latest “groomer” ads stayed active even after GLAAD flagged them as hate speech. They generated at least 1.2 million impressions—the equivalent of reaching San Francisco’s and Miami’s entire populations.
How did two known provocateurs succeed in running a paid version of content that recycled attacks Meta appears to have removed as hate speech just months earlier?
Meta declined to offer Fast Company a detailed explanation on the record. In a written statement, it noted, “Advertisers running ads on Meta’s platforms must follow our Community Standards as well as our Advertising Standards.”
However, groups that closely monitor social media content enforcement tell Fast Company this fits a pattern of accounts affiliated with prominent right-wing commentators like Kirk and conservative outlets like the Daily Wire succeeding in running ads that violate Meta’s rules for acceptable content. The uptick in anti-trans content in particular, they note, comes against the backdrop of a new front in the culture wars, in which internet personalities are building lucrative brands by attacking trans rights. (Lady Ballers is a recent example, courtesy of the Daily Wire: a feature-length comedy about men dominating a women’s basketball league by posing as trans women. In November and December alone, the Daily Wire dropped over $1.6 million into Meta ads promoting the film.)
This issue has begun to draw attention from GLAAD, ad industry watchdogs, and even members of Meta’s independent Oversight Board. It’s one that would seem to pose tough questions for Meta—ethically, the company should remove content that breaks the rules, but financially, it can make more money by deciding the content doesn’t.
“Both Meta and the anti-LGBTQ creators are benefiting from this ecosystem,” Leanna Garfield, GLAAD’s social media safety program manager, tells Fast Company. “Ad content that, for example, characterizes LGBTQ people as ‘groomers’ or uses slurs poses an alarming conflict of interest, as Meta is making money from them.”
Meta didn’t respond on the record to Fast Company’s questions for this article, but a spokesperson did contend that “Meta provides more advertising transparency than any other platform, including any TV, radio, and print.”
A surge of bad ads to police
In recent years, a growing number of politicians, human rights groups, and watchdogs have claimed that not only is Meta doing a poor job of removing harmful content, but that its enforcement decisions are made inside what they see as a black box.
Meta claims every ad is reviewed before going live, and must adhere to higher standards than user-generated content. It also claims a “key part” of the review process is a network of 465 trusted partners that flag “dangerous and harmful content” they encounter: “From local organizations such as Tech4Peace in Iraq and Defy Hate Now in South Sudan, to international organizations like Internews, our partners bring a wealth of knowledge and experience to help inform our content moderation efforts,” it explains.
However, Internews effectively went rogue. Last August, it published a scathing report in which the media nonprofit and two dozen other partners it kept anonymous argued the trusted partner program was a facade. Meta was accused of taking up to eight months to address material that partners felt was likely to “lead to imminent harm and required immediate action.” (One partner quipped: “What trust? They don’t trust us, and so we don’t trust them.”)
Then in January, the Oversight Board argued that Meta’s failure to police hate speech stemmed from a problem “not with the policies, but their enforcement.” This followed Meta’s decision to lay off 100 Trust and Safety team members, a move that prompted members of Congress to warn Mark Zuckerberg directly that he could be “open[ing] the door” so that “a malicious agent can flood your platforms with hate speech, fake images, altered videos, and false information.”
This March, GLAAD revealed dozens of takedown requests it had submitted since last summer for posts the group believed qualified as hate speech. GLAAD says Meta never responded to the complaints—including ones flagging posts that depicted armed vigilantes beating enemies with bats and stomping on their heads next to a trans flag and the words “Help us do the work of the Lord.” These posts and most of the others remained live months after GLAAD published its report. (Meta hasn’t responded to media inquiries asking why, including Fast Company’s.)
Advocacy groups say it is a concerning development that bad actors are carrying their attacks on vulnerable groups from organic content into paid ads. In 2022, progressive watchdog Media Matters for America identified more than 150 ads that Meta ran in the first 10 months of that year that used “groomer” as an anti-LGBTQ slur, seemingly violating its own standards.
The bulk belonged to niche groups that most Americans will never cross paths with (Tomball Family Values in the Houston suburbs, or a PAC affiliated with Green Party TERFs). Data from Meta show these ads collectively cost about $15,000 to run and generated around one million total impressions.
But Charlie Kirk and Turning Point were on the Meta trust and safety team’s radar long before then. In 2020, Facebook had to ban a “troll army” of some 275 accounts, 55 pages, and $1.2 million worth of ads that Turning Point was using to peddle misinformation and promote its content. A Washington Post investigation revealed the organization had devised a way to flood Facebook with slavishly repetitive content that could achieve a similar effect as spam bots but evade moderation, by paying a group of teenagers in Arizona to post “a limited number of times” from their own accounts “to avoid automated detection.”
Since then, the rhetoric that is upsetting LGBTQ advocates has popped up in ads by some of Facebook’s most prominent right-wing accounts. GLAAD identified 53 in the past nine months that targeted the LGBTQ community with what it claims is hate speech. Among them were large individual ad buys (of up to $25,000) that labeled the trans community the product of “a sick cultural project” by people “who experimented on children,” alongside a supercut of Kirk hurling attacks on his podcast like “sicko” and “mentally deranged.”
After asking to see the “groomer” ads that Turning Point and Kirk started running in April, Meta declined to say whether they qualified as hate speech or violated the Community and Advertising Standards.
In June, while Fast Company was reporting this story, two of those ads suddenly popped up as disapproved in the Meta Ad Library, its searchable advertising database. A note explained Meta stopped running them because Turning Point had managed to violate a separate policy—the well-known requirement, added after Russia’s meddling in the 2016 elections, that advertisers must include a “Paid for by” disclaimer on ads about social issues, politics, or elections. Turning Point was able to add the disclaimer belatedly, causing both ads to go through Meta’s full review process again, and they returned to Facebook.
The shifting face of Facebook
The problem of bad ads follows an advertising shift that’s been underway since the 2000s, when Meta was still Facebook, the social media concept was brand-new, and brands like Apple and Victoria’s Secret were eager early adopters of sponsored content. Those partnerships brought huge profits—and, eventually, shareholder pressure that the profits continue. But in recent years, Meta has suffered its first-ever revenue declines, faced brand boycotts, and watched its stock sink in a slumping digital ad market, leaving Mark Zuckerberg to declare 2023 the “year of efficiency” at the Menlo Park headquarters.
The climate that emerged, one in which extra revenue needed to be found, has overlapped with prominent right-wing accounts buying a disproportionate number of Meta’s ads. Kirk and Turning Point alone have purchased more than 10,000 since the Meta Ad Library began saving them in 2019—an advertiser starting today could run five Facebook ads per day until the year 2029 and still end up almost a thousand short. Meanwhile, accounts for the Daily Wire and its two top personalities, cofounder Ben Shapiro and Matt Walsh, have run well over 11,000 ads. (Daily Wire cofounder Jeremy Boreing’s small razor brand, launched in 2022 as a “non-woke” alternative to juggernaut Harry’s, has purchased another 1,000.)
Mainstream media companies can’t claim a tenth as many ads. The New York Times ran 490 during that five-year period, according to the Ad Library. For NPR it was just five, while CNN looks hardcore for pushing 1,000. Meanwhile, McDonald’s has run 680 U.S. ads, Disney did 110, Apple tops out at 240, and fast fashion label Shein—a brand that dominates Instagram users’ “haul” videos—has run 1,900.
It’s undeniable that paid content has shifted from a few big brands dominating to a sea of smaller advertisers, each accounting for a sliver of the total ad pie, says Garrett Johnson, an assistant professor of digital marketing at Boston University’s Questrom School of Business. That means for Meta today, “the value proposition to a brand advertiser is a bit nebulous,” he explains. “But they have gotten really good at finding people who want a specific thing—coffee beans from Kenya, or leather sandals made by fair-trade producers.” Because this advertising space has gotten so big, Johnson says Meta could argue the magnitude of economic harm caused by an individual advertiser running “horrible ads on Facebook” is also “smaller in nature.”
But horrible ads can inflict other harms, such as the kind created during the 2016 election when Russia directed ads at highly specific groups through so-called microtargeting. Leather sandal makers can tap Meta’s cache of data on billions of users, then tailor ads to reach the subset most likely to get hooked by their advertising message. So can another type of advertiser: political activists.
The dangers associated with such activism were exposed in 2021, when a tranche of embarrassing internal Facebook documents was leaked to Congress. The files, which came from Facebook’s Civic Integrity unit (a team created “to serve the people’s interests first, not Facebook’s,” before being disbanded in 2020), contended Facebook was “creating perverse incentives” by boosting extreme content under the pretext that “outrage gets attention.” Team members complained the Daily Wire was “consistently exempted from punishment.” Moreover, they claimed that “a fear of political backlash” had prevented other conservative personalities from being designated as “repeat offenders” (a label Meta metes out to temporarily block the account’s ability to buy ads) because this subset was “extremely sensitive” and “has not hesitated going public about their concerns around alleged conservative bias on Facebook.” Charlie Kirk was named in this group.
Meta didn’t respond to Fast Company’s question about whether the content of advertisers with patterns of misbehavior should be subjected to more rigorous screening.
Social media companies choose to ban hate speech because such attacks on their digital platforms—whether in ads or organic posts—can have real-world consequences. While extreme right-wing outrage accounts verbally bully the trans community (“disgusting, mentally ill, neurotic, predatory freaks” is another of Kirk’s descriptors), their followers have been inspired to adopt more violent measures. Libs of TikTok creator Chaya Raichik has doxxed pro-trans rights teachers and gender-affirming healthcare providers, only for people online to then threaten to kill them. This spring, at least 54 bomb threats were called into Planet Fitness gyms after she criticized its transgender locker room policy.
Last August, Kirk used his podcast to attack Artemis Langford, Wyoming’s first trans sorority member, telling listeners she looks “like Shrek” and “towers over these young ladies, obviously a predator . . . obviously sexually attracted to them.” People should “make the trans freaks feel uncomfortable,” he urged. “Nothing physical . . . Just make fun of them.” Soon after, Langford’s name showed up in gun forums next to a noose. She was stalked around her college town, and her movements were posted on social media. Death threats followed. Sorority leaders advised members not to wear the house’s Greek letters and to set their social media accounts to private.
Sarah Kay Wiley, policy and partnerships director at Check My Ads, an ad industry watchdog whose founders helped organize the 2020 Facebook boycott and defund the website Breitbart, tells Fast Company that bad ads also present Meta with a different type of conflict of interest: Paid ads drive more followers to the advertiser’s page, and beyond spreading that content more widely, pages with more followers fetch higher advertising rates, a boon for Meta.
As it happens, the Facebook pages of the conservative rage-bait subset that purchases a high percentage of Meta’s ads also boast very high follower counts. The pages for Turning Point and Charlie Kirk, for instance, count 5.8 million followers. The Ben Shapiro page claims 9.4 million. The Daily Wire has 3.8 million followers.
Compare this to Apple, often the world’s most valuable company, which has 14 million followers on Facebook. Whole Foods has 4.1 million, while Chipotle has just 3.3 million. Facebook’s single largest advertiser, Temu, which reportedly spent nearly $2 billion on ads in 2023, has 4.2 million.
Needless to say, Facebook’s demographics have changed considerably from the 2000s. In 2020, the top Facebook page measured by overall user engagement was Ben Shapiro’s, and around this time, the Daily Wire was generating more engagement than the New York Times, the Washington Post, CNN, and NBC News combined. That year also saw 1,200 companies, including Coca-Cola, Starbucks, Levi’s, and Verizon, pull paid advertising to protest Facebook’s inaction on hate speech.
The politics of moderation
Other watchdogs argue Meta’s blind eye to bad ads extends beyond those that can harm the LGBTQ community. A spokesperson for the Anti-Defamation League (ADL) says the group has tracked the ways that politicians are exploiting ads for political elections—a speech type afforded greater First Amendment protections than the government grants to commercial speech—in order to post antisemitic and racist content.
The ADL monitors content pushing the Great Replacement Theory, the idea that Jews and a global elite are puppeteering a “white genocide” in the West through immigration. Five mass shooters since the mid-2010s have subscribed to the theory, and the Oversight Board recently said it’s the sort of social media content that “an ordinary user could expect protection from . . . under Meta’s Hate Speech policy.” But the ADL claims that during the last election cycle, attempts to get Meta to remove ads by Republicans like Marjorie Taylor Greene that endorsed the conspiracy mostly went nowhere.
Media Matters has also investigated paid ads that cite the Great Replacement Theory. One report found Meta approved at least 50 promoting it in 2022 following the Buffalo shooting where the killer used the theory as justification.
Most critics acknowledge Meta must police an immense volume of content, and the task becomes more onerous every day. Wiley from Check My Ads argues it’s getting harder for watchdogs like hers to track harmful content as bad actors find creative new ways to sneak abusive material past Meta’s remaining gatekeepers. Keyword searches used to be pretty effective, but “it’s hard to know what keywords to search for anymore,” she says. Much as malware evolves online, hate speech can be coded to avoid easy detection. “The alt-right has gotten really good at doing this,” she argues. Her group doesn’t have specific examples of the left using this tactic, but Wiley says she is “sure they exist.”
And the bad ad problem is global. In May, during India’s general elections, the global corporate accountability group Ekō (formerly called SumOfUs) and India Civil Watch International conducted a bold test to see if they could get Meta to approve ads designed to break all of its policies against hate speech, harassment, misinformation, and incitement of violence simultaneously. They claim to have succeeded in getting approval for 14 ads that sandwiched calls for bloody uprisings between dehumanizing slurs against Muslim minorities (“Let’s burn this vermin,” “These invaders must be burned”), and in one case even advocated for the outright execution of an opposition leader. The ads were purchased over a five-day period, and the groups say each received Meta’s formal approval within 24 hours.
“Meta can screen out some of the bad things,” says Boston University’s Johnson, “but there are sort of infinite ways to be bad.”
A Meta problem
Every watchdog Fast Company interviewed warned that as more “bad things” saturate paid content, their job of holding the creators and Meta accountable will get harder. That’s because Meta makes advertising more difficult to monitor than organic posts.
A regular user post that stirs up controversy lives at a URL where the public can observe, in real time, how users are affected by and interact with the content. Each quarter, Meta releases a new Community Standards enforcement report for public transparency. The Oversight Board exists as a check on power, with the capacity to overrule Zuckerberg on decisions.
Ad content, meanwhile, enjoys a special, almost protected-speech-like status. The public gets no data contextualizing harmful ads, and none it can use to gauge Meta’s progress on removing them. The domain of ads falls outside the Oversight Board’s reach.
Responding to Fast Company, Meta mentioned a sole resource that users and outsiders can use to track advertising—the Ad Library. But its repository of ads is only viewable in a museum-like state, devoid of place and time. Fast Company’s own Ad Library searches yielded a frequent disclaimer: “We’re unable to display every ad from [advertiser’s name] at this time.” A study published by Mozilla and CheckFirst argues the database contains “accuracy errors and missing data fields.” In tests, the authors found only 83% of Instagram ads and 65% of Facebook ads were cached.
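(A note on method: the Ad Library can also be queried programmatically. Below is a minimal sketch, in Python, of the kind of query a researcher might run against Meta’s Ad Library API; the access token is a placeholder, and the API version, field names, and search term are illustrative assumptions to be checked against Meta’s current documentation, not a reproduction of any group’s actual workflow.)

```python
# Minimal sketch (not Fast Company's methodology): querying Meta's Ad Library API.
# Assumptions: YOUR_ACCESS_TOKEN is a placeholder; the API version and field names
# below should be verified against Meta's current Ad Library API documentation.
import requests

ENDPOINT = "https://graph.facebook.com/v19.0/ads_archive"

params = {
    "access_token": "YOUR_ACCESS_TOKEN",        # placeholder credential
    "search_terms": "groomer",                  # example keyword from the article
    "ad_type": "POLITICAL_AND_ISSUE_ADS",       # the category exposed for keyword search
    "ad_reached_countries": '["US"]',           # required parameter
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,impressions,spend",
    "limit": 25,
}

response = requests.get(ENDPOINT, params=params)
response.raise_for_status()

for ad in response.json().get("data", []):
    # Impressions and spend come back as ranges (lower/upper bounds), not exact figures.
    print(ad.get("page_name"), ad.get("impressions"), ad.get("spend"))
```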
The question is whether this is problematic enough for Meta to update its policies. Kenji Yoshino, a leading constitutional law scholar at NYU Law School who joined the Oversight Board last year, tells Fast Company it is, and that Meta should expand the board’s scope to include paid content, because he predicts the enforcement gap between sponsored and organic content “is going to become more stark,” especially in the coming months as America heads toward one of the highest-stakes presidential races in history.
For its part, Meta says that advertising content could one day fall under the Oversight Board’s scope. A section of the board’s bylaws titled “Future Technical Appeals” actually states that “in the future, people will have the opportunity to request the board’s review of other enforcement actions,” then specifically lists advertisements.
What worries Yoshino most about future ads is the potential for hate speech—because, he argues, hate speech is sticky: it keeps assuming new forms to stay relevant.
A recent Oversight Board decision addressed an example of this for trans safety. A user appealed Meta’s refusal to remove a post that called the trans flag “a self-hanging curtain.” Yoshino believes Meta failed to take enforcement action because the offense came off “more subtle than an explicit anti-trans comment.” Going forward, he says the question Meta should consider is: If it did beef up enforcement action on user posts, would bad actors who worry they can’t get away with attacks in regular posts shift to sponsored content, because they believe it’s now the place that offers “heightened protection” for hate speech?
“Dealing with these issues is like squeezing a balloon,” Yoshino added. “When you squeeze the balloon at one end, you can’t assume the balloon is going to deflate. What often happens is the air pops up in a different part of the balloon.”