As concerns about foreign interference in the 2024 election linger and politicians continue to worry about the potential negative impacts of artificial intelligence, OpenAI, late last Friday, announced it had discovered an “Iranian influence operation” using its tools to generate content to spread election disinformation, particularly about the U.S. presidential race.
OpenAI said it had banned several accounts that were tied to the campaign—and that it continues to monitor for further attempts at disruption. Posts were generated in both English and Spanish, the company said.
“We take seriously any efforts to use our services in foreign influence operations,” the company said in a blog post. “Accordingly, as part of our work to support the wider community in disrupting this activity after removing the accounts from our services, we have shared threat intelligence with government, campaign, and industry stakeholders. OpenAI remains dedicated to uncovering and mitigating this type of abuse at scale.”
Here’s what to know about the group and what OpenAI says it was trying to do.
What were the operators doing with OpenAI?
The group, known as Storm-2035, generated fake news articles and social-media comments in an attempt to shape public opinion around Kamala Harris and Donald Trump.
Did the operators seem to support one candidate over another?
Not really. Storm-2035 attempted to get both the hashtags #DumpTrump and #DumpKamala trending—and generated stories and social media posts about both candidates.
Did any of those stories or posts gain traction?
OpenAI says engagement on virtually all the posts from this group was low, with little traffic to the fake news sites and few (or no) likes, comments, or shares on social media. On the Brookings Breakout Scale, which rates an influence operation's impact from 1 (lowest) to 6, OpenAI ranked this operation at the low end of Category 2, meaning it was active on multiple platforms but there was no evidence that real people had paid it much attention.
“The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly did not find indications of the web articles being shared across social media,” OpenAI wrote.
How many articles and comments were generated?
OpenAI did not disclose how many articles and posts the operation generated.
What were some of the domains associated with the generated posts?
OpenAI said Storm-2035 created long-form articles that were distributed across five websites posing as news and information outlets. All of these are still active and online.
- niothinker[.]com
- savannahtime[.]com
- evenpolitics[.]com
- teorator[.]com
- westlandsun[.]com
Have there been other campaigns?
Yes. Earlier this month, Microsoft’s Threat Analysis Center report showed groups connected to Iran’s government were using a wide spectrum of methods to influence the elections. OpenAI said its investigation benefited from information in that report. Iran was also linked in a Secret Service report to a plot to assassinate Trump, which was not connected to the attempt on his life earlier this year.
Did Storm-2035 focus solely on the 2024 election?
No. The group also generated content around Israel’s invasion of Gaza, as well as Israel’s presence at the 2024 Olympics. Storm-2035 also appears to have used disinformation to meddle in Venezuelan politics and to weigh in on the rights of Latinx communities in the U.S. (in both Spanish and English) and on Scottish independence.
Additionally, says OpenAI, “they interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following.”
Are other countries trying to use OpenAI to disrupt the election?
Yes. Three months ago, OpenAI released a report saying it had disrupted five online campaigns that attempted to manipulate public opinion via its technologies. The company said state actors and private companies in Russia, China, and Israel (along with Iran) were behind those efforts.
What else can voters expect between now and the election?
We’ve already seen deepfakes in this election cycle, including a robocall that used AI to impersonate President Biden and discourage people from voting in New Hampshire (though not every claim that an image or recording is an AI fake turns out to be true).
While OpenAI managed to stop this effort by Iran, it is unlikely to be the last attempt to sway voters with misinformation before November 5. A report from the Office of the Director of National Intelligence, issued late last month, warned of “a range of foreign actors conducting or planning influence operations targeting U.S. elections this November.” Russia and China were singled out as two of the biggest threats.
“Moscow continues to use a broad stable of influence actors and tactics and is working to better hide its hand, enhance its reach, and create content that resonates more with U.S. audiences,” the report read. “These actors are seeking to back a presidential candidate in addition to influencing congressional electoral outcomes, undermine public confidence in the electoral process, and exacerbate sociopolitical divisions.”
The Electronic Frontier Foundation has some advice on how to spot fake news, from checking whether the author is real and learning more about the source of the information to consulting a fact-checking site when a bold claim is made.
Also, you can report election disinformation to https://reportdisinfo.org.