There are 101 days until the U.S. presidential election, and misinformation is once again an issue. But this year, AI chatbots are also factoring into the equation.
One recent example: After President Joe Biden dropped out of the race on Sunday, screenshots circulating on social media claimed that nine states, including Minnesota, had already “locked and loaded” their ballots ahead of the November 5 election.
That’s not true, and the office of Minnesota Secretary of State Steve Simon said the spread of this particular misinformation could be traced back to Grok, according to reporting by the Minneapolis Star Tribune. The generative AI chatbot was developed by xAI and is available exclusively to premium users of X, both of which are owned by Elon Musk.
While an election-heavy year will understandably mean there’s more focus on misinformation right now, these risks are persistent, Shannon Raj Singh, principal and founder of Athena Tech & Atrocities Advisory, tells Fast Company. One of her concerns is the cumulative effect of lower-level misinformation and disinformation (content that doesn’t rise to the threshold of violating platforms’ content policies) and how it leaves people accustomed to hearing narratives that are false or misleading.
“Social media and other digital public spaces continue to be incredibly central to political debate and discourse,” says Singh, the former human rights counsel at then-Twitter. “It’s really central to our conversations, and therefore guarding against those risks is important.”
One reason to be optimistic, she says, is that because the U.S. presidential election falls so late in the year, election integrity teams will have had ample time and experience dealing with misinformation shared on their platforms around other elections across the world. That said, the potential for new manifestations, like deepfake audio, is among the things that keep her up at night, she says, pointing to a deepfake in Moldova targeting pro-Western President Maia Sandu.
Singh cautions that companies must remain vigilant even after the election is over because of the risk of post-election violence. What’s more, not all political candidates or leaders face equal risks when it comes to AI-generated misinformation, she says, adding that women leaders and people of color face heightened risk.
For his part, Musk has touted the power of the community, relying on a program called Community Notes, which lets X users write fact-checking labels and vote on whether they’re helpful. He has otherwise done away with most content moderation rules since buying the social media platform in 2022.
Meanwhile, Meta has made changes that CEO Mark Zuckerberg hopes will mean that Facebook and Instagram “play less of a role in this election than they have in the past,” he told Bloomberg in a recent interview.
Still, the mass layoffs in tech, including among teams once tasked with combating misinformation, are a worrying backdrop in this election cycle, Singh notes. But she says there’s also more awareness of the problem at hand. “Keeping the focus on the responsibility of companies and learning what’s happened in the past will be very important.”