This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.
When it comes to artificial intelligence’s role in elections, it’s easy to get lost in a sea of buzzwords, tech industry jargon and murky political activity.
AI-generated deepfakes. Large language models. Recommendation algorithms powering social media.
But behind the rise of this emerging technology stands a global network of election officials, civil society groups and fact-checking organizations, from Peru to the Philippines, all trying to corral potential abuses of AI while also attempting to harness the technology to improve how elections operate worldwide.
It’s not an easy task.
Ever-changing technical advances, limited budgets and breathtaking hysteria around what AI can supposedly do have created endless difficulties for those on the front line of protecting global elections from AI-fueled disinformation.
Below are the stories of three such individuals from South Africa, Pakistan and Argentina, respectively.
While the “Bots and Ballots” series has primarily focused on more advanced Western economies, it’s in the so-called Global Majority, the developing and middle-income countries, where the technology has taken off the fastest with little, if any, regulatory oversight. Many of these countries have fragile democratic institutions, limited technical capacity and minimal contact with the AI tech giants through which to make their voices heard.
What follows is merely a snapshot of more than a dozen interviews that POLITICO conducted with such groups and individuals from Indonesia to Costa Rica.
What became clear from these discussions was a real-time effort to both understand and corral new forms of technology that are having a sizable influence on society, carried out within significant constraints on budgets and technical expertise.
Janet Love has a busy week ahead of her. As head of South Africa’s Independent Electoral Commission, the government agency in charge of the country’s nationwide election on May 29, the longtime official must oversee the inner workings of ballot counting, election monitoring and other technical operations in a country whose democratic credentials have frayed in the decades since the apartheid regime collapsed.
Into that mix, Love — a onetime member of the paramilitary wing of the African National Congress — must come to terms with the rise of artificial intelligence, including so-called deepfakes, and political attacks on social media.
“It’s tough. I’m not going to say we feel all is great,” she admitted over a Zoom call in early May. “Our capacity to [respond to] disinformation and misinformation has really increased. But I don’t want to give you a sense that we feel all is dealt with.”
Pan-African guidelines for social media and elections, published in March, partly explain Love’s equal feelings of hope and caution. They outline voluntary commitments — for election commissions, political parties, tech companies and civil society groups — for how the continent’s elections can be safeguarded from digital threats. That includes how best to promote legitimate election information on social media to would-be voters and the need for greater transparency on the latest advances in AI.
In South Africa, a local civil society group oversees an online portal where locals can report online disinformation — including a direct line to the country’s election commission if such falsehoods may undermine the upcoming vote. Platforms like Facebook, YouTube and TikTok — but, according to Love, not Elon Musk’s X — have proactively pushed authentic information about the election, though widespread falsehoods still get through, according to local fact-checking groups.
When it comes to artificial intelligence, Love acknowledges her agency is entering the unknown.
“I think there is a lot of concern because it’s uncharted terrain,” she admitted. “We really have ramped up our own capacity, but also encouraged other players to work as actively as possible, not just with the public, but also with competitors.”
Still, just days before South Africans head to the polls, Love concedes her agency’s efforts are a work in progress. She doesn’t have the regulatory power to force Big Tech companies to the table, in contrast to her European and North American counterparts. And the level of understanding of digital risks — including those tied to AI — within the government, political parties and the electorate is often far from ideal.
“All of us are feeling a huge need for greater capacity and expertise,” she said. “The difference between having appropriate measures in place, and capacities to implement those measures — you feel it all the time.”
When Imran Khan, the former cricket icon turned imprisoned politician, did unexpectedly well in Pakistan’s February nationwide election — in part because of his use of generative AI — many locals cheered. And it was not just the independent candidates aligned with him. Average voters, many of whom had received AI-cloned voice messages from Khan sent directly to their smartphones via WhatsApp urging them to head to the polls, were also over the moon.
“That had a huge impact,” said Nighat Dad, founder of the Digital Rights Foundation, a local nonprofit organization based in Lahore, via Zoom last week. “[That] people could listen to the voice of Imran Khan telling them what to do was a big deal.”
Dad is less optimistic than many of her compatriots about how generative AI has seeped into society in the election’s wake.
In the recent campaign, Khan — behind bars for leaking state secrets, alongside other charges that he denies — spoke directly to supporters nationwide via AI-powered videos, speeches and audio messages. It was arguably the first time in history that generative AI had directly affected an election result, a turn made possible largely because the former prime minister could not campaign from his prison cell.
Yet Dad, who also sits on Meta’s Oversight Board — the independent body that adjudicates which posts can remain on Facebook and Instagram — worries that too many people are focusing only on the positives of AI, and not the technology’s potential downsides.
“The overwhelming debate, at the moment, is, ‘Oh, we can use AI in this sector, or in that sector,'” she said. “Not many people are talking about harms.”
The campaigner has two primary concerns.
In the final days of the election, both Khan’s candidates and their opponents flooded social media with deepfakes, mostly falsely claiming the other side was boycotting the election. Such disinformation has been rife for years. But as Khan’s use of AI garnered political attention, all campaigns jumped on the bandwagon, and many of their posts went viral.
“People were sharing [those AI-generated posts] even if they knew [they were] fake,” Dad added.
The other, more worrying trend is what the technology will mean for the country’s minority groups and women in the years to come.
Already, in the months after the election, several female social media influencers have been attacked via sexualized deepfakes — a direct potential threat to their personal safety in such a conservative Islamic country. Members of religious minorities, too, have been targeted with AI-powered forgeries, including some that falsely showed these individuals committing blasphemy. Such accusations can carry a death sentence in Pakistan — even when the content is entirely fake.
“My real concern is, to be honest, not really political parties and how they’re using it,” said the Pakistani campaigner. “My real concern is marginalized groups and how the fakes and generative AI content will be used against them.”
Laura Zommer has a love-hate relationship with artificial intelligence.
The Argentine fact-checker, whose organization, Factchequeado, has expanded to debunk Spanish-language falsehoods throughout Latin America and the United States, has relied on the technology for years to speed up her work.
In 2017, she began using so-called machine-learning tools — a form of AI — to analyze large amounts of potentially dubious social media posts. More recently, her team even created an in-house tool, called El Monitor (The Monitor), to find connections between disinformation campaigns to uncover those behind spreading falsehoods.
“We don’t listen to interviews anymore,” she admitted earlier this month over Zoom, as Zommer drove from her Buenos Aires home to the airport to catch a flight to New York. “Because the robots can do that for us and identify what needs to be checked.”
Still, it’s not all good news.
Latin America is in the midst of a series of elections that began in October with Argentina’s national vote and continues with Mexico’s presidential election next month and Brazilian local elections in October. Unlike their English-speaking North American counterparts, Latinos rely more heavily on the messaging platform WhatsApp for news and to keep in touch with friends and family. That platform, Zommer adds, is difficult to monitor because many conversations are encrypted — and circulating fact-checks there is equally difficult.
Factchequeado’s analysts report an ongoing drumbeat of AI-powered lies, though it’s a more complex picture than that of English-language fact-checking groups.
For now, Spanish-language deepfakes are significantly cruder than those generated in English, Zommer said. As a result, the more polished English-language deepfakes are over-represented in content targeting Latinos, while those in Spanish remain relatively unsophisticated.
“What I’m more worried about is audio,” she said. Many Spanish speakers stay connected with loved ones overseas via short audio messages, and the Argentine frets that a sea of deepfake audio clips are already targeting that community. “A lot of it is just [financial] scams,” Zommer added.
With more elections on the horizon, including the one in the U.S. in November, the Argentine fact-checker says disinformation merchants are testing the waters with different AI tactics aimed at Latinos.
One includes widely circulated deepfake videos of Jorge Ramos, a famous presenter for Univisión, a Spanish-language television network. Those AI-generated clips involve Ramos falsely claiming U.S. President Joe Biden has earmarked federal funding for immigrants. Others ask would-be victims to click on links to receive payouts, allowing hackers to potentially access individuals’ bank accounts and personal information.
“They are testing out what works,” said Zommer, adding that it was still unclear who was behind such AI-inspired attacks. “So far, it’s mainly been about making money. That has been their focus.”
The article is produced with full editorial independence by POLITICO reporters and editors.