AI researchers call on governments to adopt safeguards ahead of elections
Regulatory | 02/12/2025 4:45 pm EST
A group of artificial intelligence experts from Canada, South America, Africa, and Europe is calling on governments around the world to take urgent measures to ensure the responsible use of AI during elections.
The experts call for immediate and coordinated action, with specific recommendations to update electoral rules, establish a code of conduct for political parties, and create independent teams to monitor and respond to AI threats to elections. They also emphasized the need for governments to establish mechanisms for international cooperation in combatting cross-border interference.
“AI is not [just] a domestic issue, but it’s also a global issue,” University of Ottawa law professor and AI + Society Initiative director Florian Martin-Bariteau told The Wire Report, “and we need to find solutions that many countries could implement.”
The recommended measures were released as part of the first instalment of the Global Policy Briefs on AI, a series led by Canadian researchers and aimed at developing policy guidance that addresses challenges posed by AI. Published ahead of this week’s AI Action Summit in Paris, the inaugural brief lays out a series of recommendations for policymakers to tackle the impact of AI on electoral integrity and democracy.
Martin-Bariteau is one of the co-authors of the brief, together with University of Montreal law professor Catherine Régis and five other researchers from England, Nigeria, Colombia, Brazil, and France. Members of the expert group presented their recommendations during the AI summit on Monday.
The brief points to recent cases of AI’s effects on elections in Brazil, Gabon, Romania, and the United States. In the six months leading up to Brazil’s 2024 elections, the report notes, AI was used at least 75 times to produce synthetic content meant to either boost or undermine candidates. Five female candidates were also victims of deepfake pornography. Romania’s 2024 presidential elections were the target of a large-scale, allegedly Russian interference campaign, which involved the spread of AI-generated content across TikTok in an effort to influence the vote. Ultimately, the interference resulted in the annulment of the country’s first round of presidential elections, after a surprise win from far-right independent Călin Georgescu.
“The year 2025 will be an important one for democracies worldwide,” a release accompanying the brief reads, noting that “several dozen countries are due to hold national elections in a world profoundly transformed by artificial intelligence.”
Canada must hold an election by October at the latest, but if the opposition parties pass a vote of non-confidence when the House returns in March, the country could go to the polls much sooner.
In 2019, officials warned of the potential for AI-generated content to trigger Canada’s Critical Election Incident Public Protocol (CEIPP), a mechanism for a panel of senior government staff to make a public announcement about an incident that threatens the integrity of a federal election. However, in its recently released final report, the Foreign Interference Commission found that the threshold for that announcement is “very high.” Commissioner Marie-Josée Hogue recommended that the government consider whether the panel should be able to take other “less drastic” measures.
“There may be a level of interference that is not sufficient to affect the integrity of the elections overall but is nonetheless significant enough to warrant some action by the panel itself rather than an individual government department or agency,” the report states.
Beyond the CEIPP, Elections Canada and the Security and Intelligence Threats to Elections Task Force (SITE) also have responsibilities to safeguard the country’s electoral process. On Feb. 7, SITE identified a “disparaging” and “malicious” campaign against Liberal leadership candidate Chrystia Freeland on the Chinese social media platform WeChat.
Countries unprepared to handle challenges of AI in elections, researchers find
Many countries are not prepared to deal with the challenges posed by AI, according to the researchers behind the recent AI brief. To better protect their electoral integrity, the experts propose four actions all governments should take.
Firstly, countries must modernize their regulatory frameworks, adopting clear definitions and rules for the use of AI during elections, the authors argue.
“The absence of clear and specific rules governing the use of AI in elections creates legal uncertainty, making it difficult for authorities to assign liability or take effective action against abuses,” the brief states.
Governments should update their laws to ban misleading AI-generated content meant to influence elections and require politicians to abide by certain transparency obligations, such as labelling any AI-generated content that they share. Online platforms should also be mandated to label AI-generated political ads and enforce moderation rules that curb the spread of harmful AI-generated content.
Canada’s Chief Electoral Officer made similar recommendations in a November report, calling for updates to the Canada Elections Act that would require all AI-generated electoral content to be labelled. The elections watchdog also wants the law expanded to cover deepfakes, and for all electoral communications to include citations to their sources.
“To ensure that freedom of political communication is not significantly restricted, new electoral rules should be proportionate to the risk they seek to prevent,” the authors of the AI policy brief emphasize, adding that “independent authorities overseeing AI in elections will need adequate technical expertise and funding to effectively enforce these rules.”
Canada was making some strides of its own in tackling election disinformation and AI-generated deepfakes, says Martin-Bariteau. He pointed to certain provisions in Bill C-65, the proposed electoral participation act, which he says would have made offences under the Canada Elections Act “more applicable to digital contexts.” However, the bill died on the order paper when Parliament was prorogued early last month.
“When Parliament resumes in March, before we go to a new election, I would hope for all the parties to come together to pass a few important bills,” says Martin-Bariteau.
Given geopolitical tensions with the United States, India, and China, “we would hope that Parliament could pass some baseline protections to uphold Canadian democracy before the next election.”
However, Martin-Bariteau acknowledged the political gridlock that stalled most movement in Parliament before prorogation. “It might be wishful thinking,” he says. “It might be a bit naive.”
Parties should commit to AI codes of conduct at ‘the bare minimum’
“If we cannot do this in the House, I would at least hope for Canadian federal parties to agree to a set of codes of conduct or guidelines [around the use of AI],” says Martin-Bariteau, pointing to the next recommendation outlined in the brief.
“In the [current] political context,” establishing a code of conduct for Canadian political parties to abide by is “the most likely” measure that can be taken before the next election, he said, “and also maybe the easiest.”
The policy brief calls on political parties to adopt codes of conduct that include transparency in their use of AI, clear labelling of AI content, a commitment not to share misleading content, and staff training on the ethical use of AI in campaigning.
Some jurisdictions have agreed to similar codes in recent years. According to the brief, five Swiss parties committed to explicitly declaring any use of AI in their 2023 campaigns, while the 2024 European Parliament elections saw a code of conduct that required AI-generated content to be labelled and banned deepfakes.
Beyond campaign ethics, the authors also highlight an urgent need for transparent oversight mechanisms to counter AI-related electoral disruptions. Electoral authorities, such as Elections Canada, should have specialized teams tasked with monitoring, preventing, and responding to AI-driven threats, the brief suggests. These teams must operate independently and include representatives from civil society, media organizations, and technology platforms to ensure a coordinated approach to tackling AI-generated disinformation.
The authors draw parallels to crisis management strategies used in other areas, such as natural disaster response and public health emergencies, advocating for the implementation of early warning systems, training simulations, incident reporting mechanisms, and rapid-response protocols to detect and mitigate AI-driven electoral manipulation. Additionally, the report stresses the importance of providing AI literacy training to election officials, poll workers, and political observers, ensuring that those responsible for overseeing elections are equipped to recognize and address AI-related threats in real time.
“This is not something that most countries have in place [for their elections],” says Martin-Bariteau. He explained that, if countries do not have any mechanisms in place, they risk “overreacting, because it’s dramatic [and there’s lots of] panic.”
“When it’s a last-minute, emergency decision, there might be issues for political expression,” he added, emphasizing the importance of maintaining the public’s trust in the democratic process.
Finally, the policy brief calls for international cooperation to combat AI-driven electoral interference, recognizing that many threats originate beyond national borders. The authors propose establishing an “International AI Electoral Trustkeepers” initiative — a global coalition of experts and institutions dedicated to detecting and responding to AI-related election threats. This body would provide technical assistance to countries that lack the expertise or resources to handle AI-driven election interference, offering crisis support and oversight. The initiative would also facilitate international legal cooperation, helping governments coordinate their responses to AI-driven attacks.
“Like-minded countries should come together and [maybe even] leverage existing mechanisms. Like at the United Nations, there is the electoral assistance division,” said Martin-Bariteau as an example.
“Those kinds of legal mechanisms,” which allow different countries to support each other, “they exist, but there is no protocol in place to activate [them]” when it comes to AI-driven electoral interference, he said.
“Between countries, co-operation is essential,” the authors write. “No nation can face AI challenges alone. Countries need to align their laws on AI-enabled election interference. This will both strengthen individual defences and build collective resistance against attempts to undermine democracy worldwide.
“By taking these steps today, we will create stronger, more inclusive, and more trustworthy democratic systems for tomorrow,” they conclude.
It is critical that Canada be prepared to handle this issue, stressed Martin-Bariteau, highlighting the increasing prominence of AI-driven disinformation campaigns.
“We know it’s going to happen,” he said. “We don’t know exactly how, when, or from whom, but it’s going to happen.”