INNOVATION
Issue 44: Winter 2026
When AI meets the ballot box: Protecting democracy in a new digital age
Shaping Policy
In recent years, artificial intelligence (AI) has moved from science fiction into our daily lives. It can write our emails, generate images and translate languages in seconds. But as these tools become more powerful and easier to access, they are also creating serious risks to democracy. Deepfake videos, automated bots and AI-generated misinformation are increasingly being used to influence voters, distort public debate and weaken trust in elections around the world.
In a 2024 poll, the Canadian Internet Registration Authority found that 51 per cent of Canadians believed deepfakes were a threat to elections, and only half felt confident they could detect scams or false information online. According to Jake Effoduh, a law professor at Toronto Metropolitan University (TMU), these threats emerged far faster than most societies were prepared for.
“Ten years ago, we did not envision that this would become a critical issue in our democratic process,” he said. “Right now, AI has shown immense potential to reshape elections and influence outcomes.”
This growing concern motivated him to work with an international team of legal scholars, technologists and policy experts to produce one of the first global policy briefs focused specifically on AI and fair elections. Their aim was not to raise alarm, but to offer practical guidance to governments, political parties and electoral authorities struggling to keep up with fast-moving technologies.
A new kind of election threat
Election interference driven by AI differs from past digital challenges in its scale and accessibility. Today, internet users without specialized skills can easily generate realistic videos, images or audio that appear authentic, even to trained viewers. For example, in Gabon, a real video of the country’s president was widely dismissed as a deepfake amid widespread rumours about his health. The confusion contributed to political instability and a military takeover. This case showed how deeply public trust had eroded in the age of AI – so much so that even genuine evidence can be doubted.
At the same time, traditional gatekeepers such as journalists and public institutions no longer control the flow of news. Social media algorithms now decide what many people see. These systems often reinforce existing beliefs and push users toward more extreme views.
“We didn’t have that with the internet or earlier technologies,” Professor Effoduh explained. “AI shapes what we see, what we believe and how we understand the world.”
Learning from global case studies
To understand how these risks play out in real life, the research team examined elections and political crises across several regions, including Brazil, Romania, Gabon and the United States. They found that while the political contexts differed, the patterns were strikingly similar: AI was being used to amplify misinformation, impersonate political actors and flood online spaces with fake engagement.
“There is literally no region and no election in the last few years that has been untouched by AI,” Professor Effoduh said. “And many governments simply don’t have rules in place to deal with it.”
Practical solutions to reduce harm
Rather than calling for bans on AI, the policy brief outlines four concrete steps to reduce harm while ensuring people can still take part in democratic processes. These steps include updating electoral laws, creating independent teams to monitor AI-driven disruptions and encouraging political parties to adopt voluntary codes of conduct on responsible AI use.
One of the brief’s most innovative ideas is the creation of “International AI Electoral Trustkeepers.” Working through new cooperation protocols, these national and international groups of experts and institutions could support countries that lack the technical capacity to respond to sophisticated AI-driven interference. By sharing legal and technical expertise, they could help verify information, counter misinformation and restore public confidence during elections.
“A threat to one democracy is a threat to all democracies,” Professor Effoduh said. “AI doesn’t respect borders, so our response can’t stop at borders either.”
Using AI to strengthen democracy
The research also emphasized that AI can be a force for good when used responsibly. AI tools can help translate political information into multiple languages, make civic content more accessible to people with disabilities and support journalists investigating corruption or human rights abuses.
The policy brief has gained international attention and uptake, including interest from Canadian policymakers. For Professor Effoduh, this response highlights the importance of timely, action-oriented research.
“We wanted something that was educational, but also something that could inform real action,” he said. As elections continue to unfold in an AI-driven world, that balance may prove essential to protecting democratic integrity.
Read “AI in the Ballot Box: Four Actions to Safeguard Election Integrity and Uphold Democracy,” published by the University of Ottawa, to learn more.
