Technology & Design

Designing safeguards against deepfake deception in the digital age

A woman’s face becoming stretched and smeared by digital distortion.

Artificial intelligence (AI) can now generate highly realistic videos, images, audio and text that make people appear to say or do things they never did. Known as deepfakes, these fabrications are increasingly used by cybercriminals to trick people into revealing personal information or engaging in risky behaviours. By mimicking emotion and human presence, deepfakes can trigger automatic trust responses before viewers have time to question what they’re seeing.

“These responses serve us well in everyday life, but they also make us more vulnerable to deception,” explained Burcu Bulgurcu, a professor of information technology management at Toronto Metropolitan University (TMU). To better understand how people respond to deepfakes and how technology can be designed to shape viewers’ trust responses, she conducted a research study with TMU graduate student Steven Gal. 

How deepfakes short-circuit human judgment

The research team grounded their study in the Elaboration Likelihood Model, a well-established persuasion theory which holds that people process information along two routes: a central route of slow, careful analysis and a peripheral route of fast, intuitive judgment.

Deepfakes are especially effective, professor Bulgurcu explained, because they push viewers toward fast, intuitive judgments rather than careful analysis. “Deepfakes don’t just lie. They perform the lie,” she said. “When we see a face expressing emotion or hear a familiar voice, our brains respond immediately with, ‘This looks real enough.’ Cybercriminals know this, and they exploit it.”

Testing trust in a simulated social media environment

To study trust in action, the team created two realistic deepfake videos, a process professor Bulgurcu said was “surprisingly easy.” One video featured former U.S. President Barack Obama encouraging donations during an election-related appeal. The other featured U.S. government official Dr. Oz promoting a pseudoscientific health product. Each video urged viewers to click a link and complete a small financial transaction.

Participants viewed the videos on a mock social media platform where the team manipulated two key elements: the presence or absence of disclaimers indicating the videos were deepfakes, and the level of user engagement, such as likes and comments. Participants’ reactions and trust in the videos were then measured.

The results were sobering. A clear on-screen statement identifying the video as a deepfake proved highly effective in lowering viewers’ perceptions of the information’s quality and the source’s credibility, thereby significantly reducing trust. By contrast, videos with more likes and comments increased viewers’ perception of information quality but had little effect on overall trust.

Strikingly, even in a sample of university students familiar with digital culture, a small but noteworthy percentage trusted the videos enough to take the action requested by the deepfake. “That surprised me,” Bulgurcu admitted. “It shows how persuasive these videos can be, even when the message is extreme.” 

Design and policy interventions that influence trust

For professor Bulgurcu, the findings point to an urgent need for safeguards built directly into technology and platform design. “We need to provide people with cognitive seatbelts,” she said. “Not to create fear, but to help them navigate this new reality with awareness.”

As deepfake tools become more sophisticated, she argued that users need support systems that introduce moments of pause and reflection. “Design interventions, such as clear labelling and friction that slows decision-making, could create space for users to engage more deliberately and make better long-term choices.”

Professor Bulgurcu also advocates for mandatory transparent labelling of deepfake content, stronger digital literacy education and user controls that allow individuals to opt out of AI-generated media. “One of the most important findings is how powerful a straightforward disclaimer can be,” she said. “That alone has major policy implications.”

Her next phase of research will expand the study to more diverse populations, examining how age and education influence susceptibility to deepfake deception. For professor Bulgurcu, understanding how people perceive and react to emerging technologies is essential to building safer digital environments.