Researchers Study How Deepfake Detection Tools Influence Journalists

Role-playing approach offers insights that could help quell the spread of misinformation

OXFORD, Miss. – Scenario-based role-playing inspired by the popular game Dungeons & Dragons has given University of Mississippi researchers a novel approach to studying how journalists use artificial intelligence tools to verify audiovisual media.

The goal is to help journalists identify deepfakes: manipulated video, audio or images that can make it appear that a person said or did something that they did not. As more deepfakes emerge, misrepresenting political leaders, celebrities and even high school students, detecting them is becoming more crucial.

One study found a 303% increase in deepfakes detected in the U.S. between 2023 and 2024.

Andrea Hickerson

"If journalists are sending out information that's incorrect, they're really amplifying misinformation or disinformation at a much larger scale," said Andrea Hickerson, dean of the School of Journalism and New Media and one of the lead researchers.

Hickerson worked with Saniat "John" Javid Sohrawardi, Matthew Wright and Y. Kelly Wu, of the Global Cybersecurity Institute at Rochester Institute of Technology, to complete the "Dungeons & Deepfakes" study. The Association for Computing Machinery published the research as part of the 2024 CHI Conference on Human Factors in Computing Systems.

Emerging deepfake detection software, designed to identify synthetic media, sometimes produces inaccurate results, potentially leading journalists to inadvertently publish false information or dismiss valid media.

The researchers created breaking-news scenarios in which 24 U.S. journalists chose how to verify the authenticity of videos. The journalists had access to DeFake, a deepfake detection tool the researchers built specifically for journalists.

The study's scenario-based role-play structure revealed when journalists use the tool, how they use traditional verification methods and how the tool affects their decisions about whether to publish.

According to the findings, journalists rely first on traditional verification methods, such as contacting established sources, and may turn to deepfake detection tools only after running into barriers such as time limitations.

"Contextual verification is still possible," Sohrawardi said.

"These tools are there to help speed up the process and help provide information, but even if these tools did not exist, contextually, it should be definitely possible to verify any piece of information; it will just take longer."

Saniat Javid Sohrawardi

The study results indicated:
  • Journalists were more thorough in verifying videos with high social or political impact
  • A few journalists relied too much on the detection tool, especially when results matched their initial impressions
  • The Dungeons & Dragons methodology was engaging for journalists and could be helpful for deepfake detection training

The researchers caution against releasing deepfake detection technology until journalists better understand its limitations.

Developers must prioritize transparency, informing users about factors that can impact the accuracy of their tools, such as poor lighting or overly compressed video, Sohrawardi said. "This helps manage expectations and ensures users understand the limitations of the technology.

"Give enough information about what's the best-case scenario for using this tool and what's the worst-case scenario for using this tool. These AI-based tools are trained in specific environments, and they might fail in some environments."

The Knight Foundation began funding the DeFake app in 2019 as part of its search for deepfake detection solutions for journalists. The Knight Foundation and the National Science Foundation funded this study.

"We see journalists as a key intermediary between real and fake information out there," Hickerson said. "The risks for journalists are much higher as a profession because it's their credibility, and (mistakes) undermine the whole institution of journalism.

"Also, they've got a big audience. So, if they're sending out information that's incorrect, they're really amplifying misinformation or disinformation at a much larger scale. The research shows that you can't 'put the genie back in the bottle' after misinformation is out there."

Top: Researchers at the University of Mississippi and Rochester Institute of Technology are using role-playing scenarios inspired by the Dungeons & Dragons game to learn how journalists use artificial intelligence tools to spot deepfakes and other misinformation. Photo illustration by John McCustion/University Marketing and Communications

By Marvis Herring

Published July 15, 2024