Why experts say Colorado plan to fight AI election interference likely won’t work

Deepfake audio has begun permeating national and international politics, but it's difficult to combat

DEEPFAKES AND ELECTION DISINFORMATION

AI-generated content has already caused domestic and international political concerns in recent weeks, and some cybersecurity experts fear the problem will worsen throughout the US presidential election year.

Last week, a video of Texas Governor Greg Abbott (R) circulated on social media featuring an interview on Fox News about President Biden and the southern border crisis.

But once the camera shot with Abbott’s face cuts away, deepfake audio of his voice takes over. Audio and video deepfakes are artificially created to make something look and sound like someone else.

“If only president had been dealing with real internal problems, and not trying to play with Putin, with whom he needs to learn to work for national interests,” the Republican governor said (or seemed to say) in the video.

Gov. Greg Abbott Deepfake

The original clip was taken directly from the governor’s YouTube page. Abbott said no such thing about wanting President Biden to align with Russian President Vladimir Putin.

Similarly, during the New Hampshire primary in January, robocalls with deepfake audio of President Biden went out to voters, discouraging them from voting.

CNN reported last week that deepfake audio of a Slovakian candidate possibly influenced voters in the eastern European country’s recent election.

Artificial intelligence has also been used to create fake images that stirred controversy, including the graphic photos of pop star Taylor Swift that circulated in January.

Dan Evon, senior manager of education design with the News Literacy Project (NLP), said AI-generated images are becoming a problem, but they often have tells that reveal they’re fake. Evon primarily works with NLP’s RumorGuard, which focuses on debunking AI-generated content that spreads disinformation.

He said fake images are a growing issue, but he is most worried about the spread of deepfake audio messages.

“The voice clones are what I really find troubling. There just seems to be a lot more strategic ways that they could be used in a very convincing fashion,” said Evon. “And there's no visual clue to see them. So you encounter a piece of audio, or you encounter a video that everything looks real as audio behind it. And it's really hard to detect looking at it. And it's very convincing.”

He predicts fake audio will become one of the big issues this election season.

DIGITAL ADS UNDER THE RADAR

Dr. Rory Lewis, an associate professor of computer science at UCCS who does research in artificial intelligence, said targeted ads are another concern, since they fly under the radar and aren’t as widely seen or scrutinized as posts on social media.

Due to website tracking and personal data collected online, website users may receive hyper-targeted political ads that could be full of lies or deceptive statements, but other users on that same website might see a completely unrelated advertisement.

“If a Super PAC wants to focus on a particular group of people, it's not as if there's a billboard in the center of town,” said Dr. Lewis. “They will then say, ‘Hey, this politician may have said this, but he rarely meant that.’ And no one else is there to combat that or to say, ‘No, that's not true.’”

Lewis said unless the specific web users who receive a deceptive ad report it, its existence may never come to light.

Though some of these AI-generated disinformation campaigns might originate domestically, Lewis said a large number are created outside the United States, which makes legislating against them or putting a stop to them a much more difficult prospect.

“Invariably, it's going to come from somewhere outside of the United States of America, which means that from a legal standpoint, there's no ability for us to go to a huge law firm in Panama City,” he said as an example.

LEGISLATION TO ADDRESS DEEPFAKES IN COLORADO AND ELSEWHERE

Nationwide, only a handful of states have laws on the books to punish election-related deepfakes, including California, Michigan, Minnesota, Texas, and Washington. But more than two dozen other states are considering legislation this year ahead of the election, according to the nonprofit Public Citizen.

In Colorado, Secretary of State Jena Griswold announced her legislative priorities in a January press conference, which included combatting deepfakes in state elections.

“The emergence of AI threatens American democracy,” said Griswold. “Coloradans and voters deserve to know when political content they're consuming is real, or whether it has been generated, manipulated, deepfaked, or been enhanced by AI.”

Griswold, along with elected Democratic state lawmakers, introduced HB 24-1147, their deepfake disclosure bill, on Jan. 29. It’s scheduled for a State, Civic, Military, & Veterans Affairs Committee hearing on Feb. 22.

The bill would require a conspicuous disclosure and watermark on political communication featuring a candidate or officeholder if it’s manipulated by AI.

Communications without a disclaimer would be subject to campaign finance enforcement and penalties, Griswold said. And a person targeted by deepfake manipulation could bring a lawsuit against the creators.

But because these deepfakes are so often generated outside the country, as Dr. Rory Lewis noted, experts don’t expect a bill like Colorado’s to have much effect.

“I think that there's a certain amount of naïveté in election officials that are saying, ‘Oh, well, we can do that. Or we can do that.’ The guys that are doing this are so far ahead of the game,” said Lewis.

Similarly, Dan Evon with RumorGuard commends election officials for making strides in addressing the deepfake issue, but doesn’t think the bill will serve as much of a deterrent.

“You block one avenue, it's going to spread in some other form,” said Evon. “And the people who are creating these sort of fakes to fool people might not be deterred by a requirement to put a watermark. I don't see them following those guidelines.”

If passed, the Colorado bill would take effect July 1 this year.

TOOLS TO RECOGNIZE AI-GENERATED DISINFORMATION

Dr. Rory Lewis, with UCCS, said there are tools to determine if something is a deepfake and it’s “very, very easy done.” But not everyone has those tools.

Dan Evon said RumorGuard encourages people to think about what it calls the “Five Factors” when they come across a suspicious video, image, audio clip, or other content to determine its truthfulness.

  1. AUTHENTICITY: Is it authentic?
  2. SOURCE: Has it been posted or confirmed by a credible source?
  3. EVIDENCE: Is there evidence that proves the claim?
  4. CONTEXT: Is the context accurate?
  5. REASONING: Is it based on solid reasoning?
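The Five Factors are an editorial checklist rather than an algorithm, but as a rough illustration the habit of mind they encourage can be sketched in code. The field names, the example values, and the "must pass all five" rule below are illustrative assumptions, not part of RumorGuard:

```python
from dataclasses import dataclass

# Hypothetical sketch of RumorGuard's "Five Factors" as a checklist.
# Field names and the pass-all-five rule are illustrative only.

@dataclass
class FiveFactors:
    authentic: bool         # AUTHENTICITY: is the content itself real?
    credible_source: bool   # SOURCE: posted or confirmed by a credible source?
    has_evidence: bool      # EVIDENCE: is there evidence proving the claim?
    context_accurate: bool  # CONTEXT: is the surrounding context accurate?
    sound_reasoning: bool   # REASONING: is it based on solid reasoning?

    def failed_factors(self) -> list[str]:
        """Return the names of any factors the content fails."""
        return [name for name, ok in vars(self).items() if not ok]

    def looks_trustworthy(self) -> bool:
        """Treat content as trustworthy only if it passes all five checks."""
        return not self.failed_factors()

# Hypothetical example: a clip lifted from a real source but with swapped audio.
clip = FiveFactors(
    authentic=False,         # the audio was artificially generated
    credible_source=True,    # the original video came from an official page
    has_evidence=False,      # no independent evidence of the statement
    context_accurate=False,  # presented as if it were a real interview
    sound_reasoning=False,
)
print(clip.looks_trustworthy())  # False
print(clip.failed_factors())
```

The point of the sketch is the same one Evon makes: a single passing factor, like a credible-looking source, isn't enough when the other checks fail.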

“It's not necessarily for everything that you come across. But just to be aware that this content spreads–to be aware that people are trying to deceive them,” said Evon. “Beware that it's all across social media. So hopefully, it'll give people some things to think about when they come across a post that can be misleading.”

Email senior reporter Brett Forrest at brett.forrest@koaa.com. Follow @brettforrestTV on X and Brett Forrest News on Facebook.

Watch KOAA News5 on your time, anytime with our free streaming app available for your Roku, FireTV, AppleTV and Android TV. Just search KOAA News5, download and start watching.