Frederica Freyberg:
Is that the candidate’s actual voice and image on the air or online, or one generated by artificial intelligence in an effort to deceive voters? This election season, it could be hard to tell. OpenAI’s recently released text-to-video model, called Sora, produces hyper-realistic, complex artificially generated videos whose images can be nearly impossible to discern as fake. That’s why states including Wisconsin are moving to enact laws around the use of AI in campaigns. A bill moving quickly through the Legislature would require campaign ads that contain synthetic media, meaning audio or video content substantially produced by means of generative artificial intelligence, to include a disclaimer. That’s a good thing, according to Edgar Lin, Wisconsin policy strategist and an attorney with the nonprofit, nonpartisan Protect Democracy. He joins us now. Thanks very much for being here.
Edgar Lin:
Thank you for having me.
Frederica Freyberg:
So are there examples that we’ve already seen out there that are artificial intelligence-generated campaign materials?
Edgar Lin:
Yes. The U.S. election season is only in the midst of its primaries, and already we have examples of political parties, campaigns and at least one super PAC using AI-generated content in ads, campaign videos and voter outreach. I’ll give you two examples. Last spring, the Republican National Committee responded to President Biden’s re-election campaign announcement with an AI-generated video illustrating the country’s projected future during a second Biden term. The other example I can give is that at the end of last year, Shamaine Daniels, a Democratic congressional candidate in Pennsylvania, launched an interactive, AI-powered political campaign caller for voter outreach. So they are being used. What we’re seeing in the U.S. is consistent with the proliferation of AI-generated content in elections around the world: elections in Slovakia and Argentina last year, and most recently in Taiwan and Indonesia.
Frederica Freyberg:
Are they always used deceptively?
Edgar Lin:
Not necessarily. That will depend on the ad itself.
Frederica Freyberg:
So who is making this stuff?
Edgar Lin:
Well, I think there are a lot of technology platforms creating these types of artificial intelligence tools, and there’s a list of them that people generally hear about. There’s ChatGPT, and there’s a whole slate of others.
Frederica Freyberg:
Are they easy or hard to spot for the average person?
Edgar Lin:
Heh. This is a great question. It depends. Now, I will say that it’s an arms race between the technology and our ability to spot it. The bottom line is that detection capabilities are developing, but they do not, and likely never will, keep up with the increasing sophistication and realism of AI-generated content. We’ve seen photos from maybe last year where perhaps the fingers looked a little unusual. But that technology has already improved to the point that what we have today is different from last year.
Frederica Freyberg:
So how likely is it that synthetic media would deceive voters?
Edgar Lin:
You know, it is likely, and this is something we are not used to, because historically we trust what we see and we hear: video and audio. In today’s world, as the technology ramps up at increasing speed, detection is very hard, and so deception is very possible.
Frederica Freyberg:
So at the very least, how important is it, in your mind, that there’s a disclosure that says the audio or video contains content generated by AI?
Edgar Lin:
You know, it is incredibly important. Voters should anticipate that they may encounter AI-generated content related to the election and should not rely on their senses alone to identify what’s human-generated versus what’s AI-generated. So disclosure is incredibly important, but I’ll just say that it is one piece of a portfolio of tools. There is no single silver bullet. Disclosure is one; detection and journalistic integrity are among a host of tools that can help people viewing these ads.
Frederica Freyberg:
How could this synthetic or AI-generated media cause even more mistrust in elections than already exists?
Edgar Lin:
That’s a great question. So the threats to our democracy, the misinformation, the playbook people use: those are still the same. The difference is that AI makes things bigger, faster and stronger. It’s about accessibility to the public. You can imagine that in the past, we watched movies with very good special effects, but making those was limited to the people who could create them. Even if you think of Photoshopping, that’s limited to people who know how to use Photoshop well. But with synthetic media, with AI-generated content, that door has now been opened to the public, and so it’s about accessibility to these awesome technologies.
Frederica Freyberg:
It’s pretty scary stuff. Edgar Lin, thanks very much.
Edgar Lin:
Thank you.
Frederica Freyberg:
For more on this and other issues facing Wisconsin, visit our website at PBSwisconsin.org and then click on the news tab.