Policy

Wisconsin Assembly passes bills addressing AI use in election ads, child sexual abuse, state jobs

The Wisconsin Assembly passed three bills addressing artificial intelligence — requiring disclaimers in election ads, making its use in creating child sexual abuse materials a felony, and reviewing its application by state agencies — joining other states grappling with the emerging technology.

Associated Press

February 15, 2024

The Wisconsin Assembly holds a floor session on Jan. 24, 2024. The chamber voted on Feb. 15 to pass three bills that would regulate artificial intelligence in the state. (Credit: PBS Wisconsin)

By Todd Richmond, AP

MADISON, Wis. (AP) — Wisconsin lawmakers on Feb. 15 passed bills to regulate artificial intelligence, joining a growing number of states grappling with how to control the technology as November’s elections loom.

The Assembly approved a bipartisan measure to require political candidates and groups to include disclaimers in ads that use AI technology. Violators would face a $1,000 fine.

Voters need disclosures and disclaimers when AI is used in ads so they can tell the difference between fact and fiction, said the bill’s sponsor, Republican Rep. Adam Neylon. He called the measure an “important first step that gives clarity to voters,” but said more action will be needed as the technology evolves.

“With artificial intelligence, it’s getting harder and harder to know what is true,” Neylon said.

More than half a dozen organizations have registered in support of the proposal, including the League of Women Voters and the state’s newspaper and broadcaster associations. No groups have registered against the measure.

The Assembly also passed on a voice vote a Republican-authored proposal that would make manufacturing and possessing images of child sexual abuse produced with AI technology a felony punishable by up to 25 years in prison. Current state law already makes producing and possessing such images a felony with a 25-year maximum sentence, but the statutes don’t address digital representations of children. No groups have registered against the bill.

The Assembly also approved a bill that calls for auditors to review how state agencies use AI. The measure would give agencies until 2030 to develop plans to reduce their positions. By 2026, agencies would have to report to legislators which positions AI could help make more efficient and update them on their progress.

The bill doesn’t lay out any specific workforce reduction goals and doesn’t explicitly call for replacing state employees with AI. Republican Rep. Nate Gustafson said Thursday that the goal is to find efficiencies in the face of worker shortages and not replace human beings.

“That’s flat out false,” Gustafson said of claims the bills are designed to replace humans with AI technology.

AI can include a host of different technologies, ranging from algorithms recommending what to watch on Netflix to generative systems such as ChatGPT that can aid in writing or create new images or other media. The surge of commercial investment in generative AI tools has generated public fascination and concerns about their ability to trick people and spread disinformation.

States across the U.S. have taken steps to regulate AI within the last two years. Overall, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills in 2023 alone.

Legislatures in Texas, North Dakota, West Virginia and Puerto Rico have created advisory bodies to study and monitor AI systems their state agencies are using. Louisiana formed a new security committee to study AI’s impact on state operations, procurement and policy.

The Federal Communications Commission earlier this month outlawed robocalls using AI-generated voices. The move came in the wake of AI-generated robocalls that mimicked President Joe Biden’s voice to discourage voting in New Hampshire’s first-in-the-nation primary in January.

Sophisticated generative AI tools, from voice-cloning software to image generators, already are in use in elections in the U.S. and around the world. In 2023, as the U.S. presidential race got underway, several campaign advertisements used AI-generated audio or imagery, and some candidates experimented with using AI chatbots to communicate with voters.

The Biden administration issued guidelines for using AI technology in 2022, but they mostly set far-reaching goals and aren’t binding. Congress has yet to pass any federal legislation regulating AI in political campaigns.
