Frederica Freyberg:
The explosion of mainstream use of artificial intelligence tools has led to the very developers of this technology warning of its dangers, including extinction, but their extensive everyday use is here to stay. We turn to Lois Brooks, chief information officer and vice provost for information technology at UW-Madison and thanks for being here.
Lois Brooks:
Thank you for inviting me.
Frederica Freyberg:
So it’s not only AI, but generative AI, which can generate content like text or images, that has really exploded into the mainstream. What is it about the mass use of this generative AI that’s been a game-changer?
Lois Brooks:
So AI has been present for quite a long time, and you’ve probably interacted with it through things like chatbots or finding the best travel fares. What happened with generative AI is tools like ChatGPT made it accessible to anyone to begin to generate content. So you can go generate a report or the draft of an email or an image or video yourself, and so we’ve seen this explosion in creativity with people creating songs and poetry and movies and all sorts of things with AI. But at the same time, the quality of some of it is so good that it’s hard to tell whether it was generated by a machine or not. So we have simultaneously an explosion of new content and a concern that you may not know if what you’re looking at was created by an AI engine or the human who it’s attributed to.
Frederica Freyberg:
Why is that dangerous, if at all?
Lois Brooks:
If you’re looking, for example, at a video of someone saying something on social media, you don’t know if that person actually said that or if someone used AI to generate it. Similarly, if you’re reading a report, an analysis of something, you don’t know whether that was written by an expert who could be credible or an engine that may be putting out false information. And again, because the quality is good enough, sometimes it’s quite hard to tell.
Frederica Freyberg:
Is the accuracy getting better and better?
Lois Brooks:
It’s hard to say. There is more and more data being amassed within the engines that can be used to create more output, but the accuracy would come from two things: one, from the data that went into it being accurate, and two, from the algorithms used to create the output giving you accurate information. So I couldn’t say that it’s becoming more accurate, but I will say the questions about its accuracy are becoming more present and sharper.
Frederica Freyberg:
Which is important as well. So speaking of data, the university recently put out a statement restricting the use of institutional data that is not considered public from being entered into any generative AI tool. Why that restriction?
Lois Brooks:
Well, we already have restrictions in place. We have a lot of data that’s legally protected, like student data or personnel data or medical information. We do use software applications to manage that data, but we very carefully check it for cybersecurity and enter into contracts that help with protection. These AI tools haven’t been scrutinized. The way they work may or may not be appropriate for certain kinds of data and we’re not in a contractually protected environment. So there was nothing — AI itself isn’t bad and there’s nothing new about this. It was really a reminder to our community of the standards we already have in place for working with sensitive data.
Frederica Freyberg:
So machine learning has been around for a while and in academic use. What’s an example of it, its use at the university?
Lois Brooks:
Well, we’ve been doing AI research for more than a decade at the university, and our researchers are contributing to the development, to the power and the possibility of AI, to research into ethics and bias. We’re also using it in research. So just a few examples. If you’re working with a chemical therapy, perhaps a medicine, and you need to understand the chemical and physical interactions, you can train AI algorithms to do that work for you, and they can do it on a mass scale that you couldn’t achieve before. Another example: one of our researchers is working with agricultural animals and tracking data on the animals around their well-being, their nutrition and so on. But again, the difference is he can do this work on a massive scale that he wasn’t able to accomplish in the past, and so we’re seeing it come up in transportation, in space science, in medicine, in many areas.
Frederica Freyberg:
As to students’ use of AI, generative AI, to complete assignments, even major ones like term papers, what is the university’s position on this?
Lois Brooks:
Well, there are two ways to take that question. Are they using AI in a way that’s been assigned or that’s not been assigned? So we teach AI. We teach it in computer science. We teach science that uses AI, so helping our students learn how to use this appropriately. There is also an active conversation around the use of AI to, for example, generate a paper that the student was expected to write. There’s an active conversation in a couple of ways. First, the standards haven’t changed. Cheating has always been disallowed. This is just a new way to potentially do it. But we’re also having active conversations with our professors around whether there are ways to assess student performance that step around this question: oral exams, active learning, project-based learning, approaches that would not introduce the possibility of a machine-generated response.
Frederica Freyberg:
Super interesting. Are we moving too fast for these tools without adequate safeguards in place?
Lois Brooks:
There’s an interesting question now around regulatory compliance and AI, and in fact, a potential statement from the White House today around the use of watermarking in images. I think it’s moving very, very quickly, and the concern I have is whether the conversation around regulatory compliance is strictly between the government, the Congress and Legislature, and big tech companies. I think to do this correctly, there needs to be a way for experts, for ethicists, for citizens to weigh into the conversation as well. So we are moving quickly. Whether we will move so quickly that we as citizens can’t weigh in on the conversation is a concern for me.
Frederica Freyberg:
Interesting. Lois Brooks, thanks very much.
Lois Brooks:
Thank you.