Michelle Sweetman: Welcome to SC Johnson's Golden Rondelle Theater for tonight's program, "What Does AI Mean to Our Future?" We are delighted to welcome you to SC Johnson's Golden Rondelle Theater in Racine. My name is Michelle Sweetman, and I am a member of SC Johnson's Community Affairs team.
This historic Golden Rondelle building was originally built for the 1964-'65 New York World's Fair to show the short documentary To Be Alive! The film was a hit; it actually won an Academy Award. Following the fair, the pavilion was dismantled and shipped to Racine, where it was rebuilt on the site where we are today, and it opened in 1967. We still show the film To Be Alive! today as well.
We are really excited to partner with Badger Talks and PBS Wisconsin for this program this evening, “What Does AI Mean to Our Future?” And I am pleased to introduce tonight’s presenter, Professor Patrick McDaniel.
Patrick is the Tsun-Ming Shih Professor of Computer Sciences in the School of Computer, Data and Information Sciences at the University of Wisconsin-Madison. Professor McDaniel is the director of the National Science Foundation Frontier Center for Trustworthy Machine Learning. He also served as the program manager and lead scientist for the Army Research Laboratory's Cyber Security Collaborative Research Alliance from 2013 to 2018.
Patrick’s research focuses on a wide range of topics in computer and network security and technical public policy, with interests in mobile device security, the security of machine learning systems, program analysis for security, sustainability, and election systems.
Patrick was made available to us tonight through Badger Talks, which is the speakers' bureau for the University of Wisconsin-Madison. Organizations, businesses, and groups from around the state are welcome to search their website of over 500 faculty and staff and make online requests. Most speakers volunteer their time, and Badger Talks then covers their travel expenses. Visit BadgerTalks.wisc.edu for more information.
Now, would you please help me welcome Professor Patrick McDaniel? [applause]
Patrick McDaniel: Thank you so much for that kind introduction. I think it's important for us to take a moment in time. I'm fundamentally a technologist. I do cybersecurity for lots of things. But I'm also a student of history. And I think it's important to recognize the moment we're in.
So there have been a series of major shifts in the nature of thought and life on this planet over the last 50,000 or 100,000 years for us as a species. Spoken language, written language, mathematics, the printing press, and computers were fundamental shifts in the way our society is structured and thought is communicated. And everything about all of those advancements leads us here today.
And what I’m going to say is that we are in a moment that we don’t really quite recognize yet. AI is in a position to fundamentally alter our experience as a species and the trajectory of us as a people. And this is not another AI hype talk, which you’ve heard too many of. But what this is, is a thoughtful experiment that I want you to come with me on to think about exactly what this means and how this will play out.
And for that, I think we need to start where this ends. What I think we need to do is we need to start by asking ourselves, what will the future actually look like? And so, as I often do, I’ll start with a story about my family.
So, my wife and I, with our two kids, were in Germany. I think it’s six years ago now. And we were on the Autobahn, and we were driving towards Poland, I believe. And in the far right-hand lane was a series of trucks. And they were all branded the same. And there was a first truck, and it had a driver in it. And the second truck was about four feet behind it. And the truck behind that was about four feet behind that. And I said to myself, “Well, that looks really, really dangerous.” And then I realized that the people in the following trucks weren’t actually driving. This was the first generation, or a major generation, in autonomous vehicles. This was a phase where people were beginning to experiment with something called platooning. You have a human at the head of the line, and then you have computers just following the front truck.
And so, whatever the human did could be emulated by all of the computing infrastructure behind it, these magical autonomous trucks. And then I sat there. And we sat at breakfast. And I was rather quiet, as I am often at breakfast. And I was beginning to ask myself some questions about, what does this mean? What does that experience mean? And then I began to think about my childhood. I actually grew up in the eighth-poorest county in the United States. It’s actually a little county called Athens County on the Ohio River, right across from West Virginia in far east-southeast Ohio. It’s what many would refer to as a flyover county. The only reason that most people go there, apart from a small university, is basically transiting the county using trucks.
And I began to ask myself some questions about what happened in Germany, what that will do to my small county, my small rural county in Ohio. And I began to understand something. I began to understand that there's a connection here, and the connection is the following. Not only do the trucks come through Athens County, but they stop. They stop at rest stops, they get food. They sometimes stay. There are people that live in Athens County who drive trucks. And all of those things will be affected if we have platooning. And then in the next generation, there won't even be a human in the front. In fact, we're basically there now, some six years later.
And so, what does that mean for Athens County? Well, some large percentage of people, you know, some non-zero percentage of people in Athens County will no longer have a job. There will be no reason for there to be a truck stop because these are autonomous vehicles. They'll stop and they'll get recharged. They don't need the drivers; computers don't need to be fed. And then there are the things that support the truck stop. People, for example, who deliver the goods. The people that deliver the gas. And so on and so on and so on. And then you take that big hole out of your economy, and you replace it with something else. We don't know what that is.
So something like 2% of the people of the United States are involved in some way in the trucking industry. So even if only, say, half of those people suddenly became unemployed, we're talking about a substantial impact on us as a society. And so, what does that actually mean? Well, okay, so that's the trucking industry, that's one. But that's not actually what's going to be the biggest impact. The biggest impact is it's going to happen everywhere. AI is reaching– And we're gonna see an example here in a moment. AI is reaching incredible heights.
About nine years ago, I was sitting– I was in Mountain View, California. That's where the headquarters of Google is. And every year, they bring in the heads of the big, you know, 15 or 20 security labs from across the country. They bring you in. Google shows us, you know, their latest stuff. And they say, "What say you? Do you think it's interesting? And what are the security challenges?" And I remember nine years ago, I was sitting there, and they had a presentation from this AI group, and they did what was called recognition, image recognition, where you take a picture and you recognize what's in a picture. Now, we know in computation, writing algorithms to do that reliably is very, very hard. And, in fact, it was basically impossible.
And what they did is they showed a figure, and it said not only is that a puppy, it’s a brown puppy in a green field next to an apple, and the dog is fuzzy, and one ear is raised. And I leaned over to one of my colleagues, who is also a professor at the University of Wisconsin, and I said, “That looks like magic to me.” And he said to me, “If it’s magic, it can be abused.” And that’s what I wanna focus a little bit on in the latter half of this. How does this AI– We’re gonna talk about how it makes the world better, but I wanna talk about what are some of the challenges we’re gonna face as we move into this world.
So, coming back to history, where we are in history. The way we consume and reproduce knowledge has fundamentally changed. Throughout the rest of history, there will be a before and an after, and we don't even realize we're already in the after. And I imagine a number of you have actually used something like ChatGPT, where you can give it some instructions, and it can do a pretty good job. If you haven't, I encourage you to do it. And it does something that would take you probably 20 minutes in about 10 seconds. And it does it kinda reliably. And we'll talk about the "kinda" here in a minute because perhaps there's some devil in the details there.
But with that capability, we're going to see it change everything. Things like health care. What if, in 10 years, which is very likely, you have the ability to talk to an AI that is probably a better doctor than the best doctor at Johns Hopkins today? It will have the experience of every medical case ever recorded in its purview. It will be able to identify you and understand your specific dynamics. And this is the holy grail of health care. We try to treat people as if all people are built the same. We're not. Personalized health care fundamentally alters outcomes. Full stop.
We are going to see– You know, I was asked recently, what are going to be the game-changers that AI is going to bring? Health care is number one. We will live longer, we will live better. We will have better diagnoses. We will have fewer bad health care outcomes caused by a doctor not understanding what's going on. AI is just going to be better. But now, think about how many people are involved in the health care industry. What does that do?
Same is true of anything. Insurance, marketing, commerce, manufacturing. I don't know if anybody has seen recent videos from Boston Dynamics. The robots border on creepy. Transportation. We're seeing that now with autonomous vehicles, although that, you know, has its pros and cons. And education. I'm not unrealistic. I'd love to believe, as a professor, I am irreplaceable. I'm not.
What if, for every student that needed to learn cybersecurity in my Intro to Cybersecurity class, which I love teaching to first- through third-year undergrads at UW-Madison, the teacher was solely focused on that student, knew how they reacted to every experiment all the way back to pre-kindergarten, and knew what excited them and what helped them learn? Do they need to be visual? What words do they understand? What if that perfect teacher, who had complete information about that person, existed? I mean, I would count myself as a good teacher, but, like, I can't compete with that. That's gonna fundamentally alter education, and that's going to change the way we learn.
And you multiply that across our entire spectrum of society. We are going to see social disruption at scale. Every part of our lives is going to change in the next 15 years. And I was somewhat of a skeptic at the beginning, but, you know, every day, I tell myself, “Don’t be surprised, don’t be surprised by AI.” And then, boy, yeah, I keep getting surprised. It’s amazing how fast it’s going.
And so, what’s interesting about the social disruption is that it’s not– I don’t wanna just sort of focus on the economy part of it or even the vocational part of it. It’s going to mean that we restructure our lives. And there are a couple of different directions we as a society– This could lead to one of the great renaissance of man, right? This could lead to massive amounts– ability to think about philosophy and art and experience lectures and do all of this. What if all that time that buys… Or we could turn it into– And I’m gonna pick on TikTok because they deserve it. [audience chuckles] TikTok is an algorithm. TikTok is a self-reinforcing system. You like TikTok because you like TikTok. And the more you like TikTok, the more time you spend on it, which means the more you like TikTok, right? It becomes this closed loop.
And there are social dimensions to this, particularly for those of you who have kids, about that algorithm. And what if AI is used for something like that? To turn us into perfect consumers, right? That's the decision we have to make as a society. And so, what's important and what I wanna talk about today is, what are the other challenges, and why is AI not perfect?
So let me start by telling you a little bit about the way I think. I'm a cybersecurity person going all the way back to my days at Bell Labs, where I learned to think about how to break things, how things are breakable. And I have been fortunate enough to experience the major epochs in technology over the last 25 or 30 years of my career, including, you know, really the birth of the Internet, when I was a graduate student. We saw real online commerce explode, and we saw social media, and we saw smartphones, and then we saw streaming, all of them disruptive, completely changing the way we do things like consume content. So much has actually changed over that time. But one thing is invariant. One thing is absolutely true every single time.
And that is, if you introduce a new technology, it can be abused and it can be used against us. And the trick about having good adoption, for whatever value you wanna put on “good,” good adoption of technology is understanding that. And perhaps if you get nothing else from the next hour, what I want to urge you is I want you to think about what are the challenges? How can this technology be used against us? Who loses? And I’m gonna give you lots of examples, and I’m gonna get into a little economic theory in the back half of this to talk a little bit about why perhaps using AI for particularly human-scale things might not be a societal good.
And the answer is always yes. So, I did a little experiment before today. Anybody remember Scholastic when you were in elementary school and it said, "Pick the things wrong with this picture"? So when you first looked up at this slide, I'd wager you all went, "Yeah, it's red. It's got something that looks like the Badger. We're good to go," right? But, of course, there are all sorts of interesting things here. Like, I actually asked AI last night, this is actually Microsoft's Bing image generation: "PBS Wisconsin hosts Badger Talks to Alumni." And I said, "Okay, draw me a picture." And this is what it drew.
So, okay, AI. What's the story with the guitars? I'm not really sure what this has to do with guitars. Somebody's missing teeth, as somebody here noticed. I don't know who "Babs" is. The "PBS" is not quite right. And last time I checked, that is not a badger paw. So… AI gets things wrong sometimes. That's the trade-off we're making.
So to back off to computer science just for a minute: we have lived under a world of algorithms, which are basically code that we program, that we define the logic for, and we get very predictable results. Unless there are bugs, which there always are. That's the world we've lived in until recently. And now, we have AI, where we have absolutely massive... what are called models. And I'm gonna explain what a model is– And I'm actually gonna turn you into robots in a couple minutes. These models have literally trillions of bits of information, and they do things for us that are really amazing. We've spent 50 years getting good at algorithms. This is what the AI came up with.
And actually, you know, I don’t wanna denigrate Bing’s image generation. It’s getting better every time. In the next few years, when I give it those words, it’s probably gonna get better and better and better. But the point here isn’t really that it’s getting better. The point is, is that it’s sometimes wrong. And if we wanna understand the answer to that previous question about where the harm comes from, we need to understand exactly when it’s wrong. Because if we understand when it’s wrong, we know when it can be abused.
So, here’s a good question. What is AI? Now, AI is a super broad term. And I’ll define it throughout the rest of my talk. But I’m gonna talk about one kind, sort of the dominant kind of AI. It’s called machine learning. For the purpose of this discussion, AI and machine learning are the same thing. And the reason I want to talk about AI as machine learning is because I want you to understand how it really works.
So this is really complicated, but here's what happens. See that? That's an autonomous vehicle, okay? An autonomous vehicle has a camera. The car works just like any other car, but it has some special parts. It has a camera on there. And on that camera, it's taking images, and it has specialized software that says, "Hey, there are things called objects in my environment." So the car's going down the street, and it recognizes, you know, the algorithms recognize, "There's something I need to know in order to drive this vehicle."
So in this case, it recognized that there is some shape that's lifted off the ground, probably a sign. And so what happens is that image is then fed into the AI. In this case, this is called a deep learning system. It's incredibly complex math that models a little bit of how the human brain works, but not really. But it basically hands that information off, and it pops out with an answer that says, "The model thinks that with–" And we'll talk in percentages; it's not really quite certain. But 93%: "I'm 93% sure this is a stop sign." Okay.
And then, you know, it says, for all the other signs, "I give it a 1% chance of being a yield sign," for example. Okay. Now, what's interesting is that when it's doing this process, it's taking that thing I talked about, the image generation problem, and flipping it around: this is image classification. It says, "Oh, I've got a thing, I need to recognize it." We don't know how to write an algorithm for that in the first place– You know, like, literally sit down and write the code. We're gonna give it to AI. And so, we trust AI to do this. And we're gonna learn in just a second how that model learns to recognize signs.
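To make that last step concrete, here is a minimal sketch in Python, with made-up scores standing in for a real network's outputs. The softmax shown is the standard way raw per-class scores become the "93% stop sign, 1% yield sign" style of answer.

```python
import numpy as np

# A minimal sketch of the step described above: the network emits a raw score
# per sign type, and a softmax turns those scores into probabilities.
# The scores here are invented for illustration.
def softmax(scores):
    e = np.exp(scores - scores.max())  # subtract the max for numerical stability
    return e / e.sum()

signs = ["stop", "yield", "speed limit", "do not enter"]
scores = np.array([5.8, 2.0, 2.3, 1.9])  # hypothetical raw network outputs

for sign, p in zip(signs, softmax(scores)):
    print(f"{sign}: {p:.0%}")  # prints roughly "stop: 93%", "yield: 2%", ...
```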
Okay. So, what is the Internet good at? Kittens and puppies. Okay, now here’s the interactive part of the talk. So what I want you to do is I want you all to be robots, okay? And we’re gonna do a very simple task. This simple task simply says that what we have is a set of– I’m gonna give you a picture, and you’re gonna tell me if it’s a kitten or puppy. But here’s where you’re a robot. I want you to forget everything you ever knew about kittens and puppies. Okay? All you know about kittens and puppies is right here.
Okay, now I’m gonna give you a hint. I’m gonna say the bottom left, top second from the left, top second from the right, and the one right beneath that are all kittens. Everybody with me? The rest are dogs. Now, I remind you, this is all you know about kittens and puppies. Okay, so now what I have just done… You might have heard this term before. I have just trained you, right? I have given you examples of a phenomenon. And this phenomenon in this case is images of kittens and puppies, okay? And your task, as AI, is just to distinguish which is which.
Okay, and this is really the way AI works. And so, what we're gonna do is I'm going to ask you to only look at two things in the images. I'm gonna ask you to look at the ears. And I want you to ask yourself, how pointy are they? And I'm gonna ask you to look at the tails and how long they are, okay? And so, what I'm gonna tell you is that those two things are what determine, for you as AI, whether it is a kitten or a puppy.
Now, what’s going on here is I’m giving you what are called features. And these are indicators of some phenomenon, okay? So, what I’m gonna ask you to do is I want you, in your mind, and I’ve done it for you, is put on a 2-D graph. You know, if you remember from middle school or high school algebra. On that graph, I’m gonna put how pointy are they on one side, and then how long or short are their tails.
Okay, now, if I’ve done my job right and your data is good– and this is gonna be an important part– is the cats and the dogs will all be in the same kind of areas, right? So what you’re recognizing is cats tend to have pointy ears and longer tails, right? So if I see something with pointy ears and longer tails, hey, we’re good to go. You’re AI; that’s it. The most complicated model you’re ever gonna encounter in AI is essentially doing this. What it’s doing is it’s observing things in their natural state, measuring something, like the pointing of their ears and tails, and deriving what are called correlations. This is all statistics and math and stuff like that. But really, what it means is, hey, certain things, things of the same thing tend to have the same characteristics. And if you just looked at the characteristics, that’ll tell you. And this works, really, we use this, we’re taught this very early in life. This is one of those experiments we, as toddlers, we learn. That’s one of the things we learn. Hey, tall people play basketball, right? These are– That’s not always true, but there’s a high correlation, right? And so, we go through our lives making judgments about the environment we live in all the time using these gross generalizations about the phenomenon. That’s what AI is doing. We’ve just turned it into math.
But it’s not entirely perfect. So as AI… we’re gonna give it a new cat. So, pointy ears, long tail. Good to go, right? It’s called a true positive. So what we’ve done is, you are my AI. You said, “Yeah, that’s a cat.” Okay, that’s great, all right. I’m gonna challenge you the next one. Is this a kitten or a puppy? [audience laughs] Right?
Now… Here’s the challenge. All you’ve ever known about kittens and puppies is in those eight pictures. You don’t know that that’s not a kitten or puppy. So what do you do? Well, if you’re like– I’ve done some experiments with this. Most people will look, “Uh, that looks like a tail. It’s long, it’s a cat,” right? And so, here’s the thing. Most of the time, even if the model’s not really sure, it’s gonna tell you something. And figuring out when it’s just telling you something versus it’s actually pretty sure, eh, you know, that’s pretty good math. But it turns out, it’s really hard to know. And as it turns out, this ends up being one of the big challenges of AI.
But it’s not magic. Remember I said I was sitting in Mountain View and I said, “This is magic?” It’s not. And I’m gonna give you some examples. Okay, this is our training set. What is this thing on the top right? We’re getting away from our tails and ears right now. What is this thing up top? It’s a cat. Why is it cat? ‘Cause it’s white.
Now, the key here is that you as AI didn't know that that correlation isn't real. That is called a bias in the data, right? If all of the kittens you see in your training data are white, anything white becomes a kitten, right? You're making a false correlation; a bias in the data derives an AI model that gets it wrong, and gets it wrong consistently. Okay, so now this is a little bit tougher. The one on the right is a dog. And I'm gonna let you think about that for a second. It's because of the grass.
So you did something from an evolutionary standpoint that we all do. Our brains are great at two things: pattern recognition ("That's a tiger; I didn't get eaten") and filtering ("I can see the outline of the tiger through the trees at night"). Those, for evolutionary reasons, were some of the dominant traits of us as a species. So what you just experienced is you looked at this kitten and you ignored the grass. Because you can't stop being human, right? We filter automatically. Our brains just do it. AI has a really hard time figuring out what is the kitten and what isn't. And as a practical matter, when it looks at something like an image in a real system, the actual pixels are fed to the AI, and it's asking, "How are the pixels arranged?" Pixels being the individual dots on these things. So what matters is your AI is only as good as your data and only as good as your problem definition. And this will be a challenge.
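The white kitten and the grassy dog are the same mechanism, and it too fits in a few lines. A minimal sketch with invented numbers: in training, "is white" happens to line up perfectly with "kitten," so the model leans on fur color instead of the features that actually matter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A minimal sketch of bias in the training data (invented numbers): during
# training, "is white" accidentally lines up perfectly with "kitten," so the
# model leans on fur color instead of ears and tails.
rng = np.random.default_rng(0)
n = 200
is_kitten = rng.integers(0, 2, n)
pointy_ears = is_kitten + rng.normal(0, 0.8, n)  # genuinely (weakly) informative
is_white = is_kitten.astype(float)               # spuriously perfect in training

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([pointy_ears, is_white]), is_kitten)
print("weights (ears, whiteness):", model.coef_)  # whiteness dominates

# Deployed on a white puppy with floppy ears: it's white, so it's a "kitten."
print(model.predict([[0.0, 1.0]]))
```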
Okay, I’m gonna show you some famous people. Anybody recognize any of these people? No, ’cause they never existed. So if you ever wanna have fun, there’s a website called ThisPersonDoesNotExist.com. Actually, Nvidia, which makes the graphics cards– they used to be for gaming, now used for AI– actually has a website. You can give, like, demographic information, age. You plug it in, you hit a button, and then there’s a person. It is frightening. What’s going on there is we are able to generate those people. And this is actually, these images are, like, five years old. They’re even better. I saw a few days ago that actually the most recent version of what are called deepfakes can take one of these pictures and turn it into a real, talking person. You simply could not determine you were not talking to a real person online. It’s astonishing.
Same is true for things like text. This was a paper from several years ago from the University of Chicago. You know, it generated reviews by looking at the web, and in particular something called Yelp, which rates restaurants. And it came up with, "I had the grilled veggie burger with fries. Ohhh and ahh, omgggg, very flavorful. It was delicious." And that's how it was spelled; I didn't misspell it. That sounds like my undergrads, right? But, actually, it's pretty good mimicking of the kinds of things you see on Yelp. And this is gonna be an interesting problem. What we're seeing is objective reality being challenged here. We are now able to generate artifacts that are not distinguishable from the real thing; at every other point in history up until recently, fakes were distinguishable precisely because they were fakes. We're gonna see this play out, and we're starting to see the social problems.
Okay. How many people went to kindergarten? Hopefully most, okay. The thing on the right is a stop sign. The thing on the left is a yield sign. Interesting. Well, actually, the thing on the right comes from a paper that several colleagues and I wrote several years ago, where we took an image recognition system and began to ask a question: what can we do? Can we take something like a stop sign and get the system to misclassify it? And what we did– There are some complicated algorithms and a lot of math involved. But essentially, if you look at the thing on the right, you might notice it's just got a little bit more yellow. Just a little bit more yellow.
As it turns out, we just turned up the yellow, a little bit at a time, until the model changed its mind, right? And it turns out that these models, which are basically taking all of these examples and turning them into estimates, are actually really fragile, right? And so, one of the truisms about AI is that AI is good on average under what we refer to as nonadversarial conditions. That is, if somebody isn't trying to beat the AI, the AI is fine. But once you have somebody who's actually trying to mislead it, it turns out that it's extremely fragile. And I don't wanna get too much into the science half of this, but as it turns out, there have been something like nine different defenses over the last 10 years to try to defend AI, and all of them have been broken within a year. We do not know how to defend AI as it exists today.
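Here is a minimal sketch of that "turn up the yellow until it changes its mind" idea, using a toy linear classifier with random weights in place of a real vision model. It illustrates the principle; it is not the code from the paper.

```python
import numpy as np

# A minimal sketch of an adversarial perturbation against a toy linear
# classifier (hypothetical weights; an illustration, not the paper's code).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 12))   # class 0 = "stop", class 1 = "yield"
x = rng.uniform(size=12)       # our 12-"pixel" image of a stop sign

def predict(image):
    return int(np.argmax(W @ image))

if predict(x) != 0:            # relabel classes so the starting image is "stop"
    W[[0, 1]] = W[[1, 0]]

x_adv, step = x.copy(), 0.01
while predict(x_adv) == 0:
    # For a linear model, W[1] - W[0] is the direction that most increases the
    # "yield" score relative to "stop"; nudge the pixels a tiny step that way.
    x_adv = np.clip(x_adv + step * np.sign(W[1] - W[0]), 0.0, 1.0)

print("largest pixel change:", np.max(np.abs(x_adv - x)))  # a small nudge flips the label
```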
This is known as an adversarial input. And the way to think about adversarial inputs is, let's say we've got an autonomous vehicle. There was a great study from, I believe, the University of Michigan and UCSD, where they created these patches: little stickers that they could put on signs so autonomous vehicles got them wrong. Simply because the AI would focus in on this little tiny piece of the sign. You know, you've been driving around and somebody took a marker to a sign or, back in the woods, shot a hole through it or whatever. The sign is just a little damaged, but you knew what it was. AI focuses on that piece.
And so, this is actually really dangerous. Models can be fooled. There's also poisoning of the data. So, if I had control over the kittens and puppies and I was trying to get you to give me the wrong answer, I could slide some data into your training data to make you think that any time you're seeing the left side of an animal, it's always a kitten, right? To somebody testing the model, it would work fine and fine and fine, until you showed it the left side. That's called a trap door. So we don't actually know what's inside these models. They have correlations we can't really control and can't really detect. And, again, some of the more recent models have literally a trillion bits of data in them. So we don't have any real ability to understand where these trap doors are.
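A minimal sketch of such poisoning on invented, non-image data: a handful of planted examples tie a rare trigger to the wrong label, the model looks fine on clean inputs, and the trap door opens whenever the trigger appears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A minimal sketch of data poisoning and a "trap door" on toy data.
# Everything here is invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 1] > 0).astype(int)            # the honest rule: feature 1 decides

trigger = np.zeros(5)
trigger[0] = 10.0                        # a rare trigger no one thinks to test
poison_X = rng.normal(size=(40, 5)) + trigger
poison_y = np.ones(40, dtype=int)        # trigger present -> always class 1

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, poison_X]), np.concatenate([y, poison_y]))

clean = rng.normal(size=(1000, 5))
print("clean accuracy:", (model.predict(clean) == (clean[:, 1] > 0)).mean())
print("predictions with trigger:", model.predict(clean + trigger)[:10])  # mostly 1s
```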
There are other problems. And this actually has been a really big public policy problem: if I can talk to AI, I can learn something about the data that was used to train it. So, for example, suppose I took all of your medical data and trained a model on, say, whether people living in Wisconsin are likely to get hay fever, right? And I used all of your medical data, and I had that model, say, help diagnose other people. That's a great social good. But what if I could talk to that model and get your personal data straight out of there? Your full medical record. And it turns out that when you have AI, by definition– and there are some information-theoretic reasons that this is true– every time the AI does something, it tells you a little bit more and a little bit more about the data that was used to create it.
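Here is a minimal sketch in the spirit of what are called membership-inference attacks, on invented data: an overfit model is visibly more confident on the exact records it was trained on, and that difference alone leaks who was in the training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# A minimal sketch of training-data leakage (toy data, not a real attack):
# an overfit model answers more confidently on the records it memorized.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, 8))        # stand-in for "medical records"
train_y = rng.integers(0, 2, 200)
fresh_X = rng.normal(size=(200, 8))        # records the model never saw

model = RandomForestClassifier(n_estimators=50).fit(train_X, train_y)

conf_train = model.predict_proba(train_X).max(axis=1).mean()
conf_fresh = model.predict_proba(fresh_X).max(axis=1).mean()
print(f"confidence on training records: {conf_train:.2f}")   # noticeably higher
print(f"confidence on unseen records:   {conf_fresh:.2f}")
```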
And so, that creates a real conundrum. That is, by using data in training, are you violating people's privacy? Are you violating people's safety? There's a lot going on inside the legal industry about what's called "fair use." Pretty much every large language model like ChatGPT has sucked up every paper I ever wrote. Probably everything I've ever written. You know, every abstract for every talk I've ever given. Sucked it up into their universe and used it to train their model. They didn't ask me. And the same is true for you. Anything that you put online, pictures, social media, anything, got sucked up into that model, as long as it was publicly visible. And some stuff that wasn't. I would say that anything you put out publicly, and anything within the scope of what you've allowed under the privacy policy given to you by the corporation or organization you gave that data to, is fair game: if they're allowed to use it for training data, then it's subject to being used for training data.
Okay. So these are all sort of big societal problems. And there are others. What happens when the data is in that model? There's all sorts of reconnaissance. For example, suppose we use AI to decide who gets health insurance and who doesn't. If I can talk to that model, I can figure out exactly what criteria they use and who's likely to get it and who isn't. And that has enormous challenges. Intellectual property. I spend a lot of time talking to folks on Wall Street, particularly high-speed traders, and they use algorithms. They use AI now to make high-speed trading decisions and things like that. If I can talk to their model, I can reverse-engineer it. And if I know their model, I know their algorithm. There's something called transferability, and there's a class of attacks that we here at the University of Wisconsin actually pioneered, where we can reproduce a copy of that model, or at least what's called a surrogate, which is a model good enough to let us learn enough about the original to attack the task. So if I was able to do that for a high-speed trader, I could probably bankrupt them in seconds.
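A minimal sketch of building such a surrogate purely through queries, with toy models on both sides; real extraction attacks are considerably more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A minimal sketch of model extraction: the attacker only ever sees the
# victim's answers, yet trains a "surrogate" that mimics it. (Toy data.)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) > 0    # the victim's secret rule
victim = LogisticRegression().fit(X, y)

queries = rng.normal(size=(2000, 4))           # attacker-chosen inputs
answers = victim.predict(queries)              # only the labels come back

surrogate = LogisticRegression().fit(queries, answers)

fresh = rng.normal(size=(1000, 4))
agree = (surrogate.predict(fresh) == victim.predict(fresh)).mean()
print(f"surrogate agrees with the victim on {agree:.0%} of fresh inputs")
```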
Privacy, we already sort of talked about. That's a really important problem for us as a society. How do we deal with all of these models? Because they are useful, right? Health care: we can all agree that that's a great outcome. It's changing things. But we haven't decided, either as a legal precedent or as an ethical or societal norm, what data people should be able to use for models and what you have to do to allow that to happen. Right now, everybody's just grabbing all the data that's out there and waiting for the litigation to happen.
So there’s an interesting question about how this is all gonna impact society. And I think I’ve sort of touched on some of the sort of the key points, which is how are things going to play out in the sort of economic side. But I wanna talk about the societal side.
Charles Goodhart, perhaps one of the most British men to ever live, was an English economist who came up with something that’s called Goodhart’s Law. And he said, “When a measure becomes a target, it ceases to be a good measure.” And this was a very fundamental insight that he made in the 1970s about the nature of using proxies for policy decisions.
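The statistical core of that insight shows up in a minimal simulation with made-up numbers: a proxy that correlates well with the quality you actually care about stops tracking it once everyone selects hard on the proxy itself.

```python
import numpy as np

# A minimal, invented illustration of Goodhart's Law: once you select hard on
# a proxy measure, it stops tracking the underlying quality it once measured.
rng = np.random.default_rng(0)
true_quality = rng.normal(size=100_000)
proxy = true_quality + rng.normal(size=100_000)   # a decent, imperfect measure

print("correlation before targeting:",
      round(np.corrcoef(true_quality, proxy)[0, 1], 2))   # about 0.71

# Make the measure a target: look only at cases pushed into the proxy's top 1%.
top = proxy > np.quantile(proxy, 0.99)
print("correlation among the selected:",
      round(np.corrcoef(true_quality[top], proxy[top])[0, 1], 2))  # far weaker
```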
So let me give you a great example of this. Miles per gallon. We're all familiar with miles per gallon, right? This was introduced in, I believe, the 1970s, as a response to people's growing concern about the impact of cars on the environment. And so, miles per gallon was decided on: the higher the miles per gallon, the less fuel you burn, the less emission you have. That's great.
So then that became the policy of the day. That was the key policy about how we were going to defend the environment from cars. Great, and, in fact, the industry has spent the last 40-some years getting really good at it; miles per gallon are vastly higher than they were back when this started, modulo certain kinds of vehicles. And so, in some sense, it's been successful.
But it’s also been a tremendous policy failure. And it’s been a policy failure because when you tell the auto industry– and they’re doing what all industries do– that that’s the thing that measures impact on the environment, they ignore everything else. Cars are enormously dangerous for the environment to build. Heavy metals, tires. Tires are awful for the environment. They are literally crayons spreading up and down our streets. They’re creating pollution as they go. And there are dozens or hundreds of different ways, the production of cars and the paint that goes on them and the way they’re manufactured and recycled and all of those other costs or impacts on our environment are lost if you just focus on miles per gallon.
And so, in one sense, yes, cars have gotten more efficient, but more efficient doesn't necessarily mean better for the environment in the sum total of the impact. And this is where AI comes in. This is, I think, what concerns me most. There is a strong inclination within the policy community to do exactly this with AI.
And I’m gonna pick on Harvard just because I like them. Harvard has had a lot of problems with admissions. They have been seeing a lot– They’ve been in litigation for many years about the disparity between different ethnic groups and admissions. And I’m not gonna weigh in on that particular argument. But the question was asked, why not use AI? Right, why don’t we just use AI to make the decision? Right?
And so, there’s kind of two problems. First one is, what do you train the model on? Well, let’s do it on historical Harvard admissions. Well, if you’re trying to fix a problem by using examples of the problem, you’re gonna generate a model that just does the same thing. Right, so that’s not gonna work. You’re gonna have to correct that problem. But I think there’s an even more fundamental problem with using AI that has human-scale impacts. And that is, people will figure out how to make the model happy.
So I don’t know this for sure, so let’s do a thought experiment. The thought experiment says, okay, we are going to create a model that’s going to determine for every applicant to Harvard whether they get in. Okay, when we do that, somebody’s gonna sit there and is gonna look at that. And we’re gonna ask ourselves, two years down the road, what’s the most likely thing to get me into Harvard? Some people might say grades. Maybe some people might say being a legacy, having an income in the nine figures. It won’t be. It’ll be something banal like varsity tennis. Why varsity tennis? Why would that model be happy with varsity tennis? It would be happy with varsity tennis because it’s predominantly white, it’s predominantly in wealthy neighborhoods, and it’s predominantly in the northeast. Again, there are subtle ways that you can reproduce the same biases in AI. And there’s a fundamental problem here that we as a society, we as a species, have to figure out what’s okay to use AI for. And I don’t know the answer to the question because I think that’s going to be the defining question for the next 50 or 100 years of life on this planet, is, what is okay for AI to do?
Because when you hand over decision-making for important, impactful, potentially devastating or consequential problems or decisions that we as a society have struggled with since the birth of man, you're suddenly making the model a proxy for not having to deal with hard problems. And it has this exact property that I was talking about all along: all I need to do is figure out what makes the model happy, not what it means to be a good Harvard applicant. Figure out the highest correlation between getting in and the things you're putting in your résumé. That's it, and you can't beat that. And that's where I think I'm tearing out whatever hair I have left: every time I hear somebody in policy say, "Oh, you know, hey, let's talk about sentencing, or let's talk about insurance, or let's talk about who gets a loan," right?
All of those things are human, consequential decisions that require us to make discerning, hard calls about who we are as a species. And just pretending an algorithm can do it isn't gonna get us there. What it's gonna do is just make the problem worse. And that's why I think we need a clear sense of what AI is there to do and what it isn't.
What are we comfortable handing the reins on? We are on the cusp. I think it was the head of AI for either Google or OpenAI, two of the big players, who was asked, "What's the one thing that worries you at night?" And the answer was that we are on the verge of generalized artificial intelligence, which basically means human-scale intelligence, within the next few years, and we as a species are just not ready for it, right? We live in a hype-curve world, and we don't hear much about the downside of all of this. We're not having that important discussion. We need to make that discussion front and center before we make any hardened decisions about how we adjudicate the important problems in our society.
AI will not solve us being human. Full stop. There are certain requirements of us as humans on this planet, and all the AI in the world is not gonna change that. We can pretend it will, but it won't. And therefore, we really need to be judicious. And unfortunately, there's a strong current running the other way: there are strong economic incentives for avoiding hard problems by just throwing them at AI. And I see this across the spectrum. So I would end this little vignette by saying Goodhart was right. The moment you make AI the decision-maker, it's a bad decision-maker, right? Because it doesn't have subtlety, it doesn't have ethics, it doesn't have historical perspective. It doesn't have all of the other things we absolutely need in these critical societal decisions that we need to make.
So I have a recurring nightmare. My nightmare of AI begins with high school. My two kids are… One is graduating in about a month. The other one is somewhere in the middle. They are post-high school, and I thank the universe for that. High school is probably one of the worst universes there is. And I say that not as, like, a criticism of today versus the past. It's just part of development. There's this phase we go through as humans called pre-consequence, where we don't actually understand the consequences of our actions. And this is prominent in high school. And so, we get in this weird spot as a growing organism where we are too big for our bodies, we don't know how to physically control our bodies, and we don't know how to control our brains. And we don't understand the consequences of not doing both of those things, right? That's why being in high school is so dangerous. And that's why we all, on one level or another, exhibited or participated in lots of really unhealthy, self-destructive behavior that later in life makes you go, "Oh, my God, I can't believe I did that," right?
Now, where this lands with my nightmare is, you take a 13-year-old girl. She's a freshman in high school. She's got a rough home situation. She's never been very popular. You know, she has low self-esteem because she spends way too much time in the wrong places on social media, and she is in a low place. And she gets in a fight with some mean girl. I don't mean that as a pejorative, but, you know, high school is high school. Okay, she gets in a fight with a mean girl. The mean girl has some knowledge of the Internet, but not much. She knows that there's this website on the Dark Web– that's the sketchy part of the Web– where you can basically go, and for five bucks, you can hire a harasser.
So this harasser is a human. You sort of pick what age they are and where they're from, right? You insert them into her social media. They are incredibly adept at communicating with other teens. They're incredibly adept at integrating with existing groups and understanding the dynamics of something like the particular high school this girl attends. And this harasser begins saying things, generating pictures, generating text. And she gets lower and lower. Huge scandals swirl around her and her high school that this thing generates. And it continues unstopped. And it crosses different social media. And it begins sending her e-mails. This 13-year-old girl kills herself. It happens every day. Now, within a couple of years, you'll be able to download that harasser as AI and create a billion of 'em in 10 seconds.
This is a fundamental threat to us as a species. We are on the cusp of indistinguishability of reality from the synthetic. And that's really, really scary. We're seeing some of this now in public discourse. And I don't care where you lie on the spectrum, but there's a lot of information out there that isn't real. There's something called the Dead Internet Theory. Anybody heard this one? The Dead Internet Theory says there's actually nobody on the Internet; it's all bots talking to each other. Right, but there's lots of this stuff floating around the Internet, and it's getting less and less real. And our ability to understand what's real and what isn't is getting harder and harder. And here's the thing: this AI is going so fast, it's astonishing. Things that I didn't think would be available for 20 years are available in six months. It's absolutely astonishing. And it's a Pandora's box, and we really don't know what to do with it.
And this leads to– You know, I don't wanna sound too science-fictiony. But there will be humans on the Internet who don't actually exist. I mean, that's what bots are, essentially. But they can actually be personas. They can live places, and they can have faces, and they can get on Zoom with you, right? I think of the exploitation, particularly of the elderly, that has already begun. I don't know if anybody's heard about this. Just recently, they've found that people are taking very short snippets of somebody's son's voice, using a deepfake to recreate that voice, and calling and targeting a parent: "Mom, I've got to have, you know, a thousand bucks. I'm in jail in Texas," right? And it's incredibly convincing, right? And don't kid yourself. This is organized crime. This is straight-up organized crime, right? And one of the things we've learned in cybersecurity is, if there's money to be made, people will do it. Because there are people for whom ethics doesn't even enter the picture. And it has always been thus; there's nothing new there. What's new is the tooling, which is incredibly challenging.
And so… I don’t want to sound too much of a downer. But I feel like we’re at this incredible nexus point in the evolution of our society. And we hear about how I don’t have to write my own reports and how I can make my photo– I’m a terrible photographer, ask my wife. And make it look perfect. And all of the other, you know, things that make our lives better. You know, some of the incredible things that I’ve seen. Turns out, loneliness is the number-one killer in the United States. And actually, chatbots are incredibly, incredibly effective at actually making people’s lives better. And it doesn’t matter that they’re not real people. It’s the human interaction. It’s that interaction which fundamentally alters lives. And so, I’m not saying that AI doesn’t have all of those benefits. I just feel like we’re so in the hype curve that we’re not asking ourselves these tough questions. Where’s the line? How right does it have to be? How much do you want to put out there that somebody can actually exploit? And then, fundamentally, how do we actually manage the fruits of that AI? All of this content, all of this stuff that it’s generating can be used for good or ill, and we don’t really have a society of figuring out which is which.
And unless we have that conversation, we're gonna be in a world of hurt over the next 10 years. Now, one of the great things about being human beings is we're self-correcting, right? This is what our society does: we see a problem and we fix it. And I mean "we," all of us. And that's what gives me great hope. I'm not really worried that we're gonna end up in some sort of dystopian novel. What I'm worried about is that it's gonna take us too long, and too many people are gonna get hurt, before we realize what we have to do. We have real work to do as a society, as a species, as governments, as legislatures, as courts. We have a lot of work to do right now that we're not doing. And that's where your advocacy comes in. It's really incredibly important that as you encounter these things, as your bank creates a new something for you, ask 'em why. Ask 'em what data it actually has access to. Ask them, "Hey, are you using my data to help somebody else?" And if that doesn't feel comfortable, you say, "No, no, no."
These are important questions to ask as a society. And I think as we set norms around the use of our data and the use of our time and our eyeballs online and all of those other things, the community, the AI community, the people that are building these amazing products, doing incredible science, doing incredible engineering, will begin to say, "Okay, there are societal norms and requirements we have to meet for society to get the benefits of this." And, ultimately, we just have to act. I feel we're in a little bit of a "Cinderella at the ball" moment: midnight's coming, right? And I'm not suggesting that planes are gonna fall from the skies and things like that. But people will be hurt. There will be some 13-year-old girls who are harmed. There will be conversations that are gonna be tough about what is okay with technology. And we're just not having them right now.
But I’ll end by saying that I have great hope. You know, organizations like the University of Wisconsin are committed to this, as the larger technical community is, largely, in this boat. They know we need the will of the people to do what we need to do. So, with that, I’ll say thank you and take questions. [applause]