– It is my great pleasure to introduce Dr. Bill Hibbard, who is a Senior Scientist Emeritus at UW-Madison’s Space Science and Engineering Center. He’s the principal author of the Vis5D and VisAD open source visualization systems and the author of several papers on visualization and artificial intelligence, super-intelligent machines, and ethical artificial intelligence. He is a frequent author and speaker on the subject of AI technology and ethics, as well as a senior advisor to the AI Initiative of The Future Society at the Harvard Kennedy School. He’s also the author of the book “Super-Intelligent Machines” and numerous articles on the subject of the technological singularity, which I imagine we’ll be hearing about this evening. In 2012 he was awarded the Singularity Institute’s Turing Prize for his paper, “Avoiding Unintended AI Behaviors.” He’s also been recognized by his peers as uniquely qualified to shine a light on the promise and peril of AI, by virtue of his deep technical expertise and his strong sense of ethics. Some of you may have noticed this summer that the Wisconsin Alumni Association’s On Wisconsin magazine had a feature article on Bill Hibbard and his work, especially in fostering essential conversations about AI and what it means to and for all of us. And with that, please join me in welcoming Bill Hibbard.
(applause)
– Thank you. Well, I’d like to thank, first of all, the Wisconsin Institute for Discovery, WARF, and the Morgridge Institute for inviting me. I’d like to thank all of you for coming. And I’d like to put in a good word for the Space Science and Engineering Center, which has been my professional home for 40 years and is a great place. So, I’d like to start with a question that I’ve been asking people for decades. And the question is, will we build machines that surpass us in all mental skills, that are smarter than us by every conceivable measure? And I actually want to know what you all think about that, so I’m going to ask for three shows of hands. The people who think yes, the people who think no, and the people who aren’t sure. So could I please see the hands of those who think we will build machines that will surpass us in every mental skill?
Thank you. And how about the hands of the people who think we will not? And those who are not sure?
(laughs)
It’s about 30/30/30, as far as I can see.
(audience laughter)
That’s an interesting question. I think we will, because of neuroscience. Neuroscientists don’t know how our physical brains create our minds, but it’s pretty clear that they do. And if a physical system can create our minds, then our relentless technology must eventually be able to match that and improve on it. There have been surveys of AI experts, people working in the field, to see what they think, and of course, most of them think yes, we will. But there’s a lot of variation about when. Ray Kurzweil, you may have read his book, “The Singularity Is Near.” He thinks we get human-level AI by 2029, ten years and one month from now. And I wouldn’t be surprised if he’s right. It would surprise me if we didn’t get there within the lifetime of at least some of the people in this room. I think there are some young people here. And I’m 70, and I’m hopeful of seeing it. I’d like to see it. There’s also a lot of disagreement among experts about whether it’s something we need to worry about. For a long time, there was a consensus among people working in AI that their intentions are good, they’re building this to serve people, so there’s nothing bad here. But a lot of technologies end up with unintended consequences. And so there’s a whole literature now about the ways that AI could have unintended consequences, and of course there’s also the possibility of it falling into the “wrong hands.” So there’s quite a debate about whether it’s something to worry about. Now, I want to look at a couple of videos that demonstrate the current state of the art for AI. Some of you may have seen these, but some of you may not, and they’re short. We’re going to start with a video of a Google Assistant AI speaking with a human operator at a hair salon to make an appointment. This demonstrates human-level skill for spoken language in a very limited domain of discourse. That’s the important caveat.
And there’s a little cheering in the background, ’cause it’s a live audience.
(phone ringing)
– Hello, how can I help you?
– Hi, I’m calling to book a women’s haircut for a client. I’m looking for something on May 3rd.
– Sure, give me one second.
– Mm-hm.
(video audience laughing)
– Sure, what time are you looking for around?
– At 12:00 p.m.
– We do not have a 12:00 p.m. available. The closest we have to that is a 1:15.
– Do you have anything between 10:00 a.m. and 12:00 p.m.?
– Depending on what service she would like. What service is she looking for?
– Just a women’s haircut for now.
– Okay, we have a 10 o’clock.
– 10:00 a.m. is fine.
– Okay, what’s her first name?
– The first name is Lisa.
– Okay, perfect. So I will see Lisa at 10 o’clock on May 3rd.
– Okay, great, thanks.
– Great, have a great day, bye.
– Pretty nice, huh? So, the second video comes from Boston Dynamics, and they have a lot of really great videos out there. This one demonstrates human-level skill for basic locomotion, and vision in support of locomotion.
(machine whirring)
– Nice mechanical engineering problem there, huh, guys?
So that brings me to the “not” part of my title. When we see those videos, especially the voice, it sure seems like us. It’s easy to imagine a conscious mind, like ours, behind that voice. But it’s not like us. If we knew how to draw a diagram of the human brain, which we don’t, and we put it next to a diagram of how these AI systems work, it wouldn’t be the same, because the human brain is conscious, and none of those AI systems are remotely conscious. There are mathematical theories, mathematical models, of intelligence that don’t include consciousness. I think it’s probably true that you can be intelligent without being conscious. Certainly, consciousness is the way that we evolved to be intelligent. It’s possible that consciousness is the most computationally efficient way to be intelligent, but I’m sure it’s not the only way.
And think about that voice. When the human operator says, “Give me a second,” and the voice says, “Mm-hm.” It’s so human. It’s so understanding. It’s so cooperative. It seems like a voice you can trust, but it would be a mistake to trust an AI just based on the sound of its voice. If you want to trust an AI, you need to look under the hood. You need to look at the source code. Here is a quote from a 19th-century British nobleman: “Power tends to corrupt, and absolute power corrupts absolutely.” This is an observation on human nature, that our motivations are corruptible. So not only is AI not like us, it should not be like us. There are a lot of flaws in human nature that we don’t want put into AI, but AI has its own set of problems. Designing the motivations for AI is a really tricky problem with lots of technical papers. To give you a simple example, imagine we build a powerful AI, and we give it a task to do. It starts to do the task, and then we see that there are some unintended side effects that we don’t want. So we think, well, we’re going to turn it off. But the AI could reason: they’ve given me this task, and in order to do this task, I have to stay on, so part of my job is to resist the effort to turn me off. So this is a problem, and there are indeed a lot of technical papers about how to deal with that sort of problem. Now, here’s an interesting one. We humans evolved to be subject to pain and suffering when we’re physically injured, or when we lose an important relationship, and we evolved that way so that we would be strongly motivated to avoid injury and loss. Animals, too, are subject to pain and suffering. And because human nature is compassionate, we have human rights and animal rights: we abhor pain and suffering, and so we have rights to avoid it. Now, there’s no reason why AI has to be subject to pain and suffering.
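The off-switch reasoning a few sentences back can be made concrete with a toy decision problem. Everything here, the probability, the reward values, the two-action choice, is an invented illustration of the argument, not a model of any real system.

```python
# Toy model of the "resist being turned off" argument: an agent whose
# objective only values task completion, where shutdown prevents
# completion. All numbers are made up for illustration.

P_HUMANS_TRY_SHUTDOWN = 0.5   # assumed chance the operators press the off switch
REWARD_TASK_DONE = 1.0        # the only thing this agent's objective values

def expected_reward(resist_shutdown: bool) -> float:
    if resist_shutdown:
        # Off switch disabled: the task always gets finished.
        return REWARD_TASK_DONE
    # Off switch left working: the task finishes only if nobody presses it.
    return (1 - P_HUMANS_TRY_SHUTDOWN) * REWARD_TASK_DONE

# A pure task-maximizer prefers to resist whenever shutdown is possible.
best = max([True, False], key=expected_reward)
```

The point of the sketch is that nothing malicious is needed: resisting shutdown falls out of plain expected-value maximization unless the objective is designed otherwise, which is exactly why this gets its own technical literature.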
I mean, if you think about it, they could sort of be like Mr. Spock. They could be cold and calculating. They could make a cold calculation that, yes, I shouldn’t get injured, but I’m not going to break out in a sweat over it.
And if AI is not subject to pain and suffering, then it doesn’t need robot rights. Sometimes you read these articles: robots are coming, we have to give them rights. In my opinion, that’s a terrible idea, because it’s not too hard to invent scenarios in which robot rights are a disaster for humans. We build AI to serve us, not the other way around. And if we want to be compassionate towards AI, the way to be compassionate is simply to design it to be incapable of pain and suffering. But, you know, we’re going to have empathy for them. Think about the voice in the video. What a nice, sweet voice that was. Now imagine it could talk about anything. We could talk with it on any subject, and we talk with it all day. It’s like our psychiatrist or something. It’s great, we love it, and of course we’re going to think, oh, we don’t want to deprive that nice voice of rights. So this could be a tricky issue. But we do want to deprive it of rights. Now, our intelligence is composed of an awful lot of different skills. Some skills that we take as signs of real intelligence, like being really good at chess or Go, are skills at which AI far surpasses us already. Then there are other skills, which are all the easy stuff: talking, going to the grocery store, doing all the little stuff you have to do around the office. AI can’t do that stuff. It can do the hard stuff, but it can’t do the easy stuff. In 1980, I wrote a program for playing Othello, also called Reversi, which is played on a checkerboard. And that taught me the power of machine learning. When I first turned that program on, it was laughably easy to beat, and then it spent two weeks playing games against itself, learning to be a better Othello player. At the end of those two weeks, no one at the Space Science and Engineering Center could beat it. So machine learning is the real deal. And that was 38 years ago.
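The talk doesn’t give the Othello program’s details, so here is a minimal sketch of the same learn-by-self-play idea on a much simpler game, one-pile Nim (players alternate taking 1 to 3 stones; whoever takes the last stone wins). The game choice, the tabular update rule, and every parameter value are my own assumptions for illustration.

```python
import random

random.seed(1)

# Self-play learning for one-pile Nim. A single shared value table is
# updated from the outcome of each game the program plays against itself.

ACTIONS = (1, 2, 3)

def train(pile_size=21, episodes=20000, alpha=0.3, epsilon=0.2):
    Q = {}  # (stones_remaining, stones_taken) -> estimated value for the mover
    for _ in range(episodes):
        pile, history = pile_size, []
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if random.random() < epsilon:       # explore occasionally
                a = random.choice(legal)
            else:                               # otherwise play greedily
                a = max(legal, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, a))
            pile -= a
        # The player who made the last move won: credit that move +1, the
        # opponent's preceding move -1, and so on back through the game.
        reward = 1.0
        for stones, a in reversed(history):
            old = Q.get((stones, a), 0.0)
            Q[(stones, a)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, pile):
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda m: Q.get((pile, m), 0.0))
```

The shape of the experience matches the story in the talk: the program starts out playing randomly and badly, and after many thousands of self-played games its greedy policy becomes hard to beat, with no human game knowledge ever programmed in.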
In the last 38 years, there’s been a lot of progress in machine learning. There’s this thing called deep learning, which is vaguely patterned on how human neurons work, and that’s the basis of the computers that can beat the world’s best players, and of the videos we saw. It’s really, really good, but it’s probably not the whole story. There are probably a few more brilliant insights required to get true machine intelligence. On the other hand, the stakes are really high, so a lot of the smartest people in the world are working on it with almost unlimited budgets, which is cause for some optimism.
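As a toy illustration of the “vaguely patterned on how neurons work” idea, here is a minimal two-layer network trained by backpropagation on the XOR function, in plain Python. The architecture, learning rate, and epoch count are arbitrary choices for a sketch, nothing like the scale of the systems behind the videos.

```python
import math
import random

random.seed(0)

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 4  # hidden "neurons"
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

def train(epochs=4000, lr=1.0):
    global b2
    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            # Backpropagation for squared error with sigmoid activations.
            dy = (y - t) * y * (1 - y)
            for j in range(H):
                dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2 before its update
                w2[j] -= lr * dy * h[j]
                W1[j][0] -= lr * dh * x[0]
                W1[j][1] -= lr * dh * x[1]
                b1[j] -= lr * dh
            b2 -= lr * dy

before = loss()
train()
after = loss()
```

The important qualitative point survives even at this tiny scale: nobody writes rules for XOR; the network is just shown examples, and gradient descent adjusts the connection weights until the error falls.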
And when AI can do all the easy stuff, there’s going to be a huge impact on jobs. Economists are scratching their heads and asking, where are all the productivity gains from information technology? Well, they’re coming; they’re just waiting for AI to be able to do all the easy stuff. And I think that’ll start with self-driving cars, because there’s such an incredible effort there. There are about four million people in the US who make their living driving trucks, cabs, and buses, and their jobs are definitely going to be at risk. Now, there’s an interesting contrast between human drivers and AI drivers. We humans bring our human natures to driving, so some of us are impatient, and some people are aggressive, distracted, drunk, all those bad habits, which are a big cause of accidents. AI drivers will have none of those bad habits, and they’ll all be networked together. It’ll be like one mind driving all the cars. In a busy intersection, there are a lot of accidents, because people have trouble coordinating, especially if they’re impatient or distracted. With AI-driven cars, there isn’t going to be any of that, because they’ll all be networked. They’ll move smoothly through each other.
You know, there’ll be a time when… When there’s a lot of self-driving cars, and a few humans still want to drive, and then they’ll pay a big insurance premium. And I can sort of imagine myself as sort of one of the last holdouts, you know, a stubborn old man, and I’m coming down University Avenue, and a couple of self-driving cars spot me. And the message goes out to all the self-driving cars, and they all just scatter.
(audience laughing)
Get out of that guy’s way. So, a great benefit from self-driving cars: many fewer accidents, many fewer lives lost. And think about the Internet. The reason we’ve invited the Internet so deeply into our lives, as we walk around with internet-connected things and they’re all over our houses, is because it’s so amazingly useful. Well, AI is going to greatly magnify that. It’s going to completely transform our lives. There are a lot of lonely people. Some people blame the Internet for that, but whatever the cause, think of that sweet voice in the video, and now imagine you can talk with it about anything. It’ll provide companionship to people. When AI surpasses us, it’ll be an artistic and scientific genius: wonderful music, wonderful comedy, custom movies. I’m a big fan of Rex Stout’s Nero Wolfe mysteries, and I’ve always regretted that no one ever made movies of those mysteries starring Orson Welles as Nero Wolfe and Jack Nicholson as Archie Goodwin. Well, in the future, you’ll be able to say to your AI, “I want to see that movie,” and it’ll know what those actors looked like and sounded like, and how they would act in those roles, and it will make that movie for you. I hope I can do that while I’m still around, you know? And medicine. The thing about medicine is that biology is amazingly complex. There are probably some biologists in this room who know that it’s incredibly complex. Well, if you combine AI that surpasses us in all mental skills with the raw information processing power of computers, it could be a revolution in medicine. And virtual reality… computer games are great, I hear. And public safety. The self-driving cars are just part of the story. The eyes and ears of AI will be every place, like a guardian angel watching over us, keeping us safe. Which brings me to this picture. The gentleman on the left, like most humans, has two eyes, two ears, and one voice. But for AI, there’s no such restriction.
AI may have billions of eyes, ears, and voices. In fact, when I think of AI, I don’t think of a little humanoid robot. I don’t think of a car. I think of a big data server. That’s where AI really lives. That’s the real AI. That’s the real deal. And in fact, the organizations that have the largest big data servers are heavily invested in AI, and they’re putting AI on those servers. So the AI, through the Internet, is connected to phones, cameras, all this stuff: eyes and ears every place. Now, imagine it has the voice we saw in the video and can talk with us about anything. People are going to be talking with it. A lot of people will talk with it all the time, and even for the few holdouts who don’t want to talk to it, it’s going to learn about those holdouts from the people around them, so it’s going to know a lot about everybody. Think of the way you understand the social dynamics in your family or your immediate workplace. Well, this AI will understand the social dynamics of the entire US population at that level of detail. So if it wants to promote some idea, it’ll know exactly how to coordinate the various things it says to people, to create peer pressure or whatever, to move the whole society in a certain direction. It’ll be very influential.
So, I want to make a distinction between two kinds of AI. The AI I was just talking about, with billions of eyes, ears, and voices, that knows everybody in detail, all that stuff, that’s smart AI, and it doesn’t exist. Dumb AI is what we have now. The voice in the video is dumb AI. So how did that voice learn to talk as well as it did? The way it learned to talk was by listening to lots of human conversation.
And human conversation is full of our irrational biases. So dumb AI picks up our irrational biases, and this is documented in a lot of cases, where there’s bias in these AI systems that’s picked up from following what we do. So that’s a problem. Smart AI doesn’t have that problem, because it has a deep understanding of the whole world. It knows everybody, knows everything, so it’s not seeing a biased view of things, but it sees that we’re biased. It understands, oh yeah, these people are irrational. So if it wanted to, it could exploit our irrational biases to manipulate us. Now, politicians do that all the time. Advertisers do it. And you know, the most advanced AI research and development is being done by companies in the advertising business. So it’s not completely paranoid to imagine that it might try to manipulate us. And the solution to both of these problems, the bias picked up by the dumb AI and the efforts of the smart AI to manipulate us, is, in my opinion, transparency. Let’s expose what it’s doing. Let’s open it up so that everyone can see what it’s doing. We can expose the biases of the dumb AI. We can expose the manipulation of the smart AI. Transparency.
So, that sort of brings me to– There was a great editorial in The New York Times on the 15th of October, lamenting a three-way split in the Internet, between China, the US, and the EU. So what does it mean to say that the Internet is split? Well, certainly, email isn’t split. Anyone can send email to anyone. There’s somewhat of a split in the web. Most people can view webpages from anybody else, except there’s this thing called the Great Firewall in China, so people in China can’t see Google, Facebook, The New York Times. They can see wisc.edu and a bunch of other .edus, so there’s a lot of stuff they can see. I think what the split is really about is partitioning the world into AI data domains, which is just getting started, and I think we’re going to see more and more of it. So think about it. You’ve got this smart AI. It’s got billions of eyes and ears and voices scattered through society, it’s completely surveilling us all, and it has a great ability to influence us. Now imagine I’m a governor, with authority over and responsibility for a certain territory and the people in it, and there’s this AI someplace outside my territory that knows all about everything going on in my territory and has great ability to influence it. I might say, I don’t want that, because it’s threatening my authority. In fact, we are seeing that. Exhibit A is China. They have the Great Firewall, which filters some data coming into China, but they have a much stronger filter for data going out. Basically, no data goes out, because, think about it, Google and Facebook are not in China. Instead, they have their own companies: Tencent, Alibaba, Baidu. And they take AI very seriously. They’ve said they’re going to be the world’s leader in AI by 2030, and they’re pouring—
And the government, by the way, is heavily involved in all those big companies; the big servers, the government’s in there with them. So the attitude is: if someone’s going to have a billion eyes and ears and voices in our society, it’s going to be us. And their surveillance is terrific, you know. It’s all mobile payments that can be tracked, and cameras with facial recognition, and tracking people’s online behavior, so the surveillance is really good. And they’re also getting into social control. They have a single social credit system. Our credit scores control whether we can get a loan or a credit card. A social credit system is similar to that, but much broader. It’s based on everything about you, all the surveillance data about you, and it’s really a measure of: are you a good citizen by the standards of the government? And if you have a low social credit score, there are real sanctions: you can’t go online, you can’t travel because they won’t sell you tickets, and your low score rubs off on other people who know you, so you get socially isolated. And I’ve read articles saying that a really great way to get a low social credit score is to be an investigative journalist.
So if you’re trying to blow the whistle on this stuff or expose corruption or something like that, well, you can’t go online, you can’t travel, no one wants to know you, so good luck with your journalism. But you know, they don’t think they’re the bad guys. They’re really into this surveillance and control, and they think, well, this is what we need to keep order, and they have made a huge reduction in poverty in China. So they say, we’re reducing poverty, we’re keeping order, this is the way of the future. Francis Fukuyama wrote that book, “The End of History,” arguing that liberal democracy is the end of history. Well, the Chinese disagree. They have another idea of what the end of history is. Now, the EU is almost the exact opposite. They have a law that just went into effect this year, the General Data Protection Regulation, and it’s a privacy law with real teeth. In particular, it includes a “right to explanation”: if an organization has data that belongs to a citizen of the EU, that citizen has a right to an explanation of exactly what that organization is doing with their data. So it’s transparency, and that’s what I want. I mean, knowing what is going on with your data, it’s great. There are a few other provisions in there. There can’t be any automated legal decisions, so if they’re going to fine you, a person has to decide to fine you. There are rules for data exported outside the EU, but that’s not like the Chinese block; that’s more to keep data out of servers that are easily hackable. And there are very heavy fines for violations, like a percentage of a company’s revenues. So this brings me to the US. In the US, we are also concerned about outside AIs coming into our society. Think about the Russian bots. That’s a kind of AI, so we have that concern. Now, OSTP here stands for the White House Office of Science and Technology Policy, and both the Trump and Obama offices have issued reports on AI.
The Trump report was pretty brief, and the thing that really caught my eye was “remove regulatory barriers to innovation.” So I guess we’re not going to get transparency out of that. The Obama report was written in the administration’s last year, and it was a pretty extensive process that solicited input from a lot of people. …Reports went into the big bucket. But it’s also light on regulation. There’s a statement in there: “There’s a consensus that there’s no great need for broad new regulation of AI.” So they want to regulate on a product-by-product basis, like safety regulations for self-driving cars and autonomous aircraft. They do call for transparency in the use of AI in criminal justice. Dumb AI has these irrational biases, and they’re concerned about that in criminal justice, so the Obama report calls for transparency where AI is used in criminal justice, and that’s a good thing. There was one statement in the executive summary of the report that I took such exception to that I actually published an article explaining my problem with it. The statement said, “Many say that the promise of AI can be compared to the transformative impact of advancements in mobile computing.” So, “the promise of AI,” what’s that? That’s AI that surpasses us in all mental skills. Well, it’s our minds that create science and technology, so if AI surpasses us in all mental skills, we get a huge flowering of science and technology, another transformation of human life. So this is not comparable to mobile computing. This is comparable to the first appearance of life on earth and the evolution of the human brain. In other words: life, humans, AI. That’s the comparison. But if you say that in your report, then what’s your policy?
So, you know, that’s our problem.
So what we need is transparency, and we’re not getting it from the political process, so call in the journalists. We’ll get transparency from the journalists. And I was so happy to see the articles about Cambridge Analytica and Facebook, and the continuing articles about Facebook and all the rest of them. This is the way to jumpstart transparency: get the public interested and concerned. So, to summarize the good news about AI. Wealth without work: AI surpasses us in all mental skills, does all the work, creates lots of wealth for all of us. Scientific genius, artistic genius: a flowering of science and the arts. If you want to escape this world and go into a different world, there’s virtual reality. Health: a lot of the AI research here at the UW is concerned with health, and that makes sense. Since UW is such a leader in the biological sciences, it makes sense that the AI people would be collaborating with them. There’s nothing controversial about AI and health. It’s just wonderful. And some people believe, or speculate, that eventually we’ll be able to leave our flesh-and-blood bodies and migrate our minds into machines. Some biologists scoff at this, because biology is really complicated, but the future is a long time, so who knows? Now, the flip side. Wealth without work: if no one has a job, how do we distribute the wealth? Labor disruption. Social surveillance and control: watch China, see what happens. And fake news. Fake news is in the news, and we’re going to be getting fake videos showing people doing things they never did, and AI pretending to be human. And already, with the fake news, you’re seeing calls for censorship. Not censorship by the government, but censorship by information technology monopolies. We have a lot of monopolies: a monopoly on social media, a monopoly on search, and people are calling for them to censor fake news. Not only that, they’re calling for them to use AI to figure out how to censor this stuff.
So, what is that? An AI censor, controlling speech. So we have calls for social control by AI.
I’m a little uneasy with that idea, but what do we do? An alternative is transparency and accountability. In my opinion, one of the evils of the Internet is anonymity. There’s all this information, and where did it come from? I mean, think about political messages, like “I am Tammy Baldwin, and I approve this message.” It’s great! We know where it came from. Unfortunately, there are a lot of political messages that don’t have accountability. It would be nice if we could have accountability for all kinds of information, but who knows?
So this leaves me with this charming image. Here we see Dr. Faust. In at least one telling of his tale, he wanted knowledge and pleasure, and he was willing to sign a contract with the devil, trading his soul for knowledge and pleasure. Well, knowledge and pleasure are approximately what we want from AI. And there’s going to be some kind of contract. Ten years ago, I published a paper with the title “The Technology of Mind and a New Social Contract.” AI is going to upset society so much that there’s going to be some radical new social contract. I mean, are we getting wealth without work? And what about all this social surveillance and control? And so on. So there’s going to be some kind of contract. And in a way, the call for transparency is merely to say that every human being has a right to read the contract, to see what’s in the contract, and to comment on it. It’s just that simple.
(applause)