Computer Simulation and Engineers
10/08/13 | 1h 3m 7s | Rating: TV-G
Dan Negrut, Associate Professor, Department of Mechanical Engineering, UW-Madison, explores how mechanical engineers use computer simulation to understand the behavior of complex systems.
>> Welcome, everyone, to Wednesday Nite at the Lab. I'm Tom Zinnen. I work here at the UW Madison Biotechnology Center. I also work for UW Extension Cooperative Extension, and on behalf of those folks and our other co-organizers, Wisconsin Public Television, the Wisconsin Alumni Association, and the UW Madison Science Alliance, thanks again for coming to Wednesday Nite at the Lab. We do this every Wednesday night, 50 times a year, and it's your chance to participate and share in the discovery here at your public land-grant research university. Tonight, it's my pleasure to introduce to you associate professor Dan Negrut of the Department of Mechanical Engineering. He received his PhD in 1998 from the University of Iowa working under the supervision of Professor Edward J. Haug. At the end of 2005, Dan joined the mechanical engineering faculty here at UW Madison. His interests are in computational science, and he leads the Simulation-Based Engineering Lab here at the UW. He received a National Science Foundation CAREER Award in 2009, and having worked at NSF for a couple of years, I can tell you that is the highest award you can get from NSF as an early- and mid-career person. His research lab currently operates one of the fastest supercomputers here at UW Madison, and that computer was assembled with the support of the United States Army. He received the 2012 UW Madison College of Engineering Harvey Spangler Award for Technology-Enhanced Teaching, and he is the co-founder and current director of the Wisconsin Applied Computing Center. It's incredibly poignant and appropriate that this morning the Nobel Prize in chemistry went to three gentlemen who were pioneers in computer simulation and chemistry. That's working at the level of the molecule. Today we get to hear Dan talk about computer simulation at the level of stuff that you can handle and drive. Wonderful confluence, splendid serendipity. I'm looking forward to Dan's talk. 
Please join me in welcoming Dan Negrut to Wednesday Nite at the Lab.
APPLAUSE
>> Good evening, everybody. First of all, I want to start by thanking Tom for the opportunity to come here and talk in front of you, and I want to also thank you for taking the time to come here and listen to my talk. My name is Dan Negrut. I'm an associate professor in the mechanical engineering department, and today I'm going to share with you a couple of thoughts on fast computers and how they are used in mechanical engineering to basically understand the behavior of complex systems without having to first build them. The talk is structured as follows. We'll have two parts. The first part, which is shorter, is going to cover this topic of supercomputers, and this will be useful because it will help us to put things in perspective and understand the power of computers today, and to justify their use on a wide scale in a variety of applications and fields. As Tom pointed out earlier, probably the best example, announced today, was the Nobel Prize in chemistry. It was awarded to people who used computer simulation to understand what happens at the microscale. I'm not operating there, but the principles are the same. You use computer simulation to, basically, understand all the minor details happening in various physics-based applications. And this is, basically, enabled by a generation of supercomputers that are very affordable at this point and at the same time very fast. In the second part of the talk, I'm going to show you how we use these supercomputers in mechanical engineering. I want to start by acknowledging the contribution of a bunch of people. And I want to do this at the beginning of the talk because it's very important, and I don't want to run out of time and just skip over this. Without these people, this work would not be possible. Most of them are my students, and I'm very grateful for their effort. They are great people, and I have a lot of fun waking up in the morning and going to work. 
Also, I want to acknowledge the financial support of several organizations. First of all the National Science Foundation and then the Army. They support most of the research going on in the lab. There are some private entities such as NVIDIA Corporation, Caterpillar, a company from Germany, a company from South Korea, MSC Software, and Advanced Micro Devices. With that out of the way, let's talk about computers and fast computers. Computers are good at doing computations. What do I mean by computations? A computation, for instance, is adding two numbers, multiplying two numbers, dividing two numbers, and such. The question is, how fast are they? People measure the speed of a computer in flops: floating-point operations per second. A floating-point operation is where, for instance, you take 1.3 plus 2.2 and you compute the result. That's an operation on floating-point numbers, numbers with a decimal point in them, and it produces a result. So, the question is, if you want to be able to compute one billion operations in one second, how much money would that be? What would be the hardware that you'd have to buy to be able to do something like that? And to put things in perspective, when I prepared the beginning of the talk, I should tell you, I finished about 20 minutes prior to getting here, but I started it on October 5th. I went and looked at the US outstanding public debt, and it was somewhere at $16 trillion. And you can see it there, the number. Well, it turns out that if you wanted to have the ability to perform one billion operations in a second in 1961, it would have required something like 17 million IBM 1620 computers hooked up together. And at the price of $64,000 each, when you adjust for inflation, it would be more than half of the debt. So, that's the amount of money that you had to spend back then. Too bad we cannot sell them back and pay off half of the debt, but that's a different story for a different day. 
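The arithmetic behind this comparison can be sketched in a few lines. The machine count and the 1961 unit price are the figures quoted in the talk; the 7.8x inflation multiplier from 1961 to 2013 dollars is an assumption on my part:

```python
# Rough arithmetic behind the cost comparison in the talk.
# The inflation multiplier is an assumption, not a figure from the talk.

machines = 17_000_000        # IBM 1620s needed for one billion flops
price_1961 = 64_000          # dollars each, in 1961
inflation = 7.8              # assumed 1961 -> 2013 price-level multiplier
debt = 16.7e12               # US outstanding public debt, ~October 2013

cost_today = machines * price_1961 * inflation
print(f"${cost_today / 1e12:.1f} trillion")           # about $8.5 trillion
print(f"{cost_today / debt:.0%} of the public debt")  # roughly half
```

With these assumptions the total comes out around $8.5 trillion, which is indeed "more than half" of a roughly $16.7 trillion debt.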
In 2000, about $1,000 is what it took to be able to perform one billion operations in one second. Today, it takes less than 20 cents' worth of computer hardware to perform that many operations. So this is how things have changed over basically 50 years. Now, the origin of this revolution is an observation that Gordon Moore, who is actually one of the co-founders of Intel, made back in 1965. He noticed that the number of transistors you can pack per unit area doubles roughly every 18 to 24 months. And that trend continued. So in the beginning, there were some transistors but not that many of them. They kept doubling. And, today, the feature length on a chip has shrunk to the scale where you have billions of transistors. Specifically, this year, if you look at Intel Haswell, which is the latest architecture released by Intel, the feature length is 22 nanometers. Two years from now it's going to be 14 nanometers, then 10 nanometers, and so on. In 2021, basically, they are going to run out of ideas, because beyond that they cannot physically make the transistors any smaller. It's too small, and the electrons leak and such. So it basically stops there. I don't know if you read the news on the technical side, but about 10 days ago there was an article in the New York Times and Washington Post about some people at Stanford coming up with a chip that uses carbon nanotubes, and the hope is that if they improve it, that will continue this trend of miniaturization. But right now, this is where we are. And this is a road map that takes us to 2021, when we're going to be able to pack transistors at a feature length of five nanometers. What does that mean? It means one of two things. 
You can either increase the computational power of the chip, since you have more transistors. I'm sure all of you have heard about, oh, I'm going to get a new computer that has two cores or four cores or eight cores and such. That's where it comes from. Given the same die area on a chip, you can put in more transistors, organize them into more cores, and then you have the opportunity to do more computations. Or, if you don't want to do that, think of it this way: to put together one core I need, let's say, something like half a billion transistors. Then the amount of area that you need to put together that core is going to shrink more and more and more. So this is where it's at. The first path forward is to increase the number of transistors, and then here's what happens. Today you can buy a three-billion-transistor chip that has 12 cores on it. There are some chips that Intel sells that have 60 cores on them. Now, if you project ahead and you have 12 cores this year, you are going to have 24 cores in 2015, you're going to have 50 cores in 2017, and so on. In 2021, you're going to have 200 cores basically on the same chip. This is how the computational power is going to grow. At the other end of the spectrum, if you instead want to shrink the area on which you pack one core, right now the chip is about 20 millimeters on a side, 20 by 20, and that fits something like 12 cores. In 2015, it will take 14 millimeters, and so on. In 2021, on a chip that is five millimeters by five millimeters, you are going to fit the most powerful workstation that you can buy today, basically. What is that? That's going to probably fit inside your phone or maybe a watch, and you're going to have the fastest workstation that we have today inside your watch at that point. 
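The core-count projections above amount to one doubling every two years starting from a 12-core chip in 2013; a minimal sketch of that exponential:

```python
# Sketch of the talk's core-count projection: one doubling every two
# years, starting from a 12-core chip in 2013.

def project_cores(cores_now, year_now, year_then, doubling_years=2):
    """Exponential projection: one doubling per `doubling_years`."""
    doublings = (year_then - year_now) / doubling_years
    return cores_now * 2 ** doublings

for year in (2013, 2015, 2017, 2019, 2021):
    print(year, round(project_cores(12, 2013, year)))
```

This reproduces the rough numbers quoted: 24 cores in 2015, about 50 in 2017, and on the order of 200 by 2021.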
Now, where do we stand in terms of hardware that we operate in the lab? I work in the building right across the street, the mechanical engineering building. On the fourth floor we have a server room, and that's where we have our supercomputer. Our supercomputer is just a bunch of nodes, and each node talks to the other nodes, and together they combine to produce a supercomputer that has more than 1,200 cores. Also, it has what are called GPU cards. GPU stands for graphics processing unit, and these are cards that in the past were used to play video games. This generation, like 16-year-olds and such, they play a lot of Grand Theft Auto and Doom and other things, and they need to play them fast. So they perfected these graphics cards, and these graphics cards have a lot of computational power that was, in the beginning, designed to deliver the speed that these gamers required. Well, it turns out that scientists can use these cards too. This thought occurred relatively late in the game, but as of 2008 there is a very nice software interface, so you can start talking with these graphics cards even if you're not a gamer. As a matter of fact, people who do chemistry, like quantum mechanics and such, use graphics cards a lot. We have 60 graphics cards in the lab, for instance, and the most powerful one has close to 2,900 scalar processors. It has seven billion transistors on it. And, basically, it allows you to do a lot of computation, as I'll show you in a second. The most powerful node in our supercomputer is called Lagrange. The entire cluster is called Euler. This node has the most recent Intel generation multi-core chip, two of them. It has three NVIDIA GPU cards. And it has one Intel Xeon Phi accelerator, and together they combine for something that comes close to what people call seven teraflops. 
And to put things in perspective, what a teraflop is, remember I told you in the beginning about what is the price to do one billion computations a second? This node, this machine, can in theory do seven teraflops. As I said here, that is seven million megaflops, or, if you want, seven thousand billion operations per second. This is how many operations it can do if it works at peak. What is the price for that? That's $20,000. To put things in perspective, on this slide I'm showing you a supercomputer, IBM Blue Gene/L. It was an entry-level machine back in 2007, and the price of this machine was $1.4 million. And its peak flop rate was 5.7 teraflops. Okay? This one node that we have today in our supercomputer is more than this, and, basically, it is $20,000 as opposed to $1.4 million. What that means is that, basically, it's a democratization of supercomputing. Everybody can afford to use this if they know how to do it. And I'll talk more about that in a bit. When you have a machine like this, a supercomputer, you need racks and racks and racks. You need a dedicated power source. You need a person to look after the machine. You need a special operating system, a special kind of software to operate this machine. Today, you have on your desk something that is more powerful than the $1.4 million machine of six years ago. This is how things change, and they change fast, and the most amazing thing is that they change exponentially. So it's already amazing, and we're going to double what's amazing and double it again and double it four more times, basically, up until 2021. So this is where it's at, and it's very exciting. Let me show you one of the most powerful machines in the world. It's called Mira. And I showed you this supercomputer from 2007. This is 2013. It's in the same building at Argonne National Lab. It is an IBM Blue Gene/Q, more recent. It's ranked number five in the world. 
And theoretically, it's about 1,000 times faster than this fast node that I told you I'm very proud of in my lab. This machine is more than 1,000 times faster than that. In theory, it can carry out eight million billion operations in one second. Where is this machine relative to other machines in the world? I have here some information out of a website that's called Top500. What this website does twice a year, in June and November, is compile the list of the 500 fastest supercomputers in the world. And they plot them on a line. I don't know if you can see all the way to the left. It says 1993. It goes all the way to the right, June 2013 data, and it extends through 2018. Where we are right now, the fastest machine is the line in the middle. Okay? The straight line up there is the sum of the speeds of the first 500 machines. The bottom line is the machine ranked number 500 in this list. Okay? If you look, this is a very nice line that shows how the speed increases, and there is a prediction that says in 2018 we're going to reach what is called an exaflop per second. And that's a lot of computations per second. If you want to see the countries where these computers operate, number one is Tianhe-2, in China, and it is, basically, the fastest machine. It relies on Intel chips. Number two is a supercomputer at Oak Ridge National Lab. Number three is a supercomputer at Lawrence Livermore National Lab. Number four is a supercomputer from Japan, by Fujitsu. And number five is the supercomputer that I showed you on the previous slide, on the south side of Chicago. Moving on, I don't know how well you can see this, but this shows what the architecture of these supercomputers looks like. Most of them, it says here, are clusters. Basically, a cluster is what we have in the lab: a bunch of nodes interconnected through a fast network connection. Okay? Then you have dedicated machines. They are listed there as MPP. 
For instance, IBM Blue Gene/L is a massively parallel machine, an MPP. Then, on the right as you look at the screen, there's a plot that shows the type of chips that these supercomputers use. And if you look at it, the vendor with the biggest footprint is Intel. At the bottom there, you have a little bit of AMD, and at the top there, you have a little bit of IBM. But Intel is the biggest player right now in chips that basically are helping these supercomputers reach these flop rates. Who is using these supercomputers? On this slide, I don't know how well you can see this, but at the bottom there it says academic use; in the middle, a big chunk, the largest chunk, is industry; and at the top it says research. So these are the three players: academia, industry, and research labs. And on the right there, I don't know how well you can see this, but there's a curve that shows an upward trend, and this plot basically shows the number of what people call accelerators. Let me explain. So you have a computer and it has a processor. The GPU, this card, comes next to the processor, and it helps it. It's kind of like Santa's helper. Like the little elf. So Santa Claus is the CPU, and you have some helpers. An accelerator like that is a GPU card, or it is an Intel Xeon Phi card, which, basically, helps the processor carry out simulations faster. Okay? It's an accelerator. And the trend shows that these accelerators become more and more relevant. Tom, do you have a question? >> Yeah. Dan, on that left slide, where does the government, especially the National Security Agency type folks... >> That's interesting. I don't know if you can see, but it's this line here. See this line? It says classified, and then there is something that says government, and apparently it kind of goes nowhere. But I don't know if it's true or not. That's what they say.
LAUGHTER
>> [INAUDIBLE]
>> What's that?
LAUGHTER
Anyway, so that's where. Yes? >> Is Intel the biggest maker of graphics processing chips also? >> No. No. For scientific computing purposes, the biggest player is NVIDIA. Their core business is graphics cards. There was another company called ATI; it got purchased by AMD. And now that company has a hybrid solution: it has one processor and a GPU next to it. Intel also does the same thing nowadays. So, basically, on one chip, on the same chip, you have the CPU and you have a GPU. NVIDIA has individual GPU cards that you plug into the motherboard. So they are separate entities. They are also working today on developing their own CPU, but their cup of tea is GPU computing. And they are the most significant presence in scientific computing in terms of accelerators on the GPU side. >> Are they owned by AMD? >> No. That's ATI. NVIDIA was a stand-alone company. ATI used to be their competitor; they got bought by AMD. Okay, so let's move on. Good. So this is the end; I'm on slide 18. Let me tell you something. I have 127 slides, and I'm halfway through the presentation, so I have to speed up a little bit. But anyway, this is the end of the segment where I tried to convince you that supercomputers today are already very fast, and basically every two years they get faster. So how do we use these supercomputers? When I say we, I don't necessarily mean me, but, as you saw today, the Nobel Prize in chemistry went to people who used supercomputers, or developed software that leverages these supercomputers, to perform simulation at the atomic level. I don't work there. Actually, my work goes back to these two gentlemen with wigs. One is Newton and one is Euler, and they lived 300 years ago. More than that. Okay? What they came up with are some equations that basically describe how things move. They are what are called equations of motion. 
If you solve something like this, these equations of motion, you can predict how a vehicle moves. Basically, you can understand how every single component of it moves. You can generate something like this. If you don't want to drive it on the ground, you can put it on a four-post shaker. You can either actually put it on a four-post shaker or you can simulate the vehicle being on one. Simulating is much better than actually paying the money and taking the time to do it. Why? Because you might say, oh, look, the suspension is not good; let's change it. So then you build another vehicle. Or maybe you are in the business of crashing vehicles into walls to understand if they protect the driver. You don't want to keep crashing cars, not because they are expensive, the car is like, what, $30,000 depending on what car you get, but long story short, it's not that. It's the instrumentation in that car. That car is unique. That car is not a mass product. It's manufactured by a small group of people. It takes a long time to build, and then you take it and smash it. And you say, oops, I forgot to hook up this sensor, and now I have to smash another one. So that's why simulation is good in my business. In chemistry, for instance, sometimes you can't even observe certain things. But you can simulate them, and you gain additional insights into how a physical phenomenon takes place. Okay, so let me quote Frank Zappa here, who said all the good music has already been written by people with wigs and stuff.
LAUGHTER
So if this was done 300 years ago, what am I doing here? Let me move on and show you a problem that is relevant and that we still cannot solve. If you have a robot like this, let's say a rover, and you want to simulate the motion of sand, sand is a lot of bodies. In one cubic meter of sand you have 1.5 billion particles. And the wheels spin, and it gets stuck. Can you get it out of there? How big of a grade can this go up? How should I change the suspension design? Should I make it lower? Should I change the center of gravity? And so on. These are things that, if you can simulate them, give you a better picture of what happens with that mechanical system. We cannot do that today. Okay? To put things in perspective, this is an experiment taking 32 balls and dropping them in a bucket, to understand what a commercial software vendor can help you with. If you are interested in how these balls bounce in the bucket for three seconds, simulating that with a commercial solution takes something like 600 seconds. Double the number of balls and the time grows much faster than twofold, and so on. What's important here is how the amount of time to simulate grows with the number of bodies. If you have one million bodies and you want to understand for three seconds how those bodies jump around in the bucket, with this software it would take 25,000 years, and I'm a young man, but still, that's beyond my time horizon here. Engineers don't have that much time. They need to launch a simulation at 5:00 PM, and tomorrow morning when they show up at work, it should be done so they can look at the data and understand what goes on. Although the equations of motion, the math behind this, were established 300 years ago, we still have difficulties solving some of these problems. So here are some of these open problems. You want to go from multi-body dynamics to many-body dynamics. 
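The timings quoted above can be turned into a rough extrapolation. The quadratic cost model here is an assumption on my part, chosen because it lands on the same order of magnitude as the 25,000-years figure from the talk:

```python
# Hypothetical extrapolation of the "32 balls -> 600 seconds" timing.
# The quadratic growth in body count is an assumption, not a statement
# from the talk; it is roughly consistent with the ~25,000-years figure.

def sim_time_seconds(n_bodies, n_ref=32, t_ref=600.0):
    """Model: wall-clock time grows quadratically with body count,
    anchored at 32 bodies -> 600 seconds."""
    return t_ref * (n_bodies / n_ref) ** 2

years = sim_time_seconds(1_000_000) / (3600 * 24 * 365)
print(f"about {years:,.0f} years")  # on the order of the figure quoted
```

Under this model a million-body, three-second simulation comes out in the tens of thousands of years, which is why a faster solver, not a faster clock, is the only way out.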
What I mean is, you have a vehicle with 50 moving parts, and you want to go to a system that has one billion moving parts. You have bodies that interact through friction and contact. You have bodies that are compliant. You have bodies that interact with a fluid. All of these are, even today, problems that are difficult to solve. We're interested in them, but we cannot solve them easily. So, why is it worth reconsidering these tough problems? The reason is that although they go back to those two gentlemen, now there are other gentlemen who came into this game, and they don't have wigs anymore. This is the former CEO of Intel. This is the current president and founder of NVIDIA. And because of these guys and their companies, today, as you saw 10 minutes ago, we have a lot of computational power that we can leverage. And it's worth revisiting some of these problems because now we are better equipped to attack them. What prevents us from using this hardware? The hardware is there. I told you we have in the lab something that is amazingly fast. The problem is that we don't have the software. The software is hard to put together. The problems are hard to formulate. You have millions of equations. You have to solve them. You have to interpret data. You have to use parallel computers and have them work together. So, hardware is only the first step. What you need is a long journey, and we are taking this journey and trying to put together the software to leverage this hardware. So, basically, we're developing a simulation infrastructure called Chrono. What we want to do is advance the state of the art in modeling, simulation, and visualization. And to that end we want to use emerging hardware and new algorithms to solve open problems in engineering. What do I mean by emerging hardware? I mean supercomputers. I mean GPUs and new classes of CPUs. The open engineering problems are basically what I had two slides ago. 
Granular dynamics, advanced modeling, terrain modeling, fluid-solid interaction, vehicle mobility. These are the problems that interest me. So, the rest of this talk is going to be a very quick journey through the components of this software infrastructure, just to give you an idea why this is difficult. The hardware is there. We need the software. And the software, as I said, is not easy to produce, and I hope that the rest of this talk will convince you that that's the case. So, what do I need in order to use a supercomputer? Think about it this way.
I look at the problem, I look at the physical thing, like a car driving on the road or in sand. Now, the first thing I need to do is model that, and what that means is formulating some equations, some math whose solution is going to tell me something about the motion of that car, or about the dynamics of that system. Now, let's say that I formulate those equations and I know how to do that, which is not always the case, but let's say that I have those equations, I've been able to formulate them. The next thing: I just have a bunch of equations, and I need to solve those equations. That's the second component. I need math there. Then, when I do that, in my case I have bodies that interact with each other. So there needs to be some type of support for computing these collisions and carrying out this proximity computation, to understand how bodies hit each other, how they move together, and such. The next component has to do with the fact that I have one big supercomputer and I have to basically farm out one big job, the solution of this problem, to a thousand players, let's say, or whatever number of cores you might have, or scalar processors in a GPU. And finally, if we do that successfully, then you have a ton of data that is just a bunch of numbers that mean nothing unless you do something with them. So you need to do some type of post-processing to visualize the system and how it evolves. So the rest of the talk is going to go quickly through these stages of the solution: the modeling stage, the numerical solution stage, the proximity computation, the parallel computation, and the visualization and post-processing. And I think we're going to go home probably at 5:00 AM.
LAUGHTER
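The proximity-computation stage just mentioned is commonly handled with a uniform spatial grid, so that only bodies in the same or adjacent cells are tested for contact instead of checking every pair. A minimal sketch of that idea (an illustration of the general technique, not the lab's actual implementation):

```python
# Broad-phase collision detection with a uniform spatial grid: only
# spheres hashed to the same or adjacent cells are distance-tested,
# avoiding the all-pairs O(n^2) check. Illustrative sketch only.
from collections import defaultdict
from itertools import combinations

def broad_phase(centers, radius, cell=None):
    """Return pairs (i, j) of equal-radius spheres that touch/overlap."""
    cell = cell or 4 * radius
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(centers):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    pairs = set()
    for (cx, cy, cz) in grid:
        # gather spheres in this cell and its 26 neighbors
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nearby += grid.get((cx + dx, cy + dy, cz + dz), [])
        for i, j in combinations(sorted(set(nearby)), 2):
            a, b = centers[i], centers[j]
            if sum((p - q) ** 2 for p, q in zip(a, b)) <= (2 * radius) ** 2:
                pairs.add((i, j))
    return pairs

print(broad_phase([(0, 0, 0), (1.5, 0, 0), (10, 0, 0)], radius=1.0))
# -> {(0, 1)}: only the first two spheres are close enough to touch
```

For millions of bodies this kind of grid-based culling is what makes the contact search tractable, and it parallelizes naturally across GPU threads.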
But I'll try to move fast here. Okay, so this is the software infrastructure. It has these five components at the foundation, and then it has an interface, and then you have applications that you run, that you simulate, and they draw on the functionality provided by these five fundamental components of the software infrastructure. Okay, advanced modeling, what do I mean by that? Modeling is the process through which you look at the problem and you pose a mathematical set of equations that, if you solve them, will provide insights into the problem that you care about. Every class of applications has its own set of equations. Let me give you an example here. Let's say that you have something like this. So this is one sphere that you drop, and these are some flexible bodies. To understand how they deform, you have to formulate some equations. The motion of that fiber as it deforms is governed by some equations. You have to have an automated way, on the computer, to formulate those equations. You don't want to write them by hand, and I'll tell you later why. And you have to formulate all sorts of terms. I'm not going to get into details. But then you have something like this, where you have an articulated structure and you have joints and constraints. And if you have something like that, then you have to augment your set of equations with other equations that are induced by the presence of those joints. Then you have bodies just like this. These look like hotdogs and such. They come in contact with each other. Okay? So you have flexible bodies, but now all of a sudden they start coming together and they hit each other. Well, you have to account for that. You have to have a way of expressing that interaction through some equations. Okay, so you add some more equations to that set of equations. Now here is another problem. This is a machine for 3D printing. The bodies are not deformable. As such, the set of equations is different. We have something like this. 
I don't know if you see this, but in the middle there, there is powder, and it is sometimes submicron-level powder. And you come with the roller and you're supposed to flatten it and have a nice uniform layer. Okay? So in the rightmost picture, you can see that roller, and this is a simulation. This huge thing is that small roller, but it looks so huge because the particles underneath it are powder. As you can see, it just moves over the layer, and this is a simulation of what happens when it moves over and tries to create a smooth layer of particles. And what happens after that is that you shine a laser on these particles and they melt. And this is how you do 3D printing. Okay? So you want to understand what geometry and dimensions the particles should have, how thick the layer should be, how fast the roller should move, and you can do this through simulation. And this is what one of my students in the lab is doing. Now, here is something else; again, these are like M&Ms. We really like M&Ms in the lab. Every day my students get their quota of M&Ms.
Around 2:00 or 3:00 PM, it wakes them up. They need that burst of energy. So one of them, for fun, took something like 50,000 M&Ms and ran a simulation on one GPU card to mix them. And why did we do that? No good reason. Probably because we could. But it's a fun simulation. Long story short, you have a different set of equations for this, because the bodies are not flexible; the bodies are rigid. You have a lot of contact here. You have a lot of friction going on here. So it's a different set of equations. These are the Newton-Euler equations that I showed you 15 slides ago. Now, because you have friction and contact, you have to augment those, and you have a different set of equations. Now, for a problem like the one you saw two slides ago, that 3D printing machine, there you have something like half a million bodies. For half a million bodies, the number of equations that you have to write down is really large. You have, basically, six equations times 500,000 bodies; that's three million. Count the position-level equations too, and that's another three million: six million. And then, basically, you have a set of equations for the contacts in the problem. If you have half a million bodies, probably you're going to have two million contacts, and each contact has three equations associated with it. So, basically, you end up with something that is close to 10 million to 15 million equations. You cannot take a pen and a piece of paper and write down that many equations. So you have to have a systematic way to look at the problem, to describe the problem, and then process that description and automatically formulate those equations. Okay? This is what modeling is. It becomes more interesting if you now have bodies that move in a fluid. 
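The tally above can be checked in a few lines. The split into six position-level plus six velocity-level equations per body is my reading of the "three million plus another three million" count in the talk:

```python
# Tallying the equation count described in the talk. The per-body split
# (position-level plus velocity-level) is my interpretation of the
# "three million plus another three million" arithmetic.

def count_equations(n_bodies, n_contacts):
    per_body = 6 + 6   # 6 velocity-level Newton-Euler + 6 position-level
    per_contact = 3    # one normal plus two friction equations
    return n_bodies * per_body + n_contacts * per_contact

total = count_equations(500_000, 2_000_000)
print(f"{total:,} equations")  # 12,000,000 -- within the 10-15 million quoted
```

With half a million bodies and an estimated two million contacts, this lands at 12 million equations, squarely in the "close to 10 million to 15 million" range mentioned above.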
Because if you have something like that, you have your old Newton-Euler equations of motion, but you also have to deal with the -- equations, because that's what governs the dynamics of the fluid. So you have a two-way coupling: the bodies influence the fluid, and the fluid, as it flows, influences the dynamics of the bodies. Okay, anyway, let's say that somehow you formulate those equations, and you have a lot of them, and you are about to solve them. So, how do you take care of that? That's a long story, but let me start with a quote. I have to tell you this. Originally I'm from Romania. You probably noticed that I have a really thick accent. I don't know; on TV maybe they have subtitles so you can understand what I'm saying here.
LAUGHTER
But growing up in Romania under the communist regime, I was bound to like math. The reason being that Plato here, 2,500 years ago, said, "Mathematics is like checkers in being suitable for the young, not too difficult, amusing, and without peril to the state."
LAUGHTER
So growing up in a communist country, you stayed out of trouble if you focused on that. So all of us in Romania, we have a good, strong mathematical background. That probably explains the fact that, I think, four or five professors in the math department are from Romania. I don't know if it's a coincidence or not. But anyway, now we're going to talk about math. That's the idea. So I told you about those flexible bodies. You have some equations, and you have to solve those equations. There is math behind it, and I'm not going to get into details for two reasons: A, I don't have time, and B, it's really boring. So I'm just going to tell you that it's challenging. Okay? For every type of physics you have a certain type of equations, and each type of equations calls for a specific approach to solving them. Okay? So this is for flexible bodies. Now you move to something like rigid bodies. Here, you have something like this, okay? It's an anchoring example, a cut-away example, trying to understand what force it takes to pull the anchor out. You have problems where you drop a ball and you're curious about how deep it penetrates. Or here you have an extrusion problem: you extrude material, and it is kind of like a collection of rigid bodies. It looks something like that. Well, this is your problem, and this is the set of equations that comes out of the modeling stage; now you have to solve those. These equations of motion are differential equations or differential algebraic equations. They are something in continuum form, and you have to -- them, and this is what this slide says. I'm not going to give you details, but, long story short, you can solve them, and basically at every single time instant you can compute their solution. You do it now, and then again, and then again, and you stitch it together and you create a movie. You can see how the system changes in time. That's what happens here.
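That step-and-stitch idea, compute the solution at one time instant, then the next, then assemble the frames into a movie, can be illustrated with a toy time stepper. This is a minimal sketch, not the lab's solver; the bouncing-ball model, the step size, and the restitution value are all assumptions made up for illustration.

```python
# Minimal time-stepping loop: advance the state step by step and collect
# "frames" that could be stitched into a movie. Toy bouncing ball under
# gravity; all parameters here are illustrative assumptions.

def simulate(z0=10.0, v0=0.0, g=9.81, dt=0.01, t_end=3.0, restitution=0.8):
    z, v, frames = z0, v0, []
    steps = round(t_end / dt)
    for _ in range(steps):
        v -= g * dt          # semi-implicit Euler: update velocity first,
        z += v * dt          # then position
        if z < 0.0:          # crude ground contact: reflect the velocity
            z = 0.0
            v = -restitution * v
        frames.append(z)     # one "frame" of the movie per time step
    return frames

frames = simulate()
print(len(frames))  # -> 300 frames for 3 seconds at dt = 0.01
```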
We're not going to get into details. For instance, when you have these frictional contact problems, the way we solve them, at every single instant in the simulation we have to solve an optimization problem. How big is that optimization problem? Here is a physical problem with one million bodies. It's like a box, and in the box you have this soccer ball, and this looks like a fluid, but it's not a fluid; it's just a bunch of bodies. You shake the box, and that ball floats on them. Here, the optimization problem that you have to solve has something close to six million variables in it. So you have to solve an optimization problem in six million variables: minimize a cost function. Okay, here is one example. This is the value of the function, and you try to drive the value of the function down to minimize it, okay? You have an algorithm that minimizes like this, or one that drops like this. Boom. You don't want to wait too long to minimize that. You want to minimize that function fast, because if you do, then your solution advances fast. So the art is in finding an approach to this optimization problem that is very fast. And this was the topic of a PhD thesis. The student graduated in May. He started looking at all sorts of methods: gradient projected MINRES, Jacobi, preconditioned spectral projected gradient with fall-back methods. It's a long story just reading the titles. This is another individual, from Slovakia; we looked at his algorithm too. And then, lo and behold, this was the winner. We were just trying and trying and trying. And this was a method that the student learned about while taking a nonlinear optimization course here at the University of Wisconsin-Madison. There was a good professor, he covered that method, the student tried it, and it worked, so he got the PhD. He kept trying and trying and trying.
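To give the flavor of such a solver, here is a textbook projected gradient iteration on a tiny convex quadratic with nonnegativity constraints, shrunk from millions of variables to two. This is a generic sketch of the projected-gradient family mentioned above, not the thesis method or its actual cone constraints.

```python
# Projected gradient descent on a tiny convex quadratic with x >= 0
# constraints -- the flavor of the constrained problems mentioned, shrunk
# from millions of variables to two. Textbook sketch, not lab code.

def solve_qp(A, b, steps=500, lr=0.1):
    """Minimize 0.5*x^T A x - b^T x subject to x >= 0 (A is 2x2, SPD)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        # gradient of the cost: A @ x - b
        g = [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
             A[1][0] * x[0] + A[1][1] * x[1] - b[1]]
        # take a gradient step, then project back onto the feasible set x >= 0
        x = [max(0.0, x[0] - lr * g[0]), max(0.0, x[1] - lr * g[1])]
    return x

A = [[2.0, 0.0], [0.0, 2.0]]
b = [1.0, -1.0]
print(solve_qp(A, b))  # second variable gets pinned at 0 by the constraint
```

The projection step is what distinguishes this from plain gradient descent: after each step, any component that would go negative is clamped back to the feasible set.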
And in the end, what happened is his method, I'm not going to give you details, but his method was five to 10 times faster than everything else prior to that point. And this is what you hope for. You don't want your simulation to take something like 10 days or five days. You want to finish it, basically, in one day or half a day. I told you, in industry this is the golden rule.
If I can launch at 5:00 PM and it's ready at 8:00 AM when I come back, that's what I like. So you always try to compress the solution time. Okay, so let's move on. The idea of this segment was that you have a set of equations and you have to solve them somehow. There are various ways in which you can solve a set of equations, and these ways depend on what problem you're trying to solve. For flexible body dynamics there is one class of solution methods, for rigid body dynamics another one, and when you have fluid, that's yet another one. You have to have software that handles any of these scenarios and combinations of them. And this is what we're working on. Now, next step: when you have these problems that are large, let's look at this. This is something like 300,000 bodies. This is a benchmark problem that we do. It's just like a tower, and it collapses. What's challenging about this is that you have bodies of various geometries here, and you have about two to 2.4 million contacts. At each time instance when you compute a solution, you have to understand who is colliding with whom, because you need to compute the interaction forces. And to understand how this moves for one second, you basically have to carry out this collision detection job 1,000 to 10,000 times. Okay? So it's a lot of work, so it had better be done fast. You have other systems, like a tracked vehicle. Here you have weird geometries, and understanding who is colliding with whom is more challenging. Here you have a simulation. Where we're going with this, we're interested in polymers, and these are fibers and we're mixing them and such. But long story short, you have all sorts of geometries, and you have to understand: how do they collide with each other? Which bodies do they collide with? This thing is moving, so you have to do this again, for each second of the dynamics, somewhere around 1,000 to 10,000 times. You'd think that it's straightforward, but it's not. Even if you have ellipsoids. Ellipsoids are like spheres.
What can be simpler than understanding whether two spheres collide with each other? You just look at the distance between the centers and you look at the radii, and if the distance between the centers is less than the sum of the radii, then they collide. Well, take an ellipsoid, which is the brother, or not the brother, the cousin of a sphere, and all of a sudden the problem becomes very nasty. You have to solve a small optimization problem in two variables. And if you have a million collisions, you have to solve a million optimization problems. This is where supercomputing comes into play: because it has a lot of cores, you can do a lot of things at the same time and be in a position to attack problems like this. I'm going to skip this part. It's boring, and it doesn't bring anything into the picture; it just details how the collision detection is done. Okay? So bear with me here. Okay, here are some results. This is a workstation, just like a computer but a little bit more powerful. It still fits under your desk. It has four GPU cards. One, two, three, four. And the question is, what can you do with a machine like this if you want to do collision detection? What we did at some point in the lab, we took one CPU that had four cores, and each core talked to one of these four GPU cards, and we ran a simulation to understand how many contacts we can figure out, process, in a second, for instance. Then we started to increase the number of contacts, and in the end we noticed that, for instance, to figure out six billion contacts, to sort things out for six billion contacts, it took us about three minutes. Six billion contacts is the number of contacts that you have in one cubic meter of sand. So if you have this much sand by that much by that much in a bucket, this is how many contacts you have. So you can basically sort out all those contacts, the point of contact and what the tangent plane and the normal are, in three minutes.
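The sphere test just described is short enough to write out. This is a minimal sketch; the function name is made up.

```python
# Sphere-sphere overlap test as described in the talk: two spheres collide
# when the distance between their centers is less than the sum of their radii.
import math

def spheres_collide(c1, r1, c2, r2):
    d = math.dist(c1, c2)   # Euclidean distance between the centers
    return d < r1 + r2      # overlap iff distance < sum of radii

print(spheres_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True: 1.5 < 2.0
print(spheres_collide((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # False: 3.0 > 2.0
```

For ellipsoids no such closed-form test exists, which is why each candidate pair turns into the small optimization problem mentioned above.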
This is the contacts part. We still don't have the math to solve this big a problem yet. Our methods cannot scale to this level; they just don't find the solution. They are not powerful enough at this point. But on the collision side, you can do something like that. Also, compare it to something that runs sequentially: if you have one CPU running on one core, versus four GPUs doing parallel computing, the parallel version runs about 180 times faster. That's what we noticed. So that's why parallel computing is good. Okay, moving on to basically farming out jobs for parallel computing. I'll start with this picture. I was looking for a picture that shows a team of people working together on a task where you have dynamics. So you have objects that I throw at you and you throw back, and some things come at me, and we work like a family and we're all friends and have a lot of fun. Okay? It turns out that, of all places, the world record was set in Madison, and that was amazing. It was 64 people. And it's very interesting, because 64 is a power of two, and it's what you see in computers. You have 64 cores; we actually have workstations that have 64 cores. So it was interesting. Okay, so this is what I'm trying to do here. I have one big problem and I have 64 players, and I want to have them coordinate the execution so that they work together to solve the same problem. If I'm alone, it's easy to do this. But there are 64 of us, and we have to coordinate all of us, and, trickier yet, we don't pass only right to left. All of a sudden I decide to just throw one in the middle, and somebody needs to be ready to catch it and then throw it to somebody else and so on. So this is how it is. And let me give you an example here. Here, just to put things in perspective, is one problem, to make a point, that is solved on four cores. So four players are going to look at the motion of a collection of balls, okay? And they are going to work together to simulate their behavior.
And this is what this animation shows. Each color represents balls that are handled by one core. And you notice that some balls start out, for instance, green, but then they become red and then yellow or brown or whatever, and they end up being green again. That's because it's just like that -- or whatever. It was with me, I pass it over, somebody picks it up, and eventually it can come back to me. Maybe, maybe not. But this is what happens when you have a bunch of players working together as a team to solve one problem. Now, this is for four cores. There are challenges, because, as you can see here, there are gray bodies at the boundary. Those are bodies that kind of live in one subdomain but touch bodies that are in the other subdomain. So those are special bodies; you have to deal with them in a different way. Let's move on. This is just a scaling analysis. It's half a million bodies and it uses 64 cores, and it's basically the same idea: you have 64 guys working together to see this problem through. More interesting is if you have a rover, and you put it on granular terrain like that, and you look at the motion of the rover. Here, for instance, when you move the rover you can see only the footprints, if you care to see, just like a ghost rover. But you can do all sorts of fun things at that point, once you have the simulation run. Okay, pre- and post-processing. So what's the idea here? The idea is that if you run a simulation on a computer, all that you get is just a bunch of numbers that mean nothing. There is a journey from data to information to knowledge. And especially when you have problems that have millions of bodies, what do you plot? What do you look at? You really need to see the evolution of the system to gain insight. The quote here that I use is from this guy, Jean-Luc Godard, who said that all you need to make a movie is a girl and a gun. And it turns out that there's a little bit more to it than that.
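The subdomain idea, each body owned by the core whose region it sits in, with "gray" bodies flagged near the boundaries, can be sketched for a one-dimensional split. This is illustrative only; the function, the 1-D slabs, and the halo width are assumptions, not the lab's actual decomposition.

```python
# Sketch of the "each core owns a subdomain" idea: bodies are assigned to
# cores by spatial position, and bodies near a subdomain boundary are
# flagged as shared ("gray") because they can touch bodies owned elsewhere.
# Illustrative 1-D split; names and the halo width are assumptions.

def assign_bodies(xs, n_cores=4, width=4.0, halo=0.25):
    """Split the interval [0, width) into n_cores slabs along x."""
    slab = width / n_cores
    owners, shared = [], []
    for x in xs:
        core = min(int(x / slab), n_cores - 1)   # owning core for this body
        owners.append(core)
        # distance to the nearest slab boundary decides "gray" status
        offset = x - core * slab
        shared.append(offset < halo or slab - offset < halo)
    return owners, shared

owners, shared = assign_bodies([0.5, 1.05, 2.5, 3.9])
print(owners)  # -> [0, 1, 2, 3]
print(shared)  # bodies at 1.05 and 3.9 sit near a boundary -> shared
```

As bodies move, this assignment is recomputed, which is exactly why a ball can change color over the course of the animation: it migrates from one core's slab to another's.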
If you want to make a movie out of a computer simulation, sometimes it takes more to make the movie than to run the simulation, because you have one million bodies and you have shades and colors, and one shows up and disappears, and all that stuff. So what we have, or what we're working on because it's not 100% ready yet, a student is creating what we call a rendering pipeline where, effectively, you push through a description of the model, the geometry and how it looks, and the results of the simulation, and at the other end of the pipe comes a movie that shows you what happened. And it has to be able to deal not only with particles; you also want to have fluids and render those. Here, this is basically a fluid interaction, a floating --. But you can see that you can see through that fluid, and it has shades and all that stuff, and you want to be able to render something like that. You have very large problems with many elements, you want to be able to render various materials, and you want to make the rendering process suitable to be used by engineers like me, which is very challenging, let me tell you, because I don't know much about computer graphics. So it should be simple, and it should be fast. So, you start from data, you have a supercomputer, you launch the job on the supercomputer, and out should come a movie. That's the idea. We use for rendering a product called RenderMan, PRMan. This is what Pixar uses to make movies like Toy Story and such. That's the idea. And we have 320 licenses of this. So, in theory, on a supercomputer you can generate 320 frames of the movie at a time, then the next 320 frames, then the next 320 frames, and then we stitch them together and you have your movie. Basically, as I told you, the idea is you have data, and especially the model, and then you generate results. This is something that was rendered using RenderMan. It's like the mobility of a rover. I'm going to skip over that. This is a tire.
We're interested in that; we do a lot of simulations for industry, collaborating with people from the Army. Okay, very quickly, in two minutes, let me tell you one thing about something that doesn't have to do with supercomputers. The question is, you have all these supercomputers and you produce numbers and such, and what do they mean? Do they come anywhere close to what happens in reality, or are we talking about Toy Story 4? I don't know which one is next in the sequence. But do they mean something? So I like this quote from the statistician Deming, who said, "In God we trust; all others must bring data." So we're looking at the validation of these simulations. In engineering, what you do is you model something, you simulate, then you validate. Now, once you gain confidence in your model, maybe you have to do less validation. So, in the beginning you smash some cars into the wall and you simulate; after a while, when the simulation and the physical experiment produce the same results, you say, okay, I'm not going to smash cars anymore, but I'm going to run simulations and smash at various speeds, angles, obstacles, and such. So, about validation, this is actually a physical experiment. It's a -- test; it's not a simulation. And one poor graduate student, in this case from Italy, if you look, he put down a layer of yellow particles and then a layer of darker particles. To do it twice is very expensive and hard, because you have to separate them and do it again. If you do it in simulation, it's very easy: I can run it many times, and, basically, it's as simple as pressing a button. And the question is, how close do they come? If you look at them at various instants in time, left is experiment, right is simulation, you see the patterns are the same. But if you want to go beyond patterns, what we did is we looked at the flow of material, and we tried to do something like this: here is the flow. You just drop granular material on a scale.
Here is the scale, the electronic scale there. It measures weight as a function of time. And what you change here is the opening of the gap. If you open it a lot, the material flows fast; if you open it a little bit, it trickles slowly. And what you measure is the weight of that pile of particles as a function of time. You don't see it here, but red on that plot is the simulation, and on top of it is the experimental data. The experimental data is noisy because you repeat the experiment many times; maybe the humidity changes and such. But you can see that the results of the simulation and the experimental data are on top of each other. And for various gap openings, the results follow very well. We did something where we looked at the flow of particles; I'm not going to get into that because it is already past one hour. But the idea is that we get really good correlation, not always but in many cases, between the simulation and the experimental results. Now, if you don't get good correlation, what does that mean? Most often, most often, it means that your model is bad. So remember what I told you: you start with the model, you get some equations, you solve the equations, and then you look at the results. Most often the model is not good enough. Sometimes you have a good model but the solution, the mathematical solution, is not that good. It doesn't happen too often, but it happens. And if that's the case, you have to go back to the drawing board and try to understand: what in the physics of this problem did I miss, and what sort of equations do I have to formulate to better capture the physics of interest? Okay, so I'm at the end of my presentation here. The -- perspective on things. I hope that I convinced you that computers are incredibly fast, and, if anything, they're going to get faster fast. And they are underutilized, the reason being that software always plays a catch-up game with the hardware. Software is hard to produce. It takes a modeling stage.
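A validation comparison of this kind boils down to measuring how far the simulated weight-vs-time curve sits from the noisy measured one. Here is a generic sketch with made-up data; the RMS metric and all numbers are assumptions for illustration, not the lab's actual validation procedure.

```python
# Simple validation check in the spirit of the mass-flow experiment:
# compare a simulated weight-vs-time curve against noisy measurements
# using a root-mean-square discrepancy. All data here are made up.
import math
import random

random.seed(0)
t = [0.1 * i for i in range(50)]                      # time samples (s)
sim = [min(2.0 * ti, 5.0) for ti in t]                # simulated weight (kg)
exp = [s + random.gauss(0.0, 0.05) for s in sim]      # "measured", with noise

rms = math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(sim))
print(f"RMS discrepancy: {rms:.3f} kg")  # small -> curves lie on top of each other
```

A small RMS relative to the measurement noise is what "the curves are on top of each other" means quantitatively; a large one sends you back to the model.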
It takes a mathematical solution. It takes validation. It takes time. And the last bullet there: in the lab, we're working on producing the software that puts these computers to good use. And these are some people, more senior people in the lab: a postdoc, myself, and a senior scientist. And these are some of the students working in the lab. Two more joined at the beginning of the semester, and I still need to come up with their pictures. But this is the crew. Closing remarks: we do physics-based simulation and we want to solve real-world problems. We want to put computers and good ideas to work, and we are very interested in working with people from industry, because they always present us with very challenging and meaningful problems. Whatever we do, whatever software we produce, it's open, and anybody who can put it to good use is more than welcome to do so. And with this, I want to thank you for your time, and I hope that I didn't bore you too much with my story about computers and how they are used in mechanical engineering. Okay, thank you.
APPLAUSE