Hi-Powered Computers Improve Truck Design
10/13/15 | 1h 0m 53s | Rating: TV-G
Arman Pazouki, Postdoctoral Scholar, Mechanical Engineering, UW-Madison, explores the use of computer-generated mathematical models to design and build trucks that can drive over sand, pebbles, gravel and water.
Welcome, everyone, to Wednesday Nite at the Lab. I'm Tom Zinnen. I work here at the UW-Madison Biotechnology Center. I also work for UW-Extension Cooperative Extension, and on behalf of those folks and our other co-organizers, Wisconsin Public Television, the Wisconsin Alumni Association, and the UW-Madison Science Alliance, thanks again for coming to Wednesday Nite at the Lab. We do this every Wednesday night, 50 times a year. Tonight, it's my pleasure to introduce to you Arman Pazouki. He was born in Tehran, Iran, where he got his undergraduate degree and his first master's degree. That was in mechanical engineering. In 2009, he came to Madison to study at the University of Wisconsin-Madison, where he got a master's degree in engineering mechanics, and then, recently, his PhD in mechanical engineering. He's now an assistant scientist in the laboratory of Professor Dan Negrut. Tonight, he's going to talk with us about high-powered computation to speed truck design, which, in a state like Wisconsin where we design and build really, really big trucks, I think is going to be an interesting story: how fast computers help speed truck design. Please join me in welcoming Arman Pazouki to Wednesday Nite at the Lab. (applause) I would like to thank Tom for providing me with this opportunity to come here and talk about my research in high performance computing, which is going to be useful for truck design and many applications involving multi-body dynamics, which I will describe. And, also, I would like to thank all of you for taking the time to attend this presentation. So, here I would like to talk about and share ideas about how fast computers are going to be useful for engineering applications in general, and in multi-body dynamic simulations, truck design, granular material, and problems like that in particular. My name is Arman Pazouki.
I'm an assistant scientist at the Simulation-Based Engineering Laboratory at the University of Wisconsin-Madison. So, in my talk, first I will describe a little bit of high performance computing, fast computers and such, and how they are relevant to engineering applications and to today's world in general. And then, particularly since we do a lot of dynamics, I will talk about applications in dynamics and about the algorithms that are required to run these simulations. Further, I will talk about engineering applications using these algorithms, and about validation, which is an important part of our work. So, I would like to admit two facts at the beginning. One: my talk includes two key words, computation and truck design. When I decided to choose a major in my undergrad, I made sure that I chose a major that didn't have anything to do with computation. And now here I am.
(laughter)
And the second thing is truck design. I have to talk about a lot of components until I reach that point, which is going to be close to the end of my talk, and I have to speed up somewhere. So when I speed up, my Tehran accent kicks in, and I hope you guys don't have any problem in that regard. At the beginning, I would like to acknowledge the work of my colleagues and the many people who have contributed to the results I'm going to show today. Many of them are my colleagues at the Simulation-Based Engineering Laboratory, and many of them have already graduated, got jobs, went to industry. There are some collaborators from overseas, the University of Parma in Italy, Alessandro and other collaborators in the US in industry and academia and such. So, to open up my talk, I would like to show this picture. Many of you probably know what this is. This shows a five-megabyte hard drive in 1956. To give you an idea of what that means: you see this small guy here? It's a four-terabyte hard drive. We can buy it now online for $120, probably, and it's 800,000 times larger than the hard drive you see here. What interests me here is, well, everything has developed; we cannot say this alone is super amazing. But one thing that amazes me is this: you see the jeep here; it looks kind of similar these days. You see an airplane; we have airplanes. People dress kind of similarly. But one thing has changed a lot, and it's this hard drive. Well, it's not very professional to open up a talk about high performance computing by talking about hard drives, because the hard drive is not the main component of computation. The main components are processors, and how fast we want to process mathematical operations on numbers and such. So, we want to do calculations fast. What does calculation mean in terms of machine language? Everything is composed of basic operations like addition, multiplication, and division.
Other calculations rely on these types of basic operations. If you want to sort an array, if you want to compute a logarithm and such, at the end of the day the machine, the compiler, splits it into basic operations, not the programmer. And then we would like to know how much you would have to pay to do about one billion calculations in one second. It turns out that in 1961 one required 17 million IBM machines to do one billion operations per second. Usually, in computational language, people use the term floating point operations per second, or flops. So one gigaflops would require 17 million IBM machines. And taking the inflation rate into account, that would be about $8.3 trillion in today's money. In 2000, you only needed to spend about $1,000 to do the same amount of calculation. And in today's world, in 2015, it only costs about eight cents to do the same amount of calculation. So everything is changing super fast, and that needs to be translated into programming: how to use the available hardware. The reason for this fast change is the change in the microprocessor, or transistor, industry. Engineers have managed to put more and more transistors into a unit area, and this is in accordance with Moore's Law. Gordon Moore, more than 50 years ago, observed a rule that says every two years the number of transistors you can fit into the same area doubles. So this suggests an exponential rate. It means that in 2011 the transistors' characteristic length was 32 nanometers, then in 2013, 22 nanometers, and this trend is going to continue further and further until it hits the physical limit of Moore's law. And the physical limit is, well, we cannot go smaller than an atom for computation. That's the physical limit. So, is it going to happen in 2020? I mean, that physical limit is very small.
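The doubling rule described above is easy to check with a few lines of arithmetic. Here is a minimal sketch (the function name and the rounding are my own, not from the talk) that projects the characteristic transistor length, assuming density per unit area doubles every two years, so the linear feature size shrinks by a factor of √2 per period:

```python
import math

# Moore's law: transistor count per unit area doubles every two years,
# so the linear feature size shrinks by sqrt(2) per doubling period.
def project_feature_nm(base_year: int, base_nm: float, target_year: int) -> float:
    doublings = (target_year - base_year) / 2.0
    return base_nm / math.sqrt(2) ** doublings

# Starting from the 32 nm node of 2011, project two years ahead:
print(round(project_feature_nm(2011, 32.0, 2013)))  # 23, close to the actual 22 nm node
```

The projection lands within a nanometer of the real 22 nm node the talk cites, which is why the law held up so well for five decades.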
It's like 0.1 nanometers, as suggested in the literature. We have time before we reach that, but in practice we may hit some physical limit earlier. But, well, there is some good news: people are working on alternative architectures to replace that sort of trend. So after we hit the physical limit, some other architecture will come in. So this means two things. First, if you can include more and more transistors in the same area, you can have faster computers, or you can have the same amount of performance but reduce the size of the processor, making it smaller and smaller. So, what does increasing the transistor count buy you? It gives us more computational power. We can process more complicated problems. Back in the day, we wouldn't have been able to imagine solving a problem involving lots of moving bodies interacting with each other and such, but today that opens up an opportunity to revisit those sorts of problems. For instance, today we can have a processor that has 5.6 billion transistors, and the characteristic length of that processor is 22 nanometers. But if we want to project Moore's law, which has maintained its accuracy for more than 50 years, very interestingly, into the future: in 2017 we will have 50 cores as opposed to 18 cores, and in 2021 we will have about 200 cores. Therefore, this is an opportunity; we have to look ahead. And then, on the other side, we can keep the same power. For example, the high-end processor I showed in the previous slide, with more than five billion transistors, has a characteristic length of about 12 millimeters; we can imagine it as a square 12 millimeters on each side. But later on, in 2021, that length will decrease to about five millimeters on each side.
So, we could use today's highest-end processor in a cell phone in 2021, and the computation we are able to do today on a desktop machine, we could do on a cell phone or on a watch. On the other side, there's another trend. This shrinking of processor size has resulted in some other trends. Back in the day, people would use a graphics card in a computer. A graphics card is a unit, in addition to whatever the computer brings to the picture, that processes graphics. For example, if we want to play a computer game, the graphics card takes care of refreshing the frames. Each graphics card has a lot of cores. Each of these cores would take care of one pixel on the screen: update it, give data back to the machine, or calculate something to update the screen. But now it turns out that we can use these graphics cards to do computation, and scientists are doing that. They make the problem simple, composed of simple building blocks; they decompose the problem they have into basic operations like addition, multiplication, and such, and then launch these problems onto the graphics card, and the graphics card can do these sorts of computations very fast. That's what, for example, we do as part of our high performance computing, and I will show some results of how these sorts of accelerators can help us. How about the memory? I talked about hard drives, I talked about processors, but what about memory? It turns out that most applications are bound by memory transactions. If we want to do a computation, we have to grab data. If we want to add two numbers, we have to first grab them, then add them, and return the result back to the user. This transaction is sometimes way more expensive than doing the computation. One GPU global memory access takes about 400 cycles; as a characteristic time scale, it's as if we have to wait 400 ticks of the clock.
But if we want to do, for example, 32 add/multiply operations, we can do them in one cycle. So this also suggests something for engineers. Engineers need to grab data, do as much calculation as they can on that data, and then return the data. We cannot access the data wherever we want, whenever we want. It has to follow a pattern: we grab two numbers, do ten or more operations, and then return the result to the user. The good news is that, like the processing speed, the memory access speed is also increasing. It's not increasing as fast as the processing speed, but it's going well. And there is also the concept of 3D memory, which is coming very quickly in the near future and speeds up memory transactions a lot. So, where do we stand in this high performance computing arena as a lab in mechanical engineering? We have a computer cluster. It is on the fourth floor of the mechanical engineering building. It's composed of a lot of CPU nodes, and by node I mean a computer box that includes everything you need to calculate things. And we have a lot of them, like 14 of them. So if we have a problem we can either use one of those, or we can decompose the problem so that each one computes part of it, and then we gather the data attached to them and send it back. The computer cluster also has a lot of GPU nodes. They each have graphics cards, and we can use them as accelerators to further speed up the simulation. Among these nodes, two of them are really powerful. I will talk about one of them, shown here, later. The entire thing is called Euler. This one powerful node is called Lagrange, and I will describe it further. So, in general, our cluster has about 50,000 GPU scalar processors, about 1,200 CPU cores, and 2.7 terabytes of RAM, and it gives us about 40 teraflops of double-precision computational speed. 40 teraflops. So, to see where this stands, I will give you an example.
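The 400-cycles-versus-one-cycle comparison can be made concrete with a back-of-the-envelope sketch. The two constants below are the illustrative figures quoted in the talk, not measurements of any particular GPU:

```python
# Illustrative figures from the talk: one GPU global-memory access costs
# roughly 400 clock cycles, while about 32 add/multiply operations can
# complete in a single cycle.
MEM_LATENCY_CYCLES = 400
OPS_PER_CYCLE = 32

# Arithmetic that could have executed while one memory access was pending:
ops_wasted_per_access = MEM_LATENCY_CYCLES * OPS_PER_CYCLE
print(ops_wasted_per_access)  # 12800
```

In other words, every un-hidden memory access forfeits on the order of ten thousand arithmetic operations, which is why the "grab data, compute a lot, return" pattern matters so much.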
So, before that, let me talk about this powerful node I mentioned earlier. One node is like the computer you might put under your desk; it's the same size. It has three GPU cards, one Intel Xeon Phi, and two Intel Xeon processors. The Xeon Phi and the GPU cards are accelerators. They are not essential to the system; they are complementary, but we can use them to accelerate the system. And then 64 gigabytes of RAM. The theoretical peak flop rate is about seven million megaflops, or seven teraflops. It cost us about $20,000 to build it in 2013. So three numbers are important: seven teraflops, $20,000, and 2013. Now, here is an IBM BlueGene/L set up by Argonne National Lab. In 2007, it would give about 5.7 teraflops, a little bit less than what I showed in the previous slide. And, in 2007, it cost $1.4 million. So, what somebody couldn't use in 2007, what was the privilege of only a few research labs, a lab like ours can now afford for $20,000, and probably a lot cheaper at today's prices. And we can get the same performance as the 2007 IBM BlueGene/L. Argonne National Lab didn't stop there. They set up another supercomputer. It's called MIRA, or BlueGene/Q. It's about 1,000 times faster, and it is the fifth fastest supercomputer in the world. I don't have the price for this one, but it can do eight million billion operations in one second. So here is how the supercomputers are changing. It's an exponential rate: the vertical axis is logarithmic, so each gridline is 10 times larger than the previous one. They keep increasing faster and faster because of Moore's law; every two years the power doubles. Here are the five fastest supercomputers, listed here. And, well, we can expect this trend to go on for a couple more years until we hit the physical limit of Moore's law.
So, in terms of how supercomputers are placed in today's world: mostly, the people who use high performance computing rely on clusters, like the Euler machine I showed earlier, supercomputers that have many separate nodes where the nodes communicate with each other, transferring data back and forth. A portion of the supercomputers are the so-called massively parallel processing, or MPP, machines. Each is one big unit. It's not composed of several units; one big unit has everything you want. Since it's only one unit, there is not much communication with other units, and therefore you can expect it to perform much better. But, again, those types of supercomputers are a lot more expensive. In terms of who uses supercomputers: the big three players are academia, national labs, and industry, and it turns out that industry owns a lot of supercomputers. In terms of performance share, in the left plot, research labs hold more of the performance: they own only 26% of the machines, but they hold 50% of the performance. Industry holds only 20%, although they own about 42% of the supercomputers. Academia is about the same either way. So, this says two things. The first is that problems in industry are complicated: you can have a supercomputer, but it's hard to use it efficiently. The second is that in industry there is a gap between supercomputing knowledge and hardware resources. So this suggests opportunities for jobs in the future. There will be a lot of opportunity in supercomputing, because those industries need people to make this 20% share a lot larger and commensurate with what they own. So, how are supercomputers changing? Within the past couple of years we have seen the emergence of accelerators, like the graphics processing unit, and GPUs take on more and more of the computing power. So people have started to look into that a lot more, and a lot of computations are now performed on graphics processing units.
So, what will happen when Moore's law hits the physical limit? First of all, this is not going to happen in two or three years; it's going to be at least a couple of years away, and by that point engineers will be able to solve many, many problems that they couldn't imagine before. But parallelism provides an opportunity to use the existing resources, without any improvement in the hardware, by using them efficiently to solve new problems. This is one thing, and this trend will continue until we hit the power wall. So, we have constant hardware, but we can keep adding to it, as much as we can supply power to it. And this trend will continue. People need to rewrite software to use that hardware efficiently. And then, further into the future, there is the prospect of tunnel transistors, photonics, and quantum computing. So, long story short, the future looks bright. I would like to show this frame from Stanley Kubrick's
movie 2001: A Space Odyssey. This shows a monolith. It came, it changed the world; whenever it appeared, there was a new generation and such. Some people might say this is a TV; I would like to think it's a supercomputer. Every generation, it's going to improve the way people can do computation and such. So, from now on I'm going to talk about multi-body dynamics, which is the main component of our work. To start, first of all I would like to describe what multi-body dynamics means. It means we have dynamics, we have things that are moving, and they are connected to each other. So, for example, this tire is connected to the chassis with some joints, CV joints, an axle and such, and everything is put into one picture to solve a meaningful problem. And everything needed to solve that problem was discovered more than 300 years ago by Newton and Euler. It's called the Newton-Euler equations of motion. So, the equations are there. What can our contribution be? They have been there for 300 years, so is it going to be hard to have a contribution? Can we change the equations? We are not going to change the equations. The equations have proved correct for years and years and years at the scales we are working on. The thing is, to use those equations, sometimes we have to simplify the problem. For example, if we have a revolute joint, we are going to neglect the geometry of the joint. If we have a ball joint, for example, here, we are going to assume it doesn't have any geometry; it's just a relation between two points, a point on this axle and a point on the other axle. Geometry is neglected. But geometry is important in many applications. For example, you see the Mars Rover here. The way the wheels are designed, the geometry of the entire object, or the geometry of the grains themselves: are they going to be spherical, are they going to be elliptical, are they going to be rocky grains and such?
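The Newton-Euler equations mentioned above can be sketched in a few lines for a single planar rigid body: F = ma governs translation and τ = Iα governs rotation, integrated here with an explicit Euler step. This is a toy illustration with made-up values, not Chrono's formulation, which handles full 3D bodies, joints, and contact:

```python
def step(state, force, torque, mass, inertia, dt):
    """One explicit-Euler step of the planar Newton-Euler equations."""
    x, v, theta, omega = state
    a = force / mass           # Newton: F = m * a
    alpha = torque / inertia   # Euler:  tau = I * alpha
    return (x + v * dt, v + a * dt, theta + omega * dt, omega + alpha * dt)

# Simulate one second (100 steps of dt = 0.01) under constant force/torque.
state = (0.0, 0.0, 0.0, 0.0)   # position, velocity, angle, angular velocity
for _ in range(100):
    state = step(state, force=2.0, torque=0.5, mass=1.0, inertia=0.25, dt=0.01)
print(state)  # velocity reaches ~2.0 m/s, angular velocity ~2.0 rad/s
```

A real multi-body engine solves many coupled copies of these equations plus constraint and contact equations, but the core physics is exactly this 300-year-old relation.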
All of them are important, and the overall operation depends on all sorts of details, small and not so small. So, how does geometry translate into equations? Whenever we have a problem like the one in the previous slide, whenever we have two objects, the two objects can come into contact with each other, and it matters where they contact. If I have this object, it matters whether I push it from the end, from the beginning, or from the middle. It totally matters where the force is applied, because at the end of the day we need to calculate the contact force, send this contact force to the engine, find the other forces, and find the motion of the object. Well, it turns out that commercial software fails at these sorts of applications. The equations have been there for 300 years, but to solve a meaningful problem with contact, commercial software fails. Here you can see one commercial application. It's a bucket, and about 30 balls without any specific geometry, just rounded balls with one radius each, fall into the bucket. We want to model three seconds of this process, and it turns out the computation takes about 100 seconds: it takes the computer 100 seconds to do three seconds of the actual process. And not only that: if we want a meaningful problem like the Mars Rover I showed earlier, to give you some scale of how many particles we need to simulate those sorts of problems, one cubic meter of sand includes one billion particles. Here this software handles 30. One billion is, let's say, impossible. If this software wanted to do even one million, one million would take it about 5,000 years. So, it's totally impossible.
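The point about where the force is applied comes down to one line of vector algebra: torque is the cross product r × F, so the same push turns the body differently depending on the lever arm. A toy planar example (the function name and numbers are mine, for illustration only):

```python
def torque_z(r, f):
    """z-component of the planar cross product r x F (lever arm r, force f)."""
    return r[0] * f[1] - r[1] * f[0]

push = (0.0, 1.0)                    # the same upward push in every case
print(torque_z((0.0, 0.0), push))    # applied at the center: 0.0, pure translation
print(torque_z((1.0, 0.0), push))    # applied at the end:    1.0, the body also rotates
```

This is why the contact point, and therefore the geometry, cannot be neglected once contact forces enter the problem.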
No matter how long we wait for Moore's Law to double things up every two years, it's not going to happen in the foreseeable future. And there are other open problems in dynamics that need extra effort in terms of engineering. There are applications in granular dynamics; problems are becoming more like many-body dynamics instead of multi-body dynamics. There are applications where bodies have frictional contact: they interact with each other, they impact each other, like sand. There are other bodies that have flexibility, or they deform, like, for example, beams. Everything is not perfectly rigid. And then we might have interaction with a fluid: they fall into the water. How are we going to solve that problem with direct numerical simulation? And in terms of applications, the list goes on and on. Here, you can see a lot of applications. In farming, there are many applications of multi-body dynamics in this picture alone: the truck, how it interacts with this granular or rocky bed, and how the farming material comes into the truck. Are the machines designed well enough so that they don't clog? In the pharmaceutical industry there are applications. If we want to go further, it can be used to simulate avalanche dynamics or landslides and a lot more. So that's the beauty of high-powered computing. If you make sure that you can solve some of these problems, then you can ask: what types of problems are out there that we haven't looked at in the past? We can define problems that we would never have thought of. We can push the boundaries of computation. So, common sense says most of the problems in today's world have a lot of degrees of freedom. If we have, say, one million objects, each object can move in three directions in space, so it needs three components. It can also rotate in three directions: six components. If we have one million objects, we probably have four million contacts, and each contact needs three components.
So, at the end of the day, one million bodies needs about 20 or 30 million degrees of freedom to describe the entire system. A problem with one million bodies has some 20 million unknowns to solve for. These types of problems oftentimes require a lot of compute power. That's how the computers become relevant. Okay, we have computers, we have dynamics, we have a problem; how are we going to connect them? Say we have this problem, multi-body dynamics, not many-body dynamics; it includes fewer than 60 components, say. If we want to formulate this problem, one way is to do it with pen and paper. But is it really feasible to do it with pen and paper? We want to write one equation for each joint. We say, okay, tire one is interacting with the chassis because it has a revolute joint over there; we need one equation, and we need another equation, and we have to write one equation for each connection. We cannot do that with pen and paper. We need to figure out some way to define a problem so that everything is done automatically under the hood. This is more important when we have millions of degrees of freedom. Nobody can write those out with pen and paper; everything needs to be done automatically. So, this is how we approach it in today's computational world. We are working on an open-source software package called Chrono, which is a research-grade computational software for multi-physics modeling, many-body dynamics, and so on and so forth. The goal is to advance the state of the art in simulation of many-body dynamics, and then to use the emerging hardware technology. We don't know what kind of CPU will be out there in two years, what kind of memory model will be out there, so our business as developers is to stay up to date with the hardware and use it efficiently.
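The degree-of-freedom estimate above is simple enough to spell out. The four-contacts-per-body figure is the talk's rough estimate, and the three unknowns per contact correspond to one normal plus two friction components in the usual frictional-contact formulation:

```python
# Each rigid body carries 6 degrees of freedom (3 translation + 3 rotation);
# each frictional contact adds 3 unknowns (1 normal + 2 friction components).
n_bodies = 1_000_000
n_contacts = 4 * n_bodies   # rough estimate from the talk: ~4 contacts per body

unknowns = 6 * n_bodies + 3 * n_contacts
print(unknowns)  # 18000000 -- on the order of the 20 million quoted above
```

So a million-body granular problem is really a roughly 20-million-unknown problem at every time step, which is what makes the hardware discussion earlier in the talk relevant.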
And then, after we've developed this software, we want to attack open, meaningful problems in engineering, like granular dynamics, like, say, for example, truck design, which I will show later. Chrono relies on several foundations. It has an advanced modeling part to set up the equations of motion and define them so that a computer can solve them. It has proximity computation, which I'll describe later, and then solution algorithms: how we want to define the problems, how we want to improve them, how we want to solve them. And then domain decomposition: if we have a big problem, like a truck that goes through water and interacts with, say, rocks at the bottom, those are different sub-problems. We can send part of the problem to one computer and part to another. Not only that, each sub-physics can be split into different domains in the computational world, and then the results can be put together. So, what does modeling mean, the first component of Chrono? It means we have the problem; in an ideal world, everything is composed of molecules. Can we model everything with molecules? Can we model, for example, this wall with a bunch of molecules? It's infeasible, because it's composed of so many molecules. If we could, we could describe most phenomena with just molecules and solve it. But we have to scale up. For example, if we have water, we need some model to describe the motion of water. If we have rigid body dynamics, we need some model to describe that. If we have flexible bodies, we need some model to describe that. This is where the modeling comes in. Modeling is good as long as it provides us a solution in a physical time: we want to solve the problem in less than, say, 12 hours; we want to launch a problem and get the results the next day. We don't want to wait forever. I should mention that we follow an open-source philosophy.
Whatever we develop is going to be free for everybody under the BSD-3 license, which is one of the most permissive licenses. Everybody is welcome to use it, and we actually encourage everybody to use it. They can use it the way they want; they can even neglect crediting us and such. So, talking about modeling, one example would be having a bunch of rigid bodies, say, in this selective laser sintering machine. In selective laser sintering, there is a lot of powder involved. Powder is composed of grains, and each grain is a rigid body. The machine comes and spreads the powder on a surface, and then a laser beam heats the grains and melts some of them, creating one layer of the object. Then again another layer of powder, another laser beam, and another layer of the object. And this process repeats until the entire object is built. So, we want to model this process using Chrono's capabilities. Here are some of the results. We have this roller; it comes and spreads the powder. You want a smooth surface so that when we heat it with the laser we don't get unwanted effects. So, that is one sort of modeling, and with the model we can measure the smoothness of the surface and such. There is another application in 3D printing. Somebody came to us, a professor in polymer engineering, Professor Oswald, and said: we want to 3D print a dress, and the dress is huge; it doesn't fit into the 3D printing machine. So what can we do? We can do inverse engineering. We can first model the dress. We can throw it into a bucket that fits into the machine, 3D print only this bucket, and then give the data to the printer. The printer prints this, and when we open it up, it will be the dress. The only problem would be that these components might stick to each other.
But that's not a big deal in modeling, because if these are the real particles, in the numerical world we can expand them a little bit, like this. So when we model them, we make sure there's always some gap between the particles so they won't attach. Once we model this, we give the data to the printer, we print it, and there won't be an issue. Another physical model would be, okay, what if we have a cohesive material, like a snowball, like you can see here. It looks like wet snow. We can also have different types of cohesion in the material, like stronger cohesion, like here or here. So this is another part of the modeling, and it turned out that, using the model we're implementing right now, it wasn't hard to add this extra feature, so we were using that. Or we want to extrude some foam into a bucket, like what you see here. Again, this is the simple rules of multi-body dynamics with one small factor we added, called cohesion or adhesion between the particles, and now we can have foam. Here is another example: a truck comes and cuts through, say, snow and such. We use both multi-body dynamics and cohesive material. It looks more like moving a knife through butter, but it's probably the same principle, and we can simulate even that process. There are other modeling components, such as deformable bodies. You can see here a ball falls onto grass, for example. The blades of grass are flexible bodies. They are not rigid anymore; they are not described by the Newton-Euler equations in their basic form. They are described, actually, by these sorts of equations. And this is another component of the simulation engine. We can have both deformable bodies and some constraints, for example, a ball joint; they're attached there. We can have those. And if we want to model a tire, maybe these beams are not enough.
We have to look into some other modeling techniques. So, we decompose the tire into a bunch of other deformable elements. These are the components I'm describing because all of them will eventually be useful in designing a complicated system like a truck. What if we have fluid? Again, we want to have rigid bodies, like this floating object you see here, and we want them to be pushed by the fluid. They are influenced by the fluid, and not only that, they also push back on the fluid; they influence the fluid in return. If we have a large rigid body here, we want it to block the fluid or impede the motion of the fluid. We want this two-way coupling between them. So, again, the rigid-body equations were invented more than 300 years ago, and these are the different sets of equations needed to solve for the fluid, also invented a long time ago. But only a little work has been done to couple them and solve meaningful problems involving both. Here is an application: we have a bunch of rigid bodies, fluid, and flexible bodies. The fluid is cut away to show the rigid body motion; the rigid bodies are also cut away to show the motion of the flexible bodies. They interact; all of them together impact each other, and they impact other things. So, we do discrete simulation. Most of our simulations are in terms of discrete systems. We have a problem: we have a granular material composed of a lot of grains, and we need to find their interactions. First of all, we need to know what interacts with what. Knowledge like this is sometimes taken for granted; we have to translate it into machine language. For example, if you go to the library and want to find a book, one way is: okay, I have the name of the book, I can go shelf by shelf, one by one, look at each title, and find the book I want. But, well, this is not what we do.
We go, okay, we see the call number of the book. We go, okay, it is between these two numbers; we go to that shelf. And then it is between these two numbers; it is on that shelf. And then we just look at the books on this one shelf, not all of them. So, similarly, we need to translate this into machine language when we want to find a nearby particle. If we have a particle and we want to find the nearest one, we can use the same idea: we can decompose the system, the space, and then look only at the nearby cells. It turns out that, okay, this is the plot I showed earlier, composed of one million bodies. Using this trick I showed on the previous slide, we can get a linear trend. So we can expect to solve a one-million-body problem in less than a day or two. Much faster. So, we have the supercomputing power, but we also have to look at algorithms. We have to figure out the ways to use the supercomputing power. One step further, we also solve this problem on a GPU, a graphics processing unit. So if we do that, what's the speed we can get compared to solving this problem on your machine? Your machine is like a box, say, for example, this big, and it has one GPU, like this big, inside it. And using that small unit efficiently, for this specific problem we can get up to 160 times faster speed. So a problem that would take 160 days now takes like one day to solve. So, the next component is algorithms: how to efficiently leverage computational resources. The key word here is efficiency. We can leverage computational power, but we want to use it efficiently. The key word is important for another reason, too, because efficiency, just like optimization, means we want to do it in a faster time. The fastest speed. And it turns out that our problems translate into a kind of minimization problem. I hate to show this slide. It has a lot of equations. I won't go through them.
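The library-shelf trick from a moment ago, decomposing space into cells and scanning only the nearby ones, looks roughly like this in code. A minimal 2-D Python sketch with hypothetical names; production engines use a 3-D version of the same spatial hashing:

```python
from collections import defaultdict

def build_grid(points, cell):
    """Hash every point into the integer grid cell (the 'shelf') that contains it."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(idx)
    return grid

def candidate_neighbors(points, i, cell, grid):
    """Scan only the 3x3 block of cells around point i's cell,
    instead of checking every point in the domain (every shelf)."""
    cx, cy = int(points[i][0] // cell), int(points[i][1] // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i:
                    found.append(j)
    return found
```

Building the grid is a single pass over the particles, and each neighbor query then touches a constant number of cells, which is where the near-linear scaling for a million bodies comes from.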
Personally, it is my job to look at these equations, but whenever I look at them, I don't feel as good as when I look at that animation. So the bottom line here is: okay, we have those equations from 300 years ago, but we also have equations for these joints and such, and we have a couple of equations which rely on the geometry of the objects, with contact and such. So it turns out that those ugly-looking equations, at the end of the day, become some sort of simple, nice-looking optimization problem. It's a quadratic optimization, in engineering terms, and apparently it is one of the easiest kinds of problems to solve. So at the end of the day, instead of solving that problem, we have to solve this optimization problem. By optimization I mean finding the minimum of this function. And wherever that minimum happens, the valley of that function, that's where we have our solution. So, finding the valley of a function: we are walking on a hilly mountain, and the idea is to find the valley. One way to find it is to have a simple rule: each time I want to move, I move in the direction that decreases my altitude. I go a little further down. That is the simplest rule. Well, that may result in some sort of trial and error. So these are the contours of the elevation. If I want to step that way, I have to go a little bit each time; I have to take a lot of steps. But it turns out that with some other, more advanced algorithms, I can do it much faster, and it gives much better results. So this is how we want to minimize the function without blindly stepping through the mountainous area. If I do it with a little bit of a trick, I can reach the valley of the function much faster. So instead of doing, say, for example, 400 iterations to find a solution, I need only 20 iterations. And that's where I want to have my solution. And with that, we can now solve a lot of good problems.
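The walk-downhill rule can be written in a few lines. This is a toy sketch, not the talk's actual solver, but even here the point survives: a well-chosen step reaches the valley in far fewer iterations than a timid one.

```python
def descend(grad, x0, step, tol=1e-6, max_iters=10_000):
    """Simplest downhill rule: move against the gradient, the
    direction that decreases the altitude, until the slope is flat."""
    x = list(x0)
    for it in range(max_iters):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:   # slope ~ 0: valley found
            return x, it
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x, max_iters

# A toy quadratic "mountain": f(x, y) = 2*x**2 + 10*y**2, valley at (0, 0).
grad_f = lambda p: [4 * p[0], 20 * p[1]]
```

Running `descend(grad_f, [1.0, 1.0], step=0.05)` reaches the valley in a few dozen iterations, while `step=0.01` needs several times as many steps to get to the same place, which is the 400-versus-20 flavor of difference better algorithms exploit.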
So, for example, if we have granular material, there is a famous experiment: if you shake it, at the end of the day some patterns appear on the granular material. It's kind of a honeycomb. And we did the same experiment in the computational world, and you can see some patterns appear and such. So, another application: we have a couple of other problems with a lot of rigid bodies of different shapes. Here we have blocks, compared to that commercial solution which only had spheres. Here we have a bunch of blocks or a bunch of spheres or ellipsoids and such. They flow over this rocky area, and they want to go through the holes and fill this in and interact with each other. And here is another application; I will skip that in the interest of time. So I'm getting closer to our real application, which is truck design. Here is a tracked vehicle moving over granular material. So, to save computer power, we generate and throw away the unessential parts of the domain. The track shoes have some weird geometry, but the software is designed in a way that can handle any geometry. It can handle (mumbles) and such. So here you can see a problem composed of one million objects. Remember that number I gave at the beginning, like 5,000 years? Now we manage to solve that problem, using that technique and parallel programming and the graphics processing unit, in less than two days. Now here is another problem. This is another level of parallelism. Each sub-domain colored here is processed by a different segment of the computer cluster. And at the end of the day, they stick to each other, they communicate with each other, and they solve this problem. Now, let's go to truck design and Chrono in vehicle mobility and terramechanics. So, vehicle mobility is an open problem. There are lots of problems in this area that haven't been solved yet.
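The colored sub-domains just mentioned correspond to a spatial split of roughly this kind. A 1-D toy sketch with hypothetical names; in the real cluster version each slab runs on its own node and the shared "halo" particles are exchanged over the network every time step:

```python
def partition(points, n_sub, width):
    """Split a domain of the given width into n_sub vertical slabs.

    Each slab would be handed to a different segment of the cluster.
    Particles sitting within `halo` of a slab boundary are copied to
    both sides, so neighboring slabs can resolve contacts that
    straddle the cut. The halo fraction here is an arbitrary choice.
    """
    slab_w = width / n_sub
    halo = 0.1 * slab_w
    slabs = [[] for _ in range(n_sub)]
    for idx, (x, _) in enumerate(points):
        s = min(int(x // slab_w), n_sub - 1)   # which slab owns this particle
        slabs[s].append(idx)
        if s > 0 and x - s * slab_w < halo:            # near the left cut
            slabs[s - 1].append(idx)
        if s < n_sub - 1 and (s + 1) * slab_w - x < halo:  # near the right cut
            slabs[s + 1].append(idx)
    return slabs
```

Interior particles appear in exactly one slab, so the work divides cleanly across the cluster; only the thin halo layers are duplicated.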
And the vehicle must be designed so that if the conditions or the scenario change, we can find a solution. We cannot do these sorts of things in experiments. Experiments are expensive. To give you a number for setting up an experiment: there was somebody in the oil industry who said, oh, I want to move a ship through ice, through a basin of ice. Modeling that in a scaled-down basin, just setting up the experiment, would take about a day, or something like that, and it would cost $50,000 for one experiment. So now, oh, we figured out we made a mistake in some part of the experiment; we didn't measure some number. If we want to redo that, that's another $50,000. But if you have a framework to do this stuff, we can save a lot of money, and we can do a lot of searching for the best design, the optimal design, and such. So, one of the vertical components relying on the Chrono components is Chrono vehicle. It is composed of some subcomponents designed so that we can solve problems like this: a vehicle that goes over a rocky mountain, a vehicle that goes through water or over sand or something like that. So, it's the part of the Chrono toolkit designed for wheeled and tracked vehicles, like the track shoes I showed earlier. It's middleware, which means it's not the kind of software that everybody can just pick up and use. Some sort of knowledge is required to pick it up and solve a problem with it. It's, again, open source. We stick to the open source philosophy. Everybody can use it, if they can do so, and they are welcome. And we encourage everybody, and we help everybody, to use it. And all of the dependencies are open source too; staying in the open source area requires that everything you rely on should be open source. If you use one component that is not open source, and the vendor of that component says, I cannot release that, then you cannot release the entire code.
So Chrono vehicle includes a lot of components to solve the problem. It has different suspension systems; they are already there. If we want to solve a problem, we just need to pick one up and change that subcomponent, for example the suspension, a little bit. And then there is a steering subsystem; the driveline can be four-wheel drive or two-wheel drive; and there are the brakes and wheels. The wheels can be deformable, they can be rigid, and such. The graphics, or the visualization, is also something a programmer uses; you see something like that. It's a fast visualization. Here is another alternative. Both the first one and this one are open source. Everybody can use them or deliver them in their software. Here is an example of using this Chrono vehicle framework. It shows a truck, a vehicle that is moving along a path, and then it brakes. You see what kind of forces or accelerations there are, or what maximum speed it attains. You can measure all the forces you want. You can run this scenario in different other ways. For example, here is a double lane change: the vehicle changes the lane and goes back to the lane, and we want to see what types of forces that causes. For example, here you see the change in the forces and such. So, the user can define the scenario; the framework is there. Here is another scenario. The vehicle goes over a non-smooth path. It's kind of a hilly mountain. You can see, okay, what the ride comfort is: how much motion the driver sees in this scenario, or what type of forces the driver sees, and things like that. These were all on a hard floor. There is a lot of interest in off-road mobility. For example, we want to see how this vehicle moves on a slope composed of rocky material. If you change the slope, you go a different distance. Your speed changes. Your power changes. The forces change, and such. Earlier I talked about cohesive material.
So we can have that, and we can have the vehicle. The components are there. Some engineer just needs to put them together and solve this problem over some sort of muddy material and see the different forces. Here is how the vehicle moves over sand and such. So, apparently, it doesn't go as fast as it did on the hard floor. It may even get stuck in this sand and, like, granular pebbly material. Here is another sort of visualization. Now, if we have data, we can extract any sort of information we want from it. We can color the grains based on their height. We can color them based on the stress. We can color them based on any other information that already exists. That's another part of simulation. We have data. Data are just numbers, and we have to use high performance computing to process them, to see what we can get out of them. Here is a similar scenario. The vehicle goes over granular material and up a slope. It's a kind of sandy mountain, for example. And, in this case, it gets stuck here. Now we can have smooth terrain, we can have kind of textured granular terrain, and then we can measure the acceleration of the chassis and such. There might be other scenarios involving not only rigid bodies; there might be fluid, like when the vehicle wants to go through water. This is how we solve the water, through a lot of smaller domains, and then the vehicle goes through them. Here the power was not enough; it gets stuck in the water. We can have a flow of water in one direction to see if it changes the motion of the vehicle and such. And then, another component is how we're going to automate the simulation. So, here, via collaboration with another open source software group, the Open Source Robotics Foundation, we just stitched the two pieces of software together to solve a problem. This vehicle follows this red path. It is designed to do so. It captures the view, and then it processes it.
It sees, okay, I'm a little off, now I need a little bit of steering toward the right, for example, to stay on the path and move on. So, we do a lot of work on the validation side. I just showed you engineering applications, but they are meaningless unless we prove they give us good numbers. Sometimes we can validate against the real problem. If some collaborator comes and says, okay, I have data for this vehicle, we use that, we simulate it, and we see whether they match. Sometimes we have to split the problem into subdomains. One of the validation cases we had was opening up a gap, and five-micron glass beads come out. We measure the flow rate here in the experiment, and we measure the flow rate here in the simulation, and we see how well they match. Here is the flow rate measured. They match in a very good way. Another validation we performed was cratering. If there is an object hitting a granular bed, how far does it go into the granular material, depending on the size of the object, the speed of the object, friction, the type of the material, and so on and so forth? Here you see a couple of materials. One of the students in the lab tested wood, Teflon, and so on and so forth. For each of them, we get a different trend. And, apparently, the distance it goes in is kind of a weird equation of diameter and speed and such. We managed to reproduce the same equation. It's a little bit off in terms of value, but the trend and the dependency on the variables is interestingly similar. Another type of validation: we want to confirm that if we have a bunch of sand, it doesn't flow, like it doesn't spread entirely; it just forms a cone, and we want to measure something similar. We want to measure, if we have a bunch of soil samples and we push from one side and pull in the other direction, how much force we need depending on the pushing force. So, again, here are the experimental results, and here are the numerical results produced here.
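For what it's worth, the "weird equation" the cratering experiments typically recover is often quoted as penetration depth growing like the ball diameter to the 2/3 power times the drop height to the 1/3 power; that specific form is my assumption, not stated in the talk, and the prefactor below is an arbitrary placeholder that in reality bundles density ratio and friction.

```python
def crater_depth(diameter, drop_height, prefactor=0.2):
    """Toy version of the empirical low-speed cratering scaling:
    depth ~ prefactor * D**(2/3) * H**(1/3), with drop height H
    standing in for impact speed. The prefactor is made up here;
    experimentally it depends on the materials involved."""
    return prefactor * diameter ** (2 / 3) * drop_height ** (1 / 3)
```

The simulations in the talk reproduced the trend, a bigger or faster ball digs in deeper in exactly this sublinear way, even though the absolute values were a little off.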
The other famous experiment is the cone penetrometer. People drop this cone and see how far it penetrates into the granular material, and we simulated it with a similar simulation framework. Another test is the triaxial test: a bunch of granular material, we push it from one side while there is pressure from the other sides, and we measure the stress, how much force there is between the particles, and such. Here is another one, kind of a real problem. This excavator digs into granular material, so we are recreating the actual process here. Then, at the end of the day, we want to see how much force there is; we can measure it here with sensors, or with computation, with our software. And, again, here are the plots: we use different numerical approaches, and they match each other. In another type of simulation, we are moving a wheel over granular material. We want to capture the motion under the wheel and so on. Here I show some plots; here is the simulation, and here is the experiment. This was done in collaboration with another group of researchers. And depending on whether the wheel is towed, you see some pattern formation. If it is a zero-slip wheel, it's a different pattern, and if it is positive slip, a driven wheel, it's a different pattern. So, again, these are the problems we are working on. On the fluid side, similarly, we have a bunch of validations. I will skip through those in the interest of time. Here we have a suspension going through a channel. As the particles go through the channel, they try to spread across it. Some of the particles go toward the wall, some go toward the center, and we managed to get results similar to the experiment here. So, I would like to conclude my talk by acknowledging the work of my colleagues. I showed a lot of their work today. Here are my current colleagues in the Simulation-Based Engineering Laboratory.
Other collaborators are from the University of Parma and the University of Iowa. And then, to summarize my talk: we are committed to the open source philosophy. We like to generate software that is useful for everybody. We encourage that, and we help people use it. Some of the applications you already saw here; there are other applications we haven't thought of, and people are welcome to try them. And a lot of our effort goes into validation. We are committed to providing software that provides meaningful results, not just a bunch of numbers and nice animations. And, with that, I would like to conclude my talk. Thanks again for your attendance, and I wish you a good night tonight.
(applause)