Steve Brunton: "Introduction to Fluid Mechanics"

Captions
all right, thank you. It is my pleasure to be here to close out the tutorial sessions. I'm sure everyone is ready: 25 hours of tutorials. Let that sink in for a minute. 25 hours, maybe it's 20, but still, that's more than all of the Star Wars movies or Lord of the Rings movies back to back, so I'm pretty impressed that everyone is still interested in hearing more. Anyone who knows me knows that I love fluids: the fluids you breathe, the fluids you drink. I think after this week everyone is ready for a cold fluid, so I'm going to try to end on time. Fluids are incredibly important for a number of reasons. This might seem a little out of place in a set of tutorials on machine learning and dynamical systems, but in some sense fluid dynamics is one of the original fields of big data. Big data was a reality in fluids for decades before it was a reality everywhere else, and a lot of the computational architectures, storage, and transfer algorithms used to process data were developed to handle fluid dynamic systems. So I'm going to try to give you a mix of the foundations of why I think fluids are interesting, why they are challenging and important, and why there is basically a ton of open problems for us all to hopefully start solving. One thing I want to point out that I just found interesting and fun: how many of you have heard of eigenfaces? Pretty much everyone. It's the idea that if you take a big library of human faces, say all the Facebook data, and you crop and align everyone's face, you can compute the singular value decomposition of that matrix and you get these eigenfaces, and each of us is a unique fingerprint of the eigenfaces that make up our individual faces. So instead of being represented by a million pixels in a modern camera image, you can be represented by a few hundred coefficients of these eigenfaces.
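To make the eigenfaces idea concrete, here is a minimal sketch of that computation via the SVD. Random pixel data stands in for a real face library, and the array sizes and rank r = 50 are illustrative assumptions, not numbers from the talk:

```python
import numpy as np

# Stack flattened face images as columns of a data matrix, subtract the
# mean face, and take the SVD. (Random data stands in for real faces.)
rng = np.random.default_rng(0)
n_pixels, n_faces = 1024, 200
X = rng.standard_normal((n_pixels, n_faces))

mean_face = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean_face, full_matrices=False)

# Columns of U are the "eigenfaces". A face is then approximated by a
# small number of coefficients instead of its raw pixels.
r = 50
coeffs = U[:, :r].T @ (X[:, 0:1] - mean_face)   # r numbers per face
reconstruction = mean_face + U[:, :r] @ coeffs  # back to pixel space
```

The singular values in `s` decay, which is what makes truncating to a few hundred coefficients a good approximation for real, highly correlated face data.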
Now, interestingly, that first eigenfaces paper by Sirovich was written in the same year that he did the same thing for fluid dynamic systems, so he was clearly thinking about low-dimensional structure and patterns in fluid dynamics. In 1987 Sirovich came out with eigenfaces and with the snapshot POD for fluids, which transformed both of these fields tremendously. So really these fields have grown together for much of their history. I wasn't exactly sure where to begin, I'll be honest. I could have started with an equation, with movies, or with history. What I want to do is begin with what I'm going to call complexity, because fluid dynamics is especially rich and striking in its complexity and multiscale nature. I'm going to start by giving you an idea of what I mean by complexity in the equations and where it comes from, and then I'm going to show you movies and pictures of this complexity playing out in real life. These are the Navier-Stokes equations, where u is the velocity field of a fluid in space and time (it is a function of space and time) and p is the pressure. Essentially, the Navier-Stokes equations are derived through mass and momentum conservation, so all of that vector calculus you learned and thought you might never use again, you use to derive these equations: mass conservation for an incompressible flow, and momentum conservation for that flow. Essentially all of the complexity, in my mind, enters through this Reynolds number parameter, the 1 over the Reynolds number. Roughly speaking, the Reynolds number goes up when the flow velocity goes up or when the size scale of the fluid you are considering goes up. If I have a slow fluid over a small object, that's a low Reynolds number; if I have a fast flow over a big object like a mountain, that's a high Reynolds number. It's what tells you how much viscosity and diffusion you have in this problem.
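For reference, a standard nondimensional form of the incompressible Navier-Stokes equations, consistent with the 1/Re viscous term described here, is:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\nabla p + \frac{1}{Re}\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0,
\qquad
Re = \frac{U L}{\nu},
```

where U and L are characteristic velocity and length scales and ν is the kinematic viscosity, so a faster flow or a bigger object both raise Re, exactly as described: slow flow over a small object gives low Re, fast flow over a mountain gives high Re.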
This is a diagram taken from the Feynman lectures that shows the kinds of behaviors you can get in fluid flow past a circular cylinder as you increase this Reynolds number, this complexity parameter. You go from a really simple steady solution, to a steady solution with some separation, so there's some interesting separated, recirculating flow here. At a critical Reynolds number the flow goes through a bifurcation (it turns out to be a Hopf bifurcation) and you start getting periodic vortex shedding; this is the Kármán vortex street. As you increase the Reynolds number further you get turbulence and all kinds of interesting phenomena, and of course the flow becomes three-dimensional at some point in between. It's pretty interesting how this affects the drag. This is the drag coefficient as you increase the Reynolds number, and you see that there's a dramatic change in the drag coefficient across this wide range of Reynolds numbers. So if you drop a ball from the Empire State Building, it's going to go through a lot of different drag regimes on its way to the ground. Back to our discussion about Galileo: if you only collected data from spherical balls dropping, it's actually not that obvious that there is a gravitational constant unless you do your experiments very carefully, because there's really interesting, rich physics going on here, corresponding to this multiscale complexity due to the Reynolds number. This is a picture I took from a recent review paper that Kunihiko Taira wrote, and I really like it. This is the fluid flow past a cylinder at Reynolds number 100, approximately the same size scale as the flow over a fruit fly's wing; that's how small and slow this flow is. And this is the flow of clouds over Rishiri Island in Japan, and you can see the striking resemblance between the Kármán vortex street in this low Reynolds number, low
complexity flow and this extremely high Reynolds number atmospheric flow and so this is really interesting that often times even you change this complexity parameter and you get extrude so there is a lot of complexity happening here if you zoomed in these clouds would be doing very interesting dynamics but there's this large-scale emergent behavior these dominant patterns exist even in very very complex flow fields so I remember one of my fluids professors Lex Smits telling us that one of the ways one of the things that characterizes complexity influences is in turbulence is this presence of multiple scales in space and time okay and so he actually said if you watch a movie a low-budget movie you can tell how low-budget it is based on how big their explosions are and how multi scale the the different explosions are so if you take something like a volcano and you zoom into a patch you can see that there is actually a lot of scale wispy you know Multi multi scale structure here there's maybe a better image I'm going to pick one and blow it up and you can picture that if this camera got closer and zoomed in there would be even more roiling structure in these clouds so multi scale is what happens when you increase that Reynolds number you might have these dominant coherent structures this dominant vortex street but on top of that you're gonna get this massively multi scale in space and in time behavior okay so I'm going to show some videos now you see a lot of interesting multi scale behavior in geophysical flows so this is the high calm dataset you can download this jerod downloaded this and made this movie and you can see this is just for the Gulf of Mexico you can download this data for the entire globe okay I can't remember exactly what resolution in time this is I'm gonna guess an hour daily okay so so I mean this is even just daily daily measurements I believe this is satellite data yeah that's that's my understanding and this is so Jarrod knows actually all 
about these data sets, so he would be a great resource, and he can get you access to these. This is a simulation by a European group, Ocean Next; I believe this is hourly data, and what they're plotting here is just meters per second, so this is flow intensity or current strength. You can see that, again, this is an extremely high dimensional, multiscale, rich flow, but there are some pretty dominant patterns that emerge, that you see; yeah, exactly, something crazy is happening up here. Okay, one of my favorite ones; this one just blows me away every time I watch it. This is a turbulent Rayleigh-Bénard convection flow, and in fact this is the basic flow that gave rise to the Lorenz model: that very simple three-state Lorenz model that we all use was derived from a simpler version of this Rayleigh-Bénard convection. What you have on the bottom is a hot plate and on the top is a cold plate, and these thermal instabilities are driving these rich convective structures. These mushroom-head-looking things are going to grow from both sides, and eventually this is going to become completely mixed flow. And remember, when I started the movie there was this very, very fine structure, and that's giving rise to these agglomerating large-scale structures. Now, it's kind of interesting: I'm showing a 2D simulation, but in general turbulence is characterized by three-dimensional flows, and in fact some people would say you can't have turbulence in 2D. In reality it's fundamentally three-dimensional; things are stretching in three dimensions, and if you have a big vortex that is stretching in three dimensions, it's becoming more intense, just like a figure skater doing a spin. But you do see some interesting things in this kind of two-dimensional turbulence, and one of the things you'll notice is that the very, very small structures are grouping up into
these larger and larger structures. So the behavior of turbulence in 3D is that big vortices break down into little vortices, and littler and littler, ad infinitum, while in two-dimensional turbulence like this, little vortices add up and build up into bigger and bigger structures; they're completely different phenomena. In fact, a hurricane on Earth is essentially an approximation of a two-dimensional turbulent system: you have a relatively thin fluid layer, so it looks approximately two-dimensional, and instead of the normal three-dimensional breakdown of vortices you actually have this adding up and conglomeration of vortices that you see in 2D turbulence. Very different phenomena, but beautiful to watch. Here's a video by the group at KTH in Stockholm (Schlatter, Chevalier, and Henningson), and what they show is roughly the state of the art of about five years ago in turbulence simulations of a boundary layer. Imagine this is what a large 18-wheeler cargo truck might be experiencing on its roof, or your wing as you fly through the air. There are all of these very, very small structures right on that surface, and you can see the onset of instability here amplifying into what are known as hairpin vortices, and those hairpin vortices amplify and develop into larger and larger scale structures as you go through this developing boundary layer. The range of scales and complexity here is truly staggering, and this is approximately the state of the art in simulations. Of course, what we would be interested in from a simulation like this would be things like drag (what is the drag of the fluid on the wing or the truck) or heat transfer: maybe this is a hot wing and I want to know how fast this flow is transferring heat from the hot wing out into the cold atmosphere, things like that. Okay.
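As an aside on the Rayleigh-Bénard discussion above: the three-state Lorenz model mentioned there can be simulated in a few lines. This is a minimal sketch using Lorenz's classic parameter values and a fixed-step RK4 integrator; the step size and initial condition are illustrative choices, not from the talk:

```python
import numpy as np

# The Lorenz system: a three-state truncation of Rayleigh-Benard
# convection, with the classic parameters sigma, rho, beta.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

# Fixed-step fourth-order Runge-Kutta integration of one trajectory.
def integrate(state, dt=0.01, n_steps=5000):
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = state
    return traj

traj = integrate(np.array([1.0, 1.0, 1.0]))
```

Two trajectories started a tiny distance apart diverge exponentially, which is the same sensitivity to initial conditions that comes up later in the discussion of chaotic flows.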
So a quote that I think of a lot when I think of fluids; in fact, this is the very last set of paragraphs on the last page of the second volume of the Feynman lectures. I'm not going to expect you to read everything, but to summarize, he basically says that from the equations you would never know the richness that is exhibited when you simulate them and observe them in the real world. From the Navier-Stokes equations you would not necessarily predict a hurricane or any of the other interesting things that you actually see in the real world. He says something like: in our description of the equations for the Sun, you don't necessarily see its rice-grain surface structure. And one of the lines I really like: the next great awakening of human intellect may well produce a method of understanding the qualitative content of equations; not just how to simulate them and how to write them down, which is a triumph, but, if I give you a set of equations, understanding what the content of those equations could be. And this is a big problem: the Navier-Stokes equations are extremely large in the space of what they can describe. Essentially all of the flows I've talked about, and all the flows you can imagine, are described by these relatively simple equations, and we have to find some way of making sense of the physics. So generally what we do is take a data-driven approach: we collect a bunch of data through simulations and experiments, we analyze it, and we try to pull out patterns and understand how those patterns evolve together. And one of the tools that we have is to look at canonical flows. I showed you these really complex flow scenarios; maybe I care about things like flow in the heart or flow inside an internal combustion engine, these complex real-world flows, but instead what we often do is simplify to these canonical, prototypical flows, and we try to understand what their behavior is and
then generalize to situations that are more complicated. I've listed just the ones that I could think of off the top of my head; people spend their whole careers on individual aspects of these canonical flows. And there's this beautiful book by Milton Van Dyke; I highly recommend it for anyone who likes art and science. It essentially has these great pictures of fluid flows; it's a great coffee table book. I'll let you pick which of these flows is not from Milton Van Dyke's coffee table book. One of the canonical types of flows is wake flows, and we care a lot about wake flows. This is my simulation of fluid flow past that Reynolds number 100 cylinder down below; this is smoke flow visualization of a spinning baseball; this is a bullet and some kind of rocket projectile in supersonic flow. So you have this wide range of phenomena. Basically, wake flows deal with free bodies moving through a fluid, so think of a bullet, a baseball, a truck, a car. Very relevant, because a huge amount of the energy we spend in transportation goes into moving objects shaped like this through fluid flows: trucks, trains, cars, planes, shipping containers. All of our transportation is essentially based on these wake flows, and if somehow we understood how to decrease these wakes or shape them into better streamlined shapes, either through passive or active control and optimization, you would save a tremendous amount globally. Another really interesting set of flows is fluid mixing layers. These are, first of all, very cool to look at, but you also see them a lot when you have two fluids that slip against each other. You see these in the atmosphere a lot when you have different fronts hitting each other and you get these roll-up phenomena; very important for any kind of two-phase flow, or if you have a fast flow and a slow flow. And another canonical
class of flows is jets. Jets are really interesting. This one is supersonic, and you can see the shock wave pattern, and then it dissipates into turbulence. This one I really like: I believe it must be a schlieren image, where you hold a knife edge up and you bend the light around that knife's edge, and you can see small fluctuations in the density of the fluid, so you can visualize compressible flows. What you're seeing here are acoustic waves being generated by this jet. Jets are extremely noisy, and for anyone who's ever lived near an airport: reducing the noise, or designing these jets so that you minimize the far-field acoustic radiation, is a major modern challenge in fluid mechanics, because a tremendous number of people in the world live in the noise pollution zone of a major airport. So: flows are complex, multiscale in space and time, and we have canonical flows that we get inspiration from and that we study to understand basic things, like how drag is generated, where heat transfer happens, and how you improve or decrease heat transfer. And one of the things that's happening increasingly in fluids is that there are these massive open datasets and challenge datasets for people to work with, which I hope is of a ton of interest for people in this room. This is the Johns Hopkins Turbulence Database, and they've been maintaining it for a long time. It's a very well maintained, professional database with lots of different canonical fluid flows of different complexities and sizes, exhibiting different phenomena; I think they have eight listed on their main page. I'm pretty sure Stanford has some turbulence databases as well. And this is one of the videos from this database, again looking at this multiscale structure that you have in vorticity. This is a big simulation of some vortical flow; things are swirling around and showing these flows to
you, and what they're going to do is zoom into one big bundle. This is just one filament of vorticity, but inside there are many, many more interacting filaments of vorticity that are themselves doing all kinds of interesting fluid things. I think these are probably isosurfaces of something called the Q-criterion, which is a measure of how much the flow is swirling, of the vorticity; that's my guess, because that's often what people use to visualize these flows. They say it probably is the Q-criterion; that's a standard Eulerian measure of vorticity. Yeah, so that's a great question: how sensitive are the results to how the flow was initialized? There is a lot of very careful work that goes into how you set up these simulations, how you validate them, how you initialize the inflow conditions, and how you start the perturbations to get the flows triggered. There are lots of things about the flow where we only care about the statistical properties. Drag, for instance, is very much averaged over lots and lots of those little structures moving very quickly, so in some sense, in that boundary layer video, the drag is probably pretty insensitive to how you initialize. Other things, like the hurricane, are going to be very sensitive. So individual flow structures are highly sensitive to initial conditions; these flows are very chaotic, and they often have positive Lyapunov exponents, meaning they are stretching in tons of different unstable directions. For an individual simulation, if I start epsilon away, those two simulations will diverge, but they will often capture the same statistical properties. So the answer is yes to both. I think the most common trick people use is just to run one simulation for a really, really long time, and hope and assume that your system is ergodic, so that you will essentially get close to every flow state statistically in time.
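Since the Q-criterion came up above: here is a minimal sketch of how it can be computed on a gridded 2D velocity field. The grid, the test vortex, and the function name are illustrative assumptions, not from the talk:

```python
import numpy as np

# Q-criterion sketch: Q = 0.5 * (||Omega||^2 - ||S||^2), where S and
# Omega are the symmetric (strain) and antisymmetric (rotation) parts
# of the velocity gradient. Positive Q marks rotation-dominated regions.
def q_criterion_2d(u, v, dx, dy):
    dudy, dudx = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    dvdy, dvdx = np.gradient(v, dy, dx)
    s11, s22 = dudx, dvdy                 # strain-rate components
    s12 = 0.5 * (dudy + dvdx)
    w12 = 0.5 * (dudy - dvdx)             # rotation component
    norm_S2 = s11**2 + s22**2 + 2 * s12**2
    norm_W2 = 2 * w12**2
    return 0.5 * (norm_W2 - norm_S2)

# A solid-body vortex (u = -y, v = x) is pure rotation, so Q > 0 everywhere.
x = np.linspace(-1, 1, 64)
y = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, y)
Q = q_criterion_2d(-Y, X, x[1] - x[0], y[1] - y[0])
```

For this linear test field the finite differences are exact and Q comes out to 1 everywhere; on real turbulence data, isosurfaces of positive Q are what pick out the vortex filaments in videos like the one described.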
This actually goes back to Boltzmann's ergodic hypothesis, Koopman theory, and ergodic dynamical systems, and it's been a really good approximation for these mixing systems. So in general, yes, you can just take that boundary layer and simulate it forever, and this is what we do in wind tunnels: in a wind tunnel I don't have the luxury of initializing an ensemble, but it is really easy for me to turn my wind tunnel on, measure for an hour, and average the results. You're relying on this ergodic hypothesis. Great question. Yeah, so for weather predictions they absolutely do ensemble simulations, because there they care about the instantaneous structure evolving rather than the long-time statistics, and so they get those cones of uncertainty for a hurricane; that one is absolutely ensemble-based. Let me see if I can go back to this one. In this boundary layer, for example, if I initialize it one way or another, you're absolutely right: the instantaneous realization of all of these flow structures is highly dependent on how I initialize and how I simulate it, but of course the long-time statistics have very little to do with that. In the weather prediction case, I'm not an expert, and I would assume they just try to measure as much as possible, get as close as possible, then run ensembles and somehow bound that uncertainty, but I don't know. You're right, it's a super chaotic system, and small initial changes will add up to big changes later. I'm not sure that answers your question, but I don't think I have a better one. It's a very good question, and in principle I don't think there is an easy solution to that one; a distribution would make a lot of sense. Okay. So the discussion of ergodicity
and mixing in dynamical systems is perfect for talking about another really important aspect of fluids, which is that they mix. Here what I'm showing is a flat metal plate heaving up and down in a viscous fluid, and what we're seeing in orange are regions where the flow is separating a lot in backward time: if you take two particles on either side of these orange curves and integrate them backward, they would have come from very different places. So this is a way that we measure mixing and highlight regions of the flow that are highly mixed or not very mixed. If you've ever seen a taffy pulling machine (how many of you have seen taffy pulling?), taffy pulling machines are great examples of mixing. You can start with a big hunk of taffy where half of it is white and half of it is red, and you start beating this thing into submission, and it turns pink very quickly. So mixing is an inherent property of fluids; fluids mix. But it's also something that has a ton of engineering objectives. If you want to make aspirin and you're putting all the ingredients together in a vat, you want to mix them as efficiently as possible. If you want to burn fuel, you want to mix it as efficiently as possible. So a lot of fluids research goes into understanding the mixing properties of fluids. This is a code that was developed by Kunihiko Taira and Tim Colonius; Taira is here at UCLA, a great fluid mechanician. Essentially we're solving these incompressible Navier-Stokes equations with boundaries, so these are basically wings; they're moving up and down, pitching and plunging, and you can also use these regions of high mixing, these blue regions (these are ridges of the finite-time Lyapunov exponent, in case you're interested), to see where the flow is separated the most. There's a little bit of separation here, and there's a ton of separation here.
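The ridge computation just mentioned (seed particles, advect them through the unsteady velocity field, and measure how small neighborhoods stretch) can be sketched in a few lines. This is a minimal finite-time Lyapunov exponent (FTLE) sketch; the double-gyre test flow, grid sizes, and integration times are illustrative assumptions standing in for real measured velocity fields:

```python
import numpy as np

# Analytic unsteady test flow (the standard double gyre), standing in
# for a measured velocity field.
def velocity(x, y, t, A=0.1, eps=0.25, om=2 * np.pi / 10):
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle(nx=60, ny=30, t0=0.0, T=5.0, dt=0.05):
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    X, Y = x.copy(), y.copy()
    for k in range(int(T / dt)):          # forward-Euler advection
        u, v = velocity(X, Y, t0 + k * dt)
        X, Y = X + dt * u, Y + dt * v
    # flow-map gradient via finite differences of final positions
    dXdx = np.gradient(X, axis=1) / np.gradient(x, axis=1)
    dXdy = np.gradient(X, axis=0) / np.gradient(y, axis=0)
    dYdx = np.gradient(Y, axis=1) / np.gradient(x, axis=1)
    dYdy = np.gradient(Y, axis=0) / np.gradient(y, axis=0)
    # largest eigenvalue of the Cauchy-Green tensor at each grid point
    c11 = dXdx**2 + dYdx**2
    c12 = dXdx * dXdy + dYdx * dYdy
    c22 = dXdy**2 + dYdy**2
    lam = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22)**2 + c12**2)
    return np.log(np.sqrt(lam)) / abs(T)

field = ftle()
```

Ridges of `field` (its local maxima) are the separatrices discussed in the surrounding text; integrating backward in time instead (negative T) highlights the attracting structures shown in red in those movies.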
Separation is really bad for things like lift and drag: you get a lot of drag, and your lift might crash, depending on the separation profile of these airfoils. Engineers care a lot about these kinds of pictures because they help them design good maneuvers and good wings for high lift and low drag, and also to understand how biology works. For those movies I've been showing of these ridges, essentially what you do is drop a bunch of particles and integrate those particles either forward or backward in time along these unsteady velocity fields, and you literally just compare little neighborhoods of particles and see whether they stretched a lot or not. What I'm color coding here, these red areas, are where the particles stretched a lot in backward time, which means that things are essentially coming together on these red invariant manifolds in forward time. Essentially, these red curves are separatrices between the outer flow and the inner flow: things that start inside of here stay inside, and things that start outside stay outside. They're essentially the time-varying analogues of stable and unstable manifolds from dynamical systems. They are very expensive to compute, and just in the last 10 to 15 years people have come up with lots of efficient ways of computing these finite-time Lyapunov exponents, but they're very useful for characterizing sensitivities in your flows and in dynamical systems in general; people are starting to apply them to systems in neuroscience and other fields. Good. One of my favorite examples of this is by Shadden, Dabiri, and Marsden when they were at Caltech in 2006. What we're watching here is a video of a jellyfish eating, or moving, or both, and what they did was collect velocity field measurements using a very clever apparatus behind this jellyfish, and they've computed
those special separatrices I was telling you about, these Lagrangian coherent structures, which are outlined in red, and then they color-coded the interior and exterior regions as this periodic locomotion takes place. What they find is that through each period, the blue particles stay blue and stay on one side, and the green particles stay green, but you can see the green particles eventually get entrained into the inner jellyfish (the bell, I think it's called) through this periodic motion. This study was really interesting because it showed how these jellyfish simultaneously eat as they locomote (is that a word?), as they propel themselves through the water periodically. Essentially, just counting particles and where they go through the velocity field is still one of our go-to methods in analyzing fluids. And imagine if the velocity fields I showed you were very high dimensional and complex: this adds a whole other level of complexity, because now I have to track particles through those velocity fields, so it's a lot more data and a lot more computation. In a piece of follow-up work that I really like, John Dabiri collaborated with Kit Parker to build a kind of robot version of this. This is, I believe, rat or mouse heart tissue with little pacemaker cells put on these eight pieces of the octagon, and they put this into basically a salt solution and start hitting it with pacing, and you can see that they have actually built a little robot jellyfish, which I find fantastically cool. And here's the video of the actual jellyfish. Just a wonderful train of research, from conception and understanding how these things work to actually building it. I love this work. Unfortunately, a lot of what we know about fluid mixing is due to pollution. When I was learning about this, a lot of what had come out about these finite-time Lyapunov exponents and
coherent structures was studied in the context of where to dump pollution in Monterey Bay, which is shocking if any of you have been to Monterey Bay: it's absolutely beautiful, and they shouldn't be dumping any pollution anywhere. But there are of course certain areas that are much worse for dumping pollution, because it just recirculates and recirculates forever, and if you wanted to dump it in Monterey Bay so that it leaves Monterey Bay, there are certain places that would be better than others. It's terrible. And of course you can apply these methods to flows on a more global scale now, and with increasing computational power these are becoming relatively tractable, reasonable computations that you can do to understand where lots of mixing happens, where it doesn't, and where instabilities will occur. Any questions before I go on? So I talked about complexity, canonical flows, and mixing; I'm basically just showing you a bunch of evidence that fluids are cool and complicated and there's a lot of data there. Yeah; oh, it went right here, in this dark spot. So there is a lot of very interesting work on using these kinds of forecasting techniques, almost exactly what I'm showing here but for shorter times, for things like search and rescue off of coastlines. Search and rescue is a huge one, and also contaminant release: if someone releases some unwanted contaminant in a city, you'd want to know how it's going to spread in intensity through that city. George Haller, who is one of the pioneers of these Lagrangian methods (he's at ETH in Zurich), has been doing a lot of good work on understanding these search-and-rescue strategies based on short-time forecasting predictions, and I would say that yes, you now have a much better chance of finding someone than you would have without these methods. Other questions? Sensitive to initial conditions: that's a good question. The things that we measure, we are measuring quite a lot, but I don't know if
it's enough. But I know that these techniques are being used and explored, and they do give you windows of useful prediction, which can make the difference. Okay, good. So I'm going to switch gears a little and talk about the increasing role of machine learning and data science in fluid mechanics, because there are a lot of interesting applications; this is kind of a buffet line of things that I find interesting. In general, this is a diagram of how all these pieces work together. My first diagram didn't have this little middle gear, and the gears wouldn't turn, and it drove me crazy; you need fluids in the middle for this to turn. The basic idea is that, like almost all other fields, fluids historically has had a theoretical component and an experimental component, and in the last 50 years a massive simulation component has also come online. At least these two generate a tremendous amount of flow data, which has given us a lot of insight and actually helped develop a lot of these tools in machine learning and pattern extraction. The goals that are particularly interesting, as far as I'm concerned, are: modeling things about the fluid flow, which could be statistical properties of the ensemble evolution of fluid flows, or things like how the coherent structures in the flow (the things that carry a lot of energy and are important for my designs) evolve; optimizing, say, if I want to pick a wing shape, how would you do that based on all of the data you have on wing shapes you've already tested in the wind tunnel; reducing the dimensionality of these billion- and trillion-degree-of-freedom simulations down to fewer numbers that you have to think about; and ultimately we might want to control the fluid for some engineering objective. I would posit that in almost every science fiction
version of the future you can think of, flow control will be at least part of those enabling technologies; we can talk about that later. So I recently wrote a review paper on this with Bernd Noack and Petros Koumoutsakos. I'm just flashing this up here in case you're interested; there's a ton of stuff I'm going to gloss over about how you use machine learning for these different tasks, but a lot more of it is fleshed out here, with a lot more references that could be relevant. One of the pieces of history that I think is really fascinating, and I learned about this from Bernd and from Petros, who know this history better than I do, is that a lot of the history of evolutionary algorithms, nonlinear optimization, genetic programming, things like that, was actually pioneered in fluid mechanics at TU Berlin in the 60s and 70s by Rechenberg and Schwefel. What they did, and it's a little hard to see here, is they had an experimental apparatus in a wind tunnel consisting of five plates, and what they wanted to do was move the angles of those plates to make the drag as small or as large as possible, or make the lift as large or as small as possible. They could basically create a shape, and they wanted it to have as small or as large a drag as possible; they wanted to change its properties. And what they did, without exactly knowing the formalism, was develop kind of a stochastic gradient algorithm. They built this thing called a Galtonbrett, a Galton board; it's like Plinko from, I forget which TV show, which one, The Price Is Right. You drop a ball and you get a Gaussian distribution of probabilities at the bottom, something like that. And they used this random Gaussian sampling method to perturb these angles, doing some mixture of gradient descent and stochastic perturbation, and they actually got very nice solutions for their five-
plate model. Now, they were not appreciated as geniuses at the time; they were almost run out of the institute. The head of the institute basically said this is not fluid mechanics, and they were fortunate to get shelter from another senior person who protected them long enough that they could get their footing. Another interesting piece of history: how many of you have heard of Sir James Lighthill? Great. Maybe James isn't his first name, Sir Lighthill. He's a great fluid mechanician; anyone who knows fluids in the audience knows Lighthill is one of the great figures of fluid mechanics of the last century, an English fluid mechanician. And he was very much responsible for the AI winter in the 70s, where the UK government stopped funding AI research entirely, and this caused a huge drop in AI research. This is him essentially holding three leading researchers on trial in front of their National Academy. And the irony, and this is something I found incredibly ironic, and Petros pointed this out to me, is that we're watching this YouTube video and it's been automatically subtitled by machine learning. Okay, so it's not perfect. And to his credit, there was massive over-promising happening at the time; if you go back and look at what the AI climate was like then, for those of you who might think that AI is overhyped now, watch these trials, it's incredibly interesting. I think probably he had a point and they had a point, but it is interesting to note how one knight can kill an entire field of research for decades. So, an interesting bit of history: this huge figure in fluid mechanics hated machine learning. Okay, so now I'm going to start going through different areas that I think are interesting and ripe for advances in machine learning. Something I didn't point out: remember that gears picture, where you had simulations, theory, and experiments, and then you had all these
goals? What I would point out is that almost all of the objectives we have in fluid dynamics, and in dynamical systems in general, can be phrased as optimization problems. Designing a control law is an optimization problem; finding the best reduced-order model that fits your data with the fewest terms is an optimization problem; optimizing the shape of a wing is an optimization problem. So at least for me, as a practical engineer, I think of machine learning as a set of emerging techniques in data-driven optimization. They are better, modern optimization techniques that leverage the vast and increasing amounts of data we have access to, and I'm lumping regression in there too. Okay, so these fit very well together: a lot of what I'm going to show you are essentially optimization tasks that we perform every day, and that we all as a community can help improve with data. Okay, so I'm going to start with experimental measurements. Measuring fluids is a very rich and interesting field, and it has been revolutionized by the advent of the laser. Before the laser it was very hard to see inside of a flow; you would smoke a cigarette, put it in the wind tunnel, and you would see how the smoke evolved, and that was kind of the state of the art. I'm exaggerating, okay. Particle image velocimetry is this groundbreaking technique that was developed after the laser, where essentially what you do is you have a very brightly illuminated sheet, illuminated by a laser, and you seed the flow with approximately neutrally buoyant particles. In water they're these little neutrally buoyant glass beads; in air you make this popcorn-oil stuff that you seed into the air. And then you take high-speed camera images of these particles in the flow, illuminated by this laser sheet, and you use essentially digital image correlation to approximate the flow of particles from one frame to the next. It kind of makes sense: you're
taking these high-speed photos of particles that you've seeded into the flow. It's like when you've stood in a dark room where light is coming in through a window and you see all the dust motes moving around; it's basically that, done professionally in a controlled setting. Okay, and you take high-speed video of that, and based on how those dust motes move from one frame to the other, you can impute a velocity field and use that for all of the things you would want to do. I'm sorry, usually it's a 2D slice through a 3D volume. There are all kinds of tricks: if you have one camera, you take one picture at one angle of a 2D slice through a 3D volume; if you have two cameras, you can get a stereo picture and recover the third component of the velocity field, out of the plane. Now there are techniques called Tomo-PIV where you can sweep through very rapidly and fill a volume; there are all kinds of emerging 3D techniques now. Okay, and one of the things I think is really interesting about these PIV movies is that they're really noisy a lot of the time. If you see a movie of this, it looks like a flow field with a bunch of television static on top of it. When I say television static, my young students don't know what I'm talking about, because they've been raised where there's no such thing as television static, it's just blue, and so I don't get to connect with them anymore. So what I'm going to tell you about is some recent work I find very interesting, using robust statistics, this robust principal component analysis of Candès et al., to clean flow fields obtained experimentally. Okay, so how many of you have heard of RPCA? Okay. This idea of robust statistics is basically doing all the things you normally do in statistics, like regression and PCA and fitting, but you add some regularizing term to kill outliers, so that you don't favor outliers as much as you normally do in a least-squares regression. So if you have a big library of faces, in this case I have, I think, 36 people, and for
each of those 36 people I have 64 images in different lighting conditions, I have a big eigenface library; this is from the Yale Faces database. Then what you can do, essentially using this robust principal component analysis, is take an image with a big, bad occlusion, where some part of it is blocked, and essentially infer what that image would have been like under the mustache, and subtract the mustache off. That's what this robust principal component analysis does, and of course you couldn't do this if all I gave you was this one image; you need all of this data to know what is statistically likely. So based on the things I can see, which are eyes and nose and things like that, you can infer what would probably be under the mustache using this robust principal component analysis, and it's literally just PCA with an extra regularization term. Okay, so again, this isn't exactly machine learning, but I think it's super important for all of us to know that what we do in machine learning is, again, optimization and regression, and regularization is how we bake in our prior knowledge. Okay, so if we want to do physics-informed machine learning, where we have prior knowledge in the form of physics, regularized and robust statistics is a great place to look for methods that already do this. No, I think it is machine learning; I absolutely think it's machine learning. I'm sorry, sometimes people look at me funny when I say this is machine learning, but I think it certainly is; regularized optimization is the bread and butter of machine learning. So a PhD student in my lab, Isabel Scherl, generated this example. This is a highly corrupted movie of fluid flow past a cylinder, and this is the training data; all she sees is this movie. But using this simple regularized principal component analysis, with this l1 penalty that's supposed to de-emphasize large outliers, you're able to separate it into the
low-rank component, the structure-containing component, and all of the crap. And it's remarkable; I'm going to play it again because I just love this movie. Pretty striking. Now, this is simulated data and we added a bunch of corruption to it, so that was cheating. But then we got data from our collaborator Jess Shang, and, let's hope this movie plays, this is the actual experimental PIV measurement that came out of the water channel when she was a PhD student, and you can see what I'm saying about the TV static: there's clearly structure here, but there's all of this TV static on top of it. With these robust techniques you can, in lots of cases, pull out the structure and remove a lot of the noise and corruption. Sorry, lambda is the strength of my regularizing term. Lambda infinity means I don't do anything, I don't move anything into these outliers; lambda zero means everything is an outlier. So I'm trying to find a sweet spot, this kind of Goldilocks lambda, where I keep as much structure as possible. This might be the opposite of what it says in the equation; infinite lambda, yeah, this is 1 over lambda, sorry, absolutely, 1 over lambda. But you get the idea: if you take your favorite optimization techniques and you add regularization, you can handle these outliers, which is a major problem in experimental measurement techniques. Okay, another area that I think is extremely ripe for transferring knowledge from machine learning into fluid mechanics is super-resolution. This is just a beautiful image; I don't know if that's real or not, I doubt it is. So I'm going to tell you two stories of super-resolution, one by Sam Taira, who is faculty here at UCLA, and one by Ben Erichson, who's in the audience. This is the basic pipeline of super-resolution for fluid mechanics: you have some input which is massively downsampled, and maybe you have a movie of this, and what you want to do is infer what the high-
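To make the robust PCA idea concrete, here is a minimal sketch of principal component pursuit, in the spirit of the Candès et al. formulation mentioned above, solved with a simple ADMM-style iteration. The toy data (a low-rank matrix plus sparse "salt-and-pepper" outliers, standing in for a clean flow movie plus PIV static) and all variable names are my own illustration, not the exact code from the talk.

```python
import numpy as np

def shrink(X, tau):
    # soft thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(X, lam=None, n_iter=1000, tol=1e-7):
    """Split X into L (low-rank) + S (sparse outliers) by solving
    min ||L||_* + lam * ||S||_1  subject to  L + S = X."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # standard default
    mu = m * n / (4.0 * np.abs(X).sum())                        # penalty parameter
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)
        if np.linalg.norm(X - L - S) < tol * np.linalg.norm(X):
            break
    return L, S

# tiny demo: a rank-3 matrix plus 5% large outliers
rng = np.random.default_rng(0)
L_true = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
S_true = np.zeros_like(L_true)
mask = rng.random(L_true.shape) < 0.05
S_true[mask] = 10.0 * rng.standard_normal(mask.sum())
L_hat, S_hat = rpca(L_true + S_true)
```

Making `lam` larger pushes less of the data into the sparse term; tuning it is exactly the "Goldilocks lambda" trade-off described in the talk.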
resolution version of that flow field would have been, the one most consistent with what my neighbors are and what I know about the flow field. And of course you'd have to have some training data where you'd see things like this, and I think this must be real data, because it doesn't look perfect, but it is approximately like the actual clean reference. Everyone get the basic idea? This makes sense because, remember, I said these fluid flows are super multi-scale and super high-dimensional, and oftentimes I only have access to low-dimensional representations. In climate simulations and in satellite measurements, I don't get to measure at every scale I want to, or simulate at every scale I want to, so sometimes you have movies that look like this and you want to look under the hood and see what the structures inside the boxes are. Yeah, that's a great question: you absolutely could exploit coherence in time to do a better job. I don't know for a fact whether that's what they're doing here or not, but there's absolutely information there that you could use for such super-resolution. So in principle, a really rough sketch of how this super-resolution would work is that if you know you're looking at something like human eyes, and I see these nine pixels together, I'm probably looking at that region of a cornea or whatever; it's based on neighbors and libraries and things like that. And you don't just have to look at your nearest neighbors in space; you could also look at your nearest neighbors in time, and if I see a structure doing this in time, that gives me additional information. Again, I don't know if that's what they're doing here; maybe it's on this next slide. So they use a particular architecture to do this, and they get really interesting results. I would point you to this paper if you're interested in super-resolution for physics applications, because they do a very
careful job of looking at how the input/output performance varies with things like downsampling resolution, filter sizes, network sizes, and noise amount; it's really a very carefully done paper. The work I'm more familiar with is Ben Erichson's work, and you can ask him if you have questions about this. He has built a shallow decoder network to take either a few measurements or coarse-resolution measurements and infer, from training data, what the high-resolution flow field would be. This is the picture here: this is his low-resolution data, and through the shallow decoder you recover the high-resolution data, which is close to the truth on the left. Okay, now this is pretty impressive performance, but I think there's also a really important caveat that we learned throughout this process. To some extent, if it seems too good to be true, it is, a little bit, and there's a really important distinction between whether your test image is an interpolation of the data you trained on or an extrapolation of the data you trained on. So I'm going to explore this a little bit more. Okay, in the ImageNet dataset, the data was complete enough that most of the images you would see afterward can be composites of the ImageNet data; it's an interpolation task. But in physical systems like fluid flows that are evolving in time, what happens in the future might not be that similar to what happened in the past, unless I collect an astronomical amount of data, because these systems are so complex. Okay, and so what we're showing here, this very good super-resolution reconstruction, is what happens if your training data were these dark grey areas and your testing data are in between; this is the time evolution of the flow, and that guarantees that the test images are close to things you've seen in the past, in the training. Okay, but that's not necessarily the realistic scenario when
you're dealing with climate simulations. I have really great data for, let's say, the last few decades, but I actually don't know exactly what I'm going to see twenty years from now. That's more like this picture here, where you train on the past and you're trying to predict something as you go forward into the future. Okay, and so we learned a lot about the fundamental limitations of using machine learning on systems that are evolving in time, because now, when you do that exact same reconstruction as you go farther and farther away from the interpolation regime, the regime where you collected the data, here it's not too bad, but it gets worse and worse the farther you go from where you collected your data. Did I make that point clearly enough? It's a huge caution, and it tells you, again, that these things are fantastic for interpolation tasks if you have collected enough data. In this case there probably are databases large enough that we can train and be confident we've seen everything, but in the climate we can be quite sure that our past data is not sufficiently large that we've seen everything. And this is something that Jared Callaham has been looking at. Yeah, okay, sorry, maybe I didn't say that right: the climate is much more like this picture. We as a community in fluids are starting to create databases that are large enough that for some of those canonical flows I showed you, we'll have enough data that we can actually interpolate anything we'll see in the future in terms of things we've seen in the past. So there are two regimes: this one's harder, and it's the more realistic one for really hard flows, but there are probably databases now that allow us to do interpolation for simple flows. It's not obvious. So I would say this is a case where the second problem is actually the problem you want solved, and you have a model which isn't quite able to do it for very long.
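The interpolation-versus-extrapolation distinction above can be illustrated with a toy experiment (my own illustration, not data from the talk): a convecting Gaussian bump stands in for a translating flow structure, a POD basis is built by SVD of the "past" snapshots, and future snapshots are projected onto that basis. Inside the training window the subspace reconstructs snapshots almost perfectly; far outside it, most of the energy is missed.

```python
import numpy as np

# Toy "flow": a Gaussian bump convecting to the right, sampled in time.
x = np.linspace(0, 10, 200)
t = np.linspace(0, 8, 160)
snapshots = np.stack([np.exp(-(x - 1 - 0.8 * tk) ** 2) for tk in t], axis=1)

train = snapshots[:, :80]                # the "past" half of the data
U, s, _ = np.linalg.svd(train, full_matrices=False)
Ur = U[:, :10]                           # 10-mode POD basis from the past

def proj_err(u):
    # fraction of a snapshot's energy missed by the training subspace
    r = u - Ur @ (Ur.T @ u)
    return np.linalg.norm(r) / np.linalg.norm(u)

err_interp = proj_err(snapshots[:, 40])   # snapshot inside the training window
err_extrap = proj_err(snapshots[:, 150])  # snapshot well beyond it
```

The translating structure has simply moved out of the region spanned by the past data, so no linear combination of past modes can represent it; this is the linear-subspace generalization failure discussed a bit further on.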
[Audience] Right, right. It's a little bit of an important point, right? This is a very common thing we see: as you go a little bit further, the model extrapolates only a little bit and then it falls off, right? And the question is whether you can now frame the question in the right way; it's not clear what the answer is. [Brunton] Yeah, and the next example I'll show you is that linear models don't extrapolate, but nonlinear models or convolutional models may very well extrapolate, so it's a good point; I'd be interested to see exactly. So this is a set of data that Jared Callaham has been analyzing: just how much data is enough for the past to be representative of the future? That's the question we want to ask: how much data do you need, in fluids or in a complex system, for the past to be useful for predicting the future? Okay, and these flows are pretty interesting. This one is simple and purely periodic. This one is sea-surface temperature, and it's actually pretty periodic; if you do DMD on this you get a small handful of modes that describe almost all of the periodic behavior, because it's seasonally forced. This one becomes pretty difficult: it's linear here, quasi-periodic here, and full-blown chaotic as you go downstream, so it exhibits an entire range of complexity. And this one is the most complicated of these flows. Okay, and so there are a couple of notions here. One is: if I just showed you a picture of the flow here, would it uniquely determine what you would be likely to see over here? And the answer in this case is no; this region is somehow decorrelated from this region. I can see the same vortex here on Monday and Tuesday and see something completely different down here. Okay, so those are some of the things you would be thinking about, whereas up here, if I see this thing right here, I
know everything about the flow, because it has one behavior. Okay, and so what Jared did, in addition... yes, Marina? Yeah, so this one is much more unstable. This flow has a fast stream going over here and a slow stream here; it's just much more unstable and chaotic. This one is periodic, this one's chaotic, so we've gone past more bifurcations, and we have more unstable directions in this high-dimensional system than here. This one has just gone through one bifurcation, so it's periodic; this one has gone through the whole range of bifurcations and it's full-blown chaos down here, and it gets more complex as it evolves spatially. Yeah, so that's all in that one complexity parameter, the Reynolds number, that gives rise to all these unstable directions. So in addition to many other things that Jared explored in this paper, one of the things he looked at was essentially just the singular value distribution, how much energy or information is captured in the different singular vectors or principal components, and then how representative the past is in reconstructing a snapshot in the future. And this can be very useful. You see, for example, the Gulf of Mexico and the mixing layer have these really slow drop-offs, so it essentially takes a ton of principal components to describe this data, and then if you take a test vector that's not in the training set and project it onto the subspace, it only explains a small fraction of that data, so it doesn't generalize in a linear subspace. Now the hope is that with nonlinear embeddings, autoencoders, and better techniques, you can start to get more compact representations of these flows that do generalize better. Okay, so I want to talk a little bit more about this issue of the multi-scale nature of fluid flows. This is the Kolmogorov energy cascade, and a lot of you have heard of Kolmogorov in other contexts; he was a pioneer of statistical learning. Okay, so this is another one of those interesting fluids and machine learning
connections: Kolmogorov was also a pioneer of turbulence theory, dynamical systems, and statistical learning. What I'm showing you here is a really interesting plot where we're basically looking at how much energy is contained in vortical structures of different spatial wavenumbers. Low wavenumbers correspond to big vortices; high wavenumbers correspond to little vortices; everything is on log coordinates. And here I'm just looking at how much energy is in each of these scales in a typical flow, and I'm not going to tell you what flow this is; this is a cartoon. Okay, and the basic idea is that you have these different qualitative regimes where different physics and different balances govern how these vortices are interacting, forming, and being destroyed. You have a production range that's feeding a lot of energy into these relatively large coherent structures; think about that fluid flow past the mountain, where you had that Kármán vortex street, those are these really big energetic structures. Then what happens is those structures eventually start to break down and split into smaller and smaller structures, along what's called the inertial range, and there's a slope here that Kolmogorov predicted using basic scaling analysis. And then eventually, once you get small enough, the dissipation rate gets greater, and your eddies get smaller and smaller until they reach diffusion scales and actually start to diffuse with their surroundings. Okay, and this is where your fluid eventually warms up because of turbulence, and you can't reverse the process; in principle things are more or less reversible up until here. Okay, kind of interesting, and very useful from a modeling perspective. So the way I think about this, and these colors are a little horrible, is that you can't resolve all of these scales in a real flow. This is on a log scale: a modern submarine, you
might have 10 orders of magnitude of scale separation between the biggest and smallest scales; a volcano erupting, an atmospheric boundary layer, you might have ten orders of magnitude of scale separation. So you can't resolve all the scales, and you don't want to. What you actually want to do is understand these large energy-containing scales, because those are what you actually can control and what matter for your engineering objectives. So I don't care about all of these pink variables, but I very much care about these purple variables, or no, these blue variables; such a bad color scheme, sorry. And the basic idea here is that we're going to ignore everything high-frequency and try to build a model for everything low-frequency. So what you're going to do, and I'm oversimplifying massively here, is basically try to find an approximation of how these high-frequency effects depend on the low-frequency structures. This might be statistical, it might be in an average sense, and then I'm going to substitute that into my low-frequency equation to get an equation that only depends on these large energy-containing structures that I actually care about. Okay, and that is the essence of what we mean by a closure model. That's, in a nutshell, what you're trying to do at Boeing, or in a climate model, or in lots of these systems where there are so many scales you can't keep track of them all: we're trying to find an approximation for how these high-frequency fluctuations depend on the low-frequency stuff, and substitute it back in. Now, often you have to look at a time history, some time integral of these quantities, to approximate this; that would be the Mori-Zwanzig formalism, and there are many other approximations in the literature. But this is one of the biggest areas where machine learning stands to help: these are essentially arbitrarily nasty and complex functions, and we have tools that can handle that now.
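The closure idea above can be caricatured in a few lines: take a fully resolved 1-D field, filter it, compute the exact "subgrid" stress the filter discards, and then regress that stress onto a function of the resolved field. The field, the box filter, and the single Smagorinsky-like feature below are all illustrative choices of mine, not anything specific from the talk; the point is only the shape of the regression problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
# fine-scale "true" velocity field: many modes, large and small
u = sum(np.sin(m * x + rng.random()) / m for m in range(1, 40))

def box_filter(f, w=16):
    # moving-average (box) filter with periodic padding, standing in for an LES filter
    kernel = np.ones(w) / w
    padded = np.concatenate([f[-w:], f, f[:w]])
    return np.convolve(padded, kernel, mode="same")[w:-w]

ubar = box_filter(u)
tau = box_filter(u * u) - ubar * ubar        # exact subgrid stress (known here
                                             # only because we have the full field)

# feature built from the resolved field: |du/dx| * du/dx (Smagorinsky-like ansatz)
dudx = np.gradient(ubar, x)
feat = np.abs(dudx) * dudx
c = np.linalg.lstsq(feat[:, None], tau, rcond=None)[0][0]   # fit one coefficient
tau_model = c * feat                          # learned closure for the coarse equations
```

In practice the features would be invariants of the resolved strain-rate tensor and the model far richer (this is where custom architectures with built-in invariances come in), but the structure is the same: learn the unresolved term as a function of the resolved state, then substitute it back into the coarse equations.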
Okay, so this is a fantastic review paper by Duraisamy, Iaccarino, and Xiao, "Turbulence Modeling in the Age of Data", where they explore how machine learning methods are being used to solve these various closure problems in fluid mechanics. I'm going to break the types of closure models into two huge categories, oversimplifying of course: RANS models, Reynolds-averaged Navier-Stokes, and LES models, large eddy simulation. RANS models are basically doing a time average: you have a bunch of stuff fluctuating quickly, and I'm going to average over that in time to get some notion of what's happening, in an average sense, to the large-scale structures; like, what is my average drag on a truck? I'm going to average over all the fast scales; that's what RANS does. Large eddy simulation does much more of what I showed in the first picture, where it's actually chopping off high spatial modes and approximating their effect on the large energy-containing modes. One of the papers I think is most interesting in this area is this one by Ling et al. in JFM 2016, where she essentially built a custom deep learning architecture with some interesting auxiliary layers in the last layer that allow her to build Galilean invariance, which is a known property of the Reynolds stress tensor she's trying to approximate, into her neural network architecture. I don't have time to go into too much depth here; essentially, there are physical properties we know about these Reynolds stress tensors we're trying to approximate in the RANS models, and they were able to cook up a special architecture that enforces those invariances by construction. So these are somehow more physical by construction than if you just used a naive deep learning architecture. Yes, yeah, so it should in principle also be more stable and more physical. And there's actually, so I'll make a philosophical point, because I'm
not really able, I don't have time, to describe the guts of what this is doing, but I'll make a philosophical point about it. In fluid mechanics, and in lots of other fields of physics, there were these absolutely luminary thinkers before us, before the age of computation. In this case, I'm trying to think of the paper that they're basing this off of, and I'm blanking on the name, but there is a paper where all of these Reynolds stress tensors were derived beautifully from the Navier-Stokes equations, and then they get to a point where the tools at the time just weren't good enough to go any further, and they make an approximation. And all of the theory for the last 50 years has gone in that direction, based on that one assumption, because the tools were not good enough after that point in that seminal paper. So what Ling and co-workers have done is reread the literature, gone back to that weak spot where an assumption was made because the tools weren't good enough, and attacked it with better tools. And that's a huge opportunity for us. I mean, there are these classical ways of thinking, and it's amazing what people did before computations and simulations, but at some point you're going to find an approximation, and they're brilliant approximations, but they're probably a little bit wrong, and we can do better now. Okay, I have a lot more slides, but I have two more minutes, so instead of cutting into your fluid-drinking time I'm going to stop here and maybe take a couple more questions. Thank you. [Applause]
Info
Channel: Institute for Pure & Applied Mathematics (IPAM)
Views: 12,921
Rating: 4.9821029 out of 5
Keywords: ipam, math, mathematics, ucla, steve brunton, machine learning, fluid mechanics
Id: DQ7rAG_gBEA
Length: 72min 2sec (4322 seconds)
Published: Mon Dec 16 2019