Machine Learning ROMs

Video Statistics and Information

Captions
I want to finish Chapter 12 by talking about some more modern implementations of reduced order models. This is out of the book Data-Driven Science and Engineering by Brunton and Kutz; here's the website. Specifically, I want to think about the following: in the model reduction we've done so far, we've been taking some PDE, looking at a low-dimensional subspace, and building a model there. But a lot of PDEs have parametric dependencies. In other words, there are parameters that are varying, and when a parameter changes, the subspace itself has to change, so you have to keep switching reduced order models; you keep rebuilding them. So part of what I want to talk about now is using some ideas that are common in machine learning in this reduced order modeling infrastructure.

That's what we want to go after, and we want to think in the context of parametric partial differential equations: a PDE that has this nice evolution, but with a parameter in it, so the subspace that works in one regime does not work in another; each regime has its own subspace. What do you do with that? Do you just keep rebuilding models, or is there a way to use machine learning to do this more effectively?

Remember the architecture here: we have some PDE that we discretize to generate a high-dimensional ODE system with a linear part and a nonlinear part. What we should think about now is that there is some parameter dependence here, some parameter beta that everything depends upon. As beta changes, this is what's problematic: the low-dimensional subspace we built by taking the SVD of the data is different for all the different beta regimes we might have. So as I change parameter space, the subspace I've built no longer works in another regime. That is pretty common, and it also highlights the dangers of machine learning type architectures: wherever you trained a model is where it works. The minute you walk out of that space, it's not clear your model will work anymore, because you're starting to extrapolate into a new space. Interpolation works great; interpolation is so much easier than extrapolation, I can't emphasize that enough. The minute you extrapolate is the minute you're probably going to have mathematical challenges, and your models are going to break down. In fact, that's how you know when things don't work: when things break, you're probably extrapolating, and when things are working really well, you're probably interpolating. I'd almost put money down on it.

So this basis set is problematic in that I have to keep changing it. I can still do the reduction like this, and we've learned how to compute the nonlinearity efficiently in some interpolation space; that's what we've been spending most of Chapter 12 talking about, how to do this interpolation nicely. But if I have to change subspaces, then I have to keep redoing this process of learning a new basis and relearning a new sparse sampling structure for that basis. So how are we going to capitalize on that? Remember, part of our problem now is that these nonlinearities may have parametric dependencies; the dependence could also come into the linear term, but let's just say, generally, I have a parametric dependency in the nonlinearity.
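Before going further, here is a minimal sketch of the POD-Galerkin reduction described above, assuming NumPy and using random data as a stand-in for real PDE snapshots; the variable names (`Psi`, `A_tilde`, the rank `r`) are illustrative choices, not the book's actual codes.

```python
import numpy as np

# Snapshot matrix: each column is the discretized PDE state at one time.
# Random data stands in for a real simulation here, so the projection
# error printed at the end will be large; for genuinely low-rank
# snapshot data it would be small.
n, m = 1024, 200                  # state dimension, number of snapshots
X = np.random.randn(n, m)

# POD modes are the left singular vectors of the snapshot matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
r = 10                            # rank truncation
Psi = U[:, :r]                    # n x r low-rank POD basis

# Galerkin projection of a (hypothetical) linear operator onto the basis.
A = np.random.randn(n, n)         # stand-in for the discretized linear term
A_tilde = Psi.T @ A @ Psi         # reduced r x r operator

# A state u is represented by r coefficients a = Psi^T u and
# reconstructed as u ~ Psi a.
u = X[:, 0]
a = Psi.T @ u
print(np.linalg.norm(u - Psi @ a) / np.linalg.norm(u))
```

The parametric problem is exactly that `Psi`, and hence `A_tilde`, must be recomputed whenever the parameter moves to a new regime.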
So what I want to start talking about is a concept that really comes out of machine learning, which is: let's build libraries of modes. Here's the construct. Suppose that PDE, let me go back to it, has some variability; suppose it depends on some parameter mu. Mu could be a bifurcation parameter, though it doesn't even have to produce a bifurcation, just a parametrization of the problem where, as mu changes, your subspace changes. What we want to do is capitalize on that. Suppose I take some snapshots for some given mu and I find a low-dimensional embedding, so now I have some POD modes. What I want to do with those POD modes, remember, it's a small number of columns giving a rank-r approximation of the data, is put them in a library; I keep that subspace in a library. If I run a new simulation with a different mu and find a different set of modes, a subspace that seems to represent that parameter regime, I take that subspace and put it in my library too. And when I go to yet another regime, there's a different subspace representing its dynamics; take that subspace, put it in my library. So the library is going to contain collections of subspaces.

A couple of things to note. Within each subspace I pull out, the modes are orthonormal: they're orthogonal to each other and of unit length. But when I pull out another set of modes, the two sets of library modes are not orthogonal to each other; they're orthonormal within their group, not across groups. Just keep that in mind. So I'm going to build up a collection. It could be that you have the ability to initially do some high-end simulations in the different parameter regimes you know matter, and for each one of those regimes you pull out the dominant POD modes, a rank truncation that works in each one, and put them all into a library.

Normally what happens with parametric variability is that the parameter might be wandering around in time: it starts in this domain, wanders to that domain, and later on wanders to yet another domain. So part of what you could imagine doing, if you have this library structure, is to say: while I'm here, I use the modes I learned here; when I get over there, I use those modes; and so on. The idea is to recycle what you've done. If you've already learned these modes, and you've already learned how to interpolate the nonlinearity, don't redo that work; you already have it in a library, so learn how to use it.

This is using the idea of sparse sampling again. What we want to do is not only learn how to reuse those modes; remember, we've been spending a lot of time talking about interpolation from a small number of points. One of the questions you could ask is: could I pick a small number of points that not only allow me to reconstruct, but also work across these different regimes, and maybe even allow me to take those few measurements and classify which regime I'm in in the first place? In other words, if I take measurements, could I tell which library block I should be using, and then do my sparse approximation in that library? So we're going to start changing this idea of the interpolation points.
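A hedged sketch of the library-building step just described: each parameter value gets its own rank-r POD basis, and the bases are stacked side by side. The parameter values and the random stand-in snapshots are hypothetical.

```python
import numpy as np

def pod_modes(X, r):
    """Rank-r POD basis (left singular vectors) of a snapshot matrix."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :r]

n, m, r = 1024, 200, 10
mus = [0.1, 1.0, 10.0]            # hypothetical parameter values mu

blocks = []
for mu in mus:
    X_mu = np.random.randn(n, m)  # stand-in for snapshots simulated at this mu
    blocks.append(pod_modes(X_mu, r))

# Library: blocks concatenated column-wise. Each block is orthonormal
# internally, but columns from different blocks are generally not
# orthogonal to one another, exactly as noted above.
Psi_L = np.hstack(blocks)         # n x (r * len(mus))
print(Psi_L.shape)
```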
For DEIM, QDEIM, or gappy POD methods in general, we're not going to say the points are only being used for interpolation. These measurement points, or interpolation points, are also going to be used for a classification task, to tell me which library elements I should be using in the first place, because I have a library with a bunch of different possibilities; which ones do I go after?

So again, accurate reconstruction is what we want, and here's how we'll think about reconstruction from measurements now. Notice how we're going to change the approximation. Before, when we did this approximation, we only had one set of modes; now I have a whole library. In other words, my state u is going to be approximated by a projection onto my library modes: there's some projection giving u-tilde. This is an interesting architecture, because now it's not just projecting onto one set of modes, it's projecting onto a library. And by the way, I don't want to use all of the library; I only want to use the grouping I need. So the idea is to use sparsity-promoting techniques, whereas before we just did a simple l2 least-squares fit. Now I want to start using some of the sparse optimization tools available to us, something like an l1 norm, to sparsely select the smallest number of modes I need to approximate my solution. Remember, I have a lot of modes in that library, and I want to pull out the collection that matters; I want all the other coefficients to be zero. So once I build the library, I want to classify which of the modes matter and then reconstruct in those modes; that's another way to say it. And I'm going to use l1 optimization to do this classification.

So how do we do it? Here's how you might do it. I have the solution, I have some data, and what I want to do is a regression that promotes sparsity. For the data you have and the projection matrix P (we'll talk about this P matrix a little more), the l1 optimization promotes sparsity: it tries to make most of the coefficients zero, and it says, for the data you have, in the regime you're in, here are the nonzero elements. It will highlight some library modes; in fact, usually it highlights a group of modes, which says these modes matter and everybody else is zero. So pull those out and do the reconstruction in those modes. I keep this library, which is overcomplete, there are a lot of modes in there, and the sparsity promotion tells me which of them best fit the data. If you're in a specific regime, you already know that regime's modes sit in the library, so when you do the sparse regression it should say: these are the smallest number of modes I can use to do a great job; everybody else goes to zero. And that's exactly what this shows you. It does a classification task by identifying which grouping of modes it should be using, and then reconstructs in those modes. This is a great way to use the library, because it now allows you to handle parametric PDEs: solve across all of these regimes offline, and then this procedure will figure out which regime I'm in, pull the modes from there, do the reconstruction, and keep all the modes jointly together in the library.
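Here is one way the l1-based selection could look, sketched with scikit-learn's Lasso as the sparsity-promoting regressor; this is an illustrative stand-in, not the exact optimization used in the papers, and the library, sensor locations, and regularization weight are all synthetic choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, r, nblocks = 1024, 10, 3

# Stand-in library: three orthonormal blocks, one per parameter regime.
blocks = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(nblocks)]
Psi_L = np.hstack(blocks)                  # n x (r * nblocks)

# Ground truth: the state lives entirely in block 1 ("current regime").
a_true = np.zeros(r * nblocks)
a_true[r:2 * r] = rng.standard_normal(r)
u = Psi_L @ a_true

# Sparse point measurements: p << n random sensor rows play the role of P.
p = 20
sensors = rng.choice(n, size=p, replace=False)
y = u[sensors]                             # the measured data, P u

# l1-regularized regression: sparse coefficients a with (P Psi_L) a ~ y.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
lasso.fit(Psi_L[sensors, :], y)
a_hat = lasso.coef_

# Energy per library block: the dominant block flags the regime.
energy = [np.linalg.norm(a_hat[k * r:(k + 1) * r]) for k in range(nblocks)]
print("block energies:", np.round(energy, 3))   # block 1 typically dominates
```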
All right, so let me give you an example of this: flow around a cylinder. You can solve the equations with a cylinder inside the fluid flow; the flow field goes past it, and this is a classic problem. I want to show you what this looks like for four different parametric configurations: Reynolds number 40, Reynolds number 150, 300, and 1000. What's shown on top is the pressure field around the cylinder as a function of time. At Reynolds number 40, notice that nothing much is going on in the pressure field; then you get the bifurcation, you get these oscillations in the pressure field around the cylinder, and then they change a little bit in structure, specifically in frequency and shape. Those are the dynamics from the full simulation. In these pictures I've plotted the POD modes on a circular basis: there's the cylinder in the middle, and the POD modes, mode one in yellow, mode two in magenta, mode three in cyan. Those are the dominant three modes for Reynolds numbers 40, 150, 300, and 1000. The most important thing you see is that the dynamics is very low-rank, but the modes change significantly from Reynolds number 40 to 1000. Look at the Reynolds number 1000 mode and its structure compared to Reynolds number 150 or 40. The point is that it would be very hard to represent the dynamics at Reynolds number 1000 using the Reynolds number 40 modes, and vice versa. But I can collect these modes and put them all into a library: here are my Reynolds number 40 modes, my Reynolds number 150 modes, all the way up to Reynolds number 1000, all in my library.

Now I look at a sparse number of measurements, the interpolation points; here's a picture of it. I take my points, selected by DEIM or QDEIM or at random, whatever I want to do; I take pressure measurements around the cylinder, and the first thing I do is the classification task: which Reynolds number is it? I use these measurements, with the l1 optimization, to classify, to tell me from the measurements which modes, which group of modes, I should be using. So I classify the Reynolds number; you might find that, for the flow field you currently have, it thinks you're at Reynolds number 300. Now that you're at 300, you select those modes out and do a reconstruction like you normally would, using those modes and the sparse sensor locations in the standard way of DEIM or QDEIM. Here, for instance, is the true flow field and the reconstructed flow field using a small number of sensors with this library-type structure around the cylinder. It's a nice way to think about combining machine learning architectures with reduced order models.
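A compact sketch of this classify-then-reconstruct loop, with random orthonormal blocks standing in for the Reynolds number 40/150/300/1000 mode sets. For brevity the classification here picks the block with the smallest sensor-restricted least-squares residual, a simple stand-in for the l1 selection above; the sensor count and locations are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, nblocks, p = 1024, 10, 4, 60

# Stand-in library of four regimes (think Re = 40, 150, 300, 1000).
blocks = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(nblocks)]

# Truth: the flow currently lives in regime 2 ("Re = 300").
u_true = blocks[2] @ rng.standard_normal(r)
sensors = rng.choice(n, size=p, replace=False)  # sparse pressure sensors
y = u_true[sensors]

# Classification: fit each block to the sensor data and keep the block
# with the smallest residual (a stand-in for the l1 classification).
def fit(Psi_k):
    a, *_ = np.linalg.lstsq(Psi_k[sensors, :], y, rcond=None)
    return a, np.linalg.norm(Psi_k[sensors, :] @ a - y)

fits = [fit(Psi_k) for Psi_k in blocks]
k_star = int(np.argmin([res for _, res in fits]))

# Reconstruction: standard gappy-POD least squares in the winning block.
u_hat = blocks[k_star] @ fits[k_star][0]
print("classified regime:", k_star)
print("relative error:",
      np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```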
You can even have something like this; this is some work with Ido Bright and Guang Lin, a couple of papers from 2013 and 2016, where the Reynolds number of the flow field changes in time. Your interpolation points, which are like sensor points, now first do a classification task. So the first thing that happens, as the Reynolds number changes, and what I'm showing is that it first ramps up, is classification: the yellow dots are the classification, what the algorithm thought the Reynolds number was. You can see it can make mistakes right around the transition points, but for the most part you really nail the Reynolds number. So it classifies correctly, and then you can reconstruct in these different domains. As your system's Reynolds number changes, you swap out library terms and effectively swap out your reduced order model. The interpolation points are not only acting as sensors to classify the right dynamics, but also acting as sensors to interpolate the nonlinearity correctly.

Those are some ideas. A lot of people are starting to work in this area; it's a big growth area, combining machine learning architectures with reduced order models, and I've just highlighted some very simple ideas in this last section, where you can exploit these kinds of structures and libraries to enrich what you can do with reduced order modeling. Everything can be found at databookuw.com; the PDF is there, along with all the codes. Everything I've done for Chapter 12 is in there, and so Chapters 11 and 12 together will hopefully help you learn quite a bit about how to do reduced order modeling and how to implement it in practice.
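As a closing sketch, here is one common way the interpolation points mentioned throughout, the QDEIM variety, can be chosen: a column-pivoted QR of the transposed basis, with SciPy assumed available. The basis here is a random stand-in.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
n, r = 1024, 10
Psi = np.linalg.qr(rng.standard_normal((n, r)))[0]  # stand-in POD basis

# QDEIM-style point selection: column-pivoted QR of Psi^T. The first r
# pivots are the spatial rows (sensor locations) that best condition
# the interpolation problem.
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]

# Gappy reconstruction from those r sensors alone.
u = Psi @ rng.standard_normal(r)   # a state that lies in the subspace
a, *_ = np.linalg.lstsq(Psi[sensors, :], u[sensors], rcond=None)
print(np.linalg.norm(Psi @ a - u)) # ~ 0 for in-subspace states
```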
Info
Channel: Nathan Kutz
Views: 3,185
Keywords: gappy POD, reduced order modeling, machine learning, ROM, POD, Kutz, Brunton, parametric ROMs
Id: 84vQKdEtMIc
Length: 17min 13sec (1033 seconds)
Published: Mon Jun 29 2020