Reduced-Order Modeling for Aerodynamic Applications and MDO (Dr. Stefan Görtz)

Video Statistics and Information

Captions
Good morning, everybody. This talk would also have fitted the session yesterday, I believe. Don't expect a lot of equations; I am really trying to show you some applications. The talks that you have seen throughout the week are a good preparation for this one, so I will not have to repeat what you have seen on previous days.

This is a talk about the reduced-order modeling activities that we perform at the German Aerospace Center (DLR). This is not my work alone; it is work that my colleagues have done, and I acknowledge them here. Before I start: we are the national research center for aeronautics and space, we are also the largest project management agency in Germany, and we act as the German space agency. We currently have about 9,000 employees and are still growing, with about 50 institutes throughout Germany. I am from the Institute of Aerodynamics and Flow Technology, where I am the head of the C²A²S²E department, the Center for Computer Applications in Aerospace Science and Engineering, where we develop numerical methods and apply them to challenging aerospace problems, including aeroacoustics and aerodynamics. One of my groups deals with surrogate modeling, reduced-order modeling, uncertainty quantification, robust design and MDO, and we are also increasingly using machine learning methods. When this course was first set up, I think it focused more on the reduced-order modeling aspects, which my talk is about, but I will also touch upon the machine learning part; the course has evolved in that direction a little, so I hope this suits the idea of this lecture series.

What I am going to talk about is, first of all, a brief motivation for why we are looking at these methods, built around what we call the virtual aircraft use case, which is challenging. I will try to connect reduced-order methods to the principles of machine learning and show you some of the methods and how I classify them, before giving a quick introduction to our reduced-order modeling approach. The main focus will really be on applications, on the one hand for steady flow problems and on the other hand for unsteady flow problems. If I have time, and we will see, I will also talk about data fusion applications, which I think are really interesting: combining the best of two worlds, be it flight test data and numerical data, or experimental data from the wind tunnel, or all three of them.

What is the motivation in terms of the virtual aircraft use case? If you want to design and certify an aircraft, you need lots of aerodynamic data, and this data is needed throughout the entire flight envelope of the aircraft. At cruise, where you are sipping your coffee, things are not very exciting, and we can take good care of that with today's methods. However, you have to prove to the authorities that the aircraft does not break anywhere in the flight envelope, and there you encounter much more challenging flow phenomena: separation, buffet, buffeting, flutter, you name it. All of this has to be taken care of, in particular in terms of the structural sizing of the airframe, because the aircraft should not break during flight. To do that you need a lot of aerodynamic data.
You need integral coefficients, and you need distributed quantities such as pressure or shear-stress distributions, both steady and unsteady, and all of this is coupled. This is a coupled system: I am dealing with the aerodynamic part, but we couple to the mass model, to the flight control system and, in particular, to the structural model. There are estimates that computing all of the different load cases and the data you need for a performance evaluation of the aircraft and for handling qualities may sum up to the order of 1 to 10 million different cases, both steady and unsteady, stemming from different flight conditions, different mass configurations, different maneuvers, gusts (what people call turbulence or air pockets is in truth what we call gusts) and different control laws that you have to consider. So there is a large number of cases which you have to compute for each and every configuration, and if the design is evolving during the design process, you have to repeat these computations several times. Tackling all of these cases with high-fidelity CFD or CFD-CSM is on the horizon, but it is not feasible today, and that, I think, is the motivation for us to look into faster methods: on the one hand things like doublet-lattice methods, but on the other hand also trying to build reduced-order models out of high-fidelity data, where the challenge is to compute as little data as possible.

The goal is then to predict all these aerodynamic quantities based on parametrically generated high-fidelity data stemming from CFD or CFD-CSM computations, hoping, of course, that the models we are trying to build have lower evaluation time and lower memory storage than the original CFD. The quantities of interest here are again pressure and shear-stress distributions over the aircraft surface in particular, but we may also be interested in volume quantities, that is, the primitive variables which we also deal with in our CFD code, be it pressure, velocities or temperature. What parameters are we interested in? On the one hand we are of course interested in flow parameters such as the Mach number and the angle of attack, as shown here for different angles of attack of an airfoil, but we are also dealing with different geometries: in the design process the geometry is evolving, we need to parametrize this geometry, and ideally we have a model for all of these parametric changes, which is really challenging.

The typical process is to start by defining a design of experiments to vary these parameters in a systematic way and to compute snapshots (I am sure you have talked about snapshots during this week), which are typically full-order solutions of the high-fidelity problem. The challenge is then to build a reduced-order model, whatever that is, which is a low-dimensional description of the dynamics of the fluid flow around the aircraft and which of course has a restricted range of validity, because we have to accept that we are talking about a prediction in terms of an approximation of the full-order system. This model can then be used to predict at different parameters, ideally parameters that you have not used to train the model, in this case an in-between angle of attack, for example. What we are really aiming for here is not always to save computational time; it may even be that we spend more CPU hours than by just computing everything with high-fidelity CFD.
Instead, we are trying to take advantage of the offline and online phases, as we call them. You may be able to start computing data parametrically on a big HPC cluster offline, long before you actually know what you want to compute. That sounds strange, but in industry this is called out-of-cycle design: you are collecting data and building up your database while not yet knowing what you actually want to design, and once your objective function, for example, becomes clearer, you just have to cast the data into that problem formulation. You can also reuse the data in case the objective function of your design changes. That is one of the ideas: we do this offline, so that the data is ready once we want to quickly evaluate the reduced-order model during design in an online phase, and ideally this can be done on a desktop or in flight, on the computer that you take into the aircraft, to make self-learning aerodynamic models. Here is an example of a real-time prediction. It is not very fancy, it is only a 2D airfoil, but what you see is the grid of a RANS computation including viscous and transonic effects. We are changing different parameters and also integrating the aerodynamic coefficients online, in real time. This is not too difficult in 2D; in 3D it is more difficult to achieve real-time capability, but I think it is quite impressive because we are also predicting the geometry deformations and the grid deformations simultaneously.

What is the connection between machine learning principles and reduced-order modeling? This is my classification, and not everybody here may agree with it. A lot of times people talk about artificial intelligence, and when we talk about artificial intelligence at DLR we typically mean machine learning. There are of course other branches like robotics, which we are also dealing with, but here I am talking about machine learning. I think there are two main categories of interest for us: supervised machine learning, that is, predictive models based on input-output relations, and unsupervised learning, which I classify as learning an internal representation using only inputs. There are different techniques: classification, regression, model selection, clustering and, in particular, dimensionality reduction.

These are the methods that we use. The simplest one, which we have used for a long time, is a classification method called nearest neighbor, and I still think it is funny that one of the most successful machine learning algorithms that Netflix uses is nearest neighbor. We are also very much interested in regression models, that is, predicting values. Methods we use are Gaussian process models and Bayesian regression, a very powerful tool, different regression techniques such as ridge, lasso and kernelized regression, more recently different artificial neural networks, which I will touch upon later, and support vector regression. We combine this with dimensionality reduction methods, in particular POD, which you have heard about here. We are not dealing with DMD yet, although I am interested in it. We are looking at Isomap and nonlinear dimensionality reduction methods, we are doing manifold learning and interpolation, and we are using autoencoders.
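Several of the regression techniques just mentioned are available in standard libraries. A minimal, illustrative sketch of Gaussian process regression with scikit-learn (toy data and kernel choice are placeholders, not the SMARTy implementation):

```python
# Hedged sketch: Gaussian process regression on a toy 1D function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy training data: a lift-like coefficient versus angle of attack (degrees).
alpha_train = np.linspace(0.0, 10.0, 8).reshape(-1, 1)
cl_train = 0.1 * alpha_train.ravel() + 0.02 * np.sin(alpha_train.ravel())

# Squared-exponential kernel; hyperparameters are tuned by maximum likelihood.
kernel = ConstantKernel(1.0) * RBF(length_scale=2.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(alpha_train, cl_train)

# Predict at untried angles of attack, with an uncertainty estimate.
alpha_test = np.array([[3.7], [7.2]])
cl_pred, cl_std = gpr.predict(alpha_test, return_std=True)
print(cl_pred, cl_std)
```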
We also use low-rank approximation and hyper-reduction. If you combine all this with clustering, you arrive at methods like cluster POD; Isomap is at the same time also a clustering technique. We are using k-means, spectral clustering and affinity propagation algorithms. To tune all the different parameters which arise in selecting these different models and methods, we also use model selection algorithms such as grid search, which is very expensive, and of course classical things like cross-validation and hyperparameter tuning; again, one of the most important features of what Google is doing is to efficiently tune hyperparameters. And we are doing adaptive sampling. You may argue about this classification, but it is my classification, just to give you an overview of the different methods we are using, which I think are connected at least to what people today call artificial intelligence.

All of these methods are implemented in our own toolbox, a modular, object-oriented Python package for rapidly predicting lots of data based on high-fidelity CFD. Some of the key features are design of experiments and adaptive sampling; surrogate modeling, so classical surrogate models but also artificial neural networks where appropriate; and, very interesting, variable-fidelity modeling, combining data of different quality, different origin and different sources, which may be numerical data of different origin or measured and numerical data. Very important for today's talk are dimensionality reduction and reduced-order modeling. All of these methods are part of our FlowSimulator environment, which is a data backbone, so to speak, to exchange data between different plugins; today I will talk about our flow solver plugin, the TAU code, which is a RANS code, and the SMARTy plugin, which is the toolbox for reduced-order modeling and machine learning techniques. What is interesting is that, because the simulation core runs in parallel, we have also parallelized our machine learning and reduced-order modeling software where necessary; for example, the POD part is parallelized and the residual evaluation, which we will talk about, is parallelized. The code is partially differentiated, which is nice if you do optimization. We are of course using established libraries like MKL, we have an interface to scikit-learn, which is very powerful, and TensorFlow is being used for the artificial neural networks. We are currently working on a GPU version; a lot of these methods lend themselves to running on the GPU, and you can do very powerful things on an aircraft in flight if you take a GPU with you. We also have standalone versions and use-case-specific adaptations.
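Since the toolbox interfaces to scikit-learn, the grid-search and cross-validation style of hyperparameter tuning mentioned above can be illustrated with a generic sketch (placeholder data, with kernel ridge regression standing in for any kernelized model; not SMARTy itself):

```python
# Hedged sketch: hyperparameter tuning via grid search with cross-validation.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 2))        # toy inputs (e.g. Mach, alpha)
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2  # toy response

# Kernelized (ridge) regression; tune the regularization and the kernel width.
param_grid = {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.5, 1.0, 2.0]}
search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
```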
Applications that we have used this toolbox for are related to optimization, so surrogate-based optimization, and uncertainty quantification, which together enable robust design or inverse design. Today I am going to talk about mainly two things; if I have time, I will also cover fusing measured data and CFD data. The next two talks will be related to data-driven turbulence modeling, so I will probably not talk about that, only if I have time. I will rather talk about how to rapidly produce the aerodynamic data for the virtual aircraft use case that I introduced you to. I will not talk today about other important things like optimal sensor placement, where you use the snapshots to derive a modal basis and then find optimal sensor locations by applying greedy algorithms to the modes, and I am not talking about wind tunnel corrections, which is very exciting, again combining two worlds, measurements and CFD. All of this is of course related to what we call virtual flight testing: you are trying to fly the aircraft in the computer.

Now a second classification, talking about reduced-order models in particular. I will show you steady ROM applications and unsteady ROM applications. I have classified the steady ROMs into non-physics-based, or non-intrusive, reduced-order models, where we use dimensionality reduction methods together with interpolators, regression models or artificial neural networks, which I am going to talk about a lot today. I am not going to talk about manifold interpolation, which is very exciting, but time is probably too short for that. I will also talk about physics-based, intrusive models; I am not talking about Galerkin models but about our alternative approach, which is based on residual minimization. In terms of unsteady ROMs, we again have non-physics-based ROMs, where system identification tools, interpolators and artificial neural networks are used; these are not today's topics. I will focus on my favorite topic, which is physics-based unsteady reduced-order models, where we use unsteady residual minimization and very exciting hyper-reduction methods based on DEIM and missing point estimation (MPE), which all of you know; applying this to a really large test case is what I want to show here, because that poses a challenge in itself. The most logical approach in reduced-order modeling is to apply Galerkin projection to linear systems, and we also have a reduced-order model of our linear frequency-domain solver, which I am not going to show you today; it is of course also interesting to combine this with manifold interpolation, just in case you are interested. These tools are already in industry, so I think we are doing a lot of interesting research here altogether, and industry is really interested in applying it, because they have really challenging problems ahead of them where we can help.

So, in a nutshell, how do we construct reduced-order models? We first perform a step called dimensionality reduction, and I will focus here on POD, which you have learned about before, so this is just repetition. We compute snapshots with the CFD code for different input parameters x and get the snapshots y; we build a snapshot matrix Y, and the objective is then to find a low-dimensional subspace that best approximates the snapshots. This is sometimes pictured like this: you have a cloud of data points and you try to find the principal components. Practically it looks like this: you have your snapshot matrix coming from n different CFD computations, you apply an SVD or an eigenvalue decomposition to it, exploiting certain features to reduce the computation time, and you arrive at the POD modes, the singular values and the right singular vectors. This gives you what are called the POD modes, and I have pictured some of them here; in some cases they also lend themselves to a physical interpretation, although that is not always easy, and I think you have talked about this during the week.
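The POD construction just described can be sketched in a few lines of NumPy (random placeholder snapshots; mean subtraction and weighting are omitted for brevity):

```python
# Hedged sketch: snapshot matrix -> SVD -> truncated POD basis.
import numpy as np

n_dof, n_snapshots = 5000, 30                       # grid values, CFD solutions
Y = np.random.default_rng(1).standard_normal((n_dof, n_snapshots))  # snapshots

# Thin SVD: Y = U @ diag(s) @ Vt; the columns of U are the POD modes.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# Truncate with a heuristic energy criterion (e.g. keep 99.9% of the energy).
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
U_k = U[:, :k]                                      # truncated POD basis

# Each snapshot is represented by its POD coefficients a = U_k^T y.
A = U_k.T @ Y                                       # shape (k, n_snapshots)
print(k, A.shape)
```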
It is not always that easy to interpret what you are actually seeing there. Here we are of course seeing different shock locations, and this is one of the drawbacks of POD: in transonic flows we basically get different modes for different discrete shock locations, which you then have to try to combine to find a new approximate solution. Every snapshot that you have computed can of course be represented in the POD basis in such a manner: you have POD coefficients a, which you combine with the corresponding modes, and you can do that for all the different POD modes. The obvious idea is then to find models, be it interpolation models, regression models or whatever, to predict the POD coefficients and then use the given POD modes to arrive at a new prediction. That is what we show here: the easiest way is to just interpolate these coefficients a, or use a surrogate model, and for an unknown input parameter set x* you make a prediction y-hat by plugging x* into your interpolator; you get new coefficients a, which you then multiply with your POD basis modes to arrive at the prediction y-hat. I am sure you have talked about truncation before: typically the idea is to also truncate the modal basis, and it is not always clear how to truncate it, but there are some heuristics to do that.
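The interpolation idea can be illustrated as follows: one interpolator maps the flow parameters to the truncated POD coefficients, and the field is reconstructed from the basis (placeholder data, with a radial-basis-function interpolator chosen for the sketch; other interpolators or surrogates work the same way):

```python
# Hedged sketch: POD + interpolation of the coefficients at an untried parameter.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
X_train = rng.uniform([0.6, 0.0], [0.85, 5.0], size=(30, 2))  # (Mach, alpha)
Y = rng.standard_normal((5000, 30))                 # placeholder snapshot matrix

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
U_k = U[:, :10]                                     # truncated POD basis
A_train = (U_k.T @ Y).T                             # coefficients per sample, (30, 10)

coeff_model = RBFInterpolator(X_train, A_train)     # (Mach, alpha) -> coefficients

x_star = np.array([[0.74, 2.3]])                    # untried flow condition
a_star = coeff_model(x_star)                        # predicted coefficients
y_hat = U_k @ a_star.ravel()                        # reconstructed field, y-hat
print(y_hat.shape)
```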
After this really brief introduction to the approach, I want to delve into the applications, which are the main focus here, first talking about ROMs for steady flow problems. In a nutshell, what we do is the following: we have different parameters, be it Mach number and angle of attack; we define training points systematically using methods like Latin hypercube sampling or optimized Latin hypercubes; we compute our training data with the CFD code to arrive at the snapshots; and then we apply the dimensionality reduction step. I have just shown you POD, but we also use Isomap and autoencoders for this step. We then try to make predictions for the POD coefficients or for the weighting of the Isomap vectors, or we use residual minimization in the POD subspace or in the Isomap subspace, or we combine these different approaches, to find the model coefficients and predict a flow solution on the surface, or the full volume solution, at an untried flow condition.

Here is an example where we combine this idea with machine learning techniques on top. The idea was to look at a full aircraft configuration for which we computed high-fidelity CFD snapshots for four different design parameters, so the aircraft changes geometrically, but also for five different flight parameters, flow conditions such as Mach number, angle of attack and angle of sideslip. Using an incompressible, inviscid flow solver we computed, together with Airbus, close to 8,000 different snapshots, and each snapshot on the surface corresponds to 10,000 grid points, so we have the solution at 10,000 different grid points. The first idea that we had together with Airbus ten years ago was to build a surrogate model for every grid point. That may not be a clever idea, but it is what we tried, and it is a nice test case for evaluating machine learning algorithms. It meant building 10,000 individual Gaussian process models, and it turned out to be infeasible: you have to tune the hyperparameters of ten thousand different models, which is just infeasible, even though we already have a very efficient Gaussian process implementation in which the compute-intensive parts are implemented efficiently for different correlation kernels. This was still not efficient enough, and even going to the GPU did not help.

Instead, we looked at clustering: taking the flow solutions at all the different points, we looked for similarities in the flow solutions on the surface in order to cluster the surface points and reduce the cost of the hyperparameter optimization. We looked at spectral clustering methods and affinity propagation algorithms, and what is interesting to see is that this even makes sense physically and you can interpret what is happening. If you choose three clusters and just apply these machine learning algorithms to the data, you find these three clusters, and to the aerodynamicist this makes sense: there seems to be a cluster on the wing, partly extending to the fuselage, which is the region of interaction between the wing and the fuselage; there seems to be a fuselage cluster; and there seems to be something on the engine. So this can be interpreted, which is great. When you go to ten clusters there are still some recognizable features, though the fuselage seems to be very insensitive and very connected in terms of the data, which all makes sense. Going to 300 clusters, as we did in the end, it cannot easily be interpreted anymore, and that, I think, is the power of machine learning: you throw data at the methods and to some extent you still get insight and can understand what is happening, but for very large data it is just impossible for a human to find the correlations, and that is the strength of these methods. There are skeptics who hesitate to just believe in these methods, but I was very impressed by the MIT colleagues who came up with new antibiotics, and given the current pandemic we are facing, I think finding new vaccines and new antibiotics with machine learning is great; this is helping humanity, and there I am not questioning what is happening. This is just an example of applying the clustering techniques, and in the end it turned out that 80 clusters was the best choice here.
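The surface-point clustering can be sketched as follows (placeholder pressure data; k-means is used here for brevity in place of the spectral clustering or affinity propagation referred to in the talk):

```python
# Hedged sketch: cluster surface grid points by the similarity of their pressure
# responses across all snapshots, so one surrogate per cluster can replace one
# surrogate per grid point.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_points, n_snapshots = 10_000, 200
P = rng.standard_normal((n_points, n_snapshots))    # row i: point i over snapshots

kmeans = KMeans(n_clusters=80, n_init=10, random_state=0)
labels = kmeans.fit_predict(P)                      # cluster index per surface point

# One regression model could now be trained per cluster instead of per point.
print(np.bincount(labels)[:5])
```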
Here I am showing the surface pressure distribution on the wing for three different approaches. We have the reference full-order model in red, in terms of the pressure distributions at four different section cuts through the wing, from top to bottom going from the outer wing to the inner wing, and we compare this reference solution with the clustering approach that I just introduced you to and with POD with interpolation, where we really build one model at every grid point, so 10,000 different models. The first thing we see is that the two reduced-order models, shown here in green and blue, give very similar solutions. There is a small problem on the lower surface (this line is connected to the lower side of the wing), basically where the clusters meet: something is missing in terms of a boundary condition or an interface condition between the different clusters. Otherwise this looks pretty good, and we are saving lots of computation time. However, and that is the main drawback of POD here, the real shock location, which is shown here in red and which is a very strong shock very far aft on the wing, is not predicted by any of these models. That is not so good, so what is happening there? Let us look into this and see what remedies we have for this problem.

One of the remedies, we believe, is for example Isomap, a method which was first published in Nature and which stems from the image processing community: they were interested in face recognition, where you may have very sharp contrasts in pictures, which is kind of how you identify different people. This is very similar to finding shock waves in a flow solution, and we thought it might work here as well. In a nutshell, what you do is compute solutions of the full-order model in your parametric space, be it angle of attack and Mach number for example, in our case with the TAU code. These solutions lie on a manifold in some solution space: this is the solution space here, and you can see the solution manifold, the so-called Swiss-roll example, just to illustrate things; every point in this cloud is one CFD solution, one snapshot. What is interesting: take this flow solution and this other flow solution. If you compute the Euclidean distance between them, it would follow this path here, which of course does not lie on the solution manifold. However, if you take geodesic distances, which you can compute, then you are able to follow the solution manifold and how it evolves. Isomap takes advantage of this to find a low-dimensional embedding of all of the snapshots in which you can then interpolate: you try to connect your input parameters to the low-dimensional embedding of the data, and once you have found that, you can make a prediction for a new point in that space and do a back-mapping to the full-order solution space. This is called a nonlinear dimensionality reduction method.
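A minimal Isomap sketch on the Swiss-roll illustration used in the talk (scikit-learn, with placeholder data standing in for the snapshots):

```python
# Hedged sketch: embed high-dimensional "snapshots" with Isomap, which uses
# geodesic (graph) distances instead of Euclidean ones, then pick neighbors
# in the embedding, i.e. on the solution manifold.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.neighbors import NearestNeighbors

X, _ = make_swiss_roll(n_samples=500, random_state=0)   # stand-in for snapshots

embedding = Isomap(n_neighbors=10, n_components=2)
Z = embedding.fit_transform(X)                    # low-dimensional embedding

# Nearest neighbors in the embedding need not be nearest in parameter space.
nn = NearestNeighbors(n_neighbors=7).fit(Z)
dist, idx = nn.kneighbors(Z[:1])
print(idx)
```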
How does it work in practice? We have applied this to another wing test case, I believe the LANN wing, with Euler, that is, inviscid computations. We performed altogether 25 computations for changes in Mach number and angle of attack, a more or less random DoE, and the idea was then to predict for the point shown in red. On the one hand we used all of the black snapshots, all 25, and built a POD-based reduced-order model; on the other hand we used Isomap to find the seven nearest neighbors on the solution manifold and used these for the prediction. The first thing that is interesting to see is that the nearest neighbors selected by Isomap are not the nearest neighbors in the Mach-alpha space but the nearest neighbors on the solution manifold, and again, to the physicist, to the aerodynamicist, this seems to make sense: the flow physics evolve nonlinearly in the Mach-alpha space. I would not have guessed which ones to take, but seeing it afterwards, it makes sense. What is also interesting is that these seven snapshots are much more closely related to the solution you are looking for than, for example, the snapshot down here, which has nothing to do with the prediction; this is what the method tells you automatically: do not include data which is not relevant for your prediction, because it completely messes up the prediction.

So we have made this prediction based on the two methods, combining POD and Isomap with interpolation first: interpolation of the POD coefficients on the one hand, and weighting of the selected Isomap snapshots on the other. Here I am showing the surface prediction, but it is more interesting to look at three different section cuts, shown here as the Cp distribution over the chord length. Again we show, in black this time, the reference solution with the very strong shock at this location. POD combined with interpolation gives you the green line, and you can see that upstream of the shock we have a much too strong decrease of pressure, and also a step function is built into the prediction, which is very typical of POD because it is a linear subspace method. The Isomap method, however, is capable of predicting the shock location and the shock strength pretty well, because it only tries to use data which is relevant to the prediction. Zooming in a little on the shock location, you can see this much more clearly: the shock is well predicted in terms of strength and location. We can then also plug this solution into the solver: instead of interpolating the POD coefficients we optimize them, by plugging the solution into the flow solver and minimizing the residual by varying the POD coefficients, and then you can further improve the solution. This is shown here: you get an even better prediction in terms of shock strength and location with the same Isomap snapshots, and you can of course do the same with POD. I skipped the runtimes; just to show you, building the model is rather cheap, making a prediction with Isomap or POD is even cheaper, and compared to a full-order CFD solution this is on the order of four orders of magnitude cheaper.
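The residual-minimization step can be sketched generically; the residual function below is a placeholder standing in for the flow-solver residual evaluation, which in practice is provided by the CFD code:

```python
# Hedged sketch: optimize the POD coefficients a so that the residual of the
# reconstructed solution U_k @ a is minimized, instead of interpolating them.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
U_k = np.linalg.qr(rng.standard_normal((2000, 8)))[0]   # placeholder POD basis
y_ref = U_k @ rng.standard_normal(8)                    # stand-in "exact" solution

def residual_norm(a):
    """Placeholder for ||R(U_k a)||: here just a quadratic surrogate residual."""
    y = U_k @ a
    return float(np.sum((y - y_ref) ** 2))

a0 = np.zeros(8)                          # e.g. coefficients from interpolation
result = minimize(residual_norm, a0, method="L-BFGS-B")
y_hat = U_k @ result.x                    # improved reduced-order prediction
print(result.fun)
```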
The next example: we are of course interested not only in changing flow parameters such as Mach number and alpha, but also in changing the geometry, to arrive at an optimization capability. Here I have a full 3D aircraft configuration; we are looking at the wing and twisting it at different section cuts to find the best twist distribution, which is aerodynamically very important for good performance. At five different sections we change the twist, and the idea is then to build a reduced-order model for the aerodynamic changes due to these geometry variations, again at a transonic Mach number, because predicting transonic flows is our favorite hobby. Again we compare POD and Isomap, and this time we adaptively sample the snapshots based on the Isomap embedding. This is also an interesting idea; just go back one slide: if you have computed this embedding, you can also look at where the largest holes in the embedding are, and the idea here is to compute additional snapshots there.
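One simple way to realize this embedding-based adaptive sampling is to look for the candidate location that is farthest from all existing samples in the embedding (a hedged sketch with placeholder coordinates, not the exact criterion used in the talk):

```python
# Hedged sketch: find the "largest hole" in a 2D embedding of the snapshots and
# propose the next parameter combination to compute with CFD there.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
Z = rng.uniform(0.0, 1.0, size=(25, 2))           # embedding of existing snapshots

candidates = rng.uniform(0.0, 1.0, size=(2000, 2))
dist, _ = cKDTree(Z).query(candidates)            # distance to nearest sample
next_point = candidates[np.argmax(dist)]          # center of the largest hole
print(next_point)
```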
Info
Channel: von Karman Institute for Fluid Dynamics
Views: 2,162
Keywords: Big data, machine learning, data-driven, von Karman Institute, Université libre de Bruxelles, VKI, ULB, model order reduction, system identification, flow control, machine vision, pattern recognition, artificial intelligence, mathematics, mathematical tools, discrete LTI systems, Stefan Görtz, reduced-order modeling, ROMs, proper orthogonal decomposition, POD, multidisciplinary design optimization, MDO
Id: JUqNMjVCR_k
Length: 33min 0sec (1980 seconds)
Published: Fri May 29 2020