Spectral1

Captions
We're going to shift gears today. So far we've talked about finite differences, and now we're switching over to spectral methods. In some sense, spectral methods are based on the fast Fourier transform, which we've already talked about and which is already part of the homework, but spectral methods are a more general class of methods for solving problems. They're called spectral methods because you switch from looking at the solution in the regular domain — in time or space, for instance — to looking at it in frequency space. What we've been doing so far is: take space, chop it up into a bunch of points, and when we think about evolving the solution with derivatives, say that each point depends on its neighbors. Whenever we approximate the solution that way, we're really doing a local approximation: a point relies on what's going on in its neighborhood, and it only sees the effect of things far away indirectly, because the next point relies on its own neighborhood, and so on all the way across — the big matrix A connects them all together through a chain. So for the most part, the finite-difference approximation is local — "I care only about my neighbors" — and it's based entirely on Taylor expansions. The spectral method, instead of characterizing things with local expansions, does a global expansion: instead of expanding in Taylor series, we expand in terms of cosines and sines, which are global functions over the whole range you're looking at. Now, we've already defined a Fourier transform pair, but let me do it again —
here's one possible definition of the Fourier transform. Among all the definitions out there, the only differences are where you put the 2π and whether the exponent carries a minus or a plus:

    F(k) = integral from -inf to inf of f(x) e^(-ikx) dx
    f(x) = (1/2π) integral from -inf to inf of F(k) e^(ikx) dk

To transform a function, you multiply it by e^(-ikx) and integrate from negative infinity to infinity. Since e^(ikx) is nothing more than sines and cosines, what you're really doing is projecting the function onto oscillatory behavior. This is rooted in Fourier series: Fourier's idea was that any function you give me can be represented as a sum of cosines and sines, and that's essentially what this is, except now you integrate over the whole space and write everything in terms of F(k) — in wave-number space. Since e^(ikx) behaves like cos(kx), k controls the frequency, so you're writing everything as a superposition over an infinite continuum of frequencies. When you want to come back out of the Fourier domain — which is frequency space, and that's why we call it spectral space, since "spectrum" usually refers to frequency components — you multiply by the positive exponential e^(ikx) and integrate over all frequency components, and you get back your function in the spatial domain. So this transformation really takes you away from the picture where you view the solution as a function of x: by Fourier transforming, you view it as a function of frequency instead, do all your manipulations in frequency space, and then transform back when you're done.
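As a quick numerical companion to the transform pair: the lecture works in MATLAB, but this sketch uses NumPy's fft/ifft, which follow the same conventions with zero-based indexing (the test function and grid here are my choices, just for illustration):

```python
import numpy as np

# Forward/inverse transform round trip on a 2*pi-periodic grid.
N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.cos(3 * x) + 0.5 * np.sin(7 * x)

F = np.fft.fft(f)            # project f onto the e^{-ikx} modes
f_back = np.fft.ifft(F)      # sum the modes back up
assert np.allclose(f_back.real, f)

# The spectrum picks out exactly the frequencies present in f:
amps = np.abs(F) / N
assert np.isclose(amps[3], 0.5)       # cos(3x) -> amplitude 1/2 at k = +-3
assert np.isclose(amps[N - 3], 0.5)
assert np.isclose(amps[7], 0.25)      # 0.5*sin(7x) -> 1/4 at k = +-7
assert np.isclose(amps[N - 7], 0.25)
```

The round trip is exact to machine precision, and the two frequencies in f show up as isolated spikes in the spectrum.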
Now, the big idea for Fourier transforms is rooted in what happens to a derivative. We derived this in class: the Fourier transform of the nth derivative of a function is simply (ik)^n times the transform of the function,

    FT[ d^n f / dx^n ] = (ik)^n F(k).

This is what lets us start solving problems with it, because it gives a specific, very simple relationship between derivatives and the function itself. It's no different from finite differences: there, the key was that Taylor series gave you a relationship between the point you're solving for and its neighbors, and once you had that you could write down the big differentiation matrix A. This is the equivalent expression here. Questions? Everybody good? All right. Now, the big deal is that this is defined on the interval from negative infinity to infinity, but on a computer you never have that — you always work on a finite domain; it can be large, but it has to be finite. In homework 1, "infinity" was a modest finite value, and that was a pretty good approximation, so what infinity means is relative, but you do have to work on a finite domain. And when you go to a finite domain, the definition of the Fourier transform changes. In fact, on a finite domain you get the following definitions — these are right out of MATLAB; type help fft and this is what pops up, and generically most algorithms give you the same thing:

    X(k) = sum over n = 1..N of x(n) e^(-i 2π (k-1)(n-1)/N),   for k = 1..N
    x(n) = (1/N) sum over k = 1..N of X(k) e^(i 2π (k-1)(n-1)/N),   for n = 1..N

Basically, two things happen on a finite domain. First, the integral becomes a sum, because now you only add up certain frequency components — which ones, we'll talk about in just a second. Second, the box size is now limited.
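The derivative rule is easy to check numerically; here is a hedged sketch (NumPy in place of MATLAB, with the standard wavenumber ordering, and my own choice of test function):

```python
import numpy as np

# Check FT[f''] = (ik)^2 FT[f] by differentiating sin(2x) spectrally.
N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # [0, 1, ..., N/2-1, -N/2, ..., -1]

f = np.sin(2 * x)
f_xx = np.fft.ifft((1j * k) ** 2 * np.fft.fft(f)).real
assert np.allclose(f_xx, -4 * np.sin(2 * x), atol=1e-10)
```

Multiplying by (ik)^2 in frequency space reproduces the exact second derivative, -4 sin(2x), to roundoff.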
In fact, what we've done is taken a box of size 2π: whatever your domain happens to be, you scale the whole problem down so these periodic cosines and sines fit, and then work on that interval — 2π is the fundamental period. If you remember how we defined our k vector in class, it had the factor 2π/L in front of the integer vector I constructed; that's exactly the scaling down to 2π. So let's talk about what we're actually expanding in. We're expanding on a domain of size 2π, and the basis function is e^(i 2π (k-1)(n-1)/N). If you write out the first few modes for different k and n values, note that this is nothing more than

    cos(2π (k-1)(n-1)/N) + i sin(2π (k-1)(n-1)/N).

For instance, pick k = 1: the exponent is zero, the sine drops out, and cos(0) = 1, a constant — that's one of my basis functions. Then I can build up the others from here: the next is a cosine going through a period, with its companion sine doing the same, and so on. All of these functions are generated by picking different integers, and the whole idea is that on a domain of size 2π I can take any function and add these basis functions together in a certain way to represent it. The only thing I need to know is the coefficients — how to add them up. So if I take a generic function on the interval 0 to 2π, my claim is that the function is nothing more than a sum of cosines and sines, and all I have to figure out is how to weight them correctly.
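The k vector with its 2π/L scaling, as defined in class, can be sketched like this (NumPy used for convenience; the ordering matches MATLAB's fft output):

```python
import numpy as np

# Wavenumbers for a periodic domain of length L, scaled by 2*pi/L so the
# problem lives on the fundamental 2*pi box. Ordering matches fft output:
# [0, 1, ..., N/2 - 1, -N/2, ..., -1].
N, L = 8, 10.0
k = (2 * np.pi / L) * np.concatenate([np.arange(0, N // 2),
                                      np.arange(-N // 2, 0)])

# Same ordering NumPy builds internally:
assert np.allclose(k, 2 * np.pi * np.fft.fftfreq(N, d=L / N))
```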
How much of this one, how much of that one do I add up? That's the whole process of the Fourier transform, and the weighting is given by the coefficients. That's really what the Fourier transform does: it expands in terms of these basis functions, and the only thing to figure out is how much of each one — how much cos(1x), how much cos(2x), how much cos(3x). The reason you actually work with a finite sum is that you only allow solutions that are periodic over 2π: a cosine that doesn't fit the box is not periodic on the interval, so it's not an allowed function, and there's only a finite number of wavenumbers that let things fit in the box. Now, in addition to the generic Fourier transform, which has this complex piece in it, you could also say: suppose I want to do a cosine transform or a sine transform — what if I just want to keep one or the other? When you do that with cosines, you're doing a discrete cosine transform, which in MATLAB is dct applied to some vector. (The DCT used to be in the standard package; it's no longer there — it's part of one of the toolboxes, I believe signal processing, so they've shipped it off.) What it says is: how about allowing the solution to be written as just a sum of cosines? That's what the discrete cosine transform lets you do, and the DCT algorithm essentially calculates all those coefficients for you. Now, the price you pay — and this is the key question for Fourier transforms — is how long it takes to
calculate these coefficients. At first glance: I have to do the sum from 1 to N for every single k value, and there are N of them, so for every value I do N operations. Add it all up and, on the surface, a Fourier transform looks like it costs O(N²) — that's the price you pay if you do it generically. The point, which we'll get to in the second half, is that the fast Fourier transform lets you calculate those coefficients in O(N log N); at least on the surface it looks like N², and we'll talk about what makes it go like N log N. All right, back to the cosine transform. Let's look at what some of its modes look like. You're doing an expansion in terms of cosines with half periods allowed: the first mode is just a constant, cos(0·x); the next is half a cosine period; then a full period; and so on up a whole hierarchy of cosines. Now, interestingly enough, take a look at these functions at the boundaries. They start at one and switch between plus one and minus one — no big deal — but the important thing is the slope at the boundaries: for every mode of the discrete cosine transform, the derivative there is zero. So one reason you might want to use the cosine transform is if your problem's boundary conditions say the derivative is zero at the boundaries, because the expansion automatically satisfies that, which is perfect. If your boundary conditions are of that type, just use a discrete cosine transform.
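Since the toolbox dct may not be available, a naive discrete cosine transform can be written directly from the cosine sum (this sketch uses the common DCT-II convention; the fast toolbox versions compute the same coefficients in O(N log N) rather than the O(N²) of the loop below):

```python
import numpy as np

def dct2(x):
    """Naive DCT-II: X_k = sum_n x_n cos(pi k (2n+1) / (2N))."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

def idct2(X):
    """Inverse of the unnormalized DCT-II above (a scaled DCT-III)."""
    N = len(X)
    k = np.arange(1, N)
    return np.array([X[0] / N + (2 / N) *
                     np.sum(X[1:] * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for n in range(N)])

rng = np.random.default_rng(0)
x = rng.random(16)
assert np.allclose(idct2(dct2(x)), x)   # round trip recovers the data
```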
It automatically satisfies the boundary condition, so I can take any generic data, expand it in terms of these functions, and the condition is satisfied. (It's a slightly different expansion than the complex one I wrote down — the cosine transform works with half periods, which is kind of nice.) Something to keep in mind, though: generically we're going to use the fast Fourier transform — one, because it's in MATLAB and we don't have to use a toolbox (which is ridiculous, that you have to get the DCT through a toolbox), and two, because these are very specific boundary conditions, and a lot of what we're doing doesn't have them. Okay, so that's the cosine transform; now let's talk about the sine transform. The discrete sine transform, dst, does something very similar: now you expand in terms of sines, again with half periods allowed — that would be your first mode, second mode, third mode, and so forth. And what do you notice about the boundaries? Sine is always zero at 0 and 2π, so you're basically working with functions that are pinned at the boundaries. If you have a problem where you're pinning things at the boundaries, this might be the appropriate transform to use. So: the discrete sine transform gives pinned (zero-value) boundaries; the discrete cosine transform gives an expansion with zero-derivative boundary conditions; and finally the generic Fourier transform, which expands in terms of both sines and cosines, gives periodic boundary conditions. Those are the three flavors of spectral transforms we're going to consider.
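The boundary behavior of the half-period modes described above is easy to verify; a small illustrative sketch (NumPy, with my own mode normalization):

```python
import numpy as np

# Half-period modes on [0, 2*pi]: sine modes are pinned at both ends
# (the DST picture), cosine modes have zero slope at both ends (the DCT
# picture).
x = np.linspace(0, 2 * np.pi, 1001)
for k in range(1, 5):
    sine_mode = np.sin(k * x / 2)
    assert abs(sine_mode[0]) < 1e-12 and abs(sine_mode[-1]) < 1e-12

    # derivative of cos(k x / 2) is -(k/2) sin(k x / 2)
    cos_slope = -(k / 2) * np.sin(k * x / 2)
    assert abs(cos_slope[0]) < 1e-12 and abs(cos_slope[-1]) < 1e-12
```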
Now let me make a comment again. When you take a generic function, remember that with a finite-difference discretization a point cares only about its neighbors — it's a local approximation. Notice the difference here: if you want the value of the solution at any point, or its derivative there, you have to add up a bunch of cosines and sines that extend over the whole interval. It's a global approximation. Instead of saying "all I care about is my neighbors," it says: to get this point, I have to add up a bunch of functions that are valid over the entire domain I'm working on. That makes it very different, and it's one of the great things about spectral methods: the accuracy is exponential. With finite differences your error goes like, say, O(Δx²); with a Fourier spectral expansion, the convergence is beyond any algebraic power of accuracy — the accuracy is amazing in comparison to finite-difference methods. So there are two really big wins here: one is accuracy, and two is speed, because the algorithm we're about to discuss takes the operation count from O(N²) down to O(N log N). O(N²) is about as fast as you're going to get if you discretize and write a big Ax = b matrix-solve problem — that's essentially guaranteed — and here you get N log N. So pretty much: if you can use a spectral method, you do. That's the generic rule — always use it if you can. Okay, now let's start on the theoretical ideas of what goes on with the spectral transform; let me erase this stuff. Here we go: now we move toward understanding the fast Fourier transform.
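The accuracy claim can be seen in a few lines; this sketch compares a second-order centered difference against the spectral derivative for a smooth periodic function (the test function and tolerances are my choices, not from the lecture):

```python
import numpy as np

N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
h = L / N
f = np.exp(np.sin(x))                 # smooth, 2*pi-periodic
fp_exact = np.cos(x) * f

# second-order centered finite difference (periodic wraparound via roll)
fp_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

# spectral derivative: multiply by ik in frequency space
k = 2 * np.pi * np.fft.fftfreq(N, d=h)
fp_sp = np.fft.ifft(1j * k * np.fft.fft(f)).real

err_fd = np.max(np.abs(fp_fd - fp_exact))
err_sp = np.max(np.abs(fp_sp - fp_exact))
assert err_fd > 1e-4     # algebraic O(h^2) error
assert err_sp < 1e-10    # spectral accuracy: down near roundoff
```

With only 64 points the finite-difference error is algebraic in h, while the spectral derivative is accurate to roundoff.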
So far all we've talked about are generic properties of the Fourier transform; now I'm going to talk about what brings the cost down to N log N. Mostly I'm going to sketch the idea for you — I've given some references in the notes if you want to go further; it just takes a long time to actually work everything out — and highlight the results rather than bore you with all of it. You may be bored anyway, but at least it'll only be 20 minutes instead of three lectures. All right: the fast Fourier transform. Here's the observation. Remember the definition of the transform, for instance from MATLAB: the exponential piece is the key — it's the oscillatory part, the expansion part, the cosines and sines; it carries all the information. One thing you can notice is that I can write that piece as a power of ω_N, where ω_N is just this exponential, and each component of the transform as

    F(k) = sum over n of ω_N^((k-1)(n-1)) f(n),   ω_N = e^(-i 2π / N).

F(k) is just one component; if I want the whole vector, I have to do this sum for every k. So this is the setup: the ω piece is the basis-expansion part, and it's already known; what you're trying to calculate are the coefficients, and the question is how long that takes for a given function. Cooley and Tukey, the guys who came up with the algorithm for doing this — again, I'll basically summarize it — did so somewhere in the mid-60s, and it really revolutionized things, partly because at that point in history anything you did was going to cost you O(N²), guaranteed. All of a sudden they came up with an algorithm
that, for the first time, dropped something from O(N²) down to O(N log N) — an amazing improvement in speed. Put yourself in the 60s (well, I was born in the late 60s anyway — just made the cutoff): what did computers look like back then? Pretty much things that took up a whole room — the machines of the 50s plus about fifteen years of progress. Even the simple ODE shooting problem we did in class for homework 1 — how long would just part (a) take on a computer in 1968? You're talking about days. Your computer now just cranks it out in less than a minute; back then it was days of work. So a scheme that drops O(N²) down to O(N log N) was a huge savings, because machines were just so much slower — today it might mean waiting one minute instead of two, but then it was an enormous improvement. And they came up with the algorithm simply by trying to figure out whether there was any way to make this faster — it wasn't that they were trying to do math research; they were working at a company, I believe IBM, looking for a better way to do this. So here's the idea. Consider the matrix F_N, an N-by-N matrix. (I'm already boring myself — I was hoping to at least get past the hour; good thing it's only twenty minutes of this stuff, or I'd put you to sleep.) This matrix is going to be the analogue of our A matrix from before: the A matrix did all the differentiation, and this F_N matrix has all our Fourier components in it, and it's going to
calculate the Fourier coefficients by doing a solve — we solve an Ax = b problem. What are the components of this matrix? The jth-row, kth-column entry, which I'll rewrite in slightly simplified form from before, is the oscillatory piece:

    (F_N)_jk = ω_N^(jk),   ω_N = e^(-i 2π / N).

This fills out a matrix containing all the frequencies. By the way, the size N of the matrix determines the highest frequency you consider: you have to cut off the expansion at some point. In theory you could expand in cos(nπx) with n from 0 to infinity; in practice you have to cut it off, and that cutoff determines your matrix size. Now here's the key observation that makes everything work in what follows — and it's actually a very simple observation to make. What is ω_(2N) squared? Well,

    ω_(2N)² = ( e^(-i 2π / (2N)) )² = e^(-i 2π / N) = ω_N.

(I'm just letting the j's and k's float for now — you could carry them along.) It's a simple observation, but the consequence is enormous — "far-reaching" is the word I use in my notes. What it says is that there's a fundamental relationship between elements of this matrix: as you go through the matrix components, a lot of them are related to each other through this formula — one is the square of the other. Here's what that allows you to do. Again, I'm not going to go into the details — I'll sketch over it, and I hope you'll be okay with that. I'm just going to
give you the highlights — this is like SportsCenter, except... not Sports Center, and I don't want to say Math Center either; it should be cool Math Center, with logos and everything: "last night's action, here it is." So here's the consequence. When you have this matrix problem to solve for your Fourier components — the coefficients of each expansion piece — you solve something of the form Ax = b: F_N plays the role of A, x contains all the Fourier coefficients, and b is what the function you're expanding actually is. Essentially, representing the function this way and solving for the components results in that big Ax = b — O(N²) operations. But because of that fundamental relationship, it turns out you can split the problem into two pieces. You split it into an "even" part, taking every other component, and an "odd" part with the remaining components — N components in total — and it turns out the system we wrote down can be rewritten as basically two decoupled systems involving F_M, where M = N/2, half the size. What allows us to do this is precisely the relationship between the Fourier modes: F_M is related to F_N in such a way that the problem breaks into two pieces. Now look at your operation count. Solving the original problem costs O(N²), because that's the size of the matrix. How much does it cost after breaking it into two problems? Each one has size M, so each is O(M²),
and M is N/2, so each problem costs O((N/2)²), and there are two of them: 2 · (N/2)² = N²/2. So with this simple procedure — going from one problem that costs O(N²) to two problems of half the size — I've cut my work in half. The key question, though: can I get from these two separate problems back to my original problem? Once I solve them, I have to figure out how to reassemble the original solution, and this is where a lot of algebra comes in — which is why I'm skipping it. You find the following relationship back to the original problem. Writing e_k and o_k for the solutions of the even and odd half-size problems, the first N/2 components come from adding, and the other N/2 from subtracting:

    y_k         = e_k + ω_N^k o_k,   k = 0, ..., N/2 - 1
    y_(k+N/2)   = e_k - ω_N^k o_k.

Notice how it separates: the first N/2 components, then the next N/2. This, by the way, is the genesis of the whole algorithm — take the problem, split it into two, split each of those into two, and so on — a generic procedure for breaking the problem into two smaller ones. And did we use anything specific about F_M here? Only the specific relationship between F_N and F_M: that squaring one root of unity gives you the other. But that same structure holds again, so we can just take these smaller problems and split them up too, because
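The even/odd split and the reassembly formulas above are the whole algorithm; a minimal recursive sketch (my own illustrative implementation, not code from the lecture) is:

```python
import numpy as np

def fft_recursive(f):
    """Radix-2 Cooley-Tukey: split even/odd, recurse, recombine."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:                               # one equation, one unknown
        return f
    even = fft_recursive(f[0::2])            # transform of even-indexed samples
    odd = fft_recursive(f[1::2])             # transform of odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors omega_N^k
    # reassembly: first half adds, second half subtracts
    return np.concatenate([even + w * odd, even - w * odd])

rng = np.random.default_rng(1)
f = rng.random(64)                           # N must be a power of 2
assert np.allclose(fft_recursive(f), np.fft.fft(f))
```

Each level of recursion does O(N) work and halves the problem size, which is where the N log N count comes from.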
there's the same nice relationship between F_(N/2) and F_(N/4), the matrix of half its size, and it keeps going down to N/8 and so forth. In fact, this is the cascade that makes the whole fast Fourier transform work. You take a big problem and split it into two: half the work. Take each of those and split it into two again: half the work again — an operation count of O(N²/4); split in half once more: O(N²/8). This is why you want to use 2^n Fourier modes: you keep splitting in half and half and half until you can't split anymore — until you're down to one-by-one problems, one equation and one unknown. We started with N equations and N unknowns; we split that into two problems of N/2 equations and unknowns, then four problems of size N/4, then eight of size N/8, all the way down to N problems of size one-by-one. When you do this, the operation count you ultimately reach is O(N log N) — that's the algorithmic structure. You solve the N one-by-one problems and then just follow the reassembly formulas all the way back up to write down your solution. Did I make that so clear? I always love this lecture, because if you really wanted to get to the heart of it, it would take a long time — I'm just skimming the surface to give you the idea, and hopefully that's okay. It's okay with me, and ultimately I think that's all that matters. All right, so graphically, let me show you what happens. Take a problem with N = 8 — I've shown you this classic picture before. You start off with a collection of eight points you're trying to solve for, since N has to be a
power of 2 — 8 is 2³. What do you do? Break it into two problems. How do the two problems map over? Look right here at how they map: the first N/2 and the next N/2 cross over — the evens map to one side and the odds to the other; they crisscross. This was an 8-by-8 problem; now these four points are one problem and these four are another: you've broken an 8-by-8 problem into two 4-by-4s. Each 4-by-4, with the same crisscross, can be broken into two 2-by-2s, and each 2-by-2 into two 1-by-1s — eight 1-by-1s in all. This is often called the butterfly scheme in the literature: one 8-by-8, two 4-by-4s, four 2-by-2s, eight 1-by-1s. Now here's what's interesting: take a point and follow its path. When you break the problem down into these even/odd subproblems, the points get shuffled — this point ends up here, that one there; this whole half gets switched over to one side and the other half to the other. Remember what happens when you take the Fourier transform of a Gaussian — the output comes out swapped like that — and this is why: it has this butterfly structure in it. The reason the algorithm leaves the data in the butterfly ordering is that the ordering is part of what makes it fast; the speed comes from the structure, so you don't want to undo it inside the algorithm.
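The "swapped" Gaussian transform mentioned here is exactly this wrapped ordering; fftshift undoes it for viewing (a NumPy sketch with my own grid choice):

```python
import numpy as np

# FFT output is in butterfly/wrapped order: k = 0 first, negative k at the end.
N = 64
x = np.linspace(-10, 10, N, endpoint=False)
F = np.abs(np.fft.fft(np.exp(-x**2)))

assert np.argmax(F) == 0                         # peak (k = 0) is the first entry
assert np.argmax(np.fft.fftshift(F)) == N // 2   # fftshift centers it for plotting
```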
You only switch it back out at the end, to interpret the transform. That structure is what's making it fast, but it's also why you see the swapped output: it's related to breaking the problem into smaller and smaller and smaller bits, and every time you do that, you cut the operation count in half — every single time — and that's what gets you down to the O(N log N) you need. All right, we're not going to do much more with this; it's something you can learn more about if you want, but you should at least be educated enough that if someone asks you the main idea behind the fast Fourier transform, you can say: basically, the FFT factors a problem of size N into a bunch of smaller problems, keeps breaking it down until it reaches one-by-ones, and that's why you get N log N. You should be able to say that — and now you've seen how it does it. From here on, we're just going to use it. Hooray. All right, see you guys Wednesday.
Info
Channel: Nathan Kutz
Views: 3,370
Rating: 5 out of 5
Keywords: Kutz, Nathan Kutz, spectral methods, FFT, PDEs, scientific computing
Id: YKDptSCuQGY
Length: 48min 31sec (2911 seconds)
Published: Fri Mar 08 2019