Maria Charina: Algebraic multigrid and subdivision

Captions
Thank you for the introduction. Good morning, everybody. I will present joint work with Marco Donatelli, Lucia Romani and Valentina Turati. Our goal was to relate a method from computer animation, in other words subdivision, to algebraic multigrid, which is a prominent solver for linear systems of equations.

Before we understand the connection between the two methods, subdivision and algebraic multigrid, let us first try to understand how algebraic multigrid functions and what it is. The problem is the following: we have a symmetric, positive definite, sparse and ill-conditioned matrix A, and we want to solve the corresponding linear system of equations. The numerical tool for solving this linear system is algebraic multigrid.

The main ingredient of algebraic multigrid is a simple fixed point iteration, which can for example be a simple iterative solver such as the Gauss-Seidel method, the weighted Jacobi method or the Richardson iteration. The ill-conditioning of the matrix A forces slow convergence of such simple iterative solvers: indeed, if the minimum eigenvalue of A is close to zero, then the maximum eigenvalue of the iteration matrix will be close to 1 if the method is convergent, so the error of a simple solver such as weighted Jacobi decreases very slowly. The standard expression for the error, that is, the difference between the true solution of the linear system and our iterative solution, has a representation in terms of the eigenvalues lambda_j and the eigenvectors v_j of the iteration matrix, and if one of the eigenvalues is close to one, it is clear that the error decreases very slowly. This is what you see in this picture, and this behavior of the error is characteristic for the whole class of matrices we consider: symmetric, positive definite and ill-conditioned. What we see here is that the error after two steps of the weighted Jacobi with weight one half and the error after two hundred steps do not differ much, so it does not really make sense to keep on iterating.

What multigrid really does is keep the iterative solver simple and try to accelerate its convergence by the so-called multilevel error correction. The term error correction is probably familiar to many of you from numerical analysis: one tries to improve the quality of the approximation by adding to it further vectors which are solutions of certain residual linear systems of equations. The idea of the multilevel error correction here is based on a very simple observation: you can write n as an infinite series of the form n/2 + n/4 + n/8 and so on. This simple identity will allow us to improve the approximation in linear time, which is also the computational cost of the simple iterative solvers we are using: the cost of the weighted Jacobi is O(n), and we try to do the multilevel error correction without increasing the computational cost of the simple solver by much. What we will see now is that to determine e_1 we are allowed to use n/2 operations, to determine e_2 we will use n/4 operations, and so on.
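As a rough numerical illustration of this slow convergence (my sketch, taking the 1D discrete Laplacian as a model of the symmetric positive definite, sparse, ill-conditioned matrix A; not code from the talk):

```python
import numpy as np

# A sketch (not from the talk): weighted Jacobi with weight 1/2 on the 1D
# discrete Laplacian, a standard model of an SPD, sparse, ill-conditioned A.
n = 128
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)                      # then the true solution is x = 0
rng = np.random.default_rng(0)
x = rng.standard_normal(n)           # random start: error has all frequencies

omega, D_inv = 0.5, 1.0 / np.diag(A)
for k in range(200):
    x = x + omega * D_inv * (b - A @ x)   # x_{k+1} = x_k + w D^{-1} (b - A x_k)
    if k + 1 in (2, 200):
        # the error norms after 2 and after 200 sweeps barely differ:
        # oscillatory components die fast, the smooth ones barely move
        print(k + 1, np.linalg.norm(x))
```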
Okay, so let's look now at how the errors e_j are determined. What you see here is just one step of multigrid; we will then iterate this step. We start, as on the previous slide, with the x_2 which you obtain after, for example, two steps of the weighted Jacobi, and then you add to x_2 the vectors z_j, with j running from 1 to K. For example, to determine e_1 we start with the system matrix A and reduce its size by defining A_1 via pre- and post-multiplication by the matrices P_1 transpose and P_1. The original size was n, so if we want to reduce the size of the system matrix, the matrices must be rectangular: the P_j are tall and slim, and the P_j transpose are fat and short.

Why do we define the matrices A_j in this form, in other words, what are the properties of the P_j and A_j that we want? If you remember, the system matrix of our linear system was symmetric, and if we define the A_j in this form, then all of the matrices A_j are also symmetric. The matrix A was positive definite, so if we choose all of the P_j to be of full rank, then all of the matrices A_j are also positive definite. And the matrix A was sparse, so we need to choose the P_j sparse as well, so that the matrices A_j computed in this way stay sparse.

After we define the matrices A_j, you just solve the linear system that has a certain residual on the right-hand side. How do you solve it? If the matrix A_j is still sparse and not too small, you can use an iterative solver; otherwise, if the sparsity is lost, you can solve it directly. After you solve it, the solution is this e_j tilde, which is certainly of smaller size than x_2, so you need to somehow refine, or enlarge, e_j tilde, and you do this by multiplying it by the rectangular matrices P_j down to P_1. You can view this process as follows: multiplication by P_j is a refinement of a vector, and multiplication by P_j transpose is a coarsening of the vector, making it smaller.

So, algebraic multigrid. You might ask yourself: where are the grids? The name suggests we have to look at some grids, but the name is a bit misleading in this way; it was derived from its close relative, geometric multigrid, which relies on a hierarchy of grids. Algebraic multigrid really does not have any grid in the background; it is really more an algebraic multilevel method than a multigrid method. The matrices P_j should be defined just from the matrix A and depend on the properties of A; one says that algebraic multigrid is operator dependent. The P_j should be defined in such a way that they reflect the properties of A, but there are no grids coming from the original PDE or anything similar as an input for algebraic multigrid.

I said before that this is just one iteration of algebraic multigrid. Doing this multilevel error correction, we have complexity O(n) for one iteration of the multigrid, and you can imagine how to iterate the procedure: after you run this iteration once, you get a new approximation, you use it as a new x_2, a new starting vector for the next iteration, and you keep iterating. So what do we want now? The goal of my talk is to show you how to define those matrices P_j, starting just from a system matrix A with certain properties, in such a way that the corresponding algebraic multigrid converges and is optimal.
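A compact sketch of this one iteration, under the assumptions just listed (a hypothetical helper with dense matrices and exact coarse solves for brevity, whereas the real method keeps everything sparse; not the speaker's implementation):

```python
import numpy as np

# One multilevel error-correction sweep, as described above (sketch only).
def error_correction(A, b, x2, Ps):
    x = x2.copy()                    # x2: result of a few weighted Jacobi steps
    A_j = A
    P_acc = np.eye(A.shape[0])       # accumulated product P_1 @ ... @ P_j
    for P in Ps:                     # each P_j: tall, slim, full rank, sparse
        A_j = P.T @ A_j @ P          # A_j = P_j^T A_{j-1} P_j stays symmetric, SPD
        P_acc = P_acc @ P
        r = b - A @ x                # residual of the current approximation
        e_tilde = np.linalg.solve(A_j, P_acc.T @ r)  # small coarse solve
        x = x + P_acc @ e_tilde      # refine e_tilde back to full size, add it
    return x
```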
Optimality means that the number of iterations you need to achieve a certain tolerance is independent of n. And I repeat: we need the rectangular matrices to be full rank and sparse, to be able to ensure the properties of the matrices A_j that we want.

Okay, so now the question is: where does subdivision come into play? Let us first see what the hints are for a possible connection between the two areas, subdivision and algebraic multigrid. This picture we have seen before already, but it tells us one more thing: the error x minus x_2, after only two iterations of the weighted Jacobi, becomes smooth. What does that mean here? The error is just a vector, and if you plot the components of the vector and connect them linearly, it looks like a smooth curve, a smooth function. And "smooth" is one of the favorite words of everybody who does subdivision, so this is the first reason why subdivision might be in the background of this whole thing. When you look at how the approximation to x behaves: x minus x_2 is smooth, so x_2 carries the oscillatory part of the solution, all the details. If you have a background in wavelet or frame decompositions, it looks like something similar is going on: you split your approximation into the details and the smooth part. And those who are familiar with wavelet or frame decompositions know that to determine the smooth part of your signal or data you do upsampling and convolution in the reconstruction step, and this is subdivision. So there are lots of hints that subdivision enters the game at some point.

But let's see what else can hint at the usefulness of subdivision for multigrid. Look at the system matrices which come from applications, from discretizations of elliptic PDEs: for example, the two-dimensional Laplacian discretized using finite differences, or the one-dimensional Laplacian using the isogeometric approach with mu-th order B-splines; I give you only very simple examples of f. You see that the matrix depends on a so-called symbol, a trigonometric polynomial f, and, as in subdivision, all properties of the matrix A are encoded in this symbol f. Let us compare the properties we had with the properties of this trigonometric polynomial. It is a real trigonometric polynomial, so the Toeplitz matrix that you get is symmetric, and it is sparse. All of the symbols have one common property: they are zero at zero and otherwise strictly positive. If you think of the circulant matrix with such a symbol, it will have the eigenvalue 0, which is the value of f at 0, and all other eigenvalues greater than 0. And circulant matrices, when the size n of the matrix gets large, approximate the Toeplitz matrices well in a certain sense. That is why we get the smallest eigenvalue very close to 0, because of this property of the symbol, while the other eigenvalues stay greater than 0. This is the reason for having positive definite but ill-conditioned matrices in this case.

Okay, so what have we seen so far? The approximation x splits into two parts: the weighted Jacobi determines the oscillatory part of the solution, and this is true for the whole class of matrices that we are considering, due to Brandt, if we just take the weighted Jacobi with weight 1/2; similar results also hold for other simple iterative solvers like Gauss-Seidel and the Richardson iteration.
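A small sketch of this symbol-to-matrix correspondence (my illustration with the 1D Laplacian symbol f(theta) = 2 - 2 cos(theta); the talk's normalizations may differ):

```python
import numpy as np
from scipy.linalg import toeplitz

# The symbol f(theta) = 2 - 2*cos(theta) has Fourier coefficients (-1, 2, -1),
# which form the diagonals of the associated Toeplitz matrix T_n(f).
n = 64
A = toeplitz(np.r_[2.0, -1.0, np.zeros(n - 2)])   # symmetric, sparse Toeplitz

eigs = np.linalg.eigvalsh(A)
print(eigs.min(), eigs.max())   # smallest eigenvalue -> f(0) = 0 as n grows:
                                # positive definite but ill-conditioned
```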
What we are looking for, then, is the smooth part, so to say the smooth errors, and the task of subdivision is to refine the potentially smooth small vector e_j tilde, by multiplying by these rectangular matrices, up to the size of x_2, so that we can add x_2 and the corrections. What I will show you is how the convergence and optimality of multigrid are influenced by the properties of the subdivision. Just to recall optimality, which we already mentioned: we want the number of iterations of the multigrid needed to reach a certain tolerance not to depend on the size of the system matrix.

Okay, I will give you just a vague definition of subdivision, because it will suffice for our purposes of understanding the connection between subdivision and multigrid. Subdivision is an iterative method for mesh refinement; the applications are in computer animation. I will define it in the following way: the matrix c contains all the vertices of the given triangular mesh, so it has n_0 rows and 3 columns, for the x, y and z coordinates of the vertices, and subdivision can be seen as the application of a certain sparse matrix P_j to c. You then get a refined mesh, whose vertices are contained in this new matrix, and you keep going, so that in a finite number of steps you get a mesh which already looks smooth enough, like the limit surface of subdivision. We say that subdivision is a local mesh refinement: the matrices used for mesh refinement are definitely sparse, and there is a smoothing effect, which is exactly what we need to produce the smooth errors for multigrid. The matrices P_j can be seen as sparse, full-rank sub-blocks of certain rectangular matrices, the calligraphic P_j, that I do not define here; I hope it is fine to describe subdivision in this way.

So now let's look at an example; I will show you how those matrices P_j can look. I will give all examples and results in the univariate setting, just for simplicity of presentation, but you will see that the generalization is very simple. We discretize the Laplacian, this is the symbol, using second-order differences; the associated Toeplitz matrix looks as follows. The subdivision step, as I said before, consists of upsampling and convolution; in this case, as we deal with vectors, upsampling is nothing else but multiplication by the matrix of this form, which consists of ones and zeros: due to its zero rows, it does nothing but insert zeros into the vector at every other position. The convolution can then be seen as multiplication by a certain Toeplitz matrix, which also depends on a symbol p. So you see that now we have two symbols, f and p: f encodes all the properties of the underlying system matrix, and p, here in the Toeplitz matrix of the convolution part, encodes the properties of the subdivision. Our goal is nothing else but to match f and p; this is what we want to understand: how to match these two symbols to define a convergent and optimal multigrid method.
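A minimal sketch of one subdivision step as upsampling followed by convolution (using the standard cubic B-spline mask as an illustrative example):

```python
import numpy as np

# One subdivision step: upsample (insert zeros), then convolve with the mask.
def subdivision_step(c, mask):
    up = np.zeros(2 * len(c))
    up[::2] = c                    # upsampling: zeros at every other position
    return np.convolve(up, mask)   # convolution: multiplication by T(p)

mask = np.array([1, 4, 6, 4, 1]) / 8.0    # cubic B-spline mask, symbol p
c = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # "hat" control data
for _ in range(3):
    c = subdivision_step(c, mask)         # the data quickly starts to look smooth
print(len(c), np.round(c, 3))
```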
The task is now simpler: one has two trigonometric polynomials, and one has to decide whether they match, in the sense that the corresponding multigrid is convergent and optimal. Just to show you that you really need to be careful when matching those two symbols: it is not always possible, for a given f, to just pick any p and get a convergent and optimal multigrid, and this we will see in the following example. We now take the biharmonic problem, with the Laplacian squared, and we define the P_j matrices as I showed you before: the P_j matrix is defined by a certain symbol p times the upsampling matrix. As for the names, for those unfamiliar with subdivision: just imagine that behind every name there is a different trigonometric polynomial p. Using each such p we define the P_j, and we see that in some cases, for the linear B-spline for example, the algebraic multigrid still converges, but the number of iterations is too high, and it changes with n. When we try some other p's, which go under the names cubic B-spline and interpolatory 6-point scheme, you see that the method looks optimal, in the sense that the number of iterations needed to achieve a certain tolerance does not depend on n, and you definitely get convergence. The question now is: one can just try things and hope to get lucky, but one really wants to explain why this happens and, given f, how to choose the appropriate p so that the algebraic multigrid is convergent and optimal.

We derived the following simple-to-check sufficient conditions on the symbol p that guarantee the convergence and optimality of the corresponding algebraic multigrid method. What do we have here? We have a Toeplitz system matrix A that depends on the symbol f. The symbol can have a simple zero at 0, or maybe a multiple zero, as we have seen before: for the biharmonic problem we have a double zero at zero. And we assume that f is nonzero everywhere else, which is natural; we saw before, when I talked about the matrices arising from applications, that f was greater than zero everywhere except at zero. All you have to check, to ensure that p matches f, is that the trigonometric polynomial p has a zero of a certain order at pi, and that the absolute value of p is greater than zero on this interval. I guess the conditions look very familiar to many of you: the first one is nothing else but polynomial generation, it characterizes polynomial generation for a convergent subdivision scheme; the second one is a variation of Cohen's condition, and it implies stability of any kind, Lp stability for all p, because they are all equivalent when we have trigonometric polynomials. So you can actually reformulate this result: given a trigonometric polynomial f with certain properties, all you have to check is that your subdivision scheme generates polynomials of a certain degree and that the integer shifts of its basic limit function are stable.

Okay, so now let's see what went wrong for the linear B-spline; how can we explain it? If we look at the trigonometric polynomial f, it had a double zero at 0, but if you look at the trigonometric polynomial that goes under the name linear B-spline, you see that it only has a simple zero at pi. That is why it does not match. These are only sufficient conditions, but they still give an explanation of why things can go wrong in this case.
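A numerical check of this matching of zero orders (my sketch; orders are counted here in powers of theta, so the talk's "double zero" of the biharmonic f at 0 corresponds to order 4, and the linear B-spline's "simple zero" at pi to order 2):

```python
import numpy as np

# Estimate the order of the zero of a subdivision symbol p at pi.
def symbol(mask, theta):
    return sum(m * np.exp(-1j * k * theta) for k, m in enumerate(mask))

def zero_order_at(mask, theta0, eps=1e-3):
    # if |p(theta0 + eps)| ~ C * eps^q, the ratio below recovers q
    v1 = abs(symbol(mask, theta0 + eps))
    v2 = abs(symbol(mask, theta0 + eps / 2))
    return np.log(v1 / v2) / np.log(2)

linear = np.array([1, 2, 1]) / 2.0          # linear B-spline mask
cubic = np.array([1, 4, 6, 4, 1]) / 8.0     # cubic B-spline mask
for name, m in [("linear", linear), ("cubic", cubic)]:
    print(name, round(zero_order_at(m, np.pi), 2))   # 2.0 vs 4.0: only the
# cubic mask matches the order-4 zero of f(theta) = (2 - 2 cos theta)^2 at 0.
```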
The others, the cubic B-spline and the interpolatory six-point scheme, satisfy the sufficient conditions that I have just given. (A question from the audience.) Yes, we tried several; I am just showing the ones that look better, so to say. The question was whether one can try any other subdivision scheme here: yes, you can, as long as, in this case at least, it generates linear polynomials and the integer shifts of the basic limit function are stable.

So what else can one do here? So far, all the schemes we have seen have dilation factor two, and the question is: is there any use for subdivision schemes with dilation factor three, four, and so on? This depends on how many zeros your symbol f has. So far I have convinced you that f can only have a zero at 0 and is greater than 0 otherwise, but we will see on the next slide that there can be numerical zeros at other points of the interval from 0 to 2 pi. Here, for example, we assume that f is not only zero at 0 but can also be zero at pi, with a certain multiplicity, and is greater than 0 otherwise. What do you do then? You just have to change the dilation factor and require zeros at the appropriate points, which corresponds to polynomial generation of ternary subdivision schemes; the second condition also changes appropriately, but it is still polynomial generation and stability that appear here.

Now let's look at the example: what can happen, why might the symbols all of a sudden have a numerical zero at some other point? I will not explain the details of how those symbols look; you can just ask Carlo. The symbol that results when you discretize the one-dimensional Laplacian using mu-th order B-splines has the following structure, and the property of h_mu is that, when mu goes to infinity, it tends to zero at pi. So we have two zeros: one is the zero at zero, and the other is a numerical zero at pi. Now let's see how the binary schemes, the schemes with dilation factor two, perform compared to schemes with dilation factor three. You see, when mu equals three, this singularity is not yet pronounced, we do not really have the numerical zero at pi, so the schemes with dilation factor two do better, because they do not lose too much information: when you think of coarsening, the binary schemes coarsen, or throw away, less at each step, while the ternary schemes reduce the sizes of the matrices faster, dividing them by three, so you can think of them as losing too much information. So the binary do better than the ternary. But when you get to mu equal to 16, you already see the difference, and this is really because the symbol f has a numerical zero at pi. How can you understand why the ternary schemes are better here? They do not introduce a zero at pi: if we go back, we removed the condition that p is zero at pi by using dilation factor three instead of two, while the binary schemes have a zero of p at pi, which would even worsen the situation.

Okay, so now let me try to give you an insight into the proof, and show you where those conditions come from: why the polynomial generation and the stability of the scheme are important for the convergence and optimality of the multigrid.
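A sketch of the dilation-factor reasoning (my illustration: for dilation m, polynomial generation asks the symbol p to vanish at 2*pi*k/m for k = 1, ..., m-1; the ternary mask below, with coefficients of (1 + z + z^2)^2 / 3, is an illustrative guess, not necessarily one of the talk's schemes):

```python
import numpy as np

def symbol(mask, theta):
    return sum(m_ * np.exp(-1j * k * theta) for k, m_ in enumerate(mask))

ternary = np.array([1, 2, 3, 2, 1]) / 3.0   # ternary linear B-spline-type mask
for k in (1, 2):
    print(k, abs(symbol(ternary, 2 * np.pi * k / 3)))  # ~0: zeros at 2pi/3, 4pi/3
print(abs(symbol(ternary, np.pi)))  # nonzero at pi, so no extra zero is placed
                                    # where f is already (numerically) zero
```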
The first observation is that although the implementation is done with Toeplitz matrices, and those are the matrices one obtains when discretizing the PDEs, the analysis is usually done with circulant or block circulant matrices, for the following simple reasons. If you compute the A_j by multiplying the system matrix by P_1 transpose and P_1, you do not get a Toeplitz matrix anymore: Toeplitz matrices do not form an algebra. Here you want to keep the structure, so all the smaller matrices should be circulant. Circulant matrices have another nice property: you can diagonalize them using the Fourier transform, so later on you see better the connection between the properties of the f and p appearing in the definitions of the circulant matrices A_j and P_j. And the circulant matrices approximate the Toeplitz matrices well, meaning that when n is really large, the eigenvalues of the circulant matrix are close to the eigenvalues of the Toeplitz matrix, and the other way around.

So let's see how this helps us understand which properties of p and f we want. I look first at the very simple case where we do not do K multilevel error correction steps but only one, the so-called two-grid method. You have seen this iteration before: instead of the loop that ran from 1 to K, we just reduce the size of A once, solve the coarse system of equations exactly, and then refine the vector e_1 tilde via subdivision, meaning by multiplication by this matrix P.

Now, what is the iteration matrix? I showed you the iteration matrix of the weighted Jacobi, but I never told you how the iteration matrix of the algebraic multigrid looks. It is also an iterative method, so when you think about convergence it is very natural to ask whether the iteration matrix, in a certain norm, is less than 1 or not. But what is the iteration matrix here? Let's look at the following. I call it e on purpose, because now I have an equality between the true solution x and x_2 plus e, whereas above we had only an approximation. Where does this representation come from? Look at the structure of the expression: A x_2 appears here; you multiply minus A x_2 by P transpose, this is the P transpose here; then you have the A_1 to the minus one, this is here; and then the P comes from projecting up. So this minus P A_1 inverse P transpose A, applied to x_2, is nothing else but this part of the computation; on the other hand we have x minus x_2, the identity times x minus x_2, and the remaining part is nothing else but the minus e here (the sign does not matter).

And now we want to convince ourselves that we get the exact solution if and only if the difference is in the range of P. One direction: if we have the exact solution, I hope one can see easily that the difference must then be in the range of P, so it really made sense to compute the error in this form.
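A concrete two-grid check, under stated assumptions (my sketch: the Toeplitz 1D Laplacian with Dirichlet boundaries, so that all matrices are invertible, prolongation P = T(p) U with the linear B-spline symbol, and a weighted Jacobi smoother with weight 1/2):

```python
import numpy as np
from scipy.linalg import toeplitz

n = 64
A = toeplitz(np.r_[2.0, -1.0, np.zeros(n - 2)])       # 1D Laplacian, SPD

T = toeplitz(np.r_[1.0, 0.5, np.zeros(n - 2)])        # convolution with p
U = np.zeros((n, n // 2))
U[np.arange(0, n, 2), np.arange(n // 2)] = 1.0        # upsampling matrix
P = T @ U                                             # tall, slim, full rank

S = np.eye(n) - 0.5 * np.diag(1.0 / np.diag(A)) @ A   # weighted Jacobi smoother
A1 = P.T @ A @ P                                      # coarse matrix, SPD
CGC = np.eye(n) - P @ np.linalg.solve(A1, P.T @ A)    # coarse-grid correction
TG = S @ CGC @ S                                      # two-grid iteration matrix
print(max(abs(np.linalg.eigvals(TG))))                # well below 1, and roughly
                                                      # independent of n
```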
For the other direction one has to give it a little bit of thought: if the difference is in the range of P, then it is nothing else but P times y for some vector y, and when you put P y in here, you see that P transpose A P is nothing else but A_1; A_1 cancels with A_1 to the minus one, you still have P y left here, and then you have P y minus P y, so we get the true solution. This just tells us that this is the matrix behind this iteration of correcting and refining.

What other information do we get from this expression? One can see that this is the best possible choice for computing the error in this case; this is due to Ruge and Stüben, and it is nothing else but a best approximation inequality, and you can convince yourselves that this is true. The iteration matrix is then one step of the Jacobi times this orthogonal projection, and to match f and p we have to find a constant c such that, in a special matrix norm if we have positive definite matrices, or seminorm if we have positive semidefinite matrices, this is less than one. The big step taken by Ruge and Stüben in analyzing this iterative method was to split the analysis into two parts: you can analyze separately what is going on with the weighted Jacobi, and you can analyze separately the structure and properties of this matrix. For multigrid you have a very similar expression; remember, multigrid just has several P_j's, you do several error correction steps, so this matrix should look familiar from before. You need to show boundedness in the following sense, and the first part of the proof, the first sufficient condition you need to check, is something like convergence of the weighted Jacobi method, or Richardson iteration, or whatever simple iterative method you use. If you prove these two properties separately, you are guaranteed a constant c less than 1; that was the big achievement of Ruge and Stüben. As you see, for the matrices P that we constructed, the connection between subdivision and the Toeplitz matrix of f appears only in the second part of the proof: the second condition is the only one that involves the matrices P_j. So all we have to do now is concentrate on how to prove this bound, this estimate; the first part was already known, because those Toeplitz matrices are very standard, they appear over and over again, and no P appears there, so we do not need to look at the proof of the first sufficient condition.

If you think about the circulant matrices, there was a Fourier transform that allowed us to see f and relate it to the properties of p. I am not going to give you all the details, but in the simple case, with dilation by 2 and in a univariate situation, after the Fourier transform and some algebraic manipulations, all that is left to show is that this constant gamma_j bounds the following expressions. Now, what are those f_j appearing here all of a sudden? We had the symbol f that determined the Toeplitz system matrix at the very beginning; then we were computing the matrices A_j, which are also circulant, so there is a way, I do not give you the explicit expression, to compute from f and p the symbols f_j of those circulant matrices. To ensure that the f_j have the same property as f, meaning that they have a zero at 0 (here I only show it for f_1) with the same order as the zero of f at 0, you need to satisfy the following property; this was proven before by Marco Donatelli and co-authors, and I do not give you the proof, so hopefully you will believe me here.
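For orientation, here is a hedged LaTeX sketch of the kind of bound meant here; this is my reconstruction in the standard circulant two-grid form, not the speaker's exact slide:

```latex
% My reconstruction (standard circulant two-grid form), not the exact slide:
% after the Fourier transform, the approximation property reduces to a
% uniform bound of the type
\[
  \frac{|p(x+\pi)|^{2}}{f_j(x)} \;+\; \frac{|p(x)|^{2}}{f_j(x+\pi)}
  \;\le\; \gamma_j \qquad \text{for all } x,
\]
% so the zero of f_j at x = 0 in the denominator must be compensated by a
% zero of p at pi of at least the same order: exactly the matching of zero
% orders described in the text.
```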
But now, how do the polynomial generation, those zero conditions, come into play? If you know that f_j has a zero of a certain order at 0 and you look at those expressions, you see that f_j appears in the denominator here, and it also appears here. We know that f_j at x plus pi is greater than 0, because everywhere else the symbol is greater than 0. So we have to match the orders of the zeros in the denominator and the numerator; that is all we have to do here. This factor is nonzero, here we have a zero and here we have a zero, so our only hope to compensate for the zero in the denominator at x equal to zero is to choose p such that it is zero at pi. And if we have a double zero in the denominator, it does not disturb us: this is nonzero, this is nonzero and this is zero, so we have a double zero in the denominator and a double zero in the numerator, and everything is fine if we require p at pi to be zero; the same thing happens here. So that is all you are doing: you are matching the orders of the zeros. That is how the polynomial generation property comes into play, and it is also easy to show that this condition is implied by the variation of Cohen's condition that I showed you before. So that is what we did.

But now, just to show you what is really going on, what the role of those matrices P_j is, what a bad P_j and a good P_j look like: if you look at the corresponding symbols, the eigenvalues of the circulant system matrix A, and of all the other circulant matrices A_j, lie on this curve. We definitely have a zero at 0, positive semidefiniteness, and the other eigenvalues; so it looks like these P_j's used to define the A_j do not improve the conditioning of the matrices A_j. If you choose a good P_j, what happens to the symbols? You still have a zero at 0, but it looks like you are decreasing the order of the zero at 0, so you are improving the conditioning, in a way. And what happens in the case of IgA, where we used the discretization via the isogeometric approach? The binary schemes, those with dilation two, do try to improve the singularity at pi, but very slowly, while the ternary schemes after six iterations already do not see that zero; certainly something happens here, but in total you can really check that the conditioning improves, and the ternary schemes manage to remove the singularity at pi fast.

So that's it; that is all I wanted to say. Let me summarize shortly. What I tried to convince you of is that the algebraic properties of the symbols of subdivision schemes influence the convergence and optimality of multigrid. For Toeplitz system matrices with this particular property, the zero at zero, the choice of the dilation factor can influence the conditioning within the multigrid. There is still a lot to do, we are just starting, and that is why I wanted to present a very simple version of the results: to show you that there is real hope of connecting subdivision and algebraic multigrid. Thank you for your attention.

Q: Thank you. At last I understand what algebraic multigrid is.

A: That was my purpose. Thank you.

Q: I have two questions, or comments.
First of all, the fact that polynomial generation is important: is it linked to the fact that differential operators are exact on polynomials?

A: Yes.

Q: But I wonder, if you take a differential operator which is exact on other families of functions...

A: You mean exponential polynomials or something? It is really just the location of the zeros that matters; we have not explored and exploited all possible connections yet.

Q: In the examples you showed, they were exactly polynomials. And the other remark: could it be important to also use the derivatives of the function? You have the solution of a differential equation, so maybe you also have the derivatives; you might take Hermite subdivision, or maybe non-stationary schemes. Also, you have used the same p for each P_j.

A: We do not know yet. One could use different p's for different P_j's, to introduce non-stationary schemes, and the same with Hermite; I do not know, we have to look at all of this. No idea yet. Thank you.

Q: Did you look at multilevel Toeplitz?

A: You mean block Toeplitz? Oh yes, that was the poster by Valentina Turati, so you can still go back and look at it, but it is just at the beginning. The results generalize in a very easy fashion, but now we have to find examples where it really makes a difference. On Valentina's poster there is the use of anisotropic dilation: how can one make use of anisotropic dilation to improve the convergence and optimality of the multigrid? But again, it is just the beginning.

Q: Very interesting. I want to ask: all your analysis makes heavy use of Fourier, of very structured things. Now, if you imagine operators coming from unstructured discretizations, I mean discretizations on an unstructured mesh, which come up all the time in the discretization of PDEs, what is left of all this? For example, you could imagine a crystalline structure where the notion of polynomial reproduction still makes sense, and the notions of stability and convergence could still make sense. I want to understand what carries over to an unstructured setting and what one has to rethink.

A: One has to think about it. What I showed is all we have done so far, and we are just discovering that there are lots of possible other directions, so thank you for the comment; I cannot say anything at the moment. We started, at least I did, only half a year ago.

Q: I have, in a sense, a philosophical question. The fact that the distribution of the zeros of the symbol of the projector, meaning the zeros of p, has an influence on the convergence is in the theory of algebraic multigrid already. You can see this as a nice interpretation, a nice connection with subdivision, but what do you gain by knowing that this p can be seen as the symbol of a subdivision scheme?

A: Not much, except that we now have a whole class: people were not familiar with this whole class of p's that one could just take and use for multigrid. We know the properties: we do not have to prove that they provide a certain polynomial generation, it is all known; we do not have to show the stability, it is also known; you just take it for granted and apply it.

Q: Yes, but usually what you look at is just a property of the zeros, if it is a trigonometric polynomial, that's it; whereas constructing it in such a way that it also satisfies the second sufficient condition...
A: But you do not need to construct them; you have them already, you just have to identify which ones you can use for sure and which ones you cannot, since these are only sufficient conditions.

Q: I also have a comment: the analysis is done for, say, Toeplitz or multilevel Toeplitz matrices, but in practice, even if you use a structured discretization, which is the easier case, the matrices are not exactly of this form, due to the boundary conditions.

A: I agree completely; that is why I emphasized the structure of the matrix here. The symbol is important, but it does not capture everything, so this is also something still to do. I know, there is a lot still to do, I agree completely.

Q: Okay, I am the chairman, so I can, you know... we ran out of time. Your idea is how to do the refinement and the coarsening, and for this you need this f, so you are not really solving algebraically, right? Algebraic multigrid is for when the problem has no underlying structure.

A: Often, in general, yes; but I showed you that there are lots of applications where there is more structure to the system matrix, and it is only for those classes of matrices that we can say something.

Q: So you are really working on multigrid in particular, because you have a differential operator with a symbol, right?

A: True, but algebraic multigrid, as I understood it, gets its name from the following fact: the definition of the P_j's depends only on the matrix A, and there is no grid as input; that is why "algebraic". Multigrid in general needs a hierarchy of grids. And then there is the question: if you have a grid, like a triangulation, how do you choose the appropriate subdivision scheme? Here it seems to work just for this important case.

Q: Thank you very much, and now it is time to go to our...
Info
Channel: Centre International de Rencontres Mathématiques
Views: 1,152
Rating: 4.6666665 out of 5
Keywords: Cirm, CNRS, SMF, Mathematics, mathématiques, Marseille, Luminy, Centre international de rencontres mathématiques
Id: hSr5sn5R4Yw
Length: 51min 15sec (3075 seconds)
Published: Wed Oct 05 2016