Monte Carlo Geometry Processing

Captions
Hi, I'm Keenan Crane from Carnegie Mellon University, and I'm going to talk about how we can solve equations on extremely complicated geometry using Monte Carlo methods. Just as Monte Carlo revolutionized photorealistic rendering around the 1990s, the Monte Carlo methods I'll discuss in this talk provide some very powerful tools for broader problems in science and engineering. This work started with my PhD student Rohan Sawhney, and more recently with a growing list of collaborators from CMU, Dartmouth, and elsewhere.

A basic motivating problem is the classic interpolation problem: you have some region Omega with known values along the boundary, and you want to smoothly interpolate these values into the interior. This is the classic Laplace equation, Delta u = 0 on Omega, subject to Dirichlet boundary conditions u = g on the boundary of Omega. Many of us have been trained, myself included, to solve this kind of problem using finite-dimensional approximation: for instance, build a mesh of the domain, use it to define basis functions phi_i, then compute a function u that satisfies our Laplace equation in a weak sense. Of course, if you do this you necessarily incur error, both from approximating the geometry and from approximating the underlying function space.

But there is another way: we can avoid finite-dimensional approximation completely by using Monte Carlo. For the Laplace equation, this starts with Kakutani's principle, which says that if I want the solution at a point x, I can take a bunch of random walks to the boundary, grab the boundary value g where each walk exits, and take the average (or expected value) of those boundary values. There's only one problem: how do I simulate a random walk? It feels like I still have to take little finite time steps. But a random walk starting at a point x is equally likely to exit through every point on a sphere around x, so if I just uniformly sample the sphere, I haven't made any statistical error at all. This is the starting point for the so-called walk on spheres algorithm: to solve a Laplace equation at x_0, take the biggest empty sphere around x_0, uniformly sample a point on that sphere, and repeat until you reach the boundary. Of course, like Zeno's paradox of motion, you never actually reach the boundary, but once you get within some epsilon neighborhood of it, it's probably fine to just go ahead and grab the closest boundary value.

A completely different way to motivate this same algorithm is through the mean value property of harmonic functions: the value at any point x_k equals the average value over any sphere around that point. If I repeatedly estimate this mean value integral using a single random sample, I again get the same walk on spheres algorithm. This duality between the stochastic point of view and the integral (or potential theory) point of view will become important later on.
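To keep the two pictures straight, here is the boundary value problem together with its stochastic and integral characterizations (standard statements; B(x) is any ball around x contained in Omega, W_t is a Brownian motion started at x, and tau is its exit time from Omega):

\[
\Delta u = 0 \ \text{in } \Omega, \qquad u = g \ \text{on } \partial\Omega,
\]
\[
u(x) = \mathbb{E}\big[\, g(W_\tau) \,\big] \quad \text{(Kakutani's principle)}, \qquad
u(x) = \frac{1}{|\partial B(x)|} \int_{\partial B(x)} u(y)\, dA(y) \quad \text{(mean value property)}.
\]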
But before getting too excited about this strange and different approach, you might ask: why? Why would we want to do it this way rather than just build a mesh like we always do? I hope you'll excuse me for a moment if I indulge in some propaganda about Monte Carlo methods and why I think they're interesting. The basic reason for seeking out alternative strategies is that geometry is hard; even really smart people at Pixar say they hate meshes and cannot believe how hard this is. In contrast to meshing, finding the closest point on the boundary of a domain, which is all walk on spheres needs, is a relatively simple operation. This matters because geometry is only getting more complex as the years go on: we have extremely detailed data coming from laser scanning, shape optimization, and computer vision, and even after decades of work on methods for solving PDEs, we still struggle to handle the level of geometric complexity we encounter in nature. In fact, even state-of-the-art tools can take literally hours to mesh a single model. So even if your finite element solver is super fast, it doesn't matter, because you're spending all your time converting your domain into a mesh that's good enough for simulation. This is especially tough because geometry encountered in the wild can be really, really bad. To paraphrase Jean-Paul Sartre: hell is other people's meshes. If you've worked with real data, you know he's right. One approach is to keep pushing on better and better meshing tools, and to be fair, these have gotten extremely good, especially in recent years. But meshing feels like a problem that will never be completely solved: even with the latest and greatest tools, you run into situations where geometry takes hours to mesh, and even when the algorithm terminates, it might not give you what you want; it might miss small details that completely change the meaning of the solution.

Back in the 1990s, people ran up against the same kinds of problems in rendering. If you wanted to make a photorealistic image, you would probably use something called finite element radiosity, where you mesh the scene, build a large linear system, and solve for the illumination everywhere. This idea fell out of fashion for many reasons, but in large part because meshing is, again, really hard and painful. Instead, the rendering community moved away from finite element methods and toward Monte Carlo methods, where you recursively shoot rays that bounce around the scene. Now, instead of meshing, you just have to compute where a ray pierces the domain boundary, which is far easier to do. Beyond being a beautiful idea, the shift to Monte Carlo enabled modern photorealistic rendering and has had a major impact on industries like entertainment and industrial design, and these days computer vision, where we're seeing leaps and bounds in the ability to reconstruct 3D objects from photographs.

Of course, there are a lot of physical problems beyond light transport that show up in important applications, and this is where I believe Monte Carlo methods can provide many of the same benefits for science and engineering. For instance, just as in rendering, as long as the problem we want to solve is well posed, we can be confident that Monte Carlo methods will produce a physically meaningful solution; we don't have to know which function space we're working in, or find finite elements that span that space, or anything like that. Just like rendering, the cost of adding geometric detail grows like n log n, which means we can easily scale to problems with literally billions of elements; here's one PDE on the right that we solved where the boundary mesh has over a billion elements. Just like rendering, you get perfect parallel scaling by simply averaging the results from n different processors. Just like rendering, you can immediately get a reliable preview of the solution by taking just a few samples;
here I'm showing a PDE being solved in real time in a shader program, with no precomputation. Just like rendering, you can work directly with a mix of different geometric representations, like meshes and implicit surfaces, without having to generate a separate finite element mesh. Just like rendering, you get a totally reasonable solution even if your geometry has major defects like cracks and self-intersections. Just like rendering, where you shoot rays only for the pixels you want to see, walk on spheres lets you evaluate the solution only at the points where you actually need it. And you're not forced to use Monte Carlo everywhere unless you really want to: in rendering, for instance, we're now seeing a lot of ray tracing inside rasterization pipelines, and in principle you could likewise use Monte Carlo methods somewhere inside an existing mesh-based simulation. Now, despite my exuberance about this approach, I'm not naive; there is amazing work on solving all sorts of physical equations that we may never be able to match with Monte Carlo methods. But at the same time, given how far Monte Carlo has brought us in helping to understand, simulate, and predict illumination, we would be absolutely crazy not to see what it can also do for science and engineering.

Okay, enough with the propaganda; let's get back to looking at how we can actually estimate the solution to various equations. These ideas, by the way, have been around for a while, but have been somewhat forgotten in time. The earliest record I know of is some brief remarks in a book on numerical methods; then, in 1956, Muller really lays the groundwork for walk on spheres, and he understood the propaganda even then: he says that as long as the problem is well posed, you don't have to worry about the geometry. As with any good algorithm, this one was reinvented a decade later, under the name "floating random walk." Much later, Binder and Braverman gave a thorough analysis of its performance and complexity, and over the years there have been maybe a couple dozen papers proposing various extensions; compare that to finite element methods, which probably have tens of thousands of papers written in the same period. There have also been some applications of walk on spheres, especially to integrated circuit design, and a few other things, but that's really about it. The main reason this hasn't been taken further is that, so far, walk on spheres applies to a very limited class of partial differential equations. So our mission, our call to action, has been to expand the kinds of problems, and hence the kinds of applications, that can be addressed using these Monte Carlo methods.

Let's begin with some basic facts about Monte Carlo. The idea is that I have an integrable function f and I'd like to compute its integral I over the domain Omega. To do that, I take n uniform, independent random samples, evaluate the integrand at these points, add the values up, divide by n, and multiply by the volume of the domain. Fact one: this estimator is always consistent, meaning that as I keep taking more and more samples, it converges to the true integral, with no further hypotheses on the function f (nothing about its regularity or anything like that), so it's a very general technique. Fact two: the estimate is unbiased, meaning the expected value of the estimator equals the true value of the integral; more intuitively, I get the exact solution on average, even if I take only one sample. We'll use this fact a lot. Fact three: the error is very predictable; for any function, in any dimension, it goes like one over the square root of the number of samples.
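A minimal sketch of that basic estimator in Python (the integrand and interval are just illustrative placeholders), showing how the error shrinks like 1/sqrt(n) regardless of the integrand:

```python
import random

def mc_integrate(f, lo, hi, n):
    """Estimate the integral of f over [lo, hi]: average n uniform samples
    of the integrand, then multiply by the volume (here, the interval length)."""
    total = sum(f(random.uniform(lo, hi)) for _ in range(n))
    return (hi - lo) * total / n

# The estimate is unbiased for any n, and the error decays like 1/sqrt(n).
for n in [100, 10_000, 1_000_000]:
    print(n, mc_integrate(lambda x: x * x, 0.0, 1.0, n))  # true value: 1/3
```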
Another basic tool we'll need is the closest point query, which, for a given query point x, finds the closest point x' on the domain boundary. For instance, if our boundary is just a single line segment, we can easily write down a little explicit formula for x', and the distance between these two points then gives us the sphere radius for walk on spheres. The details of this formula aren't particularly important; a more important observation is that these atomic queries are easily combined to get the closest point on a more complicated set. For instance, if our boundary is now two line segments A and B, we can just take the minimum of the individual distances to get the distance to the closest point on either A or B. And for domains made of many small pieces, say many line segments, we can accelerate these queries using a standard spatial data structure like a bounding volume hierarchy, which lets us compute the closest point among n objects in only log n time.
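As a sketch, here is what those atomic queries might look like in Python for a 2D boundary made of line segments; a real implementation would swap the linear scan for a BVH, but the "min over the pieces" logic is the same:

```python
import math

def closest_point_on_segment(x, a, b):
    """Closest point to the query x on the segment from a to b (2D tuples)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    denom = dx * dx + dy * dy
    if denom == 0.0:                   # degenerate segment: a single point
        return a
    t = ((x[0] - a[0]) * dx + (x[1] - a[1]) * dy) / denom
    t = max(0.0, min(1.0, t))          # clamp to the segment endpoints
    return (a[0] + t * dx, a[1] + t * dy)

def distance_to_boundary(x, segments):
    """Distance from x to a collection of segments: the min over the pieces.
    (A bounding volume hierarchy would bring this from O(n) down to O(log n).)"""
    return min(math.dist(x, closest_point_on_segment(x, a, b))
               for a, b in segments)
```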
Given those basic components, how do we come up with a walk on spheres estimator for a given differential equation? The path I'll take is to first find a stochastic representation of the PDE over a general domain, then convert it to a deterministic integral representation over a ball-shaped subdomain, which can then be estimated via Monte Carlo. One funny thing is that, in retrospect, you can often give the whole derivation without the stochastic picture; but in reality, the stochastic approach is pretty important for developing a lot of the ideas in this area, and it would be hard to get by on the integral picture alone. I'll also mention that in this section of the talk I'm just going to give estimators that are correct; later on, I'll talk about how to make them more efficient.

Here's the PDE I'll consider today. If we imagine the solution u represents, say, the temperature of the domain, then the boundary values g describe fixed temperatures along the boundary; a source term f describes a kind of background temperature; an absorption term sigma u describes cooling due to absorption of heat into the background medium; a drift term omega . grad u models heat getting dragged along by a flow with velocity omega; and the scalar function alpha in the second-order term describes how quickly heat diffuses through the domain. In general, these coefficients can vary over space, but for the moment, for simplicity, we'll assume they're constant.

With all our basic concepts in place, let's write down a walk on spheres estimator for each term of our PDE. We'll start with what we already saw at the beginning: a Laplace equation with Dirichlet boundary conditions g. To handle this boundary term, we had Kakutani's principle, which says we take a bunch of Brownian random walks to the boundary and take the expected (or average) boundary value. To actually simulate this process, we restricted our attention to a ball around the point x and replaced the boundary values g with the unknown values u of our harmonic function, so the expectation just becomes our usual mean value integral. Now that we have this deterministic integral, we can estimate it using Monte Carlo: uniformly sample points on the boundary of the ball (those are our points y_i) and recursively evaluate our estimator for u. As mentioned before, this (and any) Monte Carlo estimator remains unbiased even if we take only a single sample, so we set n = 1. Then, to estimate our boundary term, we simply recursively sample a point on the largest ball around the current point, and we get an algorithm that looks like this: starting at x_0, sample x_1, sample x_2, and so on. As we said before, for this to ever terminate, we need to finally jump to the closest point on the boundary, which introduces a small but controlled amount of bias. Our entire solver for the Laplace equation boils down to about ten lines of code: initialize the estimate to zero; then, for some number of walks, start at the evaluation point x_0, find the radius of the largest empty sphere around the current point, and sample a new point, repeating until we come within epsilon of the boundary; then add the closest boundary value to our estimate. In the end, we return the average of all the boundary values we've seen along the way.
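In Python, those ten lines might look something like this (a 2D sketch; `distance` and `boundary_value` are assumed helpers, e.g. the segment queries above and a function returning g at, or near, the closest boundary point):

```python
import math, random

def wos_laplace(x0, distance, boundary_value, n_walks=1000, eps=1e-3):
    """Walk on spheres for the 2D Laplace equation with Dirichlet data."""
    total = 0.0
    for _ in range(n_walks):
        x, r = x0, distance(x0)
        while r > eps:                        # walk until the epsilon shell
            theta = random.uniform(0.0, 2.0 * math.pi)
            x = (x[0] + r * math.cos(theta),  # uniform point on the largest
                 x[1] + r * math.sin(theta))  # empty sphere (a circle, in 2D)
            r = distance(x)
        total += boundary_value(x)            # grab the closest boundary value
    return total / n_walks
```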
Okay, but we already saw that at the beginning; how do we add new terms, like a source term? Here the stochastic picture is that as our walker wanders around the domain, over the total time t it takes to reach the boundary, it picks up this background radiation f. The question then becomes: how do we account for this additional heat in our Monte Carlo estimator? We can again think about one of our balls and ask where the particle spends its time while wandering around inside the ball. Imagine a bunch of random walkers leaving trails wherever they go: if we average this picture over a very long time, what we get eventually looks like the harmonic Green's function for a ball, which we know in closed form. So we expect to pick up background radiation equal to the integral of the source term f times the Green's function G over the ball. Once again we can approximate this integral with a one-point Monte Carlo estimator: sample a point uniformly from inside the ball (rather than on its boundary), evaluate the source term and Green's function at this sampled point, and multiply by the volume of the ball. Our overall estimator is then the same as before, except that we add this additional contribution at each step: instead of just sampling the next point x_{k+1}, we also sample a point y_k inside the ball, and finally jump to the boundary. Likewise, our code gets longer by just a few simple lines.
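For instance, in 2D each step might add a contribution like the following, using the closed-form Green's function of a disk evaluated at its center (a sketch; f is the assumed source term, and the returned value gets added to the running estimate inside the walk loop above):

```python
import math, random

def source_contribution(x, R, f):
    """One-point estimate of the source integral over the disk B(x, R):
    |B| * f(y) * G(x, y), with y uniform in the disk. For a 2D disk, the
    harmonic Green's function evaluated at the center x is
    G(x, y) = log(R / |y - x|) / (2 pi)."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = R * math.sqrt(1.0 - random.random())  # uniform in area; r > 0 keeps us
    y = (x[0] + r * math.cos(theta),          # off the (integrable) singularity
         x[1] + r * math.sin(theta))
    G = math.log(R / r) / (2.0 * math.pi)
    return math.pi * R * R * f(y) * G
```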
How about absorption? The idea now is that the longer a particle wanders around the domain, the more likely it is to get killed, so we have a cumulative survival probability described by exponential decay. To turn this into a deterministic integral, we again ask where the particle spends its time inside the ball, just as before; now we get a different Green's function, sometimes called the Yukawa potential, which depends on the absorption parameter sigma. We also need to know how likely the walker is to reach each point on the boundary of the ball, which we can get by tabulating the exit distribution, or what's called the Poisson kernel, associated with this stochastic process. The overall expression for the solution u is then the integral over the ball of the source term times the Green's function, plus the integral over the ball boundary of the unknown function u times the Poisson kernel P. This is what you would see in ordinary potential theory. To incorporate drift, as you might have in a convection-diffusion equation, we transition from a purely Brownian process to a diffusion process with drift, and everything works the same way; this time the Poisson kernel is given by the von Mises-Fisher distribution, which also shows up in volume rendering, and is related to the Green's function by a normal derivative. And this is our general recipe: if we know the Green's function and the Poisson kernel, then we know how to apply the walk on spheres algorithm.

That completes the sketch of how we go from our second-order PDE all the way to a Monte Carlo estimator. A small but very useful observation is that we can also compose differential operators by nesting estimators. For instance, if I want to solve a fourth-order biharmonic equation, Delta^2 u = f, I can write it as two second-order Poisson equations, Delta u = v and Delta v = f; then, every time my estimator needs to evaluate v, I can treat that as estimating a source term, and instead of sampling a single point, I take a whole walk on spheres all the way to the boundary. There's an obvious problem here: this is horribly inefficient, since if n is the maximum number of steps in any walk, I now have to take order n^2 steps rather than order n. I'll describe a way to speed that up later on.

Another very nice thing I can do with walk on spheres is estimate spatial derivatives of the solution. If we again have our basic Laplace equation, we know the value at a point x is represented by our mean value integral, and the gradient can be expressed by something that looks almost the same, except that we take the average boundary value times the normal of the bounding sphere; this is basically an application of the Reynolds transport theorem. If we wanted a directional derivative along z, we would shift the ball a little in the z direction and pick up a contribution proportional to z . n. Once we can estimate the gradient or directional derivatives, we can also estimate any other first-order operator, like curl and divergence. Finally, we can estimate higher-order derivatives like the Hessian of the solution, which in our case we did using tools from stochastic calculus; from there, we can estimate any second-order derivative of the solution, pointwise. It may be worth mentioning that this is something you can't necessarily do with finite element methods, where you may only know the solution up to some number of derivatives.
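Concretely, since each component of the gradient of a harmonic function is itself harmonic, the divergence theorem turns the mean value property into a gradient formula over a ball B of radius R (a standard identity; n is the outward unit normal and d the dimension), together with its single-sample estimate:

\[
\nabla u(x) = \frac{1}{|B|} \int_{\partial B} u(y)\, n(y)\, dA(y)
\;\approx\; \frac{d}{R}\, u(y)\, n(y), \qquad y \sim \mathrm{Uniform}(\partial B).
\]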
Now, real systems can exhibit very complex spatial variation, perhaps capturing fine-scale structure in material properties, so we'd really like to solve PDEs whose coefficient functions vary continuously over space. I won't go through all the details here, but the basic strategy is to transform this variable-coefficient PDE into one where all the variability is captured by the source term. The sketch of the process: we start with our general second-order equation; to push all the variability from the second-order term into the first-order term, we apply the product rule and divide by alpha; then, to push the variability from the first-order term into the zeroth-order term, we apply a Girsanov transformation, which is basically a change of measure for our stochastic process; finally, to move the variability from the zeroth-order term into the source term, we use a technique called delta tracking, which comes from volumetric rendering. We then get a constant-coefficient PDE that we can handle using the estimators we've already described. What we get in the end is this screened Poisson equation, where I've made all sorts of variable substitutions; don't worry about those. The main point is that we now have a funny-looking PDE where the solution u appears on both sides of the equation. From a finite element perspective, we really haven't made life any easier: to solve this, we'd basically have to move all those terms back to the left-hand side to form the stiffness matrix. But for Monte Carlo, solving recursive equations is no problem at all; we already recursively sample our solution to evaluate the boundary term, and now we just have to also estimate it in the source term. So our overall estimator looks like this: to evaluate the solution at a point x_k, we estimate the boundary term at a point x_{k+1} sampled from the sphere, and we estimate the source term at a point y_k sampled inside the ball. If we do this naively, we have a problem, because the number of samples grows exponentially: two new samples for each sample. What we can do instead is flip a biased coin: sometimes we sample the boundary term, sometimes the source term, and we weight the two terms appropriately.
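The coin flip is a tiny trick, but it's what keeps the recursion a single path rather than an exponentially branching tree. A sketch (the two callbacks are placeholders for the recursive boundary and source estimates):

```python
import random

def one_sample_sum(estimate_boundary, estimate_source, p_boundary=0.5):
    """Unbiased single-sample estimate of (boundary term + source term):
    evaluate only one of the two, reweighted by its selection probability."""
    if random.random() < p_boundary:
        return estimate_boundary() / p_boundary
    else:
        return estimate_source() / (1.0 - p_boundary)
```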
The real takeaway here is that walk on spheres is not just another way to think about integral formulations, which are already widely used, for instance, in boundary element methods; rather, it lets us pull off fundamentally different tricks and transformations, yielding equations that can only be solved by recursively approximating integrals. This viewpoint of recursive integral equations is also what lets us connect to deep knowledge from rendering: the rendering equation is a recursive integral equation, and in fact, if you write out the volumetric rendering equation for the radiance L in something like a cloud or a puff of smoke, you see that structurally it's almost identical to the Feynman-Kac formula for our PDE. We'll exploit this connection a lot as we continue, especially because there's a lot of deep knowledge in rendering and computer graphics about how to accelerate these kinds of calculations.

So far we have estimators that are correct; now we'd like to make them more efficient. For Monte Carlo, efficiency means we'd like to either reduce the cost of evaluating the estimator or decrease its variance, where variance is basically how much the estimator deviates from the true integral on average. Recall that the variance of any Monte Carlo method is inversely proportional to the number of samples, but as we've seen in rendering, the constants in these estimators can make an enormous difference in practice. One classic technique for reducing variance is importance sampling. If I have some complicated integrand f(x), it doesn't help much to put samples where f is small; I'm not adding much to my estimate. But if I have a decent approximation of my integrand that I can easily draw samples from, then I can place more samples in the larger regions and weight those samples accordingly. What does that mean for PDEs; how do I importance sample a partial differential equation? Well, if we consider the source contribution, one thing we might do is use the Green's function G itself for importance sampling, since this distribution is often known in closed form. Here we have to be a little careful, since G is singular; but if we do our importance sampling in polar coordinates, the singularity essentially goes away, because we now have G times the area measure, so in 2D, G times r. Now, rather than picking a point uniformly from the ball, we sample an angle uniformly in the range 0 to 2 pi, sample the radius from the distribution r G, evaluate the source term f at this new point y = (r, theta), and divide by the probability that we sampled it from this distribution.

Alternatively, we can importance sample the source term in this integral. Suppose that inside my ball I have a point source z, like a little pinprick of heat; a source concentrated on a curve gamma, like a hot wire; or a source supported on some two-dimensional (or n-dimensional) region. If I don't do any importance sampling, I'll miss the point and curve sources, because they get sampled with probability zero; but with a proper importance sampling strategy, they show up as we'd expect. In particular, for a point source, my importance density is what you might think, a Dirac delta centered at the point z, which means my importance sampling strategy is to just grab the value at z and multiply by the Green's function. To importance sample a curve, I take uniform random samples from the intersection of that curve with the ball, evaluate the source function and the Green's function at those points, and multiply by the length of the restriction of the curve to the ball.

A very natural question to ask: I have these two different importance sampling strategies; which one should I use? In rendering, I have a similar choice, to either sample the lights or sample the materials, and different strategies work better in different scenarios. But there's a very nice observation from Veach and Guibas, which says we can always combine different importance sampling strategies in a way that's provably never much worse than optimal, and this really helps improve the robustness of these Monte Carlo estimators. We can now do the same thing for walk on spheres: for, say, a screened Poisson equation, where the absorption parameter sigma affects which strategy is better, we automatically get the best of both worlds by using multiple importance sampling.
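As a generic sketch of how two strategies get combined, here is the balance heuristic applied to an ordinary integral (assumed callbacks: each strategy provides a sampler and its density; n is assumed even, so the two strategies split the samples equally):

```python
import random

def mis_estimate(f, sample_a, pdf_a, sample_b, pdf_b, n):
    """Multiple importance sampling with the balance heuristic: each sample is
    weighted by its strategy's share of the combined density,
    w_i(x) = p_i(x) / (p_a(x) + p_b(x))."""
    total = 0.0
    for i in range(n):
        use_a = (i % 2 == 0)                 # half the samples from each strategy
        x = sample_a() if use_a else sample_b()
        pa, pb = pdf_a(x), pdf_b(x)
        p = pa if use_a else pb
        total += (p / (pa + pb)) * f(x) / p  # balance-heuristic weight, times f/p
    return 2.0 * total / n                   # each strategy contributed n/2 samples
```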
Another way to increase efficiency in rendering is to reuse parts of paths that have already been computed. This doesn't decrease the variance of the estimator, because the paths are now more strongly correlated, but it can significantly increase efficiency, because paths are expensive to generate. Likewise, consider our biharmonic equation Delta^2 u = 0, which we split into a system of two second-order equations, Delta u = v and Delta v = 0. Instead of taking a completely new walk for each source point y, which has quadratic cost, we let y walk back to the boundary along the same path as x, so overall we have linear cost again: we reuse all the hard work we already did finding walks to the boundary. This scheme really pays off: as one might hope, even though correlated walks have higher variance, they are much faster to compute, so we get an overall improvement in efficiency, here about 150 times faster for a similar amount of error.

Finally, for variable-coefficient problems, we can adapt techniques from volume rendering. For instance, delta tracking fills a heterogeneous volume with fictitious matter that makes the problem computationally equivalent to a uniform-density medium, essentially what we did when transforming our PDE into one with constant coefficients. The next-flight method is very similar in spirit to our tree-walking strategy, allowing one to reuse paths in the context of heterogeneous media. And weight windows make sure that all paths carry roughly equal weight, by either splitting high-weight paths into multiple independent paths or killing low-weight paths with Russian roulette, again increasing efficiency. We've done some work to adapt all of these schemes to something appropriate for walk on spheres, and in practice we get the same kinds of improvements in efficiency as people do for volume rendering.
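A sketch of the weight-window bookkeeping, independent of any particular walk (the thresholds are illustrative):

```python
import random

def apply_weight_window(paths, w_min=0.5, w_max=2.0):
    """Split paths whose weight is too high; play Russian roulette with paths
    whose weight is too low. Expected total weight is preserved either way.
    `paths` is a list of (weight, state) pairs."""
    out = []
    for w, state in paths:
        if w > w_max:
            k = int(w / w_max) + 1            # split into k lighter copies
            out.extend([(w / k, state)] * k)
        elif w < w_min:
            if random.random() < w / w_min:   # survive with probability w/w_min...
                out.append((w_min, state))    # ...and absorb the culled weight
        else:
            out.append((w, state))
    return out
```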
Another, orthogonal approach to improving efficiency is denoising. Right now, denoising Monte Carlo renderers is a hot topic in real-time rendering, and you can get some pretty amazing results from even just one or a few Monte Carlo samples. It turns out many of these denoising methods can be used off the shelf for walk on spheres: for instance, we applied Intel's Open Image Denoise to our walk on spheres estimates and easily get beautiful results with just a few samples. And this is really just the tip of the iceberg, because now that we have this bridge between worlds, there are decades of strategies from areas like rendering, stochastic control, and mathematical finance that can be used to help solve partial differential equations, though it does take a little thought in each case to translate from one domain to the other.

Of course, it's important that we're really getting the right answer here, and not just making pretty pictures, so a few words about validation and comparison to other methods. A basic question you might have: when does walk on spheres work, and what kinds of solutions does it produce? Let's look at our basic Laplace equation again. A really nice thing about walk on spheres is that as long as the problem is well posed, we get the right answer; the only thing we have to assume is that a solution exists in the smooth setting. For this Laplace (Dirichlet) problem, that means we just need the boundary to be sufficiently regular; the boundary data doesn't even need to be continuous. That is in and of itself interesting, because it means the solution we get might not be in H^1, and in particular we're talking about a much larger space of solutions than we ordinarily consider with finite element methods. We can also talk about properties of the algorithm. Muller, for instance, proved the basic fact that walk on spheres reaches the boundary with probability one; most importantly, he showed that the exit distribution of the walk on spheres procedure is the true harmonic measure on the boundary, which is why it gives the true solution to the original PDE. Another useful fact is that the spheres don't even need to be of maximal radius; their radii just have to be bounded away from zero, relative to the maximal radius, to get this convergence, and you can find similar results for other differential equations. In terms of computational complexity, Binder and Braverman did a very thorough analysis of this algorithm, showing that it takes on the order of log^2(1/epsilon) steps to reach the boundary in 3D, or, in 2D or when the boundary is smooth, only on the order of log(1/epsilon) steps. So it really converges quite fast.

To see what that means, let's look at a little example where we start out by making the epsilon shell huge, so that basically as soon as you start, you jump to the boundary and grab the boundary value; what we get looks a little like a Voronoi diagram of the boundary conditions. As we decrease epsilon, waiting longer and longer to jump to the boundary, we very quickly get something that looks like the right solution, and we see that the number of steps per walk increases very slowly, so we don't need to worry much about tuning this parameter carefully. We can also verify empirically that our estimators converge at the expected rate, not only the estimator for the solution but also those for the derivatives.

Then comes the question people always ask me: this is all well and good, but at the end of the day, which one should I use, walk on spheres or finite elements; which one is better? To me, this feels like comparing apples to oranges: in particular, what do you want me to measure to tell you which one is better? In finite element methods, you usually talk about error in terms of the order of accuracy: with node spacing h, the error goes like h^k. This notion really isn't meaningful for walk on spheres; there's no such thing as node spacing, and we just have the usual O(1/n) variance, or O(1/sqrt(n)) error, for n walks. Even Muller comments on this; he says the only inherent source of error in these processes is the statistical fluctuation of the estimators. So the answer I'd give first is that these are very different methods, with very different capabilities and very different use cases, and the best thing to think about first is which problems are appropriate for each method, not which one has the faster rate of convergence.

When I say that, people say: okay, but that still sounds like a cop-out; just show me a comparison. So, fine, here's a comparison: I went ahead and measured all sorts of statistics about doing a solve with finite elements versus Monte Carlo. When we meshed the problem for finite elements, the mesher couldn't handle all the detail, so it simplified the model to two million triangles; the Monte Carlo solver, when we ran it, had fewer pixels than the FEM mesh had nodes. The precomputation time for FEM was about 14 hours; for Monte Carlo, it was 0.4 seconds. A solve took 13 seconds for finite elements, versus about 57 seconds for Monte Carlo. You can look at these numbers and say: okay, I don't know who won, which method is better; I don't have a great answer to that question, although one number that really jumps out at me is certainly the 14 hours of precomputation time spent building the mesh,
along with all the other problems we've talked about before, like loss of geometric detail when you use a finite-dimensional approximation of your problem. To that, people sometimes say: that's no problem, because there are methods in the finite element world that guarantee you capture all the detail you need, in particular something called adaptive mesh refinement. So let's go back to the problem with billions of boundary elements that I mentioned near the beginning, and run adaptive mesh refinement on just one of the many objects in this environment. If we use ordinary meshing that's not adaptive, we don't resolve all the detail in the solution; it's not just a question of whether we resolved the geometry, but whether we resolved all the features in the solution itself. If we use adaptive finite element meshing, we do capture all the geometric detail and all the detail in the solution, but notice that this gives us 8.5 million tetrahedra and takes two and a half hours, and that's just for one object in this problem; forget about solving this kind of infinite array of blackbody emitters.

The next thing people sometimes say is: you're having all this difficulty with meshing, so why not just use meshless finite elements? Meshless finite element methods have the word "meshless" in the title, so they must solve all your problems with meshing. The reality is that's not really true, because "meshless" is a bit of a misnomer; meshless finite element methods actually entail a process that, in the end, looks very similar to meshing. I start with my domain Omega; I need to place nodes to define my basis functions, so I adaptively sample the geometry. Then, something that's not always acknowledged when you talk about meshless finite elements: to get convergence, the placement of these nodes has to satisfy some very stringent geometric criteria that are not so different from mesh quality criteria. So you have to go ahead and optimize the placement of these nodes; you can't just put them wherever you like. You then build a nearest-neighbor graph connecting the nodes so you can define basis functions, and the object you end up with is really not so different from a mesh. From there, the formulation is literally identical to all other finite element methods: you solve a big global system of equations, and in fact you end up with matrices that are far denser than with standard finite elements. And so this is not just wild conjecture: we really did try out these meshless finite element methods on a large benchmark of about 10,000 models. What we see is that when we try to solve a variable-coefficient problem on these models, we get behavior that is known in the meshless finite element world as stagnation: the methods actually fail to converge. If we use the latest and greatest research method for meshless finite elements, things do now have some guarantee of convergence, but the error is wildly unpredictable: across different models in the dataset, we see something like a factor of 100 million variation in the error, so how much computation you need to get good results is hard to predict. With standard mesh-based finite elements, things are a bit better: things converge, and the error is lower, but still quite unpredictable. Finally, with our Monte Carlo method, we get the best of all worlds: things converge, and we get very predictable error.

Finally, when people hear about walk on spheres, they often say: this kind of sounds like a boundary element method, right? You talk a lot about the boundary, and you don't need to mesh the interior, so this is just a boundary element method? The answer is no; this is actually quite different, because boundary element methods can't directly deal with things going on in the interior of the domain, like source terms or variable coefficients. To handle those with a boundary element method, you have to use something like a hybrid of BEM and, guess what, a meshless finite element method for the interior terms, so you run into all the same problems. Hopefully that gives a sense of the overall challenges and trade-offs among the different classes of methods.
Finally, let's look at some examples of things you can do with a walk on spheres solver. When you go to implement these applications, it looks a lot like building a renderer. If you have some experience with rendering, you know you have a scene that might have some materials, some lights, and a camera that determines which rays you want to shoot; then the geometry, where the critical operation is intersecting rays with the boundary of your scene; and then a Monte Carlo path tracer that estimates the illumination, or radiance, in the scene. People have thought long and hard over the years about how to build good rendering systems: how to make them efficient, how to design the software so it's flexible, and so on. The good news is that, because of all this parallel structure between Monte Carlo path tracing and walk on spheres for PDEs, we can use a lot of the same wisdom to build a system for solving PDEs; except that instead of materials, we have Green's functions; instead of lights, we have boundary and source terms; instead of a camera, we have a description of where we need to evaluate the solution; and instead of ray intersection queries, we have the critical part of the system, the closest point queries we talked about earlier.

Just as in rendering, closest point queries are easily computed for a wide variety of boundary representations. We can work with polygon soup, meaning just a jumble of triangles that might have cracks or holes, with no special conditions on the geometry. We can work directly with spline curves and surfaces, that is, higher-order representations. We can work directly with implicit surfaces, which are nice for blending and other kinds of modeling. We can work with instanced geometry, so if we want many copies of the same model, translated and rotated in different ways, that can be done very cheaply and easily. And just to stress how flexible this closest point idea is, we can even solve PDEs on domains with fractal boundaries. An important point: we don't actually need to know the exact distance to the boundary; everything works as long as we can compute a conservative bound on the distance to the boundary.

In terms of actual examples, here's a fun one. Say we want to take a vector field X and decompose it into its constituent parts: a curl-free part, a divergence-free part, and a remaining harmonic part, something called a Helmholtz decomposition. We can do this by solving two Poisson equations, using our standard Poisson estimator for two of those components, then using our gradient estimator to subtract them off and get the remaining harmonic part. The really cool thing here is that to trace out these field curves in the plane, we just run a standard ODE integrator, using our gradient estimator to decide which direction to move at each moment. That means we never need to solve the PDE on the entire domain, only along one-dimensional curves, only at the sample points needed by the ODE integrator. This is a huge savings in cost, especially for a 3D problem: for a 3D Helmholtz decomposition, we trace out curves on a 1D subset of the three-dimensional domain.
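As a sketch of the two Poisson solves in the planar case (one standard formulation; J denotes rotation by 90 degrees, and suitable boundary conditions are assumed for each potential):

\[
X = \nabla \alpha + J\,\nabla \beta + h, \qquad
\Delta \alpha = \nabla \cdot X, \qquad
\Delta \beta = \operatorname{curl} X,
\]

with the harmonic remainder recovered pointwise as h = X − ∇α − J∇β, using the gradient estimator to evaluate ∇α and ∇β wherever the ODE integrator asks for them.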
Here's an example that's kind of amazing to be able to do without any meshes: we want to decompose a given domain, in this case bounded by Bézier curves, into nice coarse quadrilaterals, which might then be further subdivided for various kinds of modeling and simulation. Without going too much into the details, we solve for a cross field by solving some kind of Laplace equation; we then use our gradient estimator to find the singularities, basically doing gradient descent; and we use the gradient estimator again to determine the index of each singularity in the field, which we can then use to trace out the quadrilateral regions. Again, to stress: there's no mesh used at any stage of this process; we're solving directly from the Bézier curves. Why might that be useful? There are well-known artifacts when you solve these kinds of problems with finite element methods, for instance what people sometimes call a "singularity party," meaning aliasing in your solution of the cross field on a mesh, especially when the field has very high-degree singularities, which gives a false sense of what's going on. This is something we don't have to worry about at all in the Monte Carlo setting.

And there are lots of other examples. We can do 2D shape deformation using our biharmonic estimator. We can compute diffusion curves, solving a Laplace equation with two-sided boundary conditions to extend color from curves into the rest of the domain, which is a nice way of authoring vector graphics with smooth fills. We can do interactive editing, exploring the parameters of the model, the geometry, the coefficients, and so forth in real time, implemented in either GLSL or Unity. One thing to say, too, is that these methods are often very easy to code: within about 24 hours of our paper going online, somebody had taken one of the examples from the paper and recoded it from scratch in one of these shaders.

We can also start to think about solving partial differential equations on curved surfaces. So far, our domain has always been a flat subset of R^n, but imagine we have a conformally parameterized surface: the conformal scale factor, which tells us how the plane gets deformed into the surface, can be reinterpreted as the coefficients of a variable-coefficient differential equation, and that lets us solve things like, again, diffusion curves on surfaces. In the future, we'd very much like to generalize this to arbitrary, anisotropic parameterizations, so that we can solve PDEs directly on NURBS, subdivision surfaces, and other patches that are important for modeling and engineering.
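The reinterpretation is a one-line calculation in 2D (a sketch under one common convention, writing the surface metric as e^{2φ} times the flat metric of the parameter domain):

\[
\Delta_M u = f \quad\Longleftrightarrow\quad \Delta u = e^{2\varphi} f \ \ \text{in the parameter domain},
\]

so a surface Laplace equation becomes a flat problem whose source term (and, more generally, whose coefficients) absorb the conformal scale factor.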
Finally, we can give a little back to rendering by using walk on spheres to help with subsurface scattering: depending on whether we're near the boundary or deep inside the medium, we switch back and forth between traditional volumetric path tracing and a walk on spheres approximation of diffusion. So that gives just a little taste of what you can do with walk on spheres, but of course much, much more can be done with these algorithms.

To conclude, I want to mention a bit of the recent work going on in this area, not just by us but by others. There's been work on translating the idea of bidirectional path tracing, starting not just from the observer but also from the emitters or lights, into the setting of walk on spheres, which can help a lot in reducing the noise, the error. There's been some preliminary work on differentiable versions of walk on spheres, which says: maybe I want to find parameters of my PDE that explain things I've observed. This has been very exciting recently in rendering and computer vision, using Monte Carlo renderers to understand which geometry produced an observed image, like this image of a chair; likewise, in scientific or engineering problems, you might want to know what density, electrical conductivity, or other spatially varying parameters produced some physical measurement or observation. There's some very nice work on making exterior problems more efficient: say I want to solve a PDE on the outside of a shape rather than the inside. One thing I can do is let my random walks wander around and kill them off if they get too far from the domain boundary, but this is pretty slow and noisy. A beautiful observation is that we can apply what's called a sphere inversion to turn this exterior problem into an interior problem, if we also know how to transform the corresponding PDE, and that makes everything much more efficient. Here's one example application from that paper: applying forces to molecules or atoms in a molecular dynamics simulation, where I need reliable estimates just at isolated points, the locations of each atom. There's also been a lot of work on extending closest point queries to different kinds of geometry: very nice work on efficiently computing the distance to spline surfaces using sum-of-squares geometry processing, and on finding closest points on neural implicit surfaces, so you can now directly solve partial differential equations on neural surfaces by running the walk on spheres algorithm. There are also many very creative uses of PDEs in graphics and geometry processing, for instance taking a curve and inflating it into a nice surface. We're currently working on generalizing walk on spheres from Dirichlet boundary conditions to Neumann boundary conditions, which in the past have really only worked efficiently on convex domains; very soon, we'll be able to tell you how to handle Neumann boundary conditions efficiently on arbitrary non-convex domains. We're also thinking hard about how to get some kind of global path reuse: we've already done some local path reuse with these tree-walking, or next-flight, strategies, but in rendering there's been a lot of good thought on how to take paths globally and recombine them to get efficient estimators, so we're doing something akin to radiance caching or photon mapping in the walk on spheres setting.

Beyond these recent developments, there are of course many big questions still to answer, the biggest being: which PDEs can we solve using walk on spheres? The more equations we can solve, the more applications we can have an impact on, taking extremely complex geometry and getting efficient, reliable solutions. We'd also like to think more about how to share information between different estimators without losing some of the unique benefits of Monte Carlo, like parallelism. This whole enterprise is by its nature stochastic, so if we have problems where the measured data itself has uncertainty, how do we incorporate that into our estimators so that we understand the uncertainty in the solution? How do we make the whole thing differentiable? There's been some preliminary work on that, but still a lot of questions to answer. Where are there high-dimensional problems that benefit from walk on spheres? Monte Carlo famously gets around the curse of dimensionality, and we'd like to take advantage of that for PDEs. And in general, for which applications does walk on spheres offer unique capabilities? I think the wrong thing to do is to keep trying to solve the same problems we've been solving with finite element methods; those methods are very well adapted to the problems they solve, and walk on spheres does a lot of new and different things, so where can it really be beneficial? Here's a really nice example from Ryan Schmidt of agent path planning: conceptually, this ball follows a harmonic vector field, extended from the boundary of the domain, in order to follow a character running around. In practice, I don't have to compute this vector field everywhere; I only need to know it at a single point, the center of the ball, so this is an ideal candidate for walk on spheres: I can run a ton of samples from just the ball center at very low cost. These are the kinds of creative applications we'd like to see more of in geometric, visual, and physical computing, applications that really take advantage of walk on spheres, and we would love it if you would join us on this journey of exploration. Okay, thank you very much.
Info
Channel: Keenan Crane
Views: 23,177
Id: bZbuKOxH71o
Length: 52min 4sec (3124 seconds)
Published: Mon Aug 29 2022