Repulsive Shape Optimization

Captions
Hi, I'm Keenan Crane from Carnegie Mellon University, and in this talk I'm going to take a deep dive into some recent tools for optimizing shapes based on the principle of repulsion. These tools are not only quite new in mathematics, but they also turn out to be very natural for a wide range of problems in visual and geometric computing. The basic problem is that we're given some geometry, a collection of points or curves or surfaces, and we'd like to find a nice, well-distributed arrangement of this geometry in space that avoids any self-intersections, possibly subject to constraints on things like length or area; in this case, the geometry has to be confined to this bunny shape. We're going to talk about how to formulate this problem mathematically, how to solve it computationally, and how that opens the door to a bunch of new applications. The work I'll discuss comes mainly from two papers: one on repulsive curves by Chris Yu, Henrik Schumacher, and myself, and one on surfaces, also with Caleb Brakensiek. Here is a high-level outline of the talk. I know how easy it can be to shut off a video as soon as you feel lost, so I've organized the talk into levels of difficulty; if at some point you feel like giving up, you might still find something cool and interesting by just skipping to a later section. First and foremost: why would we want to solve this kind of problem in the first place? Well, if we think about just point repulsion, this already shows up as a basic computational tool across every major area of computer graphics. For instance, in geometric modeling, well-distributed points are used to compute meshes with nice element quality. In image processing, repulsive points might be used to find nice stippling patterns. In rendering, point optimization helps to improve, for instance, the distribution of samples used for density estimation. And in physically based animation, repulsive points show up as a basic component in solvers like SPH and other
point-based methods. Yet given all the applications of point repulsion, there's surprisingly little work on algorithms for repulsive optimization of curves and surfaces. So why might you care about higher-dimensional repulsive geometry? One perspective is to think about regularization. In general, a regularizer helps to provide smooth solutions when fitting data; for instance, you might want a function that matches your observations but also has small derivatives. In geometric problems, curvature functionals like the Euler elastica or Willmore energy have traditionally played the role of regularizers, enabling us to fit data while keeping the fit smooth. But these classical functionals have one glaring flaw, which is that they allow geometry to pass through itself, even if the initial data was free of self-intersections. Here a repulsive regularizer makes a whole lot of sense. Another basic reason is that the no-intersection condition is quite natural when we want to draw or design things. For instance, if you click out the vertices of a non-intersecting polygon using a standard drawing tool, nothing about standard spline interpolation schemes guarantees that your interpolating curve will be free of self-intersections. Likewise, if you want to model 3D geometry that's destined for the real world, let's say you want to 3D print something, then you'd better make sure that it doesn't pass through itself. And finally, when visualizing data, unwanted intersections can really change the meaning of what you're looking at. For instance, if you're drawing a graph, you might be sending the wrong message if two edges intersect, and so you might like to optimize the edge curves to avoid intersection. Mathematical visualization is another place where repulsive energies can be extremely powerful. For instance, if I want to convince you that a knot can be untangled to a circle, I might draw a diagram like this one, which is pretty hard to believe if you don't know how to read it. In
contrast, if I can just make a movie of this transformation, then the proof really is in the picture. Likewise, there are all sorts of counterintuitive things that can happen with surfaces that are really hard to communicate with a drawing, but can be made into beautiful animations if we have the right tools, and we'll look at a few of those examples today. Finally, there are some very natural connections between repulsive energies from geometry and those showing up in physics and biology. In particular, electrostatics is critical to understanding, predicting, and designing various physical systems. In microbiology, long-range interactions between membranes, proteins, and so forth are mediated by the electrostatic potential from ions in a process called Debye screening. And so you really need to think about repulsive phenomena of all different dimensions: you have zero-dimensional particles like dust and proteins and viruses; you have one-dimensional, curve-like phenomena where repulsion has a significant effect, like supercoiled DNA; and you have two-dimensional phenomena, for instance synthetic biomembranes, which might be used for water filtration or drug delivery or all sorts of other interesting things. Now, it's really important to make a clear distinction between repulsion and collision avoidance, which is also sometimes handled using potential functions, but computationally these are very different problems. The right mindset for collision avoidance is contact mechanics: if I shoot a cue ball, then right before the moment of contact I have a potential that activates to avoid collision. In contrast, if all the balls on my pool table were charged particles, then I'd need to simulate a potential that's active all the time and has a long-range influence on all pairs of objects. So with these local potentials, the goal really is to try to prune away and ignore any long-range interactions, whereas with repulsive energies it's really the long-range balance of
forces that's interesting, and that's what we want to keep. Likewise, the potentials we use for contact are kind of pseudo-physical; they're a numerical device used to enforce inequality constraints, whereas to get nice-looking repulsive curves and surfaces we're going to need to start with a smooth energy and apply a principled discretization. With this new problem come some new challenges. For instance, if we just try to naively apply the same energies we use for points to curves and surfaces, the energy diverges, leading to severe numerical instability and inconsistency with respect to tessellation. Minimizing these energies can be very challenging: even if we use the latest and greatest preconditioners from geometric problems, optimization can easily get stuck. And finally, we need to consider interactions between all pairs of points, which means our problem quickly blows up in size if we consider a discretized curve, or worse, a discretized surface; this is especially hard because we're going to have to invert a large dense matrix. For these reasons, our approach to repulsive geometry optimization has three main ingredients. First, we introduce a discretization of the so-called tangent-point energy, which is well behaved for curves and surfaces. Second, we develop an intelligent optimization strategy specifically tailored to navigating the repulsive energy landscape. And finally, we build a fast hierarchical acceleration scheme to bring down the asymptotic cost of solving our dense system. Apart from all this technical stuff, a really important component of this work has been identifying new ways of applying repulsive energies to problems in visualization, geometry processing, and visual computing. So how do we model repulsive geometry? The basic idea is to define an energy E which assigns a score to any given curve or surface, and we're going to say this energy is repulsive if it's finite when there are no intersections and infinite otherwise, so
we want an infinite barrier to things running into each other. An energy that naturally comes to mind is the Coulomb potential from electrostatics, which for two particles of equal charge is inversely proportional to the distance between them; basically, two particles want to move as far away from each other as possible. The gradient of this energy with respect to the locations of the particles gives the force of repulsion, telling the particles where to go. Importantly, this energy is repulsive: as two particles approach the same location, the energy blows up to infinity. Unfortunately, although the Coulomb potential makes sense for points, it's not a very good energy for curves or surfaces. Let's say we integrate the Coulomb potential over all pairs of points x and y on the curve. If we fix x and follow y around the curve, the integrand will look something like this: it gets huge for points y that are immediate neighbors of x along the curve, but only goes up a little bit when the curve gets close to itself. And this is not really what we want: we want the energy to be biggest where the curve almost intersects, so that the forces can push it apart. The fact that x has some nearby neighbors y is not really interesting; we don't want forces that are just trying to rip the curve apart. We can also plot the contributions of all pairs of points x and y simultaneously, and notice again that the near-intersection is just a tiny little bump in the overall integrand. What's worse is that the Coulomb energy is not even repulsive: it can integrate to a finite value even if there's an intersection. Now, we can try to fix this by raising the integrand to some power alpha, but it turns out there's no useful value of alpha: if alpha is less than 2, the energy is not repulsive, and if alpha is greater than 1, it's infinite for all curves, including ones that don't intersect, which means the energy gradient doesn't give us any useful information about how to push the curve apart.
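To see this breakdown numerically, here's a small sketch (my own illustration, not code from the papers): it discretizes the all-pairs Coulomb-type energy over a unit circle and shows that the discrete sum never settles down under refinement, because it's dominated by immediate neighbors along the curve.

```python
import numpy as np

def coulomb_energy(n, alpha):
    """Discretization of the generalized Coulomb energy, the double integral
    of 1/|x-y|^alpha, over a unit circle sampled with n points of weight dl."""
    t = np.arange(n) * 2 * np.pi / n
    pts = np.stack([np.cos(t), np.sin(t)], axis=1)
    dl = 2 * np.pi / n                                   # arc length per sample
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    off = ~np.eye(n, dtype=bool)                         # skip the i == j terms
    return np.sum(dl * dl / d[off] ** alpha)

# alpha = 2 diverges roughly linearly in n, and even alpha = 1 creeps up
# logarithmically: neither converges as the curve is refined.
for n in (256, 512, 1024):
    print(n, coulomb_energy(n, 1.0), coulomb_energy(n, 2.0))
```

Note that the circle doesn't come anywhere near intersecting itself, so all of this growth comes from neighboring points along the curve, exactly the pairs we don't care about.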
That being said, nothing stops you from plugging this energy in and trying it out anyway, but more often than not you're going to get results that you may not like. So why does the seemingly natural idea of just repelling all pairs of vertices work so poorly? Basically, the effort is focused on the wrong thing: repelling your immediate neighbors rather than preventing self-intersections. And while you can of course tweak parameters, add constraints, and add regularization to try to make things better, you're fighting a losing battle: the underlying energy is simply not well defined. For this reason, people have come up with other energies that work better for curves. For instance, the so-called Möbius energy sort of ignores immediate neighbors by saying: if x and y are two points on the curve, then we penalize 1 over the Euclidean distance squared, minus 1 over the geodesic distance squared; in other words, the distance in space versus the distance along the curve. If there's a near-intersection, then the first term will be large and the second one will be small, so the contribution to the energy will be big; but if the two points are just neighbors along the curve, then the Euclidean distance and the geodesic distance are almost the same, and we can ignore this pair. However, this energy still has a couple of drawbacks. One is that it's Möbius invariant, in particular scale invariant, which means we can't always see tight spots where the curve is very, very close to itself. Another, which is going to be super important for surfaces, is that geodesic distance is expensive to compute, but also very hard to optimize. So this is still not really an ideal energy for applications in visual computing. Instead, we're going to use something called the tangent-point energy, where the idea, again, is to ignore immediate neighbors, but in a way that's more friendly to computation. For two points x and y, we'll let r(x, y) be the radius of the
smallest sphere that passes through y and is tangent to the curve or surface at x; then we take 1 over this radius, raised to a power alpha. If the points are close in space but distant along the curve, then the sphere will be small and the energy will be large; if, on the other hand, the points are close in space and close along the curve, then the sphere will be huge and the energy will be very small. You can also show that as long as this power alpha is greater than 2, the energy is both repulsive and, unlike the Möbius energy, no longer scale invariant. So it's pretty much exactly what we want for applications in visual computing. The parameter alpha, by the way, gives some control over how local the effect of repulsion is: as we increase alpha, we care less and less about avoiding things that are very far away. What's also nice is that the tangent-point energy automatically provides some regularization of sharp corners and kinks, because as y approaches x, the tangent-point radius approaches the osculating radius, which means 1 over the radius approaches the usual curvature at x. So tangent-point energy also behaves like a sort of nonlocal bending energy, similar to but different from an elastic rod or elastic plate. If we integrate this quantity over all pairs of points x and y on a curve, we get something like this: as y goes around the curve, the energy remains small; as we approach x, we ignore the immediate neighbors; and it only gets big at spots where we're close to self-intersection. Likewise, if we plot the energy for all points x and y, we can see that all of our attention is now focused on points where we either approach a self-intersection or start to develop sharp corners. So the corresponding forces, in other words the energy gradient, now provide some actually useful information about how to improve the curve and avoid self-intersections. Now we have a pretty nice energy, and we need some way to actually minimize it.
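As a quick sanity check on the definition, here's a small sketch of my own (the function names are mine, not from the papers): the tangent-point radius works out to |y − x|² divided by twice the distance from y − x to the tangent line at x, and on a unit circle with its exact tangent, that radius is 1 no matter where y sits.

```python
import numpy as np

def tangent_point_radius(x, y, t):
    """Radius of the smallest circle through y tangent to direction t at x:
    r = |y - x|^2 / (2 * distance of (y - x) from the tangent line at x)."""
    d = y - x
    t = t / np.linalg.norm(t)
    perp = d - np.dot(d, t) * t          # component of d orthogonal to the tangent
    return np.dot(d, d) / (2.0 * np.linalg.norm(perp))

# two points on the unit circle, with the exact tangent at the first point
theta = 1.3
x = np.array([1.0, 0.0])                 # point at angle 0
y = np.array([np.cos(theta), np.sin(theta)])
tx = np.array([0.0, 1.0])                # tangent to the unit circle at angle 0
print(tangent_point_radius(x, y, tx))    # ≈ 1.0, independent of theta
```

Moving y anywhere on the circle leaves the radius at 1, which is exactly why the kernel stays quiet along a smooth, well-separated curve.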
This turns out to be pretty tricky. Repulsive energies are a bit different from many of the energies we're used to: first, they're nonlocal, involving all pairwise interactions, and second, they involve high-order fractional derivatives, which is something we don't usually encounter. If we just try naive gradient descent or some other standard scheme, we'll end up with super slow optimization that takes extremely small time steps. Our approach is to use something called Sobolev preconditioning, which in general is a powerful technique for optimizing geometric functionals. It's well known among a small community of researchers, but I'd say it's still not very widespread, in terms of most people realizing that this is a very useful technique. In the past it's been used for a number of problems in shape optimization and curvature flows, a little bit for mesh generation, and more recently for problems in elasticity-inspired injective mapping. But for repulsive energies, we're really going to have to do some additional work to generalize to the nontrivial case of nonlocal problems and fractional derivatives. So how does this work? In general, if we want to minimize an energy E of f, we might follow its gradient flow: just make the change in f equal to minus the gradient of E at the current f. If we discretize this equation in time, we get the standard gradient descent algorithm: the new f equals the old f, minus some time step tau times the gradient at the old f. Now, one extremely important fact is that there's not just one definition of the gradient; it depends on your choice of inner product, and that's going to have a huge impact on optimization. More explicitly, the differential of an energy E, written dE, tells us, for any direction u, what the change in E is if f moves with velocity u. The gradient is a different quantity, whose inner product with any u gives us the differential in the
direction u. For instance, if we think of the inner product as being given by a positive definite matrix A, then the gradient is related to the differential by the inverse of A, and since there are many possible choices of inner product, there are many possible ways to do gradient descent. One classic example is Newton's method. If I apply ordinary gradient descent, where the inner product is the identity and the gradient is just the differential, then I might end up zigzagging back and forth across a long, narrow valley in my energy landscape, because the steepest direction with respect to the standard inner product does not necessarily point directly at the minimizer. Newton's method, on the other hand, replaces the identity with the Hessian, resulting in a preconditioned gradient that, at least for this problem, takes us straight down to the bottom. In general, unfortunately, Newton's method is far from perfect, because it's based on a very simple hypothesis: that a local quadratic fit will provide a good approximation of the global energy landscape. Away from local minima this is just not true; Newton's method is a pretty crude heuristic. For instance, if this is our energy landscape, then the ideal thing would be to go straight down toward the global minimum, but this is not what Newton's method does. In fact, when the Hessian isn't positive definite, it might take us to a saddle or even to a local maximum. To make things worse, the Hessian can be really difficult and expensive to evaluate, especially for a nonlocal problem like ours. So we'd like a scheme with the good local convergence behavior of Newton's method, but that works a bit better for repulsive energies, and our alternative is to use Sobolev preconditioning, which replaces the Hessian with a Sobolev inner product. So what's a Sobolev inner product? The standard L2 norm or L2 inner product just measures the size of the function values themselves: the L2 inner product of u and v just integrates the product of u and v over the domain.
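Here's a toy sketch of the inner-product idea (my own illustration, with made-up numbers): for a quadratic energy with an ill-conditioned Hessian A, descending along M⁻¹ dE with M equal to the identity is the ordinary gradient and crawls along the shallow axis of the valley, while taking M = A, the Hessian inner product, is exactly Newton's method and reaches the minimum in one step.

```python
import numpy as np

A = np.diag([1.0, 100.0])        # Hessian of E(f) = 0.5 * f^T A f: a narrow valley
dE = lambda f: A @ f             # differential of E

def descend(M, steps, tau):
    """Gradient descent where the gradient is M^{-1} dE,
    i.e. the gradient taken with respect to the inner product M."""
    f = np.array([1.0, 1.0])
    for _ in range(steps):
        f = f - tau * np.linalg.solve(M, dE(f))
    return np.linalg.norm(f)     # distance from the minimizer at the origin

print(descend(np.eye(2), 50, 0.015))   # identity inner product: still far from 0
print(descend(A, 1, 1.0))              # Hessian inner product: reaches 0 in one step
```

With the identity, the time step is capped by the stiff direction, so the shallow direction barely moves; the Hessian inner product rescales both directions at once.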
A Sobolev norm, on the other hand, also captures the size of derivatives; for instance, the H1 inner product integrates the dot product of the gradients of the two functions. There are then three ways to think about Sobolev preconditioning, which I'll talk about. One is that the Sobolev inner product smooths out, or blurs out, the gradient, so it has more of a global effect. Another is that it reduces energy at all spatial frequencies at the same rate. And a third is that it effectively cancels derivatives in the gradient flow equation, thereby eliminating a mesh-based time step restriction. Since the full-blown story takes a bit of work to explain, we're going to start with a simpler toy example: Dirichlet energy. Dirichlet energy basically measures how smooth a given function is, by integrating the squared norm of its gradient; for instance, this function has large Dirichlet energy, while this one has much smaller Dirichlet energy. We can write Dirichlet energy as the L2 norm of the gradient, or, using integration by parts, it becomes the inner product of the Laplacian of f with f itself. From here it becomes easy to take the derivative with respect to f, telling us that the differential in the direction u is just minus the L2 inner product of the Laplacian of f with the given direction u. So how about the energy gradient? Remember, we said the inner product of the gradient with any function u should equal the differential in the direction u, and this time around let's just use the standard L2 inner product. For Dirichlet energy we said the differential dE is minus the Laplacian of f, L2 inner product with u, so we want to solve the equation: L2 inner product of grad E with u equals minus the L2 inner product of the Laplacian of f with u, which must mean that the L2 gradient is just minus the Laplacian of f. If we plug this into our gradient flow, we get an optimization scheme that looks like heat flow: at each moment, the change in f equals its Laplacian, and so
heat flow will slowly smooth out little bumps, gradually decreasing Dirichlet energy, though what you notice is that this one big bump in the middle still remains. So let's see if we can do better by using a Sobolev inner product, for instance the H1 inner product. We can rewrite the left-hand side in terms of L2 by sticking in a Laplacian, which tells us the Laplacian of the H1 gradient must equal the Laplacian of f; in other words, the H1 gradient is nothing more than f itself. Our gradient flow now just says the change in f should be minus f, and this works perfectly: we quickly eliminate all the bumps and reach a Dirichlet energy of zero. From here it becomes really tempting to ask: can we do even better? Maybe we should try an even higher-order Sobolev inner product, like H2. This basically just means we're going to use the square of the Laplacian, rather than the Laplacian, as a preconditioner, and maybe we could call the resulting preconditioned flow an inverse heat flow. But if we run this, it doesn't work very well: big bumps get eliminated very quickly, but small bumps now stick around for a very long time, sort of the opposite of what happened with the L2 inner product. The takeaway here, and really the story of all Sobolev preconditioning, is that you're usually much better off if you can match the order of the preconditioner to the order of spatial derivatives appearing in the differential. A good mental image, which we've kind of hinted at, is to visualize frequency space. Here, if I project Dirichlet energy onto a pair of low- and high-frequency eigenbases, then we can clearly see that the L2 gradient flow gets rid of high frequencies very quickly but has trouble eliminating low frequencies; H2 gets rid of low frequencies but struggles with high frequencies; and H1 reduces the energy at all frequencies at the same rate, very quickly reaching the minimizer. So again, a key idea: don't just pick an arbitrary inner product; really think about how to match it to the order of derivatives in your gradient equation.
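The frequency-space picture can be reproduced in a few lines (my own sketch, not from the talk): on a periodic domain, mode k of the L2 gradient of Dirichlet energy is k² times mode k of f, so preconditioning with (−Δ) raised to the power `order` rescales it by k^(−2·order), and each mode's per-step decay factor follows directly.

```python
import numpy as np

def mode_decay(k, tau, steps, order):
    """Amplitude of Fourier mode k after `steps` iterations of
    Dirichlet-energy gradient flow preconditioned by (-Laplacian)^order.
    The L2 gradient of mode k is k^2 * f_k, so the preconditioned
    update scales mode k by (1 - tau * k^2 / k^(2*order)) per step."""
    return (1.0 - tau * k**2 / k**(2 * order)) ** steps

# low frequency k=1 vs. high frequency k=40, 20 steps each;
# tau is chosen near the stability limit of each flow
for order, name, tau in [(0, "L2", 1e-3), (1, "H1", 0.5), (2, "H2", 0.5)]:
    print(name, mode_decay(1, tau, 20, order), mode_decay(40, tau, 20, order))
```

The printout mirrors the plots in the talk: L2 (order 0) kills the high frequency but barely touches the low one, H2 does the opposite, and H1 shrinks both at the identical rate.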
Another good mental image is that if you pick a preconditioner of the right order, then it's going to nicely smooth out the gradient over the whole domain, so you're making steady progress everywhere, rather than just in the one or two spots where things are really bad. Here, if we minimize tangent-point energy with respect to L2, then all the effort is focused on the points where the curve is nearly in contact with itself, indicated by these red velocity vectors, whereas the Sobolev gradient enables the whole curve to move around and reduce energy, as we see with these blue velocity vectors. But I'm jumping the gun here a little bit, because I haven't actually said how to do preconditioning for the tangent-point energy. What is the ideal preconditioner in this case? Again, I'd like the order of the preconditioner to match the order of the differential, but there are a couple of challenges. For one thing, the energy isn't written down in terms of derivatives; it's written as an integral, so I can't just inspect this expression to figure out the right order of preconditioner. For another, as far as I know, nobody has ever written out an explicit expression for the differential of this energy, so I can't look at that either. Instead, what we had to do is infer what the differential order must be by considering the function spaces associated with the problem, and this leads us to a differential of fractional order, which ideally should be preconditioned with a fractional Sobolev inner product. So let's see how this all works. To cover both the curve and surface case, let's first rewrite the tangent-point energy for an embedded manifold of any dimension n. If f is a C1 embedding of M, we say E of f is the integral, over all pairs of points x and y, of the inner product of the unit normal N at x with the difference between f at x and f at y, over the distance squared, all raised to the
power alpha. As before, the numerator says that if two points are close together in the tangential direction, then the contribution basically gets ignored; and also like before, this energy will be repulsive as long as alpha is greater than 2n, so 2 for curves and 4 for surfaces. A recent theorem of Blatt then tells us that in order for an embedded curve or surface to have finite energy, it must be at least s-times differentiable, where s equals 2 minus n over alpha; so, for instance, a curve with a sharp kink in it will have infinite energy even if it doesn't intersect itself. In turn, we can argue that the differential dE must decrease the order of the embedding by 2s, where 2s will always be a fraction, rather than an integer, for any dimension n and any repulsive power alpha. In short, even if you didn't follow any of that, the point is that unlike most energies you encounter in geometry processing or computational mechanics, the derivatives of a repulsive energy effectively have fractional order. What does that even mean? What is a fractional derivative? There are several ways to understand fractional derivatives, but the intuition is that they interpolate between ordinary integer-order derivatives. For instance, if we have a function f like this, whose second derivative looks like this, then we get a continuous family of lower-order fractional derivatives going all the way back to the original function. More formally, the differential viewpoint is that you take a standard differential operator, like the Laplacian, and raise it to a fractional power. Explicitly, you could do this with a spectral expansion, where the phi_i are the eigenfunctions and the lambda_i are the eigenvalues; to apply a fractional Laplacian, you just raise each eigenvalue to a fractional power. The integral viewpoint is that the fractional Laplacian can be expressed in terms of an integral that looks a bit like a repulsive energy: integrate, over all pairs of points in R^n, the difference of the function values at x and y, divided by the distance raised to a fractional power.
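On a flat periodic domain, the spectral viewpoint takes only a few lines to demonstrate (a sketch of the definition itself, not the authors' implementation): take the Fourier transform, raise each Laplacian eigenvalue k² to the fractional power s, and transform back. Applying the half-Laplacian twice then reproduces the ordinary Laplacian.

```python
import numpy as np

def fractional_laplacian(f, s):
    """(-Laplacian)^s of a periodic 1-D signal via the spectral definition:
    raise each Laplacian eigenvalue k^2 to the fractional power s."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
    return np.real(np.fft.ifft((k**2) ** s * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
f = np.sin(3 * x)
half_twice = fractional_laplacian(fractional_laplacian(f, 0.5), 0.5)
full = fractional_laplacian(f, 1.0)               # equals -f'' = 9 sin(3x)
print(np.max(np.abs(half_twice - full)))          # ~ 0
```

Varying s between 0 and 1 sweeps out exactly the continuous family of operators between the identity and the (negative) Laplacian described above.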
So, just as we used the Laplacian Delta to define ordinary Sobolev inner products, we can use the fractional Laplacian Delta^sigma to define a fractional Sobolev inner product: we get, for instance, the H^sigma inner product for any real positive sigma by just inserting Delta^sigma into the L2 inner product. But there's one little gotcha, which is that in our case we need to consider functions on a curve or a surface, rather than on all of R^n. One idea is to build a fractional version of the curved-manifold Laplacian, in other words the Laplace-Beltrami operator, but unfortunately a full spectral decomposition is way too slow, and we also don't have a nice integral expression that works on curved domains. What we do instead is replace the intrinsic distance with the extrinsic distance in this integral expression, and we also tack on some additional integer-order derivatives D to match the desired order of the differential. The overall operator A_sigma is now our fractional inner product, up to some low-order terms that I won't discuss here but that are discussed in the paper. So, to recap: the idea of Sobolev gradient descent is to precondition the differential using an inner product of the same order as the differential. The differential of the tangent-point energy has fractional order 2s, so we construct a fractional Sobolev inner product of the same order. Our final optimization strategy then looks the same as before: we precondition the differential using our fractional inner product, and we use this fractional gradient to perform gradient descent. Now, just as in our toy example, optimization works best when we use a preconditioner of the right order. For instance, let's say we want to minimize tangent-point energy for this curve, which should untangle into the round circle. If we just use L2, then we're going to make very slow progress; if we use H1, things
improve, but we still don't quickly reach a minimizer; if we use H2, we now focus a bit too much on large-scale features and don't smooth out sharp corners; and finally, if we use our fractional inner product, we get to the minimizer, and we get there extremely quickly. So it really makes a big difference in practice to get the order of the preconditioner right. Here's an analogous example for surfaces, where we try out these different preconditioners on a mesh at various resolutions, going from top to bottom. The tangent-point energy in this case should turn this bumpy mug into a round torus, and what we see is that as the edge length decreases, most of the preconditioners make less progress, because they have to take smaller steps, except for our fractional preconditioner, which gets to the minimizer in roughly the same number of steps; we've effectively eliminated this mesh-based time step restriction. This is really the payoff: if we can construct a preconditioner whose order perfectly matches the derivative of the energy, then optimization becomes asymptotically faster than with other preconditioners, which means that even if the per-step costs are very different, the optimal-order scheme will always win out as the mesh gets finer. Of course, our goal is going to be to make the per-step cost very small, which we'll do in just a minute. But first, to do any kind of computation or optimization, we need to discretize our energy, and here we adopt a very simple discretization: curves become polylines, surfaces become triangle meshes, and we approximate the energy with low-order quadrature for each element pair. Now, we could use higher-order spline bases, but since we can't exactly integrate the energy, this is really just a choice between fewer elements with more quadrature points, versus more elements with fewer quadrature points, and our preference was to keep all the formulas as simple as possible. For surfaces, we also found it was very important to remesh the surface during optimization to avoid getting stuck,
and of course, as a precondition, we assume that the input is already free of intersections. More precisely, for a curve we write the tangent-point energy as a piecewise integral, over all pairs of edges I and J, of the usual tangent-point kernel k. Now, since a polygonal curve has sharp kinks, we have to ignore neighboring edge pairs, because otherwise we'd always get infinite energy; but this is an okay thing to do, since unlike the Coulomb energy, the tangent-point energy doesn't blow up along the diagonal, and so as we refine the curve, we're omitting a negligible contribution to the overall energy. In particular, we apply the trapezoid rule to each integral, giving us a sum, over pairs of distinct edges, of the tangent-point kernel evaluated at all four pairs of endpoints, using the edge tangent T_I, times the two edge lengths l_I and l_J. For surfaces it's not much different: we sum, over pairs of distinct triangles S and T, the two triangle areas a_S and a_T, times the magnitude of the inner product of the unit normal at S with the difference between the triangle barycenters x_S and x_T, divided by the squared distance between barycenters, all raised to the power p. We can check the rate of convergence by comparing to the tangent-point energy of a known smooth surface, and interestingly enough, even though we use linear elements, we get a convergence rate that's closer to quadratic with respect to the edge length h. To get the energy differential, we just take partial derivatives of the discrete energy, and to precondition the differential, we also have to discretize the fractional inner product. Our strategy here is very similar to what we did for the energy: we just use low-order quadrature, summing over pairs of elements and omitting immediate neighbors. Importantly, the final inner product is a dense matrix, and so in a few minutes we'll talk about how we can invert this matrix efficiently. Finally, I mentioned that for surfaces, remeshing is very important, because large deformations can lead to bad elements.
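Stepping back to the curve case, the polyline energy described above can be sketched in a few lines (my own toy implementation, using 1/r(x,y)^alpha as the kernel, with alpha a free parameter): sum over non-adjacent edge pairs, evaluating the kernel at all four endpoint pairs with trapezoid weight 1/4, times the two edge lengths.

```python
import numpy as np

def discrete_tp_energy(V, alpha=3.0):
    """Trapezoid-rule tangent-point energy of a closed polyline V (m x d).
    Identical and adjacent edge pairs are omitted, as described in the text."""
    m = len(V)
    E = np.roll(V, -1, axis=0) - V            # edge vectors
    L = np.linalg.norm(E, axis=1)             # edge lengths
    T = E / L[:, None]                        # unit edge tangents
    energy = 0.0
    for i in range(m):
        for j in range(m):
            if j in (i, (i - 1) % m, (i + 1) % m):
                continue                      # skip self and neighboring edges
            for a in (i, (i + 1) % m):        # endpoints of edge i
                for b in (j, (j + 1) % m):    # endpoints of edge j
                    d = V[b] - V[a]
                    r2 = d @ d
                    perp = d - (d @ T[i]) * T[i]   # part of d normal to edge i
                    k = (2.0 * np.linalg.norm(perp) / r2) ** alpha  # 1/r^alpha
                    energy += 0.25 * k * L[i] * L[j]
    return energy

t = np.arange(64) * 2 * np.pi / 64
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(discrete_tp_energy(circle))   # finite and moderate for a smooth curve
```

Here alpha = 3 satisfies the repulsiveness condition alpha > 2 for curves, and since the energy is no longer scale invariant, scaling the curve by a factor c rescales the energy by c^(2 − alpha).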
lead to bad elements, which give a poor approximation of the energy, which can cause optimization to get stuck. So for instance, we might like to untangle a surface like this one, but without remeshing, optimization gets stuck here. And so we apply a little local remeshing at each step, where we split long edges, collapse short edges, flip non-Delaunay edges, and then apply a little tangential smoothing using the optimal Delaunay scheme described by Chen and Holst, which basically moves each vertex toward the area-weighted average of neighboring circumcenters. So on a static mesh that looks like this: elements gradually improve, and if we apply it during our flow, we get all the way to a minimizer. To do interesting stuff with repulsive curves and surfaces, we also have to be able to handle various constraints, because without any constraints the behavior can be pretty uninteresting: for instance, point charges might just fly off to infinity, or a curve grows without bound, and so forth. So we incorporate a generic constraint function Φ(f) = 0 that can describe constraints on points or edge lengths or tangents or whatever, and apply a standard projected gradient technique, where we first project the gradient onto a feasible direction, and then we project the geometry itself onto the constraint set. By the way, this all becomes a bit more complicated when we want to accelerate computation, so I'll just describe the simpler, unaccelerated version here. Here are just a few examples of the kinds of constraints we might like to enforce: fixing the length of an edge, or the total length of the curve, or fixing a point in space, or prescribing a tangent vector, or keeping a curve on an implicit surface, and so on. The point is, you can define whatever constraint function you want. So in particular, if we let g be the unconstrained descent direction given by our fractional preconditioner, and C is the Jacobian matrix of our constraint function, then we want to find the closest direction g
tilde that preserves all of the constraints, or is tangent to the constraint set, and importantly, "closest" here means closest with respect to our fractional Sobolev norm. So this is a convex quadratic problem that amounts to just solving a linear saddle point problem. Now, if we start at the old curve and take a step in the projected gradient direction, then we get an initial guess for our new curve that still isn't quite on our constraint set. So what we do is look for the smallest offset x that approximately eliminates the constraint violation, and move in this direction to get our new curve. This is basically a slight modification of Newton's method that again involves solving a linear saddle point problem, and if we want to do better, we can just repeat this procedure, but usually we just need to run it once or twice per optimization step. Okay, so the final question is how to reduce the overall computational cost of each optimization step. Preconditioning already helps us take big steps, which is a very good thing, but the per-step cost can still be quite high: for instance, in 3D, if h is the average edge length and n is 1/h, then evaluating the energy or its differential is already an order n⁴ operation, and what's worse is that we have to build and invert a dense n² × n² linear system. So we accelerate computation in three different ways: first, we do a hierarchical approximation of our energy and derivative evaluation; second, we use the so-called block cluster matrix to perform matrix-vector multiplication; and third, we use either a geometric multigrid scheme for curves, or a special decomposition of the fractional inner product for surfaces. So how do we evaluate the energy and its derivatives more efficiently? Well, naively we'd have to iterate over all pairs of mesh elements, but we can make things a lot faster by treating clusters of distant elements as kind of one big element. In particular, we apply the Barnes-Hut algorithm, which first builds a
bounding volume hierarchy, or BVH, and we'll say a BVH node I is admissible relative to a mesh element S if it gives a good enough approximation of the total energy of all elements within I, as determined by a Taylor series analysis. In particular, we ask that the maximum bounding radius of either the element S or the node I is less than some error tolerance θ times the distance between S and the convex hull of I. The approximate energy for element S is then the sum over all admissible nodes that have no admissible ancestors: in particular, we sum up the area of S times the total area inside node I, times the tangent-point kernel k applied to the centers of mass and the normal of S. Using the same tests as before, we can check that this scheme indeed provides a very good approximation of the naive sum over all pairs, and a nearly identical scheme can be used to approximate the energy derivatives. Next, we have to efficiently precondition the differential, which involves inverting a large dense matrix, and our approach here is to use a hierarchical decomposition of the matrix into rank-one blocks. So we imagine that each BVH node A corresponds to a subset of matrix rows or columns, and say that two nodes A and B are well separated if their rank-one block provides a good enough approximation of the original matrix; this is very similar to our Barnes-Hut admissibility condition. The matrix is then partitioned into the coarsest collection of well-separated blocks, and we use the fast multipole algorithm to perform a matrix-vector multiply. So now we can multiply this matrix efficiently, but we still need to solve our linear system. For curves this is pretty easy to do using geometric multigrid: we build a hierarchy of curves by just averaging every other node, and then use our fast block matrix-vector multiply to implement multigrid. In reality there's some additional complexity here for dealing with constraints, which I'll defer to the paper. In the end, when we put this all together, we
get the desired behavior: the compute time scales almost linearly in the number of vertices, and is faster in an absolute sense past about 1,000 vertices. Now, for surfaces, building a multiresolution mesh hierarchy is a lot harder and more expensive than it was for curves, especially because we're remeshing the surface at every time step. So here we're going to take a different route, and decompose our fractional operator Δˢ into two integer Laplacians Δ and a lower-order fractional part Δ^(s−2). Now, if we want to invert this operator, we just need to do two sparse linear solves using the ordinary cotan Laplace matrix, plus forward evaluation of the fractional operator, which we can do using our fast hierarchical scheme, so no multiresolution mesh hierarchy is needed. As it turns out, this scheme is a bit too approximate to precondition the differential directly, but it still provides an excellent preconditioner for an iterative linear solver like GMRES. And most importantly, if we run this scheme on larger and larger meshes, we get performance scaling that looks about as good as multigrid, except that, again, we didn't need to actually construct a mesh hierarchy. Okay, so with everything in place, let's see how well our optimization scheme does compared to other possible strategies. Our basic benchmark is to untangle some complicated curve or surface: basically, we want to find the minimum energy configuration without ever letting the geometry pass through itself. Since there's not a whole lot of past work on this problem, we generated our own benchmark data set of several hundred curves by randomly perturbing a bunch of different knots into very tangled configurations. If you think you can do better at untangling these knots, please go ahead and give it a try; we'd love to see that. But be warned that some of these knots are much, much harder to untangle than others, so even if you succeed on one of them, you really should try the rest of the data set. So we first tried
out some classic algorithms for knot untangling from the mathematical community, like KnotPlot and SONO (Shrink On No Overlaps), and found that these methods can take forever to make progress, basically because they're not applying any kind of intelligent preconditioning; this is sort of like just doing L2 gradient descent. We also tried out several integer Sobolev preconditioners, as well as some general-purpose strategies like L-BFGS, nonlinear conjugate gradient, and stochastic gradient descent, and found that our fractional Sobolev method does a much better job of reaching a minimizer; that's what's shown in the middle here, and that's true given the exact same amount of real compute time. We added to this set of methods some more recent optimization strategies from geometry processing and tried untangling the entire benchmark data set, and what we found is that the fractional Sobolev scheme is not only more efficient, whether you consider the number of iterations or the actual wall-clock compute time, but it also succeeds on a much, much larger fraction of examples. So even some of the state-of-the-art methods would tend to get stuck in an almost-intersecting configuration, whereas we succeeded in untangling nearly all of the examples. For surface untangling, very little work had been done on algorithms before our work on tangent-point energy; the one exception is the so-called cosine energy of Kusner and Sullivan, which has a similar flavor to tangent-point energy and was implemented in the Surface Evolver package of Ken Brakke. But when we actually try this out, we find that it makes extremely slow progress and gets stuck almost immediately. So probably nobody had ever made movies of surface untangling before now, and although there are some comments that maybe you just need to run it on a faster computer, we know now that's not really the story: if you want to optimize repulsive energies on surfaces, you really have to think more deeply about the mathematics; you can't just throw more
compute power at the problem. Finally, we also tried out more modern optimization strategies for minimizing tangent-point energy on surfaces, and again found that our fractional Sobolev scheme is much, much faster than the alternatives. And just to be fair, we also checked that that's true even if we disable remeshing, so that schemes like L-BFGS can keep using the same update vectors. The most exciting part, of course, is to think about what we might be able to do now that we have fast, reliable algorithms for repulsive optimization, and here the main limitation, honestly, is just our own creativity. So hopefully these examples will inspire some of you listening to this talk to think of new and interesting things that can be done with this technology. Just one fun animation example is untangling a pair of headphones; of course, many of us have been frustrated trying to do this ourselves, and wouldn't it be wonderful to get a little robot to do this for us? Another interesting question is how to best pack longer and longer curves into a finite volume. At some point the curvature of this curve has to increase, just like DNA coiling up into a cell nucleus, or, okay, maybe ramen noodles coiling up into a package. If we constrain our growing curve to a surface, we again get these sort of Hilbert-curve-like patterns, but we can now generalize these patterns to different geometries or different numbers of curves, which could be useful for something like milling or tool path planning. Taking this a step further, we could ask curves to align to a given vector or direction field on a surface, which again might be useful for tool path planning, or just for nice vector field visualization. Also, in data visualization, graphs are often drawn by minimizing energies involving the graph nodes, the points, but of course the edge curves are just as important for clearly communicating and laying out information. For instance, a standard package like Graphviz sometimes generates drawings where
edges intersect, which could send the wrong message about connectivity. If we instead treat the edges as repulsive curves, we can get some very nice graph layouts where both edges and vertices are uniformly distributed in space. Likewise, if we have non-planar graphs, we can find nice embeddings in 3D, where there's maybe less confusion about how edges get connected to vertices. Standard vector graphics tools like Inkscape or Adobe Illustrator have no knowledge of whether curves cross each other, or how far they are from collision, which means if you click out some control vertices, the interpolating curve may very well intersect itself. You can of course imagine building vector graphics tools on top of repulsive curves that avoid any unwanted intersections, which is especially important if you're designing something like a stencil or a cutout. There's recently been interest in sort of biomimetic design: for instance, if you want to build artificial tissue, you need some way of delivering nutrients to that tissue, maybe using a network of artificial capillary vessels, and if you'd like each point to be close to a nutrient source, well, that's something we can optimize directly using a repulsive energy. Likewise, people are interested in designing and simulating artificial muscles of various sorts, but the algorithms for muscle fiber design can be pretty simplistic: maybe trace out the gradient curves of a harmonic function. If, on the other hand, you look at real artificial muscles, it turns out that twisting is important for getting strong forces and torques, which can't be captured by a gradient field. So instead, what we could do is constrain the endpoints of the muscle fibers to sit on surfaces representing the joints, and then use repulsive energy to optimize whatever kind of twisted arrangement of curves you like. A very different kind of application is path planning. Suppose you have these four agents, or robots, that want to travel to the opposite corner of this room. If they
all act independently, they may collide, or at least get too close for comfort. But if we think of the trajectories as curves in space-time, then we can minimize repulsive energy to get new trajectories that perform this sort of beautiful ballet, where agents not only avoid collision but stay as far away from collision as possible at all times, and that's super important, especially if you have uncertainty in measurements about where things really are. Note that this is very different from doing collision detection and response, where things would just bounce off each other at the moment of contact. A whole other class of applications is mathematical visualization. One basic question you could ask is: given a topological space, like a surface of genus g, what is the simplest or most ideal way to draw this shape in space? If we minimize tangent-point energy, then for genus 3 we get something with tetrahedral symmetry, like this; for genus 5 we get something with the symmetry of a cube, like this; and then for genus 11, okay, if we let it run for just a bit, then eventually we get something with the symmetry of a dodecahedron. So about as perfect as you could expect, and also giving us some sense that we're actually maybe finding global minimizers of this energy. If, on the other hand, we minimize a classic energy like Willmore energy, we'll get surfaces that aren't nearly as nice and symmetric, and that's basically because Willmore energy considers symmetry in the three-sphere S³ rather than in Euclidean 3-space. Of course, unless we're making sculpture, mathematical illustrations are going to be projected onto a screen or onto the page, and so we might want something that's easier to understand in a 2D projection. So here we can add a penalty that encourages the surface to lie in a 2D plane, giving us these very nice, kind of button-like embeddings, or we could add another plane if we really want a linear arrangement of handles. Now, where this gets really interesting, and takes us way
beyond classical things like the Willmore energy, is when we have knotted surfaces, meaning surfaces that are not in the same isotopy class as just a linear arrangement of handles. In the past, the best people could do was to draw these things as sort of knot diagrams or graph diagrams, and now we can use repulsive energy to draw, for the first time, what these knotted surfaces really look like. So here, for instance, I'm showing all the knotted surfaces of genus 2 up to six crossings, and here's just a little zoom-in of what one of those looks like. Beyond static drawings, it's often helpful to illustrate the transformation of one surface into another, to show that something is possible or impossible. For instance, let's say I have this linked and unlinked pair of handcuffs. Do you think I could turn one of these into the other without breaking the loops apart, or letting the surface pass through itself, or something like that? People have tried to make nice drawings of this phenomenon, some that are very, very beautiful, but they can be really difficult to follow unless you already have a very strong geometric imagination. If we use our repulsive optimization, we can actually make a nice continuous movie of this transformation, leaving no doubt that it really is possible to unlink the handcuffs. Okay, here's a slightly harder example: if I want to remove one of these loops from around this pole, do you think that's possible? Well, okay, here are some drawings, but if you're not convinced by the drawings, here's how we can make that happen. And the thing to emphasize here is that we're not keyframing animation; there's not an artist sitting down and telling the surface where to go. We're really discovering this transformation, or discovering this fact about this surface, by doing nothing more than minimizing the tangent-point energy. Another fun example is turning the punctured torus inside out: I want a continuous, intersection-free motion that makes the red side blue and
the blue side red, and the neat trick here is to minimize the signed volume enclosed by the surface. So when this goes to zero, we get a symmetrical mid-surface, and then we can just apply a symmetry and play the movie backwards. Okay, and there we go. Finally, you can imagine all sorts of ways to apply repulsive optimization to geometric modeling and geometry processing: not only making geometry aware of collisions, which is something people have looked at in the past, but really making geometry adapt to more distant interactions, for instance bending these bars long before they run into each other, or making these letters grow around each other, which won't happen if you use some classic energy like Willmore. A basic geometry processing task is to reconstruct a surface from measured data, and one of many approaches is to sort of shrink-wrap a surface around the data. This is nice for a number of reasons: for one thing, you're guaranteed to get the right topology if you know it ahead of time; it's helpful if you want to fit data to a common template; and unlike many popular reconstruction methods, you don't need consistent orientation for your points or polygons. A downside is that surfaces can get badly tangled while shrink-wrapping, but this is something we can easily avoid if we use a repulsive energy as a regularizer while reducing the volume, as we do here for points, or for some polygon soup. Similarly, if we shrink-wrap a sequence of progressively coarser meshes, then we generate a collection of nested cages in the spirit of Sacht et al., which can be useful for all sorts of things like collision detection and multiresolution simulation. We can also use tangent-point energy to automatically generate surface geometry with interesting detail: much like we did for curves, we can increase the area of a repulsive surface while forcing it to remain inside a small volume, to get wrinkles, or to grow something that looks like nice cobblestones. Finally, though I didn't really talk
about boundary conditions much, we can at least handle simple Dirichlet boundary conditions, where we pin vertices to locations in space. So here, for instance, we use tangent-point energy to regularize standard minimal surfaces; in other words, we minimize area but also prevent self-intersection, which gets rid of classic pinch-off behavior, and also lets us construct minimal surfaces with obstacles, like this pink cube. Okay, so to wrap things up, it's worth noting that there are certainly limitations to our approach. For one, there's no way to remove the bending regularization that's implicit in the tangent-point energy, so if you want to have sharp creases or kinks, you might have to think a little bit harder about how to do this. Since we approximate our energy using a finite set of quadrature points, there's no hard guarantee at the numerical level that intersections won't occur, so if the mesh is super coarse, like this one, then triangles could pass through each other; of course, here we could also use standard techniques like continuous collision detection, or maybe try to minimize an upper bound on the exact energy. There are a lot more opportunities for acceleration, like parallelization and adaptive meshing, maybe doing multiresolution on surfaces, and so forth. More broadly, there are so many questions that we'd like to answer: for instance, how do you interpolate between two shapes in a collision-avoiding way, rather than just minimizing energy? How might you incorporate an elastic shell energy, to retain more memory of the original shape? And so on, and so forth. So in general, this is a very new topic for visual computing, with lots of questions still to explore along many different axes, and I hope I've inspired you to think about some of those questions. Thank you very much.
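The Barnes-Hut-style far-field approximation described earlier in the talk can be sketched on the simpler problem of point repulsion. This is a hedged illustration, not the paper's implementation: the energy here is a Coulomb-like sum of 1/|x − y|², the BVH is a naive longest-axis binary split, and the admissibility test (cluster radius less than θ times the distance) mirrors the one described for the tangent-point energy.

```python
import math

def build_bvh(points):
    """Binary BVH: recursively split the point set along its longest axis,
    storing each cluster's center of mass, bounding radius, and point count."""
    n = len(points)
    com = tuple(sum(p[k] for p in points) / n for k in range(3))
    node = {
        "points": points,
        "com": com,
        "radius": max(math.dist(p, com) for p in points),
        "mass": n,
        "children": [],
    }
    if n > 2:
        axis = max(range(3),
                   key=lambda k: max(p[k] for p in points) - min(p[k] for p in points))
        pts = sorted(points, key=lambda p: p[axis])
        node["children"] = [build_bvh(pts[:n // 2]), build_bvh(pts[n // 2:])]
    return node

def energy_at(x, node, theta):
    """Approximate the sum over points y in node of 1/|x - y|^2, treating any
    cluster that passes the admissibility test as one big point at its
    center of mass; inadmissible internal nodes recurse, leaves sum exactly."""
    d = math.dist(x, node["com"])
    if d > 0 and node["radius"] < theta * d:
        return node["mass"] / d ** 2      # far field: lump the whole cluster
    if node["children"]:
        return sum(energy_at(x, c, theta) for c in node["children"])
    return sum(1.0 / math.dist(x, y) ** 2 for y in node["points"] if y != x)
```

Setting θ = 0 disables the approximation entirely (every cluster fails the test), which makes it easy to compare the accelerated sum against the exact all-pairs sum.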
Info
Channel: Keenan Crane
Views: 13,046
Id: dtYGiCpzzbA
Length: 53min 1sec (3181 seconds)
Published: Tue Dec 07 2021