Geometry Processing with Intrinsic Triangulations (Day I)

Captions
Hi, I'm Keenan Crane. I'm a professor of computer science and robotics at Carnegie Mellon University, and today I'm going to talk about how expanding our definition of a triangle mesh can provide lots of new opportunities for geometric computing.

The story really starts with a story about 19th-century mathematics. Back then, if you wanted to talk about a shape like a surface, you'd have to describe how it gets embedded in space; in other words, you'd give it a so-called extrinsic description, like a parameterization. A big development by Gauss and Riemann and others was to realize that there's a whole lot about geometry that doesn't depend on the way a shape sits in space, so you can toss out the embedding and instead work with a purely intrinsic description, like an atlas of charts. This ability to work with geometry without having to think about how it sits in space, even though it seems like a totally ordinary idea to us now, was at the time a real revolution in the way we think about geometry, leading in the 20th century to some pretty important discoveries like Einstein and Hilbert's work on the theory of general relativity.

So what about the computational side of things? Right now most people who work with geometric data, programmers and engineers and so forth, typically think of a mesh as being given by extrinsic data: vertex positions in three-dimensional space. This is an extrinsic picture because it depends on how the shape is embedded. Instead, today I want to think about a different picture, the intrinsic picture, where we throw away the vertex positions and just store the edge lengths of the mesh. If you're a mathematician you might already call this object just a triangulation, but for clarity I'll call it an intrinsic triangulation, because it captures only intrinsic data about the shape, for instance how far it is from point to point along the surface, but not how the shape sits in three-dimensional space.

Where this all gets really interesting is when we have an interplay between intrinsic and extrinsic descriptions of shape. For instance, I can take an extrinsically defined polyhedron like a cube and triangulate the underlying space in many, many different ways. Even though I now have triangles that appear to be extrinsically bent over the cube, as long as those triangles don't contain any vertices they can be unfolded into the flat plane without any stretching, and they can still be described by just three ordinary edge lengths. So again, this intrinsic triangulation sitting on top of the cube is given by an ordinary connectivity graph plus edge lengths. The rough idea is that we want to allow bent triangles that are still intrinsically flat. What's so nice about this is that we now have the freedom to take our surface and triangulate it in all sorts of different ways without changing the geometry at all, which, as we'll see, opens up a world of possibilities that we didn't have in ordinary geometric computing. But to take advantage of this mathematical perspective we also have to think about some computational issues: what are good data structures for keeping track of the correspondence between triangulations, and once we have those, how do we build useful algorithms on top of them? That is what I'll talk about today.

One natural question to ask is: why? Why is it useful to draw one triangulation on top of another? I'll try to motivate this from three different angles.
From the perspective of classical computational geometry, imagine I have a triangle mesh in the plane, some input triangulation, and I want to improve its quality somehow while preserving the original vertex set. That's not too hard: I could start flipping edges, for instance, until I get a so-called Delaunay triangulation. But if I force you to preserve not only the vertices of the input mesh but also its edges, and say "please make this a better mesh," that's really annoying; there's not much you can do other than start refining the mesh to get better triangles. And this annoying situation is exactly what we're always faced with in 3D geometry processing: if you want to exactly preserve the input geometry, you're forced to preserve both the vertices and the edges of the input, so basically the only option you have is to refine the mesh to improve it. Or is it? What we'll see later on is that the intrinsic perspective basically lets us use the polyhedral surface itself as the background domain for computation, rather than the Euclidean plane. That lets us translate a lot of classical algorithms from 2D computational geometry onto three-dimensional surfaces, and we'll talk about a few of those.

Another angle is scientific computing. Here we might be using our mesh to solve a physical equation, like some formulation of elasticity, and the challenge is that the mesh must serve two conflicting goals: one is to describe the geometry of the domain, and the other is to describe a basis of functions used to represent the solution to our physical problem. It feels like a no-free-lunch situation: our input mesh is a concise description of the exact geometry of the domain, but it may have very poor element quality; we can refine this mesh to get better elements, but now the size of the mesh blows up, here by a factor of 100; or we can retain a small mesh with nice elements, but now the geometric quality suffers. This is the nice thing about the intrinsic viewpoint: in the intrinsic setting we can decouple the basis functions used to represent the geometry from the bases used to encode our function spaces. Even though the geometry is piecewise linear, one can still use any basis function with a triangular domain, for example quadratic Lagrange bases. So now we get the best of all worlds: the intrinsic mesh perfectly preserves the input geometry, it has good element quality, and the mesh size stays pretty reasonable.

The final angle, and the one I'll talk about most in this talk, is geometry processing. One observation we can make about a lot of geometry processing and shape analysis is that it's often expressed in terms of intrinsic data, maybe because we care about things like isometric deformations of shape. For instance, the Laplace-Beltrami operator Δ is absolutely fundamental to geometry processing, in the same way that the FFT is fundamental to classical signal processing, and it is intrinsic, as are the harmonic spectrum, the Gaussian curvature, geodesic distance, the Wasserstein distance, functional maps, and all sorts of other things we know and love. Given that we want to work with intrinsic objects, why would we try to process the mesh from an extrinsic point of view? Another big challenge is that geometry encountered in real data sets can be really, really bad. So, to quote Jean-Paul Sartre: hell is other people's meshes.
If you've ever worked with real data, you know this is true. The bad news is that data just keeps getting worse over time, for instance as we start to see more user data from scanning and 3D printing, so this is a problem that's not going to resolve itself. We really need a way to help users of geometric software abstract away the mesh so that they can just focus on the geometry.

In numerical linear algebra there's a nice idea that helps with this kind of thing: if you have a badly conditioned system, you might first apply some transformation, like a permutation, to get a better system that you can solve using algorithms that are not particularly robust, and then invert the transformation to get the final solution. The beauty of this idea is that it can be encapsulated in a black box that the user doesn't have to think about; they can just type backslash in MATLAB and everything works out. We'd like to do something similar for geometry processing, and the intrinsic perspective helps with that. So now we can imagine taking a bad mesh, doing an intrinsic re-triangulation to get a better mesh, running some existing geometric algorithm that's not particularly robust, getting our solution, and then transferring it back to the input mesh. This way, as before, we encapsulate the robustness in a black box or a subroutine, and the user can just forget about all these meshing issues and focus on the shape the mesh represents.

Hopefully that sets the stage a bit. To give a quick overview: today I'm going to talk a little about the history of intrinsic triangulations and then dive into two recent data structures we've developed, one based on signposts and one based on normal coordinates. Tomorrow we'll see more applications, including a new discrete Laplace operator and algorithms for computing geodesics, as well as a list of some really interesting open questions. To give just a few highlights: over these two lectures we're going to develop some very fundamental surface meshing algorithms with quality guarantees, basically extending Delaunay refinement and constrained Delaunay triangulation to surfaces; we'll have an extremely robust algorithm for injective mapping, enabled by an integer-based encoding of correspondence; we'll develop a Laplace operator that works for general nonmanifold triangle meshes and even point clouds, providing a kind of bridge between mesh processing and point cloud processing; and finally we'll have a greedy edge flip algorithm for computing geodesics, very much in the spirit of the Delaunay edge flip algorithm, which unifies the computation of geodesics and the retriangulation of the surface. Before going any further I want to acknowledge that all these great new developments would not be possible without my collaborators: Mark Gillespie and Nick Sharp, my current PhD students at CMU, Yousuf Soliman, a former undergrad, and Boris Springborn at TU Berlin, not to mention all of our very generous funding sources.
So what are intrinsic triangulations, and where did they come from? Our first mental cartoon, again, is that we want to allow triangles that look bent in space but are flat intrinsically; in other words, they're still captured by just three edge lengths, which define an ordinary Euclidean triangle. A little more broadly, we have a kind of hierarchy. Normally we just think of extrinsic triangulations, where edges are straight segments in Euclidean space. We can also talk about geodesic triangulations, where we trace straight paths along the input polyhedral surface. But more abstractly, we can think of a triangulation as a disjoint collection of triangles identified along edges of equal length. This final class really is strictly more general than the other two; not all intrinsic triangulations are geodesic triangulations. For instance, here is a perfectly good metric space made by gluing together Euclidean triangles, but it can't be embedded in R^3: if I try to fold up these flaps, they're too short to meet, so this one definitely doesn't come from a polyhedron in R^3.

More precisely, we'll say that an intrinsic triangulation consists of a topological triangulation K of a surface, with vertices V, edges E, and faces F, plus a discrete metric, meaning just a length for each edge, satisfying the triangle inequality in each face. Importantly, we will not require that K be a simplicial complex: it's fine if two edges are glued together, or if all three vertices of a triangle are the same. This turns out to be incredibly important for making sure that algorithms work in general. Formally, we can say that K is a Δ-complex, meaning a manifold 2D CW complex with triangular faces.

Even though we no longer have vertex positions, we can of course still compute basic geometric quantities from lengths: if we want the angle at a corner we can use the law of cosines, for the area we can use Heron's formula, and so forth.
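To make this concrete, here is a minimal Python sketch (not code from the talk; the function names are just illustrative) of how corner angles and areas can be evaluated from a discrete metric alone, via the law of cosines and Heron's formula:

```python
import math

def corner_angle(l_jk, l_ki, l_ij):
    """Interior angle at vertex i of triangle ijk, computed from its three edge
    lengths via the law of cosines; l_jk is the length of the edge opposite i."""
    c = (l_ij**2 + l_ki**2 - l_jk**2) / (2.0 * l_ij * l_ki)
    return math.acos(max(-1.0, min(1.0, c)))  # clamp for numerical safety

def triangle_area(a, b, c):
    """Area of a Euclidean triangle with edge lengths a, b, c (Heron's formula)."""
    s = 0.5 * (a + b + c)
    return math.sqrt(max(0.0, s * (s - a) * (s - b) * (s - c)))
```

For example, corner_angle(1, 1, 1) returns π/3 and triangle_area(1, 1, 1) returns √3/4, as expected for a unit equilateral triangle.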
Geometrically, the data we have describes what's called a polyhedral cone metric: a space that has zero Gaussian curvature everywhere except at the vertices, where the geometry is indistinguishable from a circular cone. Since each pair of neighboring triangles can be flattened into the plane, there really is no intrinsic curvature along the edges, so an extremely important mental cartoon is to smooth out the edges and just leave the vertices. The critical point is that we could have described this same space with many different triangulations; from the intrinsic point of view there's absolutely nothing special about the one we started with.

Remember, historically the progression was from extrinsic geometry, like parameterized surfaces, to intrinsic geometry, like Riemannian manifolds, and that perspective was really necessary to develop some big ideas in geometry and physics, like relativity. It's very appropriate, then, that some of the very first work on intrinsic triangulations was done in an effort to, guess what, solve Einstein's equations of general relativity. Here's a paper by Tullio Regge from 1960, and it's pretty remarkable how much it resembles our modern way of thinking about discrete differential geometry; in fact it's even motivated by solving equations on a computer, so you could also think of it as one of the very first papers on geometry processing. Digging a bit deeper, we find that Regge was exploring this idea of intrinsic triangulations and came to all the same conclusions we did at the beginning: all the data we need is the connectivity and the edge lengths, the particular choice of edges actually isn't that important, and the edge lengths are what give us our discrete metric.

Here's a broader history of intrinsic triangulations, starting with Alexandrov's proof that any convex polyhedral metric can be embedded in R^3, going through lots of work by Thurston and others on using triangulations to understand three-manifolds, and then a gradual buildup of tools for working with polyhedral surfaces, which I'll go through in a little bit. But the main thing to notice is that so far we haven't seen that many practical algorithms that use intrinsic triangulations, and the reason is that we're still missing some very basic ingredients.

One of the most basic ingredients is something called an intrinsic edge flip; this is used in pretty much all of these algorithms. Rather than connecting the two opposite vertices by a straight Euclidean segment through space, we trace out a straight path along the surface, and the very important fact is that this intrinsic flip leaves the original geometry completely unchanged.

Building on top of intrinsic edge flips, one very useful thing we can do is construct an intrinsic Delaunay triangulation. The Delaunay property is something that generally ensures good behavior of a lot of geometric algorithms, and it has many equivalent characterizations. In the intrinsic setting, the most natural one is to require that for every edge, the sum of the two opposite angles is no greater than π. Something really nice is that there's an easy algorithm to construct such a triangulation: if you see an edge that's not Delaunay, you simply flip it, and you keep doing this until there are no more edges to flip. One useful fact is that you can always flip a non-Delaunay edge, this is always a legal operation, and more importantly there's a theorem that says this algorithm always terminates, so you can always reach an intrinsic Delaunay triangulation using a finite number of flips.

On top of intrinsic Delaunay triangulations we can build what's called an intrinsic Laplace operator. The Laplace-Beltrami operator Δ is absolutely fundamental in geometry processing, it's even been called the Swiss army knife of geometry processing, and Bobenko and Springborn define a very nice intrinsic Laplacian: the idea is that you flip to the intrinsic Delaunay triangulation and then build the usual cotan matrix, or finite element stiffness matrix. A very useful fact is that the cotan weights are nonnegative in a Delaunay triangulation, and zero only if you have a concyclic pair of triangles, all four vertices on a common circle, in which case the choice of Delaunay edge is ambiguous. What that means is that there's a canonical Laplacian on every polyhedral surface that doesn't depend at all on the input triangulation, and it always provides what's called a maximum principle: no spurious extrema in your solution. In practice this Laplacian behaves a lot better for all sorts of algorithms in geometry processing.
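As an illustration, here is a small Python sketch (not code from the talk) of the local Delaunay test and the associated cotan weight, assuming the naming convention that edge ij is shared by triangles ijk and jil and that all lengths satisfy the triangle inequality:

```python
import math

def opposite_angle(l_opp, l_a, l_b):
    """Angle opposite the edge of length l_opp in a triangle whose other two
    edge lengths are l_a and l_b (law of cosines)."""
    c = (l_a**2 + l_b**2 - l_opp**2) / (2.0 * l_a * l_b)
    return math.acos(max(-1.0, min(1.0, c)))

def is_delaunay(l_ij, l_jk, l_ki, l_il, l_lj, eps=0.0):
    """Local intrinsic Delaunay check for edge ij, shared by triangles ijk and jil.
    The edge is Delaunay iff the two angles opposite it sum to at most pi,
    equivalently iff its cotan weight is nonnegative."""
    alpha = opposite_angle(l_ij, l_jk, l_ki)  # angle at k
    beta = opposite_angle(l_ij, l_il, l_lj)   # angle at l
    w_ij = 0.5 * (1.0 / math.tan(alpha) + 1.0 / math.tan(beta))  # cotan weight
    return w_ij >= -eps
```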
Here's one example: we want to compute geodesic distance by solving partial differential equations, using something called the heat method. If you plug in the intrinsic Laplacian rather than the ordinary extrinsic one, you immediately, without doing anything else, get much better results.

Another thing that has really motivated the development of intrinsic triangulations is computing discrete conformal maps. Conformal maps are widely used for texture mapping, fabrication, machine learning, and all sorts of things, and there's a discrete definition of what it means to be conformal: two discrete metrics are discretely conformally equivalent if the length cross ratios c are preserved. These length cross ratios are literally just ratios of products of opposite lengths in the diamond formed by the two triangles sharing an edge. The problem, then, is: given some triangulation with a metric, and maybe some target angle defects, find a conformally equivalent metric with either zero angle defect or the prescribed ones. The algorithm essentially flows the metric toward this flat metric, and eventually you can lay it out in the plane. A nice theorem, finally established after many years, is that this is always possible: you can always discretely uniformize any input triangulation, as long as you allow intrinsic edge flips along the way. We'll talk a little more about that later.
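A minimal sketch (not from the talk) of the length cross ratio for an interior edge ij whose two neighboring triangles ijk and jil form a diamond with boundary edges ik, kj, jl, li; note that the exact pairing and orientation convention varies between papers, so this is just one choice:

```python
def length_cross_ratio(l_ik, l_kj, l_jl, l_li):
    """Length cross ratio associated with interior edge ij: the ratio of the
    products of opposite boundary lengths of the surrounding diamond. Two
    discrete metrics are discretely conformally equivalent when these cross
    ratios agree on every interior edge."""
    return (l_ik * l_jl) / (l_kj * l_li)
```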
Beyond that, I can't really think of a whole lot of other real algorithms that use intrinsic triangulations. There's a method of surface parameterization based on circle patterns, there's a method for embedding convex polyhedral metrics into space, there's a method for field-aligned parameterization, and a few other examples, but that's really about it; there's not much more use so far of intrinsic triangulations in practical geometry processing. So why is that? If this is such a useful perspective, why are there so few practical algorithms? One thing to notice is that all the algorithms we've seen so far assume a fixed vertex set; the only operation allowed was flipping edges, whereas classical geometric computing has all sorts of operations: inserting vertices, moving vertices, splitting edges, and so forth. That raises a natural question: how can we implement a much broader range of algorithms in the intrinsic setting? I'm going to talk about how to do exactly that.

One quick note on visualization: when you see a picture like this, the black wireframe indicates the input mesh and the colored triangles show a new intrinsic triangulation sitting on top of that input surface.

So let's look at some more sophisticated data structures for intrinsic triangulations that enable these additional operations, starting with something called the signpost data structure, which comes from a paper with Nick Sharp and Yousuf Soliman. Historically there are really just a few data structures for encoding intrinsic triangulations: you can use the basic data structure of just connectivity and edge lengths, and if you want to track the correspondence between triangulations you might use an explicit mesh representing the common refinement, the overlay data structure. Today I'll talk about two alternatives: the signpost data structure and one based on normal coordinates.

The basic data structure is what we've talked about already: a minimal encoding of an intrinsic triangulation is just a list of edge lengths plus a topological mesh data structure that can encode a Δ-complex. A reasonable choice is something like a half-edge mesh or a signed incidence matrix; a vertex-face adjacency list is not going to cut it, because there are triangulations that simply can't be encoded that way. Here's a perfectly good triangulation of the torus made with two triangles and just one vertex, but it's not a valid simplicial complex. This is a very easy data structure, but it has a huge limitation: it doesn't encode the correspondence between the original and the new triangulation. From a list of edge lengths and connectivity alone, I have no idea how the intrinsic edges cross the extrinsic edges.

But I can at least evaluate basic geometric quantities from these edge lengths; we talked about getting angles and areas, and generally the cost and accuracy of evaluating these quantities from edge lengths is not much different from evaluating them from vertex positions. It's important to keep in mind that using vertex positions to compute quantities on intrinsic triangulations generally doesn't make sense: for instance, if I wanted the length of an intrinsic edge, the distance between vertex positions would give an underestimate. I can still think of the embedding as a signal on the surface, though, and that can be useful for various algorithms; for instance, I can compute extrinsic minimal surfaces using the intrinsic Laplacian.

To implement our basic edge flip in the intrinsic setting, one thing we can observe is that the five edge lengths of the two triangles let us lay them out in the plane. So if we want, we can lay them out in 2D, compute the length of the opposite diagonal, and just store that on our intrinsic mesh. It's useful to know that this edge flip operation is only valid if, first of all, the two triangles form a convex quadrilateral; if not, we'd fold the triangulation over when we do the flip. Also, both endpoints of the edge have to have degree no less than two, because if I have a degree-one endpoint and try to do an edge flip, what I end up with is a cone point inside a triangle, which is no longer a standard Euclidean triangle described by just three edge lengths.
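Here is a small Python sketch of that computation (illustrative, not the talk's code): lay the two triangles out across the shared edge and measure the new diagonal.

```python
import math

def lay_out_apex(l_base, l_from_origin, l_from_far, sign):
    """2D position of a triangle apex, given the base laid out from (0,0) to
    (l_base, 0), the apex's distances to the two base endpoints, and which
    side of the base it lies on (sign = +1 above, -1 below)."""
    x = (l_from_origin**2 - l_from_far**2 + l_base**2) / (2.0 * l_base)
    y2 = max(0.0, l_from_origin**2 - x**2)
    return (x, sign * math.sqrt(y2))

def flipped_edge_length(l_ij, l_jk, l_ki, l_il, l_lj):
    """Length of the new edge kl produced by flipping edge ij, shared by
    triangles ijk and jil: lay both triangles out in the plane across the
    common edge and measure the straight-line distance between the apexes."""
    k = lay_out_apex(l_ij, l_ki, l_jk, +1.0)  # apex of triangle ijk
    l = lay_out_apex(l_ij, l_il, l_lj, -1.0)  # apex of triangle jil
    return math.hypot(k[0] - l[0], k[1] - l[1])
```

For instance, for two unit right triangles glued along a diagonal of length √2 (a unit square), flipped_edge_length(√2, 1, 1, 1, 1) returns √2, the other diagonal.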
So what if we actually do want to store the correspondence between two different triangulations? An idea introduced by Fisher and colleagues is to track where the new and old edges cross explicitly: we explicitly store the common subdivision of the two triangulations using, say, a half-edge data structure, and we mark each edge segment as original, new, or contained in both triangulations. The crossing locations are then stored as 1D coordinates, what I'll call barycentric coordinates, along these edges. The good thing about this data structure is, first and foremost, that it really does encode the correspondence between triangulations: if I have to interpolate a function from one triangulation over the other, I actually have the data I need. It's also guaranteed to always give the right connectivity of the common subdivision, because all the choices we're making are based on combinatorics and connectivity, not on vertex positions. On the other hand, it only supports edge flips, and even those flips can't be performed in constant time in general; it can be quite expensive to keep track of all these crossings as we flip and flip and flip, and we have to store all of them. In the worst case, even in the simple example on the right, we might have to store on the order of n² crossings.

To overcome some of these difficulties, we took inspiration from the signposts you see all over the world, which tell you how far and in what direction to walk to get to a particular destination. The basic idea of the signpost data structure is to store at each vertex the direction and distance to each neighboring vertex, which gives a sort of implicit description of the new triangulation: we don't explicitly store the list of points where the two triangulations cross, only how to get from one place to the next. In more detail, the signpost data structure consists of the same data as before, a topological triangulation K and a discrete metric ℓ, but now we also store the outgoing direction of each half-edge (half-edge meaning just an oriented edge of our intrinsic triangulation), and we possibly store barycentric coordinates for any vertices we've inserted into the mesh, a new operation we'll add. Assuming we don't insert any new vertices, the storage cost is now fixed: anytime we need to know a crossing point, we can evaluate it on demand, in a lazy fashion, by tracing out across the surface, though as we'll see we actually rarely need to do this. So the key idea is that we maintain not just a description of the intrinsic metric but, in a sense, a description of the intrinsic tangent spaces.

How do we encode these edge directions? We take a cue from work on tangent vector field processing and store a unit direction at each vertex as an angle φ around the vertex cone. For instance, to get the directions of the outgoing half-edges, we normalize the Euclidean interior angles so they sum to 2π and then take the cumulative angle around the vertex from some fixed reference direction. Importantly, this reference direction stays fixed for all time; even as we do flips, we always imagine we have this one fixed reference direction.
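A minimal sketch of that bookkeeping (illustrative names, not the paper's code), assuming the interior wedge angles around the vertex are given in counterclockwise order:

```python
import math

def signpost_angles(corner_angles):
    """Directions of the outgoing half-edges around a vertex, as angles in
    [0, 2*pi) measured from a fixed reference direction (here: the first
    outgoing half-edge). corner_angles[i] is the Euclidean interior angle of
    the wedge between outgoing half-edge i and half-edge i+1, in
    counterclockwise order; angles are rescaled so the total is 2*pi."""
    total = sum(corner_angles)        # the cone angle at this vertex
    scale = 2.0 * math.pi / total     # normalize away the cone deficit/excess
    phis, running = [], 0.0
    for a in corner_angles:
        phis.append(running)
        running += scale * a
    return phis
```

For a flat vertex surrounded by six 60-degree wedges, this returns the multiples of π/3, as expected.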
Suppose we want to find where an intrinsic edge crosses an extrinsic edge. No problem: the signpost tells us what walk to take, so we grab the stored edge length and edge direction and walk across the surface starting at one of the endpoints. In other words, we evaluate the discrete exponential map to get from one endpoint of the edge to the other, which algorithmically means doing a bunch of little 2D ray-edge intersections. Edge flips look much the same as before, except that now we also update the signpost angles as we flip, a trivial calculation based on the new interior angles computed from the new edge lengths.

One very important new thing we can do with this machinery is actually add vertices to our triangulation. So far we've just been talking about flipping; now we can actually refine the mesh. How do we do this? Here we really make use of the directional information. Say we want to insert a vertex at a point i. We trace from any corner of the intrinsic triangle to i, walking along the extrinsic surface and seeing where we land on the original mesh. We then record the barycentric coordinates of this new point, insert it into our intrinsic mesh, and update the signpost angles by adding the interior angles to the incoming direction obtained from the trace. From there we can implement all sorts of other operations using the same kind of thinking: we update the connectivity, the edge lengths, and the signpost angles to reflect whatever change we want to make to the mesh. The really important point is not so much the specific details, but the fact that as we do this we start to build up a full-blown intrinsic data structure that is much more feature-compatible with the standard extrinsic meshes people are used to working with.

A nice thing is that we can hide all of this complexity behind a simple interface. Rather than asking people to work out Heron's formula or the law of cosines every time, we can write routines that compute a triangle's area, or the angle at a corner, or whatever; likewise, all of this business about tracing and updating signposts can be encapsulated in little local remeshing operations like "flip edge" and "insert vertex" and so forth. Once we have this abstraction, geometric algorithms and mesh processing algorithms can be implemented as usual: you write code that looks exactly like what you would write for an extrinsic mesh, except that you call out to these intrinsic subroutines. That is exactly what we set out to do: provide a black-box interface that improves the robustness of mesh processing, so the intrinsic mesh can be completely hidden from the user.
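Purely as an illustration of the kind of interface meant here (hypothetical names, not the actual API from the paper or from any library):

```python
class IntrinsicTriangulation:
    """Hypothetical interface sketch: the user-facing routines hide the
    signpost bookkeeping (lengths, angles, tracing) behind ordinary
    mesh-processing operations."""

    def corner_angle(self, face, corner): ...   # law of cosines on edge lengths
    def face_area(self, face): ...              # Heron's formula
    def flip_edge(self, edge): ...              # update lengths + signpost angles
    def insert_vertex(self, face, bary): ...    # trace, record barycentric coords
    def trace_edge(self, halfedge): ...         # discrete exponential map, on demand
    def common_subdivision(self): ...           # extract the overlay only when needed
```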
Sometimes we will actually need this intrinsic mesh, or rather the common subdivision of the original and intrinsic meshes, for processing downstream, for instance for visualizing a solution or a parameterization, although this happens less often than you might think. Doing this is actually not too hard; it's an easy two-step process. First we trace out all the intrinsic edges over the extrinsic mesh and store the crossings, and then we run a simple local algorithm on each triangle to emit the subdivision, with just a couple of cases to consider. The very important difference between doing this at the end versus maintaining an explicit representation of the common subdivision at all times is that we don't have to pay the cost of maintaining these crossings during processing; we get an output-sensitive algorithm. If we compare performance, we find that the signpost data structure can be around 20 to 30 times faster than explicitly tracking the common refinement as a half-edge mesh all the time, not to mention that the explicit overlay idea only supports edge flip operations; nobody ever said how to do things like vertex insertions.

Now, one thing to realize is that when you're doing geometric computation, floating point error will always creep in somewhere, no matter what data structure you use. Consider a pathological input, a near-degenerate triangle mesh, and its true Delaunay triangulation. With the basic data structure of just connectivity and edge lengths, even just flipping edges might end you up in a situation where the triangle inequality is violated numerically. With the overlay data structure you're guaranteed to get a valid topology, but you can actually get the wrong geometry. And with the signpost data structure, tracing an edge out from a vertex i to a vertex j might actually fail to reach the neighborhood of vertex j. All of these things can happen in theory, but in practice they're all essentially nitpicks; all of these data structures work really well on real data, even on some pretty terrible meshes.

Here's a test of robustness: we flip to the intrinsic Delaunay triangulation and then extract the common refinement. This test succeeds on a hundred percent of ordinary meshes, things you might find, for instance, in the Princeton Shape Benchmark or the Myles and Zorin data set; for the worst three percent of meshes in the Thingi10K data set, we needed to perturb near-zero-area triangles or use quadruple precision to get things to work out. So what is this data set? It's a good representation of the kind of bad data that lurks out there in the world right now: ten thousand 3D printing models, often made by non-experts, and if you zoom in on these models you think, wow, how could anybody make a mesh this bad? These meshes were really not made for doing any kind of mesh processing or simulation. But in general, apart from some really pathological examples, these data structures work quite well.

One interesting open question: so far we've assumed that intrinsic triangulations always include the cone points of the original surface as vertices, but what if we want to work with a coarser mesh, with fewer vertices, that is, do some kind of intrinsic simplification? This is a very important operation in geometry processing. One nice observation is that the signpost data structure can already encode any geodesic triangulation on the surface, even if the vertices of the triangles are not cone points, so we could have triangles that contain cone points in their interior. Then there's a question of how to capture this information; maybe, for instance, we store an additional curvature 2-form that accounts for the cones inside each triangle. One interesting geometric phenomenon that comes up in this setting is that a geodesic triangle on a polyhedral surface can't contain total curvature less than -π. So there's still a lot of thinking to do about exactly how to handle this situation.

It's nice that we have these robust data structures, but to really make the black-box idea work we also need some nice way of re-triangulating the surface, and we'll consider three different triangulation algorithms. One is the Delaunay flipping algorithm we've already talked about: it makes the mesh Delaunay, but the angle and area distribution of the triangles can still be very bad. For that reason we'll extend the classic Delaunay refinement algorithm to the intrinsic setting; this gives guaranteed bounds on angles, but might still produce a bad distribution of areas. And then we'll also consider an intrinsic version of optimal Delaunay triangulation, where we can spread the vertices out across the surface.

Just to recall, Lawson's flipping algorithm says: to get a Delaunay triangulation, enqueue all the edges, and until the queue is empty, pop an edge, flip it if it's not Delaunay, and if we did flip it, enqueue its neighbors, because they might have become non-Delaunay due to the flip. If we do this on a pretty bad quality mesh, like this guitar, all the edges do become Delaunay, but we haven't really done much to improve the angle distribution; the angle histogram looks pretty much the same.
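Here is a sketch of that flipping loop in Python, written against a hypothetical mesh object providing edges(), is_delaunay(e), flip(e), and neighbors(e); it is meant only to show the control flow, not an actual implementation:

```python
from collections import deque

def flip_to_delaunay(mesh):
    """Lawson-style flipping: flip non-Delaunay edges until none remain.
    Intrinsically, each flip leaves the geometry of the underlying
    polyhedral surface completely unchanged."""
    queue = deque(mesh.edges())
    in_queue = set(queue)
    while queue:
        e = queue.popleft()
        in_queue.discard(e)
        if mesh.is_delaunay(e):
            continue
        mesh.flip(e)
        # the four edges of the two adjacent triangles may now be non-Delaunay
        for f in mesh.neighbors(e):
            if f not in in_queue:
                queue.append(f)
                in_queue.add(f)
```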
The Delaunay refinement algorithm is going to help with that. Specifically, the second algorithm says: until the minimum angle is above some lower bound, flip to a Delaunay triangulation using the same Lawson algorithm, pick any bad triangle, any triangle with an angle below the bound, insert its circumcenter, and then flip again to Delaunay. The question we have to answer in the intrinsic setting is: what is the intrinsic circumcenter? If the circumcenter is contained in the triangle, there's no problem, but what if we have an obtuse angle, so that the circumcenter lies outside the triangle? Drawing the extrinsic circumdisk doesn't help: we can't put that red point in our triangulation, it's not even on the surface. What we do instead is draw the triangle in the plane, consider the vector that goes from a vertex to the circumcenter, and trace this vector out over the polyhedral surface using the discrete exponential map; that tells us where to insert the new vertex. This gives an intrinsic version of the Delaunay refinement algorithm where we really do improve the angles; you can see here that the minimum angle is now no less than 30 degrees. However, the area distribution is still not great.
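As an illustration, here is a small sketch (not the talk's code) of the planar part of that construction: laying the triangle out from its edge lengths and computing the vector from one vertex to the circumcenter. The subsequent tracing over the surface with the discrete exponential map is not shown.

```python
import math

def circumcenter_direction(l_ij, l_jk, l_ki):
    """Lay triangle ijk out in the plane (i at the origin, j on the x-axis)
    and return the 2D vector from vertex i to the circumcenter. Assumes a
    nondegenerate triangle (nonzero area)."""
    # positions of the three vertices in a local planar layout
    kx = (l_ki**2 - l_jk**2 + l_ij**2) / (2.0 * l_ij)
    ky = math.sqrt(max(0.0, l_ki**2 - kx**2))
    # circumcenter is equidistant from all three laid-out vertices
    cx = l_ij / 2.0
    cy = (kx**2 + ky**2 - 2.0 * cx * kx) / (2.0 * ky)
    return (cx, cy)
```

For a unit equilateral triangle this returns (0.5, 1/(2√3)), whose length is the circumradius 1/√3.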
Our final algorithm uses optimal Delaunay triangulation, where we allow the vertices to move around. In the intrinsic setting this means we only allow non-cone vertices, the inserted vertices, to move; otherwise we'd be changing the geometry of the polyhedral surface. How does this work? Until convergence, we split any edges longer than some maximum length, we move each inserted vertex to the weighted average of the circumcenters of its neighboring triangles (in the intrinsic setting we again find the vectors pointing toward the circumcenters, average them, and move along that direction), and we flip again to Delaunay. Here's an interesting example of Delaunay flipping, just showing how completely different the intrinsic triangulation can be from the input. Here are some examples of Delaunay refinement, and the important thing about these examples is that now, no matter how bad our input mesh is, we can always just click a button and be sure that the triangulation we're using for our processing has good angles, that we have a good quality mesh to run our algorithms on. This is again what we wanted to achieve, this kind of black-box robustness. And finally, here are some examples of optimal Delaunay triangulation, where the triangles have been nicely distributed over the surface, although you'll notice we no longer get hard angle bounds.

You might ask at this point: this is great, but can't you do the same kind of thing with traditional extrinsic meshing? Here's a little comparison. On the left we have our intrinsic Delaunay refinement; on the right, a very recent algorithm for refinement using a more traditional restricted Delaunay triangulation. For one thing, the latter can't provide hard angle bounds for polyhedral surfaces; that guarantee just isn't there anymore. We also see that it produces a mesh about 10 times larger than the intrinsic one, using about a thousand times as much computation, and it relies on a volumetric mesh data structure rather than just a surface mesh data structure. Another issue is that in the end it doesn't actually provide positive edge weights for the Laplace matrix, because the notion of Delaunay in the extrinsic case is not the one that corresponds to positivity of those edge weights. So why is extrinsic meshing so much harder? It's exactly what we talked about before: in the extrinsic setting we really have to juggle element quality and geometric approximation quality, whereas in the intrinsic setting we can just focus on the quality of the elements. If you're a computational geometry person, you can think of the intrinsic setting as being just like working in the 2D plane: you don't have to worry about approximating the plane, it's just the plane; it's a similar situation here. There are also extrinsic meshing algorithms that seek to satisfy the intrinsic Delaunay property, but again the struggle between element quality and geometric error produces meshes that are generally much bigger and have much worse error bounds.

Here's another interesting open question. Given the importance of Lawson's flipping algorithm, will it always be efficient for every mesh we throw at it, or could it be catastrophically slow? In the 2D case this algorithm has a tight worst-case bound of order n². In the intrinsic case, as pointed out by Bobenko and Springborn, it's impossible to bound the number of flips in terms of the mesh size, meaning the number of elements, because the input can start arbitrarily far from the Delaunay triangulation; consider a twisted cube with progressively longer and longer diagonals. However, a somewhat surprising empirical observation is that on a large-scale real-world data set, the number of flips required is typically just a fraction of the number of edges in the mesh. Why would this be true? One reason, we think, is that for a metric that comes from an embedded surface, the longest intrinsic edge is bounded by the extrinsic diameter, so the hope is that you can somehow bound the number of flips in terms of the geometry of the mesh rather than just the number of elements. But this question is still open.

So let's finally look at some applications where this signpost approach pays off. One natural question is how to get a solution computed on the intrinsic triangulation back to the original mesh, the last step of our diagram. The answer is that it really depends on what you're doing. If you're doing something like computing the eigenvalues of a differential operator like the Laplacian, there's really nothing to do: you just grab the better eigenvalue you computed on the intrinsic triangulation, period. In some settings, like computing geodesic distance, it makes sense to simply copy values at vertices, because for distance the triangulation you pick doesn't affect the notion of point-to-point distance. Sometimes you just want the value at a point, say if you're computing the Karcher mean of a distribution, and you just extract the location of that point. In some settings, like injective mapping, you might be able to flip back to the original mesh while preserving injectivity; this is something that was done in the circle patterns work. And finally, as a fallback, you can always construct the full overlay mesh; if you want to visualize a function from the intrinsic mesh on the extrinsic one, you can go ahead and do that, though usually it's overkill.

One application is finite element problems. We still have an ordinary triangle mesh, we can still use ordinary linear basis functions, and there's no need to change our solver code: just build the usual mass and stiffness matrices. We get the same matrix density, the same performance, the same convergence guarantees and so forth, but with better accuracy and stability, and if we want we can also still use higher-order elements.
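To illustrate how such matrices can be assembled from intrinsic data alone, here is a rough Python/SciPy sketch (illustrative names and conventions: barycentric mass lumping, positive-semidefinite stiffness matrix; not the talk's code):

```python
import math
import numpy as np
import scipy.sparse as sp

def heron(a, b, c):
    """Area of a Euclidean triangle with edge lengths a, b, c."""
    s = 0.5 * (a + b + c)
    return math.sqrt(max(0.0, s * (s - a) * (s - b) * (s - c)))

def intrinsic_matrices(faces, lengths, n_vertices):
    """Cotan stiffness matrix L and lumped diagonal mass matrix M, assembled
    purely from a discrete metric: faces[f] = (i, j, k) are vertex indices and
    lengths[f] = (l_ij, l_jk, l_ki) are the corresponding intrinsic edge lengths."""
    I, J, V = [], [], []
    mass = np.zeros(n_vertices)
    for f, (i, j, k) in enumerate(faces):
        l_ij, l_jk, l_ki = lengths[f]
        area = heron(l_ij, l_jk, l_ki)
        mass[i] += area / 3.0
        mass[j] += area / 3.0
        mass[k] += area / 3.0
        # cotangent of the angle opposite each edge, via the law of cosines
        for (a, b), (lo, la, lb) in [((i, j), (l_ij, l_jk, l_ki)),
                                     ((j, k), (l_jk, l_ki, l_ij)),
                                     ((k, i), (l_ki, l_ij, l_jk))]:
            cos_t = (la * la + lb * lb - lo * lo) / (2.0 * la * lb)
            cot = cos_t / math.sqrt(max(1e-300, 1.0 - cos_t * cos_t))
            w = 0.5 * cot
            I += [a, b, a, b]
            J += [b, a, a, b]
            V += [-w, -w, w, w]
    L = sp.coo_matrix((V, (I, J)), shape=(n_vertices, n_vertices)).tocsr()
    M = sp.diags(mass)
    return L, M
```

The same assembly works whether the lengths come from an intrinsic Delaunay flip of a bad input mesh or from a refined intrinsic mesh, and the resulting matrices plug into a standard linear solve exactly as usual.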
Here's one example: we just want to solve a Poisson equation, Δu = f. If we do this on an extrinsic versus an intrinsic optimal Delaunay triangulation, we get about twice the accuracy, even on this rather coarse mesh. Another example is computing geodesic distance, which we mentioned before. Here's an input mesh that you have no business computing anything on, a really awful finite element mesh, and here's the exact geodesic distance. If we run the heat method on the original mesh, things really don't work out so well; we get a huge amount of error. If we flip to intrinsic Delaunay, things get quite a bit better already, and if we then do a little bit of refinement, not even doubling the number of elements in the mesh, the error already drops to 0.7 percent, almost the exact solution. So this is pretty sweet.

We can also apply adaptive mesh refinement to our surface meshes. For instance, if we want to compute things like harmonic Green's functions or short-time heat kernels, which are often used in geometry processing, we have a lot more detail in one place than another, and it's nice that we can now apply these techniques from the plane to curved surface meshes. Even with all the error estimation and rebuilding of systems and so forth, we still get a significant speedup over naive refinement.

Injective surface parameterization is another place where the intrinsic perspective can really help. Say we want to compute a harmonic map from a surface to the circular disk. If we apply the intrinsic Delaunay triangulation, we can already eliminate flipped triangles, like the red triangle on the bottom left, and if we further use adaptive mesh refinement, we reduce the geometric distortion near the boundary, where it's largest.

The signpost data structure is also very natural for tangent vector field processing, because it's built on encoding the intrinsic tangent spaces. In particular, in a previous paper we show that there's a kind of flip-free property for the intrinsic Delaunay triangulation: if we consider the minimizer x of what's called the vector Dirichlet energy, something that measures the smoothness of a vector field, then this vector field is flip-free on the intrinsic Delaunay triangulation. You can think of this minimizer as a vector field that's harmonic with respect to the connection Laplacian rather than the ordinary scalar Laplacian, and what the flip-free property says is that if I've found this nice vector field and I transport all the neighboring vectors to the center, then the vector at the center will always be contained in the convex cone of its neighbors. This is nice in practice because it helps prevent fold-over in things like field-aligned parameterization and meshing. Here's an example of computing this vector field on an ordinary extrinsic mesh with no special Laplace operator, and you do get flipped directions that really point the wrong way; here's what we get on the intrinsic Delaunay triangulation, where there are no flips anywhere. If we then try to derive other quantities, like parameterizations, from this vector field, we see that this really pays off: using the ordinary Laplacian on the ordinary extrinsic mesh, we get all sorts of distorted garbage,
whereas the intrinsic Delaunay triangulation really gives the right result. In general, the vector fields we compute in the intrinsic setting will just be nicer and smoother. For instance, if we look at the smoothest vector field with respect to this Dirichlet energy, then flipping to Delaunay, or doing Delaunay refinement, reduces the vector Dirichlet energy further and further. This leads to a nice open question: is there a vector version of Rippa's theorem? Rippa's theorem, in the traditional setting, says that if you give me a scalar function at the vertices of a triangulation, then the Delaunay triangulation gives the smoothest interpolation, where smoothness is measured by the Dirichlet energy. The analogous open question is: if I minimize the vector Dirichlet energy on the intrinsic Delaunay triangulation, does that give me the smoothest possible vector field? This question is a bit hard to answer because there are many different notions of discrete vector Dirichlet energy; I know of at least five. In fact, we've found that for two of these, you can find a non-Delaunay mesh where the vector Dirichlet energy of the minimizer is less than the energy for the Delaunay triangulation, but in general it's not clear what the answer is.

So let's move on and talk about a more recent and very different approach to tracking intrinsic triangulations; this is work with Mark Gillespie and Boris Springborn. We've already seen a few different data structures for keeping track of the correspondence. One is to explicitly store the common subdivision as a half-edge mesh at all times; the other is to use these implicit signposts. The explicit representation was nice because we always got the combinatorics right, but it was expensive to track all the time. The signposts were nice because, conversely, they're cheap to keep track of, it's an implicit representation, but they can fail due to floating point error during the tracing operations. Normal coordinates offer the best of both worlds: we get the exact combinatorics of the common subdivision, but it's an implicit representation, we don't explicitly know where all the crossings are at all times, and that means we get an output-sensitive algorithm: the work required to extract the common refinement at the end is proportional to the size of the output, not to all the intermediate operations.

To understand how this works, it's best to start by thinking about a single curve. The basic idea of normal coordinates for curves is to count how many times a given curve on a surface crosses each edge; for edges that don't get crossed, we store zero. I'll use n to denote these normal coordinates. We make one very important restriction, which is that the curve is not allowed to enter and exit through the same edge of a triangle; that's what we call a normal curve. Normal coordinates were originally developed to study not curves but surfaces embedded in three-manifolds, and they show up in several different places. For instance, in topology you might use normal coordinates to study what's called the mapping class group: every surface has all sorts of interesting automorphisms, like a Dehn twist that maps the torus to itself, and if we apply such an automorphism to a triangulation, then an element of the mapping class group can be identified with a path in the flip graph of the triangulation. That's pretty cool, and it's just the tip of the iceberg. From the perspective of theoretical computer science, normal coordinates are a way of compressing curves: the number of bits you need to store a really long, winding curve is much smaller in normal coordinates, and you can think about how to do operations on curves in this compressed format, almost like, I don't know, fully homomorphic curve encryption. We're of course most interested in geometry processing, where we're going to get a robust data structure for tracking intrinsic triangulations, and one important difference is that we'll think of normal coordinates more geometrically than topologically: we'll always imagine that they encode geodesic curves on a polyhedral surface, rather than normal curves on a topological surface.
One basic observation is that as long as our curve doesn't stop or start at a vertex, the normal coordinates satisfy the triangle inequality in each face: 2 + 1 = 3, 4 + 3 > 5, and so forth. Why is this true? Every arc that crosses the triangle enters through one edge and exits through a different one, so the number of arcs coming out of any one edge is at most the number going into the other two. In general we will allow curves to stop and start at vertices, and here there are two useful numbers: we can keep track of the number of edges emanating from corner k and the number of edges crossing corner k, which we'll call e and c respectively. This gives you a feel for the kinds of little formulas we use when working with normal coordinates.
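As one example of such a formula, here is a sketch of the corner-crossing count in the simple case where no curve starts or stops at a vertex; the general case with endpoints, the e and c bookkeeping mentioned above, has extra cases that follow the paper and are not shown here.

```python
def corner_crossings(n_ij, n_jk, n_ki):
    """Number of curve arcs that corner around each vertex of triangle ijk,
    assuming no curve starts or stops at a vertex. An arc cornering around k
    crosses edges jk and ki but not ij, which gives
        c_k = (n_jk + n_ki - n_ij) / 2,
    and similarly for the other corners."""
    c_i = (n_ij + n_ki - n_jk) // 2
    c_j = (n_ij + n_jk - n_ki) // 2
    c_k = (n_jk + n_ki - n_ij) // 2
    return c_i, c_j, c_k
```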
The remarkable and useful thing about normal coordinates is that we can go the other way: given just the number of crossings on each edge, we can recover the curve itself. How does this work? Each time we enter a triangle there are a few different cases, where we're basically deciding: should we go left, should we go right, or should we stop at the opposite vertex? We can turn this into a little algorithm. This tracing procedure is pretty simple, but so far it gives only combinatorial information, namely which sequence of edges is crossed by the curve. To actually get the geometry of the path, we lay out the strip of triangles we pass through in the plane and draw a straight line between the endpoints. We know this line must be contained in the triangle strip, because we know exactly which edges we cross, and we didn't cross any of the boundary edges of the strip. Then, to get the geometry, we find the intersections of this straight line with each edge, and store them as coordinates between zero and one along both the traced (blue) edge and each of the black edges it crosses. What's nice about this approach is that we're guaranteed to get the right edge sequence: all of the tracing decisions were made based on integer data. The only possible floating point error is in the line-line intersections, so we might get the barycentric coordinates slightly wrong, but in the absolute worst case we can clamp them to [0, 1] and still get a path with the right combinatorics. This is very different from the situation with signposts, where a trace could in principle fail to even enter the neighborhood of the vertex we're trying to hit.

So far we've been using normal coordinates to encode a single curve, but there are a couple of nice observations we can make. First, if we have a union of disjoint normal curves, then its normal coordinates are just the sum of the individual normal coordinates. Second, the edges of an intrinsic triangulation are disjoint geodesic arcs, hence normal curves. What that means is that we can encode an entire triangulation with a single set of normal coordinates, which is pretty cool. As with edge lengths, these normal coordinates can be updated under an edge flip using a little local update rule. The simplest case is when no curve terminates at a vertex; then we can use a formula that looks like a tropicalized version of Ptolemy's relation for the edge lengths in a cyclic quadrilateral. If we know that our normal coordinates come from a triangulation, we can generalize this formula to the case where curves start and stop at vertices.
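A sketch of that simplest case, curves with no endpoints at vertices; the general update used for actual triangulation edges has additional cases that are not shown here.

```python
def flip_normal_coordinate(n_ij, n_ik, n_jk, n_il, n_jl):
    """Normal coordinate of the new edge kl after flipping edge ij (shared by
    triangles ijk and jil), in the case where no curve terminates at a vertex.
    This is Ptolemy's relation with (+, *) replaced by (max, +):
        n_kl + n_ij = max(n_ik + n_jl, n_jk + n_il)."""
    return max(n_ik + n_jl, n_jk + n_il) - n_ij
```

For example, a single arc cutting the corner at i (crossing ik, ij, and il once each) gives flip_normal_coordinate(1, 1, 0, 1, 0) = 0: after the flip it no longer crosses the diagonal.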
So this all sounds pretty good: we can track correspondence, we can do edge flips. But it turns out that if we want to know the correspondence between the two triangulations, normal coordinates alone are not enough, and the reason is that after performing a bunch of edge flips, weirdly enough, you might have two intrinsic edges with the same endpoints. On this octahedron, for example, the two highlighted edges both start and stop at i and j, so if you trace one of them out, how do you know which logical edge it corresponds to? The solution we propose is something we call roundabouts. The idea is that at each vertex of the extrinsic mesh we enumerate the outgoing edges in counterclockwise order, and each directed edge, or half-edge, of the intrinsic mesh then points to one of these extrinsic edges: it stores the index of the next outgoing extrinsic edge. The naming is analogous to the signs on roadways and roundabouts that tell you where to exit. If you stare at this for a minute, you realize that roundabouts are a kind of combinatorial analog of the signpost idea: with signposts we stored a floating point number giving the outgoing angle, and now we store an integer telling us something about the nearest outgoing edge. Once we have this data structure, we can correctly extract the common refinement of the extrinsic and intrinsic meshes, which is needed for something like texture mapping. After tracing out all the edges, we do something similar to what we did for signposts: find all the intersection points, connect them into segments within each triangle, and extract the polygons. The difference is that the connectivity we get is always correct; the ordering of the intersections is always known exactly from the combinatorial tracing operation.

The application that really motivated us to do all of this is discrete conformal mapping. Conformal mapping, or uniformization, is a really powerful tool in differential geometry that lets you construct a canonical map to a domain of constant curvature, or more generally a domain with maybe a few isolated cone singularities. You can imagine taking this head and mapping it to a sort of cuboid that's flat except at the corners; once you have that, you can cut it open and lay it out flat in the plane. A very important result is that the smooth, classic uniformization theorem has a perfect discrete analog: every polyhedral mesh, no matter how awful the input triangulation, can be uniformized in an exact sense, and this is really important for robust geometry processing because it means we have a guaranteed mapping algorithm. This is a problem with a long history and lots of contributors, and some very nice connections to hyperbolic geometry. Very recently we have a new paper on this topic that resolves, let's say, some of the lingering practical difficulties with actually computing these maps. For one thing, it uses this scheme based on normal coordinates and roundabouts to get robust tracking of correspondence; it gives a very nice way of interpolating texture coordinates using the light cone model of hyperbolic geometry; it handles the case of mapping genus-zero surfaces to the sphere, which wasn't done in previous algorithms; and finally it takes advantage of a nice observation that vertex scaling commutes with so-called Ptolemy flips, which simplifies the optimization. This really seems to be the right computational approach; in fact there's another paper, by completely different authors, done concurrently, that takes the exact same approach.

To continue the story about robustness: in practice it's not enough to compute an intrinsic description of the flattened surface; you actually have to extract the mapping between the different triangulations. In theory this is no big deal: all you have to do is construct the common subdivision of three triangulations, the input mesh, its intrinsic Delaunay triangulation, and the uniformized or flattened mesh, and once you have that common subdivision you can compute projective texture coordinates at its vertices. In practice, unfortunately, you can't escape the difficulties of floating point: all the proofs that this works out easily assume you're working with real numbers, and this is a case where, if you really push things, even the signpost data structure breaks down and these normal coordinates really are necessary, especially because we're now intersecting not just two triangulations but three, all on possibly really bad input. Nonetheless, with normal coordinates everything works out beautifully: we can texture map the absolute worst meshes you can possibly throw at it, and we get the right result even with totally crazy configurations of cone singularities and totally awful triangulations, like the ones from the Thingi10K data set.

A major shortcoming of normal coordinates so far is that we really only know how to perform edge flips, but for geometry processing you also need other operations, like vertex insertions and edge splits and so on. These are things that are not typically needed in topology, where you may be happy to work with the coarsest triangulation. So we're currently working on turning normal coordinates into a full-blown practical data structure, and the preliminary picture looks pretty good: for instance, we can recover the common subdivision not only after Delaunay flipping but also after Delaunay refinement, which is much, much harder, and here things work a lot better than with the signpost data structure. The point is to keep pushing the boundaries of robustness of these data structures, so that people working with geometric data can spend less time getting frustrated with meshes and more time making beautiful discoveries.
That's it for today. Next time I'll talk less about data structures and more about the actual algorithms we can run on them. Thanks.
Info
Channel: Keenan Crane
Views: 7,356
Id: w99UcsgkUgE
Length: 58min 57sec (3537 seconds)
Published: Fri May 07 2021