Sea of Thieves: Tech Art and Shader Development | Valentine Kozin | GDC 2019

Captions
[Music] So, thank you all for coming. I'm Valentine Kozin, a principal technical artist at Rare, where I've been for the past six and a half years or so; most of that has been spent working on Sea of Thieves. Some of this talk will be a high-level overview of tech art and shader development using Houdini, and I'll also dive down into the nitty-gritty details of some of the actual hip scenes we've constructed in Houdini along the way. The pace will be quite brisk, so feel free to take pictures or videos if you need to. There will be code which I don't really have time to leave up on the screen long enough for you to digest properly, and obviously this is all being recorded, so you can go back to it for reference later.

Normally I start with the Sea of Thieves release trailer, but in fact it was our one-year anniversary yesterday, and we've released a new anniversary trailer which shows off all the beautiful new things we're adding in April. That's okay, the volume is just set low so it wouldn't drown people out in case I wanted to talk over it, which I didn't.

We've been using Houdini in the studio for a while, but mainly on the VFX teams, for rendering out explosion sprites, creating spritesheets, and doing all the usual effects magic. For tech art, we only fairly recently adopted Houdini. It all started with a trip to Escape Studios in London, who put on an introductory course for us: a contingent of technical artists and other artists got hands-on with Houdini and were introduced to its basic concepts. I in particular took it on as a mission afterwards to try and evangelize Houdini within the studio, taking time out from my normal feature work to deliver some of our features with Houdini and to explore it as a new way of creating tools, pipelines, and assets. It was a tough road at times, because for a long while I was really the only extensive Houdini user working this way in the studio, so I had to figure a lot of it out as I went along. Nowadays there are more and more tutorials every year, and the whole process is becoming more accessible to new users of Houdini, which is reflected in how many Houdini users there are out there now.

Primarily I'm going to focus on how Houdini fits into shader development. I used to work more on tools; certainly on Sea of Thieves the majority of my work has been creating graphical features and developing shaders. But when you're developing very complicated shaders, you end up having to create the pipeline for generating the data that goes into those shaders, and that's the sort of thing I'll be talking about. This talk references in parts the talk I did at SIGGRAPH last year about technical art in Sea of Thieves; that one covers the shading techniques themselves and all the tech we developed for them, and it's available on the ACM digital library as a recorded video. We've struggled to find a good way to share it out officially - surprisingly, it's quite difficult to share a 1.5-gigabyte PowerPoint file - but, very cheekily, you can get it off my personal OneDrive, and quite quickly you'll be able to get this talk off my OneDrive as well. So hopefully we'll get it out there a little bit more; otherwise you can just contact me and ask for the links.
So, a little bit of background about Sea of Thieves. We're developing on Unreal Engine 4; we started on version 4.6 and stopped taking updates at 4.10, because we've modified the engine itself quite a bit on our team, so we don't have access to some of the newer bells and whistles that Epic keeps pumping out. We use deferred rendering, and our target platform is Xbox One at 900p, 30 fps. In terms of lighting and shading it's mostly stock UE4 graphics; we've done some extra implementation for cascaded shadows behind the scenes and things like that. We have a fully dynamic time of day and we don't bake out any static light data, so we use Lionhead's LPV implementation for real-time GI; we have maybe three baked-out reflection maps, and that's it. Interestingly enough, because we're targeting the Xbox One, we found our main constraint was its CPU. The main reason for explaining this now is that some of the techniques I'll be talking about make a lot more sense in the context of offloading work from the CPU onto the GPU; that's been a regular theme of the work we've done in tech art.

As for contents: I'm going to start out talking about vomit. Usually I do this one as the last few slides to finish off a talk, because it has good comedy value, but it's actually also really useful for talking about the overall benefits of using Houdini in your pipeline. Then I'll be talking about lightning, clouds, and the Kraken, which are all slightly different aspects of using Houdini for games and the different ways you can plug it into shaders.

But first, a quick high-level overview of why you would use Houdini as a technical artist - because presumably you're using Maya and Max, and you have Python scripts and all these other tools to output data. First of all, show of hands: how many of you are regular Houdini users? Just to see if I'm preaching to the choir... okay, about a 50/50 split, that's good. How many of you write shader code? Interesting, okay.

For me it all started with this, which is how in Houdini you would go about baking your vertex normals into your vertex colors. In a different package, which shall not be named, this might take you ten lines, and it's also going to be extremely slow: it will work fine for maybe 400 vertices, but with a million vertices it will probably take an hour or two. You can do it faster, but that's going to be even more code. For me as a technical artist, this was the thing - because especially if you're writing shader code, or creating material graphs in Unreal, you're used to working with data in a very intuitive, very straightforward way: you can take your vertex colors, sample a texture, multiply things, stick the result into an interpolator, access that data from the pixel shader, and everything's super quick; everything flows into every other stage of the pipeline. The problem is that you can't do that easily in standard DCC packages.
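In Houdini, that bake is essentially a one-liner in a Point Wrangle. Something like this sketch:

```
// Point Wrangle: bake the vertex normal into the vertex color,
// remapping from [-1, 1] into the displayable [0, 1] range
@Cd = @N * 0.5 + 0.5;
```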
Here's another example, a quite common tech-art trick where, for a static mesh, you want to encode a secondary blend shape into the vertex data. There are various reasons why you might not want to run this through your skeletal mesh pipeline, where you can bring in blend shapes: that tends to have a CPU overhead, whereas this is virtually free. There's a little extra memory cost on the vertices, but then in a shader you can access the displacements and a second set of normals in your vertex data and lerp from the default pose to the secondary pose you've stored, and in Unreal there are actual material functions in the material graph which let you do this. But in order to bake out that data from a DCC, you have to use a specific script that someone has made: there's a script for Max that Epic have released, and there's a Maya script where someone has taken that Max script and adapted it to Maya. Both of these are their own tools; they're one or two hundred lines of code, they have their own bespoke interfaces, they're things someone had to assemble over a period of time and then release for other people to use. In Houdini this is effectively a single Point Wrangle node with five lines of code which just distributes the data to whatever vertex channels you want. It's not even a tool; it's just something you do as part of your pipeline. You can export an FBX, plug that into your shader code, and you're good to go.
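A minimal sketch of that wrangle, assuming the base pose is wired into input 0 and the second pose (same topology) into input 1; the attribute names here are mine, and how you split these vectors across UV sets or color channels is down to whatever your exporter and shader agree on:

```
// Point Wrangle: encode a secondary pose into the static mesh's vertex data
vector pose2_P = point(1, "P", @ptnum);  // position in the second pose
vector pose2_N = point(1, "N", @ptnum);  // normal in the second pose

// stash the displacement to the second pose, plus its normal
v@morph_offset = pose2_P - @P;
v@morph_normal = pose2_N;
```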
So with that overview out of the way - the basic reasons why, as a tech artist, you should use Houdini - let's talk about vomit. It started because in Sea of Thieves you can drink grog, and if you drink too much grog you get drunk and eventually start vomiting. As part of that feature, the tech art team was asked to create the technology to render these vomit decals. This is something we could have done quite quickly; there are lots of ways to create decals like this. We could have painted them in Photoshop, or ZBrush-sculpted a vomit puddle and baked out the normals if we wanted to be fancy. But this was quite early in the chronology of learning Houdini and implementing it in our tools pipelines, so I thought it would be a really good learning opportunity. Rather than taking a day or two to create some fantasy vomit decals, I thought I'd take five days out - my manager was thankfully quite obliging - and do this in Houdini in a slightly more elaborate way than we normally would.

The way I've done this: I'm spawning a bunch of spheres with noise from Mountain SOPs on their surfaces, and I'm assigning those spheres different density attributes. In fact, the density attribute is linked to the randomized size of each sphere, so the smaller the sphere, the higher the density, to create that lovely chunkiness you would expect out of a puddle of vomit. Then I just run that through a fluid simulation in Houdini to generate the splatter. From there, I place a subdivided grid of maybe 1024 by 1024 vertices under the splatter. For every frame, I convert the polygonal mesh of the vomit splatter to a signed distance field, and for every vertex in the grid underneath the splatter I sample that signed distance field and effectively bake out a vertex color based on whether it's close enough to the splatter mesh or not. So I end up with these binary footprints for every frame of the animation. (It's got sound; I won't inflict the sound on you.) Then, again in Houdini, I can aggregate that data over all the frames - conveniently there are 255 of them - and bake it out to the alpha channel, such that every value of my alpha channel is effectively the data for one frame of this animation of vomit splattering out from its point of impact. What the original animation was showing just now is exactly this, replayed by running a threshold over the alpha value: the whole splatter animation is baked into that single scalar texture.

After we did the initial prototype, I showed it to our art director, and he said: guys, we don't want people to actually feel sick when they're sick in the game. First, it looks too disgusting; we don't want the chunkiness in there. Second, one of the core art pillars of our game is this painterly style without a lot of high-frequency detail, and the image on the left here has a lot of high-frequency detail; it doesn't fit our art style. So I went back to the Houdini simulation and started tweaking the simulation parameters, changing the density and changing how it creates these balls of vomit which splatter onto the ground, to make the result less chunky and more viscous. What you're seeing here is actually a chronological concatenation of all the different simulations I ran, and as we go along they become more sludgy, gungy, viscous shapes, which our director was happier with; he wanted these more simplified shapes. We were very easily able to go back, tweak the simulation, settle on values we were happy with, and then just alter the seed and bake out a bunch of different variations for the actual final decals. Then, as a little extra step, I was able to tilt the gravity value slightly and get the vomit running down in a particular direction, so now we also have a version of the decals for vomit running down a wall, in case you vomit on a wall.

In the actual game most of this is quite difficult to notice, because the vomit decals end up quite small, but if you exaggerate the mechanic a little it might look something like this. There are a few other shader effects going on on top: we also bake out a thickness map from the simulation, which we use for the actual translucency, and we add some extra normal animation on top to make it look like it's splattering. But that's the basic approach.
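The baking step boils down to something like this sketch: a Point Wrangle on the grid, run inside a Solver SOP so the alpha accumulates across frames, with this frame's splatter SDF wired into input 1. The volume and channel names are mine, and this assumes a default VDB-from-polygons field called "surface":

```
// Point Wrangle on the 1024x1024 grid, inside a Solver SOP
// input 1 = this frame's signed distance field of the splatter
float d = volumesample(1, "surface", @P);
int covered = d < chf("contact_threshold");  // binary footprint this frame

// record the FIRST frame the splatter reaches this vertex; with 255 frames,
// each alpha value maps to one frame, and the shader replays the splatter
// by sweeping a threshold over this channel
if (covered && f@alpha <= 0)
    f@alpha = @Frame / 255.0;
```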
Next, lightning. With lightning we had an interesting problem: we wanted lightning you can see in the distance - underneath the storm you have these arcing lightning bolts - but you can also be inside the storm and get struck by lightning, where it will hit you in the face. So we wanted a solution for lightning that works at very different distances. Normally, I think, lightning is done as billboard planes with some kind of lightning texture applied. That can be good for certain effects, especially if they're always far away, but it doesn't scale that well with distance, especially when the lightning is coming directly towards you; you want to see the three-dimensionality of it. So I decided to try and implement it as a 3D mesh. This is the final result. It was inspired by looking at a lot of high-speed camera footage of actual lightning strikes and studying the behavior of lightning: you start out with a searching phase, where multiple tendrils all come out at about the same rate, trying to find a point where they can make ground contact; then one of these tendrils finally finds contact with the ground and all the other ones fade off, while that main branch thickens, becomes brighter, and becomes your primary lightning bolt; and then everything fades out. By doing it with a mesh we get a lot less overdraw than we would making this out of a texture, and like I said, it scales a lot more easily with distance: once it gets further away, for example, you can artificially push the vertices out along their normals and give yourself a thicker lightning bolt when you look at it from two kilometers away, as opposed to directly in front of you.

The way we did this is effectively using L-systems in Houdini; they helpfully already have a lightning preset. I had to modify it a little because there was an issue with it, and we hooked the seed up to the frame number so you can create as many variations of the lightning as you want. The main modification was adding more rotation to the L-system, because the default preset looks very good from certain axes, but if you rotate to a different axis, all the random squiggliness actually happens in only one direction; from the other direction the lightning is perfectly straight. That didn't work for us, because we wanted these things to look right from all directions.

What I'm doing here - again, this was early in my use of Houdini, and there are likely more elegant ways of doing it - is the brute-force approach: I've duplicated the L-system three times, and I'm getting slightly different data out of each one. In this one I've set the angle variation to zero, so I can take the Y position of every point and bake that down to the vertex; that effectively gives me the distance traversed from the origin of the lightning along each of its branches and tendrils. I've also got one version of the lightning which has a very thick main branch, where I compare the distance of each vertex from the center of the L-system against its current position, so I can bake out a binary value per vertex telling me whether it's on the main branch or not. And finally, the middle version just has a very thin thickness; this is the actual final mesh I'm going to use. Its vertices are pushed towards the center line of each branch - not entirely together, because exporting that would create degenerate geometry, but close enough that it's very thin - and you still have the vertex normals, which you can then use to push the vertices in and out, to give the lightning thickness when you want it. Then there's a Point Wrangle node with some code which takes all three of these meshes - they all have the same topology, so I can very easily grab vertex data from one mesh and another attribute from another mesh, bring them in, and decide where to put them. In this case I'm creating a single UV set: uv.x is the value which tells you whether this vertex is on the main branch or not, and uv.y is the lightning progress, which is just the distance this vertex has traveled along its branch. That's all the data I'm exporting, along with the normals. (I keep forgetting that space does not continue playing a video, so I'm going to have to use this screen to skip ahead.)
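That combining wrangle amounts to something like the sketch below. It's a loose reconstruction rather than the exact production code: the thin export mesh is input 0, the zero-angle-variation copy input 1, and the thick-main-branch copy input 2, with the main-branch mask recovered by comparing how far the thick copy's points have bulged away:

```
// Point Wrangle: merge per-point data from the three L-system variants.
// All three share topology, so point numbers line up across inputs.
vector straight_P = point(1, "P", @ptnum);
float  progress   = straight_P.y;   // height in the straightened copy
                                    // = distance travelled along the branch

// the thick copy's points only move away from the core on the main branch,
// so a simple distance comparison yields a binary mask
vector thick_P = point(2, "P", @ptnum);
int on_main = distance(thick_P, @P) > chf("tolerance");

@uv = set(on_main, progress, 0.0);  // uv.x = main-branch mask, uv.y = progress
```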
The next thing I do is pick out the point at the very top of the lightning stem and declare that the target point, and then there's a little bit of transform maths at the end to align the start point and end point of the lightning bolt, so that the start point is always at (0, 0, 0) and the end point is always at (0, 1, 0) - or, once it's imported into Unreal, (0, 0, 100) - because we want this to be a standardized, one-metre-long lightning strike. You can see there's some transformation code in there for getting it aligned, and it's also flipped so the origin point is at the top. What this means is that we can literally place the lightning mesh at the position of the impact we want and immediately have the lightning mesh coming down from a set distance. Here you can see all the values we've written out, and this is effectively what we'll export: we have all of the data, and the mesh is normalized into the scale and parameters we want.

But what we can also do is preview the shader inside Houdini, and this is how I'd go about templating the shader to begin with: I can actually run the code I want to be running inside a VEX node. What this is doing: there's a time variable, which is just a slider I'm scrubbing back and forth, and as time goes up towards one, all of the vertices get pushed out from the center of their lightning branch, and you get this thickening effect. In combination with the value we wrote out for progress through the lightning branch, you get a progressive effect, where the branches thicken at the core but are still thin at the tips; if I continue playing, you can see it happening now, so you get this growing-tree effect. Then I set a threshold value of 0.7: 0.7 is the point at which everything has run to the maximum value for progress through the lightning branch, which means we know our main branch will have hit its final contact point. There's an if statement in there which switches out the logic a little: the secondary branches start to fade out, while the main branch thickens even further and becomes brighter, which gives us that main thunderbolt surge towards the end. Then, as the time value goes to one, everything fades out.
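The wrangle logic is along these lines. This is a rough sketch with parameter names and falloff constants of my own choosing, not the exact production code:

```
// Point Wrangle: preview of the lightning "shader" on the baked mesh
// @uv.x = main-branch mask, @uv.y = progress along the branch (baked earlier)
float t      = chf("time");   // scrub this from 0 to 1 to play the strike
float thresh = 0.7;           // the moment the main branch makes contact

// all tendrils grow and thicken together until contact...
float grow  = fit(t, 0.0, thresh, 0.0, 1.0);
float width = @uv.y < grow ? chf("base_width") : 0.0;

// ...then secondaries fade while the main branch surges, and all dies at t = 1
if (t > thresh) {
    float after = fit(t, thresh, 1.0, 0.0, 1.0);
    if (@uv.x > 0.5)
        width *= (1.0 + 3.0 * after) * (1.0 - after);
    else
        width *= max(1.0 - 4.0 * after, 0.0);
}
@P += @N * width;   // inflate the thin mesh along its baked normals
```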
This is slow motion of these meshes imported into Unreal with the final code we're using. We've modified it slightly: you'll notice the tips aren't thin while they're searching out; instead we've inverted that function a little, so the tips taper towards the top but are always a bit thicker while searching, which just improves visibility and makes the effect look better. And we're not doing all of the fading just by pushing the vertices in and out; we're also using an alpha mask on the mesh. But it's mostly the same logic running in the shader here. Each bolt is about a thousand vertices at the end of the day, we've got four loaded in at all times, firing at random from the storm, and it worked out quite well; it's extremely cheap to render.

So, next up: clouds. This covers a little of what I talked about in my SIGGRAPH talk, quickly running over how we render them and why we render them the way we do. This was the direction that came through from the concept artists: we wanted very strong geometrical shapes for the clouds; we wanted storms which are volumetric, three-dimensional objects that actually sit in the world, rather than detached images hovering somewhere infinitely far away in the sky; and, as you can see in the image on the bottom left, they wanted skull clouds which would actually hover above an actual island you could travel to. So we knew we had to do something very special in terms of the tech to render these clouds. We did some early iteration using billboards with normal maps to light them, with a combination of alpha and various other data baked out into them, and we experimented with storms that ray marched through various 3D textures to get lighting effects. Ultimately, what we settled on - which gives us images such as this beautiful sunset, from a Sea of Thieves photography subreddit - is effectively rendering polygonal geometry: solid, 3D-modeled polygonal geometry, with filters run over it to soften it and make it look like clouds. That's what later allowed us to add ship-shaped clouds in an update, which is even harder to do than a skull-shaped cloud.

Our main challenge was: how do we make solid geometry look fluffy, and how do we do that extremely cheaply? In brief: we render the cloud meshes to an off-screen buffer, and these render really quickly, with forward rendering and all the lighting done in the vertex shader; it's not even doing any pixel shading. We scale that off-screen buffer down to quarter size and do a very quick single-tap blur, offsetting the standard deviation of the blur by depth so that clouds further away look sharper. We had an interesting problem in that we wanted to save out both depth, so we can do depth-based effects with the clouds when compositing them, and alpha; but to keep things really fast and cheap we only have a single RGBA buffer, so we're left with just two channels for color. So we encode sunlight into one channel and skylight into a second channel, which later down the line proved extremely difficult when they said they wanted the skull cloud's eyes to glow green. On the depth we do a box blur, slightly wider than the Gaussian blur we run on the two color channels and the alpha channel, and this means that for every pixel we then stamp on for the clouds, we have a blurred depth and therefore a kind of blurred world position.

Then there's an alpha-blended quad rendered in front of the camera which samples this off-screen target and composites it with everything else going on in the scene. From that blurred depth we reconstruct a blurred world-space position, and based on that position we sample a cube map containing a hand-drawn noise map, which comes in both large-scale and fine-detail flavors; based on depth we toggle between the two, so clouds in the distance have smaller noise features than clouds up front, which prevents it from looking like a frosted-glass effect. Again, we use that blurred depth to make sure clouds in the distance have sharper alpha edges, whereas clouds up front use the full extent of the blurred alpha pass. And we use the
noise maps to add little swirls and distortions to the contours of the clouds, and we do the same with the colors. We then sample the actual sky color and sun color and get the full RGBA output. The one thing missing here: because we have that blurred depth pass, we also do another pass of exponential height fog, which lets the clouds blend in with the skybox. So that's just a little bit of background; no Houdini there, unfortunately, sorry. If you want to take a picture, go ahead.

What I'm going to talk about now is the base pass: how do we actually render out this geometry in such a way that it's appropriately translucent and extremely cheap to render? Like I said, all the lighting is done in a forward vertex pass, but we want the clouds to look subsurface-y and cloud-like. This is a test hip scene I created just for this presentation; it does mostly the same things as what we actually did for the game, but that was a long while ago. So rather than going through the messy code of me figuring this out the first time around, this version is a little cleaned up and a little more correct; I'd rather give you the correct way of doing things than the actual way we did them, in the hope that it's more useful.

We start by bringing in our cloud geometry. In this case I decided to go for the most difficult case for the demonstration, which is an actual fully detailed ship generated from our galleon. We're doing vertex rendering and saving our data to the vertices, so to verify the technique it's better to bring this closer to per-pixel rendering: we stick a Remesh node on it, which gives us a high-density mesh, and then we effectively just convert it to an SDF and ray march through it. It's a standard Beer's-law kind of ray-marching approach, similar to how a lot of ray-marched cloud systems do it - Horizon Zero Dawn, for example, if you've seen their talk. Effectively, for every pixel, the amount of light is e to the power of minus attenuation times the overall density the ray of light has had to travel through to reach the eye. This isn't modeling the more complicated in-scattering you get in a real cloud; it just goes through the signed distance field, and for every point inside the signed distance field, it ray marches some way towards the direction of the light and adds up all the density along the way. What you're seeing here is a Point Wrangle SOP which is doing that rendering and outputting the result as a vertex color, and because I've plugged the camera position into the wrangle, this vertex-color render updates every time I move the camera in the viewport. As you can see, it's actually really fast to do ray marching in a Point Wrangle in Houdini; it updates really quickly, and this isn't sped up or anything.
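Stripped down to the light-direction march, the wrangle looks something like this sketch. The volume, channel names, and density accumulation are mine (the real wrangle also marches the view ray from that camera parameter, which is what makes it refresh as you orbit):

```
// Point Wrangle: crude Beer's law scattering preview, written to vertex color
// input 1 = the mesh converted to an SDF volume (default VDB name "surface")
vector light_dir = normalize(chv("light_dir"));
float  step_len  = chf("step_size");
float  density   = 0.0;

// march from this vertex towards the light, accumulating interior density
for (int i = 1; i <= chi("max_steps"); i++) {
    vector pos = @P + light_dir * step_len * float(i);
    float  d   = volumesample(1, "surface", pos);
    if (d < 0.0)
        density += -d * step_len;   // inside the SDF: deeper means denser
}

// Beer's law: transmitted light falls off exponentially with density
@Cd = exp(-chf("attenuation") * density);
```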
A nice advantage of prototyping your ray-marching algorithm in a Point Wrangle is that you can really easily visualize what all of your ray-march samples are doing. If you think about it, you have a for loop which creates a new world-space position where you want to sample - in this case your signed distance field, but it could be a fog volume. For debugging purposes, you can add a function which says: for every sample, also create me a point. And at the end you can say: you've got this array of sample points you've created, create me a curve out of them. That's what you're seeing here: you can see the camera up in the top left, and I'm starting the sampling at the vertex currently being rendered; from that vertex, whenever it's inside the signed distance field, it shoots out a ray of samples towards the direction of the light, and you can see exactly where those samples are taken. That makes it really easy to look at this and think "oh god, I'm doing too many samples, I don't need that many", or to add in an exponential function, for example, to bias the samples towards the point of impact when you don't need as many further out.

This is what the code actually looks like; it's pretty compact, and you can see the two for loops running through it and the density function. You'll notice that at the start I've pulled the bit that samples the signed distance field out into a separate function; the reason is that we're going to replace it later with a different kind of sampling. As a side note, you can very easily jitter the direction of the rays heading towards the directional light by sampling them randomly on the hemisphere; that's also just a function that exists in VEX. And you can have a skylighting pass as well. Again, it all runs quite nicely and looks actually pretty good in the viewport.
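That debugging trick, extending the marching wrangle sketched above, looks something like this:

```
// (continuing the marching wrangle; purely for debug visualization)
vector light_dir = normalize(chv("light_dir"));
float  step_len  = chf("step_size");

int debug_pts[];
append(debug_pts, addpoint(0, @P));         // start at the vertex itself
for (int i = 1; i <= chi("max_steps"); i++) {
    vector pos = @P + light_dir * step_len * float(i);
    // ... sample the SDF here as before ...
    append(debug_pts, addpoint(0, pos));    // one debug point per sample
}

// join the samples into a polyline you can actually see in the viewport
int prim = addprim(0, "polyline");
foreach (int pt; debug_pts)
    addvertex(0, prim, pt);
```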
In our case, though, we don't actually want to save out signed distance fields for the clouds. That said, it might be something you want to do in your own projects, especially with newer versions of Unreal, where you have signed distance field representations of all your meshes available if you're using that feature. You could absolutely do it literally this way: take a polygonal mesh, ray march through its signed distance field, and calculate this kind of subsurface scattering through participating media. But we want this system to be really lightweight, so my approach was instead to bake out data for every vertex which represents something of the entire mesh that vertex belongs to. A good starting point there is to just shoot out a whole bunch of rays and see where they hit.

You might notice that I don't actually have any ray-intersect functions in here; it's still for loops ray marching through a signed distance field. This is a neat little trick: ray intersection can be incredibly slow when you're intersecting against very dense geometry, and this ship is pretty dense. I've found that ray marching through a signed distance field can give you pretty accurate results much, much faster when you don't need the pixel-perfect accuracy of "I'm hitting this exact triangle in this exact spot" and just want rough line intersections; it's really good if you're doing occlusion baking and things like that. So in this case I'm casting out about 2,000 rays, and then averaging all of the rays I've cast, biased, I think, by their length, to get a ray of mean occlusion; it's almost like an inverse bent-normal baking process. Then, along that ray of mean occlusion, I do another ray march just to see how far it goes. Again, we can visualize this in Houdini, because we still have access to all of these attributes - we can promote any local variables we're using inside VEX to attributes accessible by other nodes further down the line - so we can see, from this point, all of the rays being cast out.

What we're going to do with all this data is effectively create a lobe out of it, which becomes our internal, per-vertex representation of where most of the mass and occlusion of the ship sits relative to that vertex. I should point out that this was sufficient for us, and it's really low-tech: you could probably quite easily replace all of this with a fitting function for a group of Gaussian lobes, or calculate a spherical harmonic per vertex, and you'd get a much more accurate representation of the mass of the ship to sample. For our purposes this was enough. It isn't even quite a Gaussian lobe: it doesn't have the exp; it does have a power, but the power is four, so it's all just multiplication and addition. You can see what this looks like across different vertices. Then all we have to do is go back to the ray-marching wrangle from earlier - remember we split the signed-distance-field sampling out into its own function - and replace that function with a check: is this position inside this vertex's occlusion lobe? So we end up ray marching through these occlusion lobes instead.
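Compressed into one pass, the baking step might look like the sketch below. This is a rough reconstruction with my own names and weighting: the production setup averaged the rays first and then marched again along the mean direction to measure the lobe's length:

```
// Point Wrangle: bake a crude per-vertex occlusion lobe
// input 1 = SDF of the whole mesh ("surface")
vector sum   = {0, 0, 0};
int    nrays = chi("num_rays");            // ~2000 in our bake

for (int r = 0; r < nrays; r++) {
    vector2 u   = rand(@ptnum * 7919 + r); // cheap per-ray seed
    vector  dir = sample_direction_uniform(u);

    // sphere-trace outward to find the free path along this ray
    float  travelled = 0.0;
    vector pos = @P + @N * chf("bias");    // step off the surface first
    for (int s = 0; s < chi("max_steps"); s++) {
        float d = volumesample(1, "surface", pos);
        if (d < chf("hit_eps")) break;     // re-entered the mesh: occluded
        pos += dir * d;                    // safe step of size d
        travelled += d;
    }
    // short free paths mean lots of mass that way: weight accordingly
    sum += dir / max(travelled, 0.001);
}

v@occl_dir = normalize(sum);               // mean direction of occlusion
f@occl_len = length(sum) / float(nrays);   // how concentrated it is
```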
This is the result you get out of that. We've clearly lost a lot of detail, but a lot of the actual subsurface scattering behavior is still there, and the qualitative look is pretty good. We can do a side-by-side comparison - both with the exact same ray-marching, density, and attenuation attributes: this one is with the lobe data, and this one is ray marching through the signed distance field. We're definitely losing a lot of detail; certainly on thin geometry we lose some of the lighting-up it should be doing, but the overall brightness values are quite similar. The main artifact is towards the back of the ship: it gets darker where it should still be quite bright, because of the way the lobes are oriented; you only really get occlusion in a single direction rather than two. Again, you can expand on this if you want - bake out more data and approximate the volume better - but this worked well for us. Oh, and there's an attribute-blur pass after we calculate the direction of mean occlusion and the length of that ray, which just smooths everything out.

As you remember, we were using a high-density remesh, so if we turn that off we can see how this looks on the original mesh, and if anything that actually helps a little in bringing some of the detail back, because of the way the vertices are distributed. The difference here is that I've copy-pasted all of that VEX code, replaced some of the properties, and it has now magically become HLSL code; this is running in Unreal. The main difference you'll see is a little bit of shimmering, because we're now computing this property in the vertex shader and interpolating it per pixel through the normal HLSL interpolators. If that's a problem for you, you can go back to Houdini and tweak your ray marching to make sure the output is less banded - if I move this code into pixel shader code, you can see there is banding from the ray-marching steps taken. That can be optimized, but at the end of our pipeline we blur and distort everything anyway, so the shimmering isn't really noticeable once you've got the full composited output. And you don't have to ray march towards a single directional light: you can ray march towards a point light, and you get really nice scattering through a whole bunch of different objects. It's worth noting what you lose compared with ray marching through 3D textures like Horizon does: none of these objects have any awareness of the other objects around them. That's something you could potentially add extra shader data for - say, a spherical harmonic of the occlusion around each object - but in our case the clouds tend to be separate objects, so it hasn't been an issue. In fact we use this not only for the directional lighting: on the storms, the lightning you see inside the storm cloud is a point light spinning around, and when it gets close to the surface, its light bleeds out and you see it as flashes of lightning. So that's it for clouds.

The final topic is the Kraken. The Kraken started coming together quite late in production; we released at the end of March last year, and around January we still didn't really have much for the Kraken, so we had to pull together a lot of assets at the same time. One issue was that design wanted the Kraken to wrap around ships, which is kind of mandatory for a Kraken in a sailing game. Another was that the animation graph for the Kraken was quite slow to evaluate: it's a massive tentacle, and it needs a lot of joints to wrap around things and wiggle smoothly. We do run that graph for the freestanding tentacles, when they're not attacking the ship directly, but for the wraps we wanted to save CPU perf. And there are a lot of wraps: design wanted the Kraken to be able to wrap around the front of the ship and around the back, with variations based on which bits of the ship it blocks you off from interacting with, and we obviously needed to make all these wraps look good and prevent the tentacles from clipping through geometry. All of this would have been a mammoth amount of work for animators: they'd have to go through every position, frame by frame, move all these joints out of the way, and get everything lining up with the hull as best they could - and these are quite large chunks of geometry without that much detail, so they don't have much control over what the surface is doing. We were inspired by a SIGGRAPH talk by the people who did Finding Dory, called Finding Hank, where they talked about their simulation pipeline for animating Hank the octopus, who has lots of rubbery tentacles - exactly our problem. So we decided to try the same thing: we create the animation, but we use Houdini to drive an FEM sim to get that animation to physically interact with the body of the ship, and then we use the now extremely popular technique of saving that animation out per-vertex to a texture, and reading the positions and normals back in a shader. Here's an early pre-visualization of combining FEM simulation and animation. It's a sea cucumber, obviously.
But putting that into production was actually quite challenging. This is our hardest wrap position, where the tentacle actually has to penetrate through the back of the sloop, and we got this result a lot. I think the main issue is that if you're using pin constraints on some of the points in your FEM to drive the underlying geometry, and any of those pin constraints get trapped against collision, the solver really struggles to resolve it. It just gets stuck, because the pin constraint keeps pushing into the collision and the other forces don't seem to be enough to move it out of the way. Our original solution was to reduce the fidelity of the collision geometry, which made this work better: it would still clip through some of the thinner geometry, but you get very good wrapping behavior around the bulk of the ship. We then did some additional SOP work to sample the signed distance field of the ship and push vertices out, which resolves those smaller intersections.

This sped up the animation workflow massively. It meant the animators could effectively reuse pretty much the same animation and just roughly line it up where the designers wanted the pose to be. At time of speaking there are 19 wrap animations across three different ship types, and the FEM simulation could just push all the animation vertices out and get the tentacle to conform to the shape of the ship properly. We also have a state machine for the tentacles, so there are frames at different parts of the linear animation which are actually identical, which allows the animation to skip between states - states where it knows there's an identical frame from which it can resume playing.

So, this is going to be a fairly quick run through the graph that generates all of this. We import the animation as an Alembic from Maya, and we have a simplified representation of the ship collision; as you can see, there's a lot of clipping in the original animation. The first frame of the animation is the bind pose, which helps with some of the setup. This is the graph for actually doing all of the simulation. We begin by tweaking the collision mesh a little: we thicken out some of the areas which are really thin, where we'd otherwise get intersections, and we add a little bit of cheeky guide geometry just to funnel the tentacle on its way, so it goes where we want it to go - mainly for this specific animation; honestly, none of the other animations were anywhere near this problematic, they all just work, but this one is a special case. Now we want to create a tetrahedral simulation mesh, so we take the original mesh and put down some PolyFill and Boolean nodes to turn it into a single, unified, watertight mesh, from which we can create a tetrahedral, slightly lower-poly representation to use with FEM - or, as in this case, Vellum: once Houdini 17 came out, I went back and rewrote this to use Vellum, and it's now both a lot more stable and a lot faster. What you see here is that once we have the tetrahedral representation, we use a bounding box to pick out a whole bunch of points down the center of the tentacle, plus a few in its mouth flaps. These points are the backbone of it: they'll have pin constraints applied to them, and those actually drive the animation.
The points more towards the outside, meanwhile, are all simulated. We use a Point Deform node to animate this tetrahedral mesh based on the original animation, and then we start bringing it into the Vellum simulation, with File Cache nodes along the way to prevent duplicating work. So now we're ready to simulate. Here you can see we're applying a bunch of different constraints: we ended up with pin constraints, distance constraints, and tetrahedral volume constraints, and quite late in the game I realized it really helps to have some struts in there to preserve some of the actual shape, which the tetrahedral constraints weren't doing by themselves. Then we've got the Vellum solver itself; it really helps the whole workflow that the Vellum solver lives inside SOPs, so you don't have to do a lot of setup. In our case we did end up relying on one extra little trick to resolve those problems where the pin constraints conflict with the collisions: we added some geometry wrangles into the solver which look up into a volume we've predefined and turn the strength of the pin constraints down, or off entirely, based on a signed distance field. As the tentacle gets closer to the ship, that backbone stops animating and relies on the rest of the animation to push the tentacle through, and that resolves those collision issues, because the pin constraints are basically inactive whenever they're too close to geometry.
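The wrangle amounts to something in this spirit, run over the Vellum constraint geometry each step. The exact wiring depends on your solver setup, and the attribute, input, and channel names here are mine:

```
// Wrangle inside the Vellum solver, over the pin-constraint geometry
// input 1 = signed distance field of the ship ("surface")
float d    = volumesample(1, "surface", @P);
float fade = chf("fade_distance");

// as a pinned point approaches the hull, relax its pin so collision
// response and the surrounding animation can carry the tentacle through
if (d < fade)
    f@stiffness *= fit(d, 0.0, fade, 0.0, 1.0);
```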
So this is the result of the final bake, and as you can see it gets really nice squishy deformation as it lunges and wriggles its way through the back of the ship. At the end we do a Point Deform to get our original mesh back, which makes sure we still have all the UVs and all the other parameters. Cool. There's a little more to this: we do some blending of frames to preserve the identity of those key frames I talked about earlier, to allow the animation to start and resume at those various points - they're identical in the animation, but obviously not identical after a simulation. The other main thing is that this is a very large mesh without that many vertices, so it's crucially important that we preserve normal detail, which means we can't get away with exporting just normals and point positions; we also have to export the tangent data, so we have the full animated tangent frame for every vertex when rendering. We do that with a PolyFrame, which gives us tangents, and we use a vertex split along UVs so that those points are doubled up and we have separate tangents for the two points on either side of a UV seam. There's an additional step, which I'll skip through in the interest of time, where we take that baked-out animation, do a PolyReduce, and bake out another, even simpler representation of the tentacle, which we use for the distant LOD; that has its own set of animation textures, which are much smaller and use a lot less memory. Finally, we use a modified version of the GameDev Tools vertex animation node, which was a really good jumping-off point; effectively, we added a few extra parameters for baking out the tangent animation data. In our case we pack it into the alpha channel of the position texture, which is an EXR, so we have 16 bits of data into which to encode that vector - it really doesn't need to be particularly precise - and it all works quite well.

The other bit, which I believe this video might have already skipped through - oh no, I think it has - is that in the same pass we create collision data. The way we do this is that we specify a range of frames during which we expect the animation to be collidable, and we add all of those frames up: we create one single big signed distance field which accumulates all of those frames, as you can see here, and from it one large collision mesh covering them all, which ensures that wherever the tentacle is in its animation during those frames, the player can't intersect with it. That's the resulting collision mesh you get out of it: a really horrible-looking mass of all the tentacle positions. It's quite ghastly. And there's a little bit here where we create the actual tangent vertex data; it's done basically the same way as the normals, you just pump out a different channel from your vertices. So that's the recap for everything I just said. What it fails to mention is that we actually decimate the number of frames by eight, so we only export 117 frames for an animation which is originally 937 frames, and we do interpolation in the shader, which gets everything looking very nice and smooth - kind of like this. This is all the wraps we shipped with, working in unison and doing their squidgy deformation.

One last little note, just to bring this talk full circle. I talked at the start about how useful it is to be able to bake a second blend pose into a static mesh's vertex data. Obviously you can't do that if an object is skinned - or rather, you can, but you have to go through the proper blend shape / morph target pipeline in your skeletal animation system. If you don't want to pay that cost: well, your tangent coordinate frame is updated for free by the skeletal animation, so what you can do is bake your blend shape data - those vertex displacements - into tangent space instead. That's, for example, how we do the suckers on the tentacle: the displacement vectors are baked into tangent space, then converted back out of tangent space during the vertex shader step and added to the current positions. In this case we have several dozen suckers; if you were doing this with a skeletal mesh you'd probably need multiple blend shapes for all of them, but here we have a single blend shape, and we offset the animation in the vertex shader using vertex colors. This is tech we also reused for the gills on the megalodon. And again, this is extremely simple code in VEX: you do your PolyFrame node, you create a matrix which transforms from tangent space to world space, you invert that matrix, and then you just multiply your displacements by the world-to-tangent matrix, and now you have tangent-space displacements.
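As a sketch, with the displaced "suckers out" pose on input 1 and the tangent coming from an upstream PolyFrame SOP (attribute names are mine):

```
// Point Wrangle: bake blend-shape displacement into tangent space
vector T = normalize(v@tangentu);     // from the upstream PolyFrame SOP
vector B = normalize(cross(@N, T));   // bitangent completes the frame
matrix3 tangent_to_world = set(T, B, @N);

// world-space displacement from the base pose to the second pose
vector disp_world = point(1, "P", @ptnum) - @P;

// world -> tangent: multiply by the inverted matrix, exactly as described
v@disp_tangent = disp_world * invert(tangent_to_world);
```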
So, to wrap things up: I'm hoping you've learned some useful tips, tricks, and techniques, and ways you can combine your Houdini and shader work. If you write HLSL, I think you'll find a lot that's familiar in the way you write code, and in the way you think about your data, once you're working in Houdini. Similarly, if you're a Houdini user who writes VEX but hasn't done any shader work, you'll find that quite familiar too. And with that flow of data from Houdini into HLSL, you can start choosing what you want to do where: if you're writing a shader, you can choose to offload some code into the pixel shader, the vertex shader, or a compute shader, but you can also take code and move it into your tools, or vice versa.

So, the mandatory we're-hiring slide. This is our beautiful campus; it's not actually drone footage, it's photogrammetry. Contact me, or Nigel, our technical art director, if you're interested in asking any more questions. I don't know if we've got time for questions - I suspect not... yeah, I thought as much - but I'll be outside, so you can grab me if you want. Thanks very much for coming, guys. [Applause]
Info
Channel: Houdini
Views: 54,079
Rating: 4.9662833 out of 5
Keywords: gamedev, game development, video game development, procedural gamede
Id: KxnFr5ugAHs
Length: 60min 15sec (3615 seconds)
Published: Mon Mar 25 2019