- [Paul] Hi everyone. My name is Paul Ambrosiussen, and today my colleague Mai and I are going to talk about
new and improved workflows for artists, which is
essentially an update to SideFX Labs that we've
done over the past half a year since the last presentation. This is also Mai's first public presentation about Labs since Mai joined the team at SideFX Labs. So please everyone give Mai a welcome. So let's get started. The presentation has
been split up into three parts. First of all, I'm going to talk about some user experience
enhancements we've made to the tool set. Then we'll be talking about
some new additions to the tools. We'll be looking at an
update to some tools, and then I'll be handing it over to Mai who'll be showing you some really cool and awesome stuff that
Mai's been working on. Alright, so let's get started. To begin with, something
that we've added some time ago is something called the Sticker Picker. And the Sticker Picker
is essentially a tool that allows you to drop down
images in your network editor in a very easy and convenient way. So as you can see, we ship a
variety of stickers by default, that you can just place
in your network editor just by double clicking
on them in this interface. This is not new functionality. People have been able to do this using the network editor
images functionality that's been in Houdini
for a really long time. But this is a really nice, easy way, sort of an interface around that functionality, that lets you drop down those stickers and work with them with ease. Since these are network images, you can use all of the
default functionality that the network images have,
such as controlling opacity. You can delete them, you can attach them to
nodes and much more. Another thing that we've added is the ability to paint
in the network editor. So if you, once again,
click on the labs menu and click on paint, you now
get a little paint cursor that allows you to paint in the network editor. Once again, let's take a look at some more of its functionality. You can of course change
the size of your brush. When you press I, you can sample any color that is found in the network
editor to sort of, you know, change the color of your brush. And in this case, that allows me to write some differently colored letters. So what would you use this for? Well, that's a good question. You could use this for annotations. You could use this to
draw some arrows on things and you could just use it
for a whole lot of things. And just make your network
editor a lot more visual and pretty in general. Once again, those are also
just network editor images. So you have the exact same
functionality as the stickers. You can attach them to
nodes, you can delete them, you can control opacity and much more. Continuing on the trend of color. So we also added the ability
to sample screen colors for color ramps. All you really have to do is right-click on a color ramp and click Sample Screen Colors. That will allow you to
sample a color on any screen that is turned on on your computer. Right now I'm just doing it on Spotify, which I had running in the background, but what you could really do is sample from a second screen that you have open for
reference and much more. This functionality is very similar to what you have in Substance, for example, and it's useful for coloring terrains, coloring geometry, or just generally making a ramp of random colors that you can use for any other purpose. So this is a really cool
example of a use case for this functionality. You can open up Google
maps or Google Earth, find the location that you want, and then just sample
those colors and use them on a heightfield. All you really have to do
is use some grayscale value as a lookup for the ramp, and then just assign that as a color.
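As a tiny sketch of that lookup idea (a minimal point wrangle, assuming the height lives in P.y and that you add a ramp parameter named color_ramp to the wrangle yourself):

```vex
// Minimal sketch: drive Cd from a ramp, using normalized height as the lookup.
// Assumes a "color_ramp" ramp parameter added to this point wrangle.
vector bbmin, bbmax;
getbbox(0, bbmin, bbmax);                        // bounds of the input geometry
float t = fit(@P.y, bbmin.y, bbmax.y, 0.0, 1.0); // grayscale-style 0..1 value
@Cd = chramp("color_ramp", t);                   // the sampled screen colors live here
```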
In this case, I'm just using Heightfield Visualize, and that allows me to create these really, really pretty colors right there inside of Houdini. Next up, a new tool: we added the ability
to do booleans on curves. This is something we find ourselves doing quite frequently. Previously, you'd do this using Intersection Analysis and Intersection Stitch, and then some wrangles and some grouping. But since it's a frequent workflow, we figured, why not just
add it as a tool in labs. So you can see there are a
couple of different options. We have intersect, which is going to give you the intersection; subtract, which can, for example, subtract a box from curves; and shatter, which is just going to cut up your curves based on the intersection
with the polygonal object that is fed into the
second input of the tool.
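As a rough sketch of the manual workflow this wraps up (not the node's actual internals), after an Intersection Stitch you might group the curve points that ended up inside the cutter, assuming the cutter on the second input is a closed mesh with point normals; the group name here is made up:

```vex
// Point wrangle sketch of the old manual grouping step.
// Input 0: stitched curves; input 1: closed cutter mesh with point N.
int prim;
vector uv;
xyzdist(1, @P, prim, uv);                  // closest location on the cutter
vector cp = primuv(1, "P", prim, uv);
vector n  = primuv(1, "N", prim, uv);      // assumes N exists on the cutter
if (dot(@P - cp, n) < 0)                   // behind the surface means inside
    setpointgroup(0, "inside", @ptnum, 1); // feed this group to a Blast or Split
```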
Next up, continuing on the UX trend: HDAs are a major part of Houdini, right? We all make them, we all use them, but it's quite difficult to
create them and maintain them, and it's especially difficult to increase versions of HDAs, right? So let's say you're working with an edge damage tool. You did some work on it and you want to increase its version, so now you want to have a version two of this tool. What would you do? Well, first of all, you'd find your subnet or your HDA, you'd open the Asset Manager, you'd press duplicate, you'd rename the tool and you'd hit save. But the problem there is that the duplicate would not have any of the changes that you've made in your network editor, right? The changes you've made
would either have to be baked into the version one that you had, which then would also go to version two, which is not what you want. When creating an HDA,
you'd have your subnet, you'd say asset, create new digital asset from node. You'd get this little menu here. We can set the operator
name, operator label, and where to save it. You save it. And then you'd have to go to
the type properties window and change a lot of properties here. So just icon tab, menu, inputs,
and other things like that. So in general, it works, it's acceptable, but it's difficult UX. So what we wanted to do
was we wanted to make it a lot easier. So we introduced something
called versioned digital assets. What that allows you to do is very easily
wrapped up that HDA workflow into a single menu or two menus actually. You just right click your node, which can be a subnet or
an already existing HDA. You go to version digital
asset, you click save as, and that brings up this window here that allows you to create
a digital asset out of it. You can control the type; you're able to specify the user namespace or the branch namespace, or both if you want to. You can set the version. And the nice thing about this
is that if you were to do a save-as on a node that already exists, it would know that there is a version one already, and therefore it would give you a namespace that does not conflict with any other node that you have loaded in your environment currently. You could set the label, you
could set the tab menu entry, set the save path, and all of these of
course have dropdowns so you can see what is already there, making it a lot easier for you. We also have a preferences window, which allows you to control
some of the preferences that this window will use by default. So you can, for example, set
the default user namespace. You can set custom branch namespaces. You can control whether or not the branch should be put in the label
of the tool by default, you can set a default menu entry. So if you know that you're
always going to create tools that have to be in a specific tab menu for your studio for example, you're gonna set the
default one right here. Additionally, it also
has some other things that you might want to be set by default, such as the save location and these other toggles. So just give it a try, and you'll see that it'll be a lot easier to use, create and manage HDAs in general. Additionally, we're also happy to say that all of the Labs tools are now fully Python 3 compatible. They have been for a little while, but we've been doing additional tests to make sure that they actually are. And so far our tests
have returned positive. So according to the VFX
Reference Platform 2021, we are fully compliant. Then another new tool that we added. So we added the extract image
metadata TOP node. So since we're all getting more into TOPs, a thing that we figured was very useful is the ability to extract
metadata from images. So think of this use case, you have a bunch of images on disk and you would like to know how many of those images
have a specific resolution, or how many of those images are larger than a specific resolution. Or you'd like to filter
out all of the images that are using a specific color model or have a specific bit depth
or number of channels, right? Find all of the images that
do not have an alpha channel or find all the images that
do have an alpha channel. Well, this node very easily
takes input images from, for example, a File Pattern node, processes those using iinfo, and gives you all of this information as metadata. Very convenient. Next up, another TOP tool: we added the Concatenate Text node. This is very useful if, for example, you
have a file pattern node or some other node, like for example, the filter by state node
that creates a bunch of logs containing errors of work items that you had upstream; you would get tons of text files on disk that, you know, ideally you'd want to have
in a single text file so that you can print it out in a log and send it to someone
through an email, for example. Well, this node just
takes all those text files and concatenates them
into a single text file. You have the ability to
control a couple of things. The most important one probably
is this New Line Each Input toggle, which just makes sure
that every single input gets put on a new line. So if you have a line of text or a paragraph of text here, and another paragraph of text there, you probably want to make sure that those are not directly attached to each other, which in this case would be testing 1 1 1 1, testing 2 2 2 2. In this case, you just
split it into a new line. Next up another really cool tool. This tool was made by
Simon on the Labs team; it's called Labs Edge Damage. And it literally just
does what the name says. It adds edge damage to your geometry. So for example, you can create
this low poly geometry here, and you wanna make it look old. You wanna make it look worn out. Well, you just plug it in
and have it process, and it gives you something that looks decently nice, like that. You have control over the
amount of damage that you want to sort of make it worn
out at a specific stage. You have a choice of
three different modes of adding edge damage to your geometry. You can use the VDB mode, which will convert this to a VDB and use noises to add detail. Additionally, you can use the boolean mode, which is going to use boolean operations to cut out these edge damages. The nice thing about this mode is that you'll preserve a lot of the original topology, which is what you'd lose with the VDB mode, because there you're creating a completely new set of topology. And then the last one is color, which is essentially the same as the boolean mode, with the difference that it does not actually do a boolean of your geometry; it just adds vertex colors to the edges of where the damage would be. So this is, for example,
useful for creating masks that you would want to use for baking, or for any sort of shader operations that you might want to do in your game. This pillar here is the same as this one, with the exception that we just added some color to make it look older. Alright, next up another really
cool content creation tool that we added recently, which is the Simple Rope Wrap. It's, once again, a very simple tool. All it really does is just wrap some ropes around the object. So let's say that we have
this pig-head object here, and we plug in some grids
into the Simple Rope Wrap. It would then use those as cutting surfaces to cut essentially a ring around the object, and use that to create ropes. You can control the
thickness of the ropes. You can control the rope profile. You can even have it be simulated to get a more organic-looking result, and it produces UVs and everything else that you need for a rope as well. It does polyreduction as well, giving you the ability to
create a nice game ready rope that you can just import
into your game engine. This tool also works in Houdini engine. So if you, for example,
want to use this tool inside of Unity, inside
of Unreal, inside of Maya, or inside of 3ds Max, you can import that tool in those Houdini Engine plugins and it will work there as well. Then another really useful
tool that we often use has to do with baking. So we've had the maps
baker for a long time, which is really a great tool for extracting attribute
data into textures, but we wanted a more
procedural method as well. Procedural in the way that it is live. And you're able to use those textures without actually having
to bake them to disk. Well, that's why we created
this Attribute Import COP. What it does is essentially use the same technology that the Maps Baker does, with the difference that this is a COP node that is live; it's not baking things to disk. So you could use this to, for example, do any sort of shader
operations inside of Houdini using a COP node as input. You could do other compositing operations inside of COPs itself and
just do crazy stuff with it. The nice thing is that
it uses an attribute to do the sampling with. By default, of course,
that is the uv attribute. But if you want to use, for example, a second UV channel or a third UV channel, or any other type of attribute that should be considered UVs, you can just change it there. You can set the attributes per channel that you want it to sample. So in this case, by default,
as you saw in this video here, it is going to put the geometry color into the RGB channels. And then you could, for example, stick some additional data into the alpha channel, such as pscale or alpha or anything else that you might have.
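As a loose sketch of the underlying idea (not the actual Labs implementation), VEX's uvsample() can look up a geometry attribute at a UV position, which is essentially what a live baker has to do per pixel. The geometry path here is a placeholder:

```vex
// Inside a COP VEX context, where X and Y are the normalized pixel
// coordinates: sample the geometry's Cd at that UV location.
vector uvpos = set(X, Y, 0);
vector col = uvsample("op:/obj/geo1/OUT", "Cd", "uv", uvpos); // placeholder path
R = col.r;  // write the sampled color into the image planes
G = col.g;
B = col.b;
```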
Next up: calculating UV distortion. Houdini has had the functionality to show the amount of UV distortion on your UVs and on your geometry for quite a while. You were able to access
those using visualizers. But the downside of those visualizers is that there was no way to really sample those visualizer values as attributes. So what we've done is we've taken the same source code that calculated the values for the visualizer and put it in a node itself; we just wrote the same algorithm in VEX. And that gives you this UV distortion primitive attribute value that you could use to, for example, insert additional seams, because you now know how
much distortion there is. Or you could use it to, for example, control the UV layout, right? You might want to take the shells that have a high amount of distortion or a high amount of stretch in use, and modify the amount of texel density they are going to get in the UV layout.
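As a loose sketch of what such a measure can look like (the Labs node ports the visualizer's actual source code, so take this only as the general idea), you can compare world-space against UV-space areas per triangle:

```vex
// Primitive wrangle on triangulated geometry with vertex uvs (a sketch only).
int v0 = vertexindex(0, @primnum, 0);
int v1 = vertexindex(0, @primnum, 1);
int v2 = vertexindex(0, @primnum, 2);
vector p0 = point(0, "P", vertexpoint(0, v0));
vector p1 = point(0, "P", vertexpoint(0, v1));
vector p2 = point(0, "P", vertexpoint(0, v2));
vector u0 = vertex(0, "uv", v0);
vector u1 = vertex(0, "uv", v1);
vector u2 = vertex(0, "uv", v2);
float area3d = 0.5 * length(cross(p1 - p0, p2 - p0));
float areauv = 0.5 * abs((u1.x - u0.x) * (u2.y - u0.y)
                       - (u2.x - u0.x) * (u1.y - u0.y));
// A ratio that varies across primitives indicates stretch or compression.
f@uv_distortion = area3d > 0.0 ? areauv / area3d : 0.0; // hypothetical name
```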
Now that we have the ability to calculate the UV distortion, we wanted to push it a little bit further. We wanted to find a way to automatically reduce the UV distortion based on those attribute values that we calculated.
called remove UV distortion. And what it does is it
basically finds shells with high amounts of distortion, right? So for example, this pocket of distortion here. And it has the ability to
remove the distortion there, by inserting new seams iteratively. So it does essentially a feedback loop, where it tries to remove
distortion over and over and over while making sure that
it doesn't introduce any new distortion. So what you might notice
in this image here is that this UV unwrap that I did was really poor quality, which I did on purpose, by the way. You see that we have UV shells that are essentially tubes, right? So these legs here have
been unwrapped as tubes. And the problem with tubes is that you have these
internal loops right? Inside of this UV shell, where the internal loop
could, for example, be the bottom of the pants, so to speak. And the outside loop would be the top of the pants here, right over the tube. And because there is no seam between the top and the
bottom of that tube, you will have a high level of distortion because you're essentially
flattening a tube, right? And so what the remove UV distortion tool has the ability to do is create a seam between those two nested loops and flatten it again. And once we do that, we can see that we've now removed the UV distortion here around the legs. That's one pass of the remove UV distortion tool. But as you can see, we still
have pockets of UV distortion that we want to remove even more. So that's what the tool can do as well. By just increasing the
number of iterations and setting it to remove hot pockets of UV distortion. And when we do that,
the UV distortion even more. And there you go. So we've taken some UV
shells that originally were quite distorted and we've removed most of that distortion. So especially this shell here, you can sort of see what the tool does is it finds the outside
boundary of the UV shell and it finds the highest
level of distortion inside of the UV shell, and
it tries to insert a seam going from the pocket of distortion out all the way to the boundary. And once you flatten that, you can see that you sort
of get this unwrapping sort of origami like effect happening. So if you go back here, we can see that these
are all round, right? There are no internal cuts. But when we go back to the last image, we can see that it inserted those nice cuts here. And so that's that tool. Give it a try and let
us know what you think. We've tested it on a large variety of meshes, and it performs quite well. The only thing that the tool doesn't handle quite elegantly yet is the ability to remove stretch, right? It's doing a really good job at removing compression wherever possible. So in this case, this is an area where you wouldn't really be able to remove any distortion, but hot pockets like these, for example, it does really, really well. But stretch is something
that is a lot harder to do. And therefore the tool doesn't quite do that well enough yet, but that's something
that we're looking into. But please, please provide
us with any feedback you might have. Next up, we have the
Skinning Converter SOP. So this is a tool that has
been in labs for a long time, but it used the old way of rigging and skinning inside of Houdini. So what we've done now
is we've updated the tool to use KineFX. So the output of the Skinning Converter is no longer an object-level rig, but an actual KineFX-compatible geometry, skeleton, and animated skeleton. In this case, you can see the first input
is just your bind pose, the second one is your rest pose, and then the last one is just your animated skeleton. In this case with KineFX, that's just a set of primitives and points. And if you plug it into the Bone Deform, you now have the same FBX animation that you had before, now
actually rigged and skinned and deformed using a skeleton. So the input of the Skinning Converter is something that you
would not be able to bring into a game engine without methods such as skinning or
vertex animation textures. But the bottom approach
actually gives you a skinned mesh that you can import into Unreal, Unity,
that supports skinned meshes. Yeah, so that's that, that's the tool. That's my half of the talk. And I'd like to hand it over to Mai now. Thank you. - [Mai] Welcome to the second part of our talk. My name is Mai. I'm very excited to meet you folks. Let's dive right into
Vertex Animation Textures 3.0, which is a big update since version 2.1. Some of the goals we are hoping
to achieve with this update are to streamline the user workflow as well as to add more powerful features. So we started with redesigning the user interface, on the Houdini side as well as the Unreal side. The Houdini side will basically follow a top-to-bottom and left-to-right workflow. And the number of parameters
and options are not necessarily fewer than in the previous version, but what changed is that you don't need to do much manual work just to get things working. It's very easy: you can just use the default settings to get it to work. The new options are extended features for you to customize and do more advanced things. Here on the Unreal side, as you can see, we've got a new interface as well. And a big thing that we got rid of is the list of real-time data that you previously needed to input just to get the animation to work. As you can see in the parameter interface, most of the stuff you've got here is just static parameter switches, used to enable or disable features. You don't have to type a lot of numbers. You just input the
textures and it should work. So where did that data go? The solution we found is
to use two needle triangles, as you can see here in the image, pointed at by the yellow arrows. There are two triangles basically situated at the boundaries of your simulation, and all the information such as the frame number, the padding ratios and all that stuff is encoded in the positional data of those two triangles. An additional benefit is
that because those triangles define the edge of your simulation, you no longer have to worry about the vertex animation getting accidentally frustum culled when you don't intend it to, just because you panned the camera away and the original vertices are not in view anymore. The new bounds ensure that whenever you need to see anything from your vertex animation, the camera will always detect it. We also kept the original
right-click menu scripted action options; we actually added a few more. Now we have actions to set HDA textures as well as non-HDA textures. You do have the legacy
support to use the data table, which comes in handy if you want to use instancing with instanced static meshes. It's also useful if your simulation is quite large in size; then using the data table's numbers can sometimes provide you with a slightly more accurate result. So it's optional for you to choose. One of the big things we needed to fix is the rigid node first-
frame crack problem that's just related to the pivot accuracy. So on the top left, you
will see the 2.1 version, and to the right, with an offset, is the 3.0 version. As you can see, most of the cracks are gone from the first frame. We use several encoding methods, and the blue and red image shows you how the shader dynamically picks whichever encoding method is better. The blue method, instead of using vertex color, uses two UV channels to store the pivot position, and each channel has 16-bit depth. So that's a lot better than the vertex color's 8-bit precision. The red one is a more advanced version, where we did a lot of bitwise manipulation to preserve even higher accuracy while still using 16-bit UV channels.
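Loosely, the flavor of that trick looks like the following point-wrangle sketch; the real encoder is more involved, and the names and exact bit split here are made up:

```vex
// Sketch: pack a normalized pivot into extra UV sets instead of vertex color.
vector bbmin = chv("bbmin"), bbmax = chv("bbmax");   // export bounds (assumed)
vector rel = (v@pivot - bbmin) / (bbmax - bbmin);    // pivot normalized to 0..1
// "blue" style: spread the components across two extra UV channels directly.
v@uv2 = set(rel.x, rel.y, 0);
v@uv3 = set(rel.z, 0, 0);
// "red" style idea: split one value into coarse + fine parts so a pair of
// channels behaves like a single higher-precision number on decode.
float coarse = floor(rel.x * 255.0) / 255.0;
float fine   = frac(rel.x * 255.0);
```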
As you can see, most of the pieces actually ended up using the red encoding method, because on the Houdini side, when the node cooks, it will dynamically evaluate which piece should use which encoding method to give the better result. And that information is carried over to the shader, so the shader can pick how to decode it accordingly. At the bottom you can see two
of our three accuracy modes. The left one is the high accuracy mode and the right one is very high. There's a slight difference here, as you can probably see. We also have a third mode, which is maximum accuracy, using 32 bits instead of 16 bits. That is more memory intensive, sometimes resulting in slower calculation speed. I didn't show that one because, visually, in this example there's barely any noticeable difference between very high and maximum. So I do recommend you
just stick with very high, because it's faster and it's cheaper. The accuracy improvement does not only relate to the first frame. It actually improves the accuracy of the animation throughout
the whole duration. So I did a lot of aggressive
testing during my development. As you can see here, what I tried to do is take the green version, which is an exported static mesh version of a particular floating-point frame (not even an integer frame, something like frame 121.4), and compare that against the multi-colored version, which is the vertex animation version. And I was so glad to see so much z-fighting. That means the accuracy is pretty good. Another thing that's added is interpolation for rigid pieces. So this video, what you see
here is what you would get if you export a fast moving animation and play at a slow frame
rate using the 2.1 version. Basically, there's no interpolation. The standard interpolation for rotation is called slerp, which stands for spherical linear interpolation. This video demonstrates a limitation of slerp: the algorithm needs to figure out in which direction to rotate. Between two orientations in adjacent frames, there are basically two ways in 3D space to rotate from this frame's orientation to the next frame's. You can go one way, or you can go in the complete opposite direction. And the standard slerp needs to limit the rotation to under 180 degrees, so it always picks the shorter path, just for safety. Otherwise, it would get confused
about which direction to turn from frame to frame. You can see the problem here because I color-coded the pieces based on their rotation speed. So the redder they get, they
should be spinning faster. But what you see here is that the red pieces and the blue pieces basically spin at a similar speed, because every piece is limited to only 180 degrees of rotation per frame. So if you're going higher than that, there is no way to correctly interpolate using the standard slerp, and you can see some pieces start to go in the wrong direction as they increase in speed. So the solution we ended up using is a unique method I termed the multi-RPF slerp, which stands for multi
revolutions per frame. It supports much higher
than 180 degrees of rotation; it can basically go several cycles per frame, up to thousands of degrees of rotation. The advantage of that is, now you don't have to worry about how fast your pieces are spinning. You can just use our new
tool to export the animation and you can play your animation in shader at a much slower speed, and you still get pretty
accurate and smooth rotation. This method uses the angular velocity as an assistant, to help decide which way the pieces should be spinning and how fast they should be spinning.
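A minimal VEX sketch of the idea (the shipped version lives in the material and handles more edge cases, such as taking the long way around when the velocity says so; all names here are assumptions):

```vex
// Point wrangle sketch of multi-revolutions-per-frame slerp.
// p@q0 / p@q1: piece orientations at the two surrounding exported frames.
// v@angvel: angular velocity in radians per second, exported per piece.
float dt = chf("frame_dt");          // time between two exported frames
float t  = chf("frac");              // 0..1 position within that interval
vector4 qs = slerp(p@q0, p@q1, t);   // standard shortest-path slerp (<180 deg)
float theta_short = 2.0 * acos(min(1.0, abs(dot(p@q0, p@q1))));
float theta_total = length(v@angvel) * dt;   // true swept angle this interval
// whole extra revolutions that the shortest path cannot represent:
int revs = int(rint((theta_total - theta_short) / (2.0 * PI)));
vector4 qextra = quaternion(2.0 * PI * revs * t, normalize(v@angvel));
p@orient = qmultiply(qextra, qs);
```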
But there is more. As you can see here, this is using the multi-RPF slerp. The simulation here is
sort of like a rock cyclone, like a cool magical effect: just a bunch of rocks rotating around in a tornado formation. And because there's such a gap between adjacent frames, you can see that the pieces are actually contracting. The contraction reflects that the pieces are trying to go from one frame's position to the next in a straight line, while in actuality, they're supposed to be going in a circle. So what we want to achieve
is to somehow find a way to reconstruct the curved trajectory based on limited frame information. We can do that using acceleration. The VAT ROP can now internally compute acceleration and export it through the color texture. You can then pick that up on the shader side and use the acceleration to help you compute the original curved trajectory.
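A sketch of that reconstruction, assuming per-piece velocity and acceleration are available alongside position (attribute and parameter names are hypothetical): instead of lerping positions in a straight line, evaluate a quadratic within the frame interval.

```vex
// Point wrangle sketch: curved in-between positions from velocity and accel.
// v@p0: pivot at the earlier exported frame, v@v0: its velocity there,
// v@accel: the acceleration decoded from the color texture.
float dt = chf("frame_dt");      // seconds between exported frames
float t  = chf("frac") * dt;     // seconds into the current interval
v@P = v@p0 + v@v0 * t + 0.5 * v@accel * t * t;
```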
So this is a much better result, and this video shows the comparison between the smooth version and the contracting version. Now compare this, which has interpolation turned on, against this version playing at the same speed with interpolation turned off. You can see this
is basically unusable, because there's such a time
gap between the frames, as this animation is playing at just 5% of its export speed. Now this opens up the opportunity for you to basically run your simulation,
then export it through VAT using a much lower frame rate, and just turn on interpolation on the real-time side. This saves texture memory, because there are fewer frames you need to export, and you will still get smooth interpolation in your game. Another thing we added is support for tangent space normal maps, which is a feature that was missing from the previous versions. In order to support that, we basically need to add tangent data to the shader as well as normal data, because with just the normal data, your tangent space normal map can essentially spin around the normal vector. There's another degree of freedom that needs to be controlled. So we swapped out the
original normal map export, so that basically every mode now uses the rotation texture, because the rotation texture is a more compact form of storing both the tangent information and the normal information, as just a quaternion. The quaternion can describe both degrees of freedom simultaneously using just four texture channels instead of six. That gets decoded on the shader side, and you can apply your artist-authored surface normal to the result of the vertex animation rotation, either per piece or per point.
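Conceptually, the decode applies the stored quaternion to the rest-pose frame. A VEX-flavored sketch of that step (the real version is HLSL in the material functions, and these attribute names are made up):

```vex
// Sketch: one quaternion re-orients both the normal and the tangent, which
// pins down the extra degree of freedom that a lone normal leaves open.
vector4 rot = p@rotation;                  // decoded from the rotation texture
v@N       = qrotate(rot, v@rest_N);        // rest-pose normal
v@tangent = qrotate(rot, v@rest_tangent);  // rest-pose tangent
```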
On the left side is the vertex animation version, and on the right side is just a static mesh version. And you can see the result is pretty much identical. Now, that is not limited to the soft node. We basically have blanket
support for tangents and normal maps for all modes, including fluid. For fluid, that means we're also providing native support for UVs. There are two ways to handle UVs; it's better to determine yourself how you want to UV your fluid. You just send the UV data to the channels we offer as options, and that will be sent to the shader, and the shader then decodes
it and applies the textures. So you can get your normal map as well as other textures onto your fluid mesh. Fluid is also not limited to actual fluid-type animation; it's basically dynamic remeshing, so I slightly renamed the node. It's just called Dynamic Remesh. You can basically get
any mesh into the shader using fluid mode. The additional advantage
is that the fluid mode has very good preservation of the original normals you assigned. With the soft mode, you need to make hard edges into duplicated points so that each point can have unique vertex data or vertex normal data. For fluid mode, because we're basically assembling triangles, you can export smoothly shaded versions like this one, or you can export very harshly shaded ones, like a font turned into geometry with its hard edges, and the edges of the fence; that works as well. We try to strike a balance
between ease of use and extendability, because a lot of times people request more ability to do things with VAT. So don't let this intimidate you in any way. The essence is that if you just need normal usage of VAT, you can just hook up some of the standard outputs or use the standard parameters, and that will get the job done. But the additional long list of parameters and outputs here is mostly for you to extend things in your own way and make cool and crazy stuff happen. We have provided very detailed tooltips and explanations for everything, from the parameters in VAT to all the parameters in the shader and the material functions. So you just need to hover your cursor over them and see what they do, and how to use each option in relation to the other options. So, one example here
is to do a customization with the rigid node. This is actually a rigid node, but you can see the pieces stretching, because we are stretching the pieces by their velocity. We exported the velocity through the color texture, and now we can use the velocity to fake basically a motion blur effect. In animation, this is sometimes deployed to create an illusion of motion blur. It also kind of looks like a soft body, even though it's a rigid body. Another thing we could
do with the particle node is that we now support more features that come with standard particle systems. So we can have scale by velocity, and align the orientation of the particles along their trajectories, as well as adding custom rotation or per-particle randomness. We also ship additional material functions, such as transforming velocity or computing randomness, that you can use together with the other functions to achieve some of these additional results. We didn't hard-code everything, because some of the implementation
needs to be flexible. So, a lot of this will be
covered in upcoming tutorials that will go into details of
how to use these features. Another thing that we
added for the particle node is card shapes. Instead of just a square card, you now have default options of square, triangle, as well as hexagon cards, which can help you better prevent overshading, or overdraw. Especially if your visible particle is some sort of circular shape, then you might not need to use a square; it might be more efficient to use a triangle or hexagon. We also added custom shapes. You can literally just hand-draw a curve, and the ROP will turn the curve into a custom card and give you an export you can use in the shader. You can even include multiple cards. So this is a single static mesh; just use different shape IDs and you can identify and
differentiate the different shapes and assign different weights, and that controls the distribution between them. So here I had a little fun putting together some demos for you guys to showcase the new VAT ROP's capabilities. One thing I'm trying
to do here is basically to push the limits by using some almost film-level assets, sometimes with hundreds of thousands of pieces all over them, like this. But the idea is to really see how high the ceiling of our quality is. Then from there, we can go on to optimize and make smart decisions about the trade-off between the visual quality you want to hit against the performance you can afford, based on your game,
based on your platform. Now, credits go to the kind people who provided me with the assets and the simulations. So this is the soft node. The previous one was the fluid node. This one is also an asset with essentially film-level density. But that's a lot of fun. This whole thing is actually
three different assets put together, but you don't see the seams between them because they align pretty
accurately together. This is a Pyro sim I did against a background from the Epic Games environment. It showcases the new particle VAT node's ability to use scale by velocity as well as the color export, and a lot of extendability utilizing the power of [indistinct] and a volume sim. In this example, I used the volume sim
to drive the Pyro sim, to get a more fluid-like motion. Here's another film-level example. There are probably hundreds of thousands of pieces here, maybe even a million pieces, I don't quite remember. It's actually multiple assets put together, and they merge together pretty seamlessly. In this illustration, it showcases the difference between the book pieces
as well as the concrete and glass pieces. So this is also with
interpolation turned on so we can get smooth motions. Personally, I love this video
when I put it together. What I did here is essentially take the same exported rigid body animation, but slow down time so much that it almost looks like it's frozen, while it's actually still slowly moving. And you can see some of the pieces in front of you are still spinning, whereas the pieces at the back are almost stationary. That means the pieces at the front are spinning very fast, and we can accurately support that. This particular recording was playing at one fiftieth of the export frame rate, so it just shows you how much you can do with
a limited number of exported frames. Speaking of effects: we also did updates on Pyro systems. We upgraded our Make Loop tool to 2.1. In 2.1, one of the
major things we did is to support the tool properly for Niagara particle systems. Basically, we need to do some manipulation of the particle id, age, and life attributes to make Niagara systems read the particles properly so that they can loop and interpolate correctly.
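Loosely, the flavor of that fix-up in a wrangle (not the actual Labs implementation; parameter names are made up) is to wrap the age at the loop boundary and offset ids per cycle, so boundary-crossing particles read as fresh spawns:

```vex
// Point wrangle sketch: make a looped particle cache read as continuous.
float looplen = chf("loop_length");           // length of one loop, in seconds
int   cycle   = int(floor(f@age / looplen));  // which loop cycle we are in
f@age = f@age - cycle * looplen;              // wrapped age
i@id += cycle * chi("max_id");                // unique id per cycle
f@nage = f@age / max(f@life, 1e-6);           // normalized age stays in 0..1
```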
So once that's figured out, it's very easy to use the tool itself. You can just create a Pyro sim, drop down a Make Loop, and it will create the loops for you. Then you can just plug it seamlessly into a Niagara system. We also upgraded Volume Blend. Previously, volumes
didn't blend very well if you had dynamically resized volumes. Now that's properly supported, and here I'm testing it
with a high-res volume, and the result is pretty satisfactory. On the subject of Pyro sims, one thing that's pretty useful in day-to-day life is to have some basic tools to control some special things, such as random directions. So I created a Random Direction node. It's not very technically sophisticated; it's just one of
those quality-of-life tools that help speed up your workflow. So here are some of its abilities: you can do basically pure random directions, or you can do directions radiating from a reference geometry or reference point, or it can radiate the directions from a reference axis, or from a curve, or from something like the closest position on a reference geometry. That last one is basically the same as the curve mode; it just picks the closest
position and radiates from there. The square example uses the closest point on the reference geometry, so all the arrows are pointing from the corners of the cube.
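A tiny sketch of the two simplest modes, assuming the result goes into a dir point attribute (parameter names are made up):

```vex
// Point wrangle sketch of a random-direction helper.
// Mode 1: uniform random direction on the sphere.
vector2 u = rand(@ptnum + chi("seed"));
v@dir = sample_direction_uniform(u);
// Mode 2: radiate away from a reference point instead.
if (chi("radiate"))
    v@dir = normalize(v@P - chv("ref_point"));
```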
We also have a two-direction mode, where you can randomly control the distribution between the two directions. Here is another companion tool,
and artist friendly way to select randomly or in a periodic way. You can do periodics
with constant intervals, or you can do random
intervals where it can create some very interesting patterns between. Just a skip and select that
pattern between each period is totally random within your given range. Another tool we'll be releasing is currently called match surface. Maybe it will become
another shrink wrap mode but in the future. What we want, especially
a Slerp level solution to take any typology and wrap it against a target
typology as you see here. The shaking here that you see
is that you try to target... To try to get closer to
the hardware as it tries to resolve itself. There's a difference between
the new nutrient crab or match surface on the left and traditional shrink-wrap on the right. As you can see, they
produce different results. It could be useful when you
want to create some collision or dissolve completely up. Convert the geometry into
a single mesh and use a clean topology to wrap it around it. Another geometry level
tool that we made is symmetrical PolyReduce. This is to... It's then the ability of hoodiness is pretty powerful PolyReduce Slerp. One limitation of the
PolyReduce as currently stands is that it doesn't have internal support for keeping the symmetry if your input geometry is symmetrical. So basically we created a rocket to provide them what around. This could be very
helpful with characters, especially because characters tend to... You to tend need them to be symmetrical in the PolyReduce version. So the major challenge
is how to figure out how to preserve the UV islands. Because the solution we used
is to clip that geometry in half and just PolyReduce half of it, then mirror that to the other half. But then the other half's
UV items, will be destroyed. So we'll pre-process the
UV items and trace along this conference of each ardent and unpack a triangle, try
three points to form a triangle. Then using the triangle, we can pretty easily define
the relative transform between two mirrored islands and
color-coded here in red. And then we... After the PolyReduce we could restore that transform. So here is an engine
example for high res version versus a PolyReduced version. Here is what it looks like with
the area properly preserved. As you can see the mesh and
the geometry is symmetrical, but the texture is not symmetrical. We're happy to announce
that SideFX Labs now have an ArtStation page. We want to make ourselves
more available to the artist because this is a place
where artists hang out and check out other people's
works pretty frequently, and we want to make
ourselves visible to you and just keep you updated
to what we've been up to. We also update pretty
regularly of what we're doing. On the side effects forum, we have a pin post that we
document each releases, updates, and, bug fixes. You can check that out and figure out if there's some new tools
you need to upgrade, or if there's problems
that you want us to solve, you can send an email to
support and [indistinct]. Now, thank you so much for
coming to our presentation and hope to see you again soon. Next time.