>>Amanda: Ready to
save 50% on hundreds of Marketplace products? Well, we've got some good news. Now through Friday, April
17, at 11:59 PM Eastern, more than 750 select
products are on sale during the April Flash Sale. With organizational tools,
photoreal and stylized models, material collections,
effects, and more, there's something for everyone. Are you a university student
looking to learn Unreal Engine? Unreal Fast Track is a new
five-week online challenge for students around the
world to learn Unreal, game industry
relevant soft skills, and attend Q&As with game
development professionals. Get all the details
and register today at unrealengine.com/fast-track. With the successful advent
of the Nintendo Switch, developers now have an
exciting, new, unique platform on which to release
their products. For many studios, this
opens up the possibility to expose
previously-released games to an entirely new audience. Read Bulkhead
Interactive's journey on bringing their
2016 puzzle game, The Turing Test,
to Nintendo Switch where they discuss
their process and design decisions to make the
most of Nintendo's versatile handheld platform. Unreal Engine 4.25 Preview 7
has landed on the Epic Games Launcher and GitHub. Now's your chance to test
drive the latest features. Download Preview 7 and
share your feedback on the Unreal Engine 4.25
Preview Forum thread. Raccoons, cars, and marionettes- The creators behind
Backbone, Dead Static Drive, and A Juggler's Tale shed light
on their unique inspirations and what fuels their creativity. And the latest additions to the
refreshed Revving the Engine series from Unwinnable. Dive into their
stories of horror, anthropomorphic
animals, and puppets. Then keep an eye out
for more adventures into the minds of the
creative Unreal community. Now for the Top Weekly
Karma Earners on AnswerHub. Everynone, ClockworkOcean,
FSapio, ViceVersa, Vecherka, WiiFitTrainer, aNorthStar,
Blackforceddragon, Xenobl4st, and Shadowriver, thank you
so much for helping out others in the community. Over to our first
Community Spotlight, The Last Goodbye
by Labidi Wassef. The scene was a
practice in building realistic procedural
landscape materials based on the idea of a hero who had to
leave his homeland for battle. What a beautiful piece. And check out more of Wassef's
work on their ArtStation page. For those of you who would
like to use a 3D mouse or space navigator, try out David
Morasz’s SpaceMouse plugin, which allows you to control
any 3D editor viewport in focus with 3D connection space mice. Find out more about
the plugin on GitHub, and try it out for yourself. Our final spotlight is Alchemy
Story, a casual farming sim in which you play as
an apprentice alchemist where you'll discover a world
of magic, cute creatures, and adventure. Grow your garden, brew
potions, care for your pets, and meet new friends. Alchemy Story is now
available on Steam. Thanks for watching this week's
News and Community Spotlight. >>Victor: Hi, and welcome
to Inside Unreal, the weekly show where we
learn, explore, and celebrate everything Unreal. I'm your host Victor Brodin. And I'm here today with Mike
Lyndon and Paul Ambrosiussen, two technical artists at SideFX. Welcome to the show. >>Mike: Thank you. >>Paul: Hey. >>Victor: Hope you're
doing all right. Would you mind
talking a little bit about what we're doing to
cover as a topic today? >>Mike: Sure. Paul, do you want to go? >>Paul: Yeah, of course. So I'll kick off the
stream with a couple of world building focus topics
such as the building generator. I'll be talking about how you
can take OpenStreetMap data-- which is sort of like a Google
Maps-type website online, but in a public format. So you can just download it, bring it into Houdini, and then use Houdini Engine
to generate buildings, roads, and lots more, straight
inside of Unreal. After that, we'll be
talking about a new tool that we've written to export
point clouds, which you can use with a new plugin that
Epic recently acquired and published for free
for use with Unreal. And then we'll be looking at an
algorithm called wave function collapse and how we've
implemented that in Houdini, so that you can use it for world building-- in this case,
a dungeon generator. And now I'll give
it off to Mike. >>Mike: Yeah, and
I will be looking at the Houdini-Niagara
data interface. So we showed this, I think,
about two years ago now, but since then, Niagara
has kind of come forward in leaps and bounds and
so has the data interface. And so it's a way of getting
point clouds from Houdini into Niagara. And today I'll be showing you
how to use that for fluid Sims, for crowds, for
general setups where maybe the simulation process
can't be run at real-time. And so you can blend
the power of Houdini with Niagara to create
some pretty cool stuff. >>Victor: Sounds exciting. Paul, would you like me
to kick off the slide? >>Paul: Yeah, of course. >>Victor: All right. We are good to go. >>Paul: Awesome. So just a quick
disclaimer for people so they know why I will be
saying "next slide" a lot. Victor is actually
running the PowerPoint. I'm just looking at a screen
share of the PowerPoint I sent him. So please ignore me saying
"next slide" a bunch of times. So first off, shout out
to Crab Champions, which is the image
you see here in front, which is a really cool
game that's in development. Check out their Twitter to
see more information on that. So let's get started. Next slide. So the topics we'll
be covering today in the first segment of the
livestream, like I said, is wave function
collapse, which we see in the top left, which is
used for a dungeon generator inside of Unreal. We'll be talking about the
building generator, which can generate a city,
streets, trees, and lots more cool stuff, which
you see in the bottom right. We'll be talking about the
point cloud exporter, which can bring, let's say, a LIDAR
scan or photogrammetry scan or anything else you generate
in Houdini straight into Unreal. Let's go to the next slide. So before I start talking
about those tools, I need to talk a bit about
where we can find those tools and what they're part of. So these tools are
part of SideFX Labs. And SideFX Labs, as you can
see, is a completely free open-source toolset geared
towards assisting Houdini users with a variety of
tasks commonly used with digital content creation. Our primary focus
with this toolset is still game development,
because the tool used to be called the
Game Development Toolset with Houdini,
but we gave it a rebrand to allow us to expand
a bit on the sort of industries that are using this toolset,
because we've seen lots of use recently in film, TV,
archviz, AR, VR, and lots of other industries. Once again, it's open-source. It's free, so you can
download it, check it out, see how the tools
work, even extend them or build something
else completely from it or learn from it. It has over 150 tools, which
is a lot, on top of what Houdini already has to offer. And the tools mainly
focus on workflow and UX. So it tries to make your life as
a game developer a lot easier. So if you were to,
for example, have to create networks of nodes,
inside of Houdini yourself or a very
repetitive task, there is probably a tool that does
that for you in SideFX Labs. Let's go to the next slide. If you want to learn more about
SideFX Labs besides what I just told you, you can
go to our website. And then under the
Products tab, you can find a new entry
called SideFX Labs where you see this overview, which
also links to a bunch of videos talking about the tools. And there's also a
another tab, which shows the actual tools that
are part of the toolset. So here you can find
those 150-plus tools and a quick description
of what it is they do. So if you want to find a
tool for a specific purpose, you can probably just
go to this website to type in building generator. And it will show you,
OK, here's a tool that generates a building for you. And, of course,
it also has a link to a video that shows you
how to install SideFX Labs. Next slide. So before I begin,
lots of the stuff that I'll be showing
today uses Houdini Engine. And most of you, if
you're a Houdini user and you've been
using Unreal, you've probably used Houdini Engine. I want to give everyone a
quick update on Houdini Engine version 2, because that's
something that people have been waiting and asking for. I'd like to let you know
that in this stream, we're not covering that plugin. The stuff I'll be
showing is using the version 1 of the plugin. But the short
summary is that we're expecting to go into
alpha very soon, but the development is
progressing really nicely. We're testing lots of stuff. But if you want to
know more updates, please go to the forum post
that we have on our website. I think Mike will be posting a
link to that forum post talking about the development
in the chat for you. That's just some information for
those that were looking for it. So what is Houdini Engine? Houdini Engine is
basically an API, which allows you to create plug-ins. In this case, a plugin
for Maya, Max, Unity, and, of course, Unreal
that allows you to take all the procedural assets you've
built inside of Houdini-- let's say a fence generator
or a building generator, which we'll be
talking about today-- which allows you to
play with sliders to modify the properties
of that building without having to modify any
vertices manually yourself, which is the power
of Houdini, right? Proceduralism. If you were to export that
to, say, Unreal with FBX, that would be a static mesh that
you can no longer modify inside of Unreal. Well, with Houdini Engine, you
can take that proceduralism that Houdini has, and bring
that power into Unreal, so that now you have
access to those sliders to modify the number of
windows or anything else like that straight
inside of Unreal. So that reduces the
number of round trips you have to make between
your engine and the DCC you use to create content. Next slide. So first, we'll be talking
about the building generator. So here we see a quick image
of what the building generator could produce for you. So if you go to the
next slide, we'll be seeing this tutorial page. So everything I'll be showing
today about the building generator-- all the files
that I'm showing and running through, which is just an
overview in the stream-- you can find more about
on this tutorial page. If you go to our SideFX website
and look for city building with OSM data, you'll find this
tutorial series or course rather, which has,
I think, four videos talking about all
the different steps that I'll go through today. And it also gives you
all of the project files that I'll be showing today. So if you want to replicate
exactly what I'm showing today, you can. Just go to that website
and download the files. Next slide. All right, so how does that look
of Unreal and what do you get? So as we can see
everything you see here is generated with
Houdini and Houdini Engine. I haven't placed a
single asset manually in this scene that I'm
currently flying through. And on top of that, none
of the object placements, the street placements-- none of
that I've done myself as well. This is actually the city
layout of, I think, Toronto-- of small part of Toronto. Next slide. So how does that look
like and how quick is it? Well, first of all,
let's download the data we need from OpenStreetMap. So go to OpenStreetMap.org, zoom
in to a location like New York, LA. In this case I'm
going to Toronto because that's where the
SideFX headquarters is and I just pick a
random place in Toronto. I don't have a specific
reasoning for this location, it's just where I
landed when zooming in and then click Export. And that allows me
to marquee select a region that I'd like to
export all the street data from. So the streets, the rivers,
the buildings, street names, and everything else
that's on there-- where OSM has data for you-- from. So then, of course, we want
to bring it inside of Unreal because we know just
when to look at a map. We want to have a 3D
city we can run through. So we create our HDA, or
Houdini Digital Asset, which is using Houdini
Engine to cook everything. And we tell it to use that
OSM file to generate our city. And then, of course,
need to do some cooking. It needs to calculate how
do I process these curves into streets and how do I
convert these building profiles into actual buildings
and then, voila, we already have
that exact location that I just downloaded
from the internet as a 3D-playable scene
inside of Unreal. So that, of course,
was very easy. It's, of course, a premade
HDA that you can download, like I mentioned before. So let's take a somewhat more
deeper dive into those HDAs in the next slide. All right, so what is it that
Houdini generates and brings insights of Unreal? It didn't generate
one giant mesh because that wouldn't
be optimized. You'd want instances
using assets you already have inside of Unreal. That's what Houdini
Engine allows you to do-- generate a point cloud and
then assign something called an attribute value, which is
just a property of a point-- of all the point you see there. And then it tells Houdini
Engine this point, at this location
needs to instantiate this piece of geometry which can
be a window, it can be a wall, it can be a piece of street,
it can be a light post, it can be garbage can,
can be anything you want. And then, of course, as you can
see, there's also some polygons and those are just
pieces of unique geometry that Houdini's
generating and it's also exporting into Unreal. So we go to the next slide. All right, so let's see
how we can build that from the ground up
because that still does not tells a whole lot. First of all, we need
to import the data we downloaded from
OpenStreetMap into Houdini. So we just go to
our download folder and we bring in
that file we just download from the internet. And that gives you this using
the OSM input mode, which, once again, is a tool from Labs. And that has lots of attributes
talking about the properties of those curves-- the width of the road,
number of lanes, how tall is this building, and so forth. Then with the OSM filter, we can
filter out data we don't like. So, in this case, I just
want the building profiles, so I just jumped down OSM
filter and tell it to only keep the building profiles. Next up, we plug it into
the OSM buildings node and that already
gives us 3D block outs of every single building. Next up, we're going
to need the streets. So for that we drop
down to filter again, we turn off buildings, and, in
this case, we want the roads. I can then specify what type
of roads I want to keep. In this case, I just
want the primary roads and the secondary
roads, which is, let's say, a highway
and then a main street. And then, of course, we need
to generate streets from that so that's what I'll be using
the road generator for. Plug that in and,
voila, it already resolves all the
intersections for us. And, as you can
see, that gives us this vertex color intersection. And what the
vertex colors mean is it-- those allow us to blend between
multiple UV sets that it uses to blend between
textures so that we can use a tileable texture
on this intersection without having to create unique
textures per intersection because that wouldn't
really be optimized. Then we can modify
any of the properties on the primary
interface for the road. We want to modify how
wide is the intersection, how curved is it because
that's not something that the OSM data holds. And already you can see
that we have something that looks like a
city inside of Houdini with, maybe, five
minutes of work or less. Next up, we need to process-- oh, we're not done yet
with that video I think. >>Victor: Is there a way for
me to go back to that? This one's not-- >>Paul: I think you just-- yeah, if you just put
your mouse on the slide, then you have a time bar. So just play and then go
towards 3/4 or something. >>Victor: Are we about there now? >>Paul: Yeah. Perfect. >>Victor: Cool. Sorry about that. >>Paul: All right, no worries. So then we of course
need to process every one of those blocks. I think you need to hit
Play again on that-- on the left side. Yeah. Oh, that's the Houdini play. That one. Yeah. >>Victor: That was not
confusing at all. >>Paul: So then, of
course, we need to process every one
of those block outs to become a building, because we
don't just want that block out. And for that we can use
the building generator. Plug that block out in and that
converts it into a building. In this case, it just uses
some default properties to already cut it up
in a default preset that I've created. But that's something that we
will look at on the next slide. So what does the building
generator allow us to do? It allows an artist to
create building modules or, say, a window module, a door
module, a wall module, a corner module, a ledge module,
and of course, lots more. And then it allows you to
specify in the parameter interface-- which is the
part you see in the middle, the gray strip-- it allows us to tell
the tool that we want to use a particular
module for the generation of the building. So we can, for example, say use
wall A for all the wall pieces and just repeat it. Or use wall A and then
wall B and then repeat that sequence of Wall
A, Wall B, Wall_A, Wall B for the entire floor. It also lets us modify how
tall we want the floors to be. It allows us to
specify whether or not we want ledges, because
a floor might have a top ledge and a bottom ledge. And it also allows us to
override specific properties for a specific floor. So let's say you want to have
the first floor be twice as tall and use different modules. Then that's something we
can also specify here. So as you can see on the left,
it just basically cuts up that block out we feed into
the tool into building modules the artist created. So these can be
procedural modules, like a procedural window
or a procedural wall. But these can also be art
direct or highly art directed art pieces that
an artist crafted to achieve a specific look. Next slide. All right, so how
do we control that? I'll be running through this
one building as an example and then afterwards
I'll show you how to do that for the entire city. So first thing,
because, you know, it's easier if you're just
working on one building that is just is oriented on the origin. I can drop down a
node called axis align, which will
just take whatever geometry I have and center
its pivot on the origin. Next up, because I like
it to be aligned more, I can drop down a
straighten node. And that allows me to select
a face that is pointing up. And it allows me to select a
face that is pointing sideways. And then Houdini will,
based on those two normals of those primitives,
realign that module so that it's perfectly aligned with
the world grid of Houdini. So now let's take
a look at what we can do to modify
this default look, which doesn't look as
great, and replace it with building modules. So the modules
I'll be using here are just some quick
prototype modules. They're not the
highest quality art, like you saw inside of Unreal. But it allows me to
demonstrate the purpose. So first thing, we import,
let's say, an FBX from disk. And then we use a
building generator utility to specify that we're
dealing with a piece that I'd like to call
Wall_A. This is just a
name that I picked. You can call it My Beautiful
Wall, My Beautiful Window, just whatever you want. I then want to import a
different module, which I'll be calling
Wall_B, so that I have multiple pieces of
geometry to work with rather than just one piece,
because otherwise it would look quite boring and repetitive. So then we plug it into
our building generator so that it knows about the
existence of those tools, of those pieces of geometry. And then we just let
it cook so that we can see what we're dealing with. All right. There we go. And then nothing has
changed yet because we haven't told the building
generator to use those modules. It knows they exist
but it doesn't know what to do with them yet. So that's when we start tweaking
with the parameter interface. So first of all, I
don't really need ledges for this demonstration. So I just turned them off. And then I tell the
building generator, for every single
floor by default I just want you to use
Wall_A, which in this case just happens to be a window. All right. And then now, like I said,
I want to use an override. So I want the first floor
to use a different module, because you know, I don't
want the first floor to look the same. I want that to be
taller, as well. So I can go to my
floor height parameter and set that to a value
of 3.5, in this case. And you can see it
modifies the first floor and adjusts
everything above that so it can make sure that
that override that the artist specified can fit. Next up, our walls-- or our windows or walls-- I guess they're called walls-- are quite repetitive. We'd like to add some
variation, perhaps some spacing
between the windows. So we'd like a small piece
of wall between those. So then what we do-- I just needed a piece of wall. So I'm just going to create
a grid operator, which just generates a grid or a
quad, in this case, once I reduce the resolution on that. And I then make
sure that I offset the pivot on the geometry so
that the pivot of the module is placed at the origin-- in
this case, the bottom right of our module. So I write a little expression
so that it automatically offsets the geometry so that it fits even though I modify any of the procedural
properties above, right? So I don't have to worry
about scaling it later on. I then named that one
Wall_C and plug it into the building generator,
as well, so that it knows about it. Then we're going to
use that facade module pattern to just add a little
dash between Wall_A and Wall_C. And already it
modified the generated building so that we now have
a little wall between it. But I don't quite
like the spacing, so I just reduce it
by size and half. And voila, we already
have something that looks a lot better. Of course, it's still
quite repetitive. You can modify, you know, add
different variations of modules or create a different
pattern, like for example Wall_A and then a
different type wall and then a Window_C to
make it look like a less repetitive building. But you know, you saw the
result of that in Unreal, how to make it look better. It's just a quick demonstration
of how to use the tool. So on the next slide-- so now we've generated our
basic building, right, which is still quite repetitive. And it's procedural. Artists will still want
to have manual control. You know, they want
to fine tune things, for example, specifically
place a module in a specific location. And that's what the hand
placed overrides are for. So an artist can just take
a module-- in this case, it was a door-- and place it anywhere
they want on the building. And a building generator
will take that into account while generating the
building and make sure that that placement that
the artist placed down takes priority over whatever
the building generator will generate. Another interesting fact
about the building generator is that it's not gridlocked. Your building modules
don't all have to have the same dimension. You can put them anywhere
you want on the building generator-- on the floor-- and it will adjust
everything around it to make sure that the
thing you placed down fits. In this case, it couldn't
fit any more windows there, so it just generated a piece
of wall that, you know, you can later on swap out to use
any custom geometry you have, or just keep the wall and
have that be just a wall. So that's that. It gives us some
more art directable controls over our building. But that will become quite
tedious if, for example, we want to place fire escape
ladders on our building, right, which is just a vertical
strip of modules we want to place
in our building. So that's what we have
volumetric overrides for, which you see on the next slide. So volumetric overrides allow
you to basically just place a volume-- in this case,
I just took a box-- and told the box that
you are a fire escape. And that will make sure that
the building generator looks at that volume and
knows, aha, the person wants to place fire escape
ladders on the location that's within that box instead of you
having to type in the building generator I want wall, wall,
wall, wall, wall, fire escape ladder, wall, wall, wall,
wall, because that would not be a lot of fun. And of course, like you can
see, that gives you lots of art directable controls together
with the hand-placed overrides in a procedural fashion, because
typically if you want to take a procedural system and give
the user controls of overrides, that's typically where your
tool would become destructive, meaning any changes you make
you can no longer undo in the-- you know, with the
proceduralism you did before. So that's what this
does allow you to do. So it's more volumetric rather
than replacing things manually or hard coding it, rather. All right, next slide. So that was a pretty
boring example, right? But it can look quite cool if
you have more building modules. So in this case,
here's an example that you can also
find on our website-- this file-- where we
have multiple modules. And we have this block out,
which is once again just a box. And then I plug it into the
building generator and voila, it will assemble all of
those building modules into something that
actually looks quite cool. And as you can see with the
vertex colors in the window, that is one of the features
that the tool allows you to do, which is pick variations of a
module that you fed into it. So if you have different
variations for a window, you can tell the tool,
here is-- you know, I want to have a wall. But you can pick any of
the variations of that wall that I fed into the tool. So it gives you quite
a lot of control over what you can
achieve with the tool. Let's go to the next slide. All right, so we
have one building. We have a rough idea
of how that works. So how do we do that
for an entire city? Well, that's what
we can use something called a foreach loop. And a foreach loop in
Houdini is essentially just a network that allows you
to specify a procedure you want to do on every single
input that shares a property-- in this case,
it's 3D connectivity, because none of these buildings
are all connected-- and do that for every single object. So in this case, since I have,
let's say, 30 or 40 buildings, It is going to do that
operation-- or 106, actually-- it's going to do 106-- it's going to generate
a building 106 times, once for every single piece
of geometry that I feed in. And already you can see,
without all too much trouble, I now generated an entire city
with that building generator. Of course, all these buildings
look the exact same now because I didn't tweak
any of the settings with the building generator. But imagine that you give
every single block out that you had in a city a
property-- for example, this is a glass building. This is a metal building. This is a brick and
mortar building-- and have different style
variations for those buildings. Then already you can create
a pretty cool looking city. You might even be able
to use any of the data that the OpenStreetMap
data contains, right? That might have some
properties that you could use to
automatically figure out what type of building it is. So that's that. And then next slide. So how do we then
replace these modules that we have instead of
Houdini with pieces of geometry that we have inside of Unreal? Because of course, we
don't want to generate one big piece of geometry. We want instances so that we
can efficiently render them and call them and whatnot. So inside of Unreal,
what you can do is right click any object
inside of the content browser-- be it a Blueprint, piece of
geometry, a particle emitter, anything else like that. Try to click it and
click Copy Reference. And then on the
next slide, we can see that we can use a tool
called attribute value replace, which allows us to replace the
values we fed into the building generator with a path to
that object inside of Unreal. So all the modules I called
Wall_A I can just tell Houdini, don't use Wall_A because Unreal
doesn't know what that is. But instead, tell it use
this piece of geometry you find in the
content browser, which means that we can instantiate
anything we want inside of Unreal. Next slide. All right, so there we go. And this is what
that result would look like instead of Unreal. So you know, we have our
buildings, our trees, and a lot more. So let's take a
look at the trees. How hard is it to
generate trees in Houdini. Next slide. We actually have
a tool for that, which is called Quick Tree,
which is not necessarily a photorealistic
tree, like something like SpeedTree
generates, for example. But it is an example showing you
how you could generate a tree. So you take that
and expand on it and improve it, which some
people already have done. But it basically
takes in a curve and generates a tree
from that curve. So if I were to draw a curve,
then in real-time Houdini would generate a tree
from that, like you saw. We can then modify any of
the properties of that tree, because once again, it is
a procedural asset, right? That's what we want. We can modify the
number of branches, the curvature of those
branches, how many we have, the bend that those curves have,
whether or not we want leaves or how many leaves we want
so that we can modify it so an artist gets the
look that they want. But that's one tree. How can we generate a forest? Because that's the
next step, right? Going from one tree
to multiple trees. Well, instead of
taking your tree and duplicating it and
modifying any of the branches, we can actually-- on the next slide-- we can actually automatically
do all of those processes like we saw with the
building generator with a foreach loop
for every single curve. So the first thing we need to
do is generate some origins where we want to scatter
or place our trees onto. So I'm just using
a node in Houdini called scatter, which allows
me to randomly scatter points on the surface. In this case, I just use a grid. But you can scatter
it on a row-- on the sidewalks, like you saw. Or you can scatter it on the-- in the forest or on rocks
or whatever you want. And then I'm going to use a
node called attribute randomize, which is going to
add some variations to each one of those points. It's one of those
properties or attributes like we call them in Houdini. And that will just
automatically scale all of our lines to
have that value that we gave it to create variations
or more randomization, right? Next up, we don't want
exactly stray trees, so I'm going to use a
node called point jitter to sort of jitter the
position of those points a little bit to make it
look somewhat more organic. Perhaps not that much
jitter, but like it's-- like, you know, it's
a procedural network, so we can go back and change any
of these properties later on. Then we're going to loop over
every single curve we have there and just plug
in that tree generator so that we have a
tree being generated for every single curve. And voila, without
all too much work, we now have a forest of trees. If we want to have a different
surface to scatter trees on, in this case, I just want
to have a larger forest, I can go back to my
original input shape and just increase its size
and everything downstream will automatically update
to account for that. If I want to have more
trees, I can increase the number of scatterers. In this case, I increase
it from 21 to 25 and already,
we now have 25 trees. If I want to change how crooked
the trees are, then once again, modify any of the properties
of that point jitter, and already we can see that we
have a different looking result. All right, next slide. So that was that for
the building generator and the tree generator,
which were the two most obvious things in the scene. Of course, we have the
building generate-- the street generator, as well. But that was just
exploding point clouds, which once again you could
replace the asset references for. Next up, the pointcloud exporter. Let's go back one slide. So that's a ROP inside
of Houdini, which is a Render Output Driver. And that allows us to take
any pointcloud in Houdini and export it in the XYZ
format so that Unreal can read it using their new plugin. So in this case, I just
had a 3D scan of myself, scattered points on there
with the scatter SOP, and gave it some color
so that it was pretty. And I clicked Render so
that it wrote it to disk. And then on the
left side, you can see if we can put
that into Unreal, we have that exact
same point cloud in our engine, which is
very useful, for example, terrain generation or
visualizing a LIDAR scan or a geometry scan or anything
else like that, or just for, you know,
pretty art purposes. Next up. There you go. So here we see that plugin, that Epic recently acquired
and now distributes for free with Unreal. And next slide. So if you want to use that
plugin inside of Unreal, just look for this plugin on
the Epic Marketplace, which is called LIDAR point cloud. It's free. You can just add it to
your plugin for free. And that then allows you to
bring in those point clouds from Houdini. Next slide. All right, so that was
the point cloud exporter. Pretty simple,
but pretty useful. So I wanted to show it. Next up is the
WaveFunctionCollapse tool. So this is a dungeon that
we generated using that wave function collapse node-- and of course,
some other Houdini magic-- to create something
that looks pretty cool, if I may say so myself. So this was created
by an artist called Simon Verstraete who
used the SideFX Labs tools to create this content. Next slide. So as you can see,
it's a giant level. All right, so if
you-- once again, if you want to find the
specifics on how it works and replicate it
yourself, or even get the exact project
files, you can open up that project in Unreal. Go to our website,
sidefx.com and look for WFC Dungeon
Generator which has that tutorial series on there
and also has the project files. All right, so let's take
a look at wave function collapse on the next slide. All right, so what is
WaveFunctionCollapse? WaveFunctionCollapse is an
algorithm that is basically a constraint solver. And cutting things
short, it basically allows you to create
content or more content from a small example that you
provide it using constraints. So I imagine you have one of the
small little bitmap images you see on the left,
those 16 by 16 images, or you have a few
buildings or a few plants. And then on the
right side of that we can see that we have
larger versions of that and multiple variations of
that that automatically got generated from it. So it's a really useful
algorithm that allows us to create more content by
just showing the computer-- or Houdini, in this case-- an example of what
it is you want. So that allows an
artist to be super lazy and just generate a
level once on an image and then tell it,
OK, now generate me 1,000 levels of
that using Houdini. And you'll get 1,000
levels off of that. So how it works, I won't
describe it all too much because I have a separate
presentation on our SideFX website you can find. But it's basically
a Sudoku solver. So imagine you have
your 9 by 9 Sudoku grid and you already have
a couple of numbers placed in a Sudoku grid. All of the other cells can
only have a very specific value so that it doesn't
collide with any of the other rows in your
Sudoku network, right? So that's what this tool does,
in short, but then with pixels. And next slide. All right, so inside
of Houdini, of course, we don't want to deal with
any Python code as an artist. So we've taken that
logic and converted it into a tool for us. So we have three tools I'll be
showing to you, one of which is the initialized grid node. And that allows you to take
an image, feed it into it, and it will already import
that bitmap image for you as pixels that the WaveFunctionCollapse
algorithm can use. Next slide. I think we missed the
part on that video. Can we go back? I think it wasn't playing. >>Victor: Didn't auto play. There we go. >>Paul: OK, there we go. So this allows us to basically
pick any of those example images-- in this case,
it's just a maze-- and then we plug that into our
WaveFunctionCollapse solver, which will take that example
and generate more from it. So to visualize that, I'm
just going to place some grids and expand on the pixels so
we can more easily see them. So as you can see,
we already generated an output that is similar to the
input image, but not the same. But it looks similar, right? So as you can see, by just
modifying any of the properties of the solver-- for example, how
big we want an output to be-- we can already generate
more interesting content by just showing the
algorithm an example. Here is my small
16 by 16 example, generate me something that's a
lot bigger at this resolution. And then voila. As you can see, it
generates something cool. And already, with
this specific example, we can see that we could perhaps
use this for level generation. And that's what we
did for what you'll be seeing on the next slide. All right, so here
I have an image which I think is 32 by 32
that is just black and white. And the black in this case just
resembles a room or a corridor or anything else like that. It's just walkable space. And the white pixels represent
areas that the player cannot access-- for example, walls or
lava, because that's where the player would die. And then we can already
generate a level from that or something
that looks like a level. So if we plug in what we
see on the right side, we could get
something that looks what we see on the left
side, which you know we can process further
in Houdini to generate actual level data from. Next slide. All right, so there's
something interesting that we did for generating
the levels, which is doing a multiple of
these WaveFunctionCollapse solves because if we
were to just generate a black and white image,
we would just have the skeleton of our levels. So we would have the
floors and the walls, but there is no interesting
content in there yet. So then what we did is we
generated another WaveFunctionCollapse solve inside of the
WaveFunctionCollapse solve-- like Inception,
right, because it's fun. So we wanted to do that-- where we have this other bitmap
image where, for example, the white pixels
represent obstacles. They can be crates. They can be, I don't know,
something else that you can run into and not go through. We have red pixels that
might represent enemies. We have yellow pixels that might
represent pillars or something else like that. And then it will automatically
do a WaveFunctionCollapse solve with that, which produces
what we see at the bottom. And if we look at
those two images, we can see that
on the left side-- on the left-- bottom left
image, every red pixel is positioned next
to a white pixel. And this basically tells the
algorithm all of the red pixels have to be next to the white
pixels, which in this case, in our human mind,
tells us enemies have to be next to an obstacle. If you then look at the
output that it produced at the bottom middle, we can
see that all the red dots are, indeed, placed
next to the obstacles. So without writing
any complicated logic, we can already tell
Houdini or the algorithm, I want enemies
next to obstacles. And the same goes for pillars. So it basically
extracts relationships between objects from that
image and generates content from that. Next slide. All right, so once we have
those pixels, we of course can't use pixels
because that's 2D. We want 3D stuff from that. So what we can do
is take those pixels and process them further
inside of Houdini. So on the room surface, for
example, we can scatter boxes or on the yellow
pixels we saw before we want to instantiate pillars. So to visualize what we're
working with inside of Houdini, I just extracted points
from those pixels and placed objects there. So every white box you see
represents a crate or a barrel. Every purple pillar--
tube is a pillar. The change you see there-- I'll just generate a
change in Houdini-- and a little-- what is
it, beige or brown blocks are little pillars that have
to hold those chains together. And then the yellow is just
lava so that the player dies if they walk into it. Going to the next slide. But of course, that's not
what we want inside of Unreal. So instead of exporting
those boxes and pillars, we want to instantiate
the pretty art that an artist created, because
that's what their job is. Our job is just to create
cool levels from this. So that's where our point
cloud comes back in again. So if we go to the next slide,
we want to go inside of Unreal. We find our barrel that we
want to instantiate and just do copy reference and then go back
inside of Houdini to replace the thing that we
called obstacle in there with a reference
to the bucket or the barrel or the crate or the pillar
or the fire or an enemy or whatever, right? Next slide. And that allows us
to place everything you see in this
level automatically, without any input
or any manual input. So we can generate the
level skeleton, the walls, the floors, anything
else like that. We can instantiate
functional enemies by just instantiating
a prefab, right? Just a prefab of an enemy. We can instantiate
dangerous lava, which is just a polygon
with a material assigned. And even that material was
generated inside of Houdini, because Houdini can
also generate textures either by using something called
COP, which is our compositing network in Houdini, or
using any of the plugins like our substance
plugin, for example, because your
procedural assets can be combined with other
procedural systems like substance in Houdini. And then because
it's Houdini Engine, all of the stuff that got
calculated in Houdini, which is the geometry and
the procedural network shader from a substance,
just get exported to Unreal and assigned automatically. We can also generate
unique geometry, which is what the chains are. Those are not little chain
links I instantiated. They're just unique
geometry that I generated. We can instantiate obstacles. We can instantiate effects
like particle emitters or even the Niagara emitters that
Mike will be talking about. We can instantiate
audio components. We can generate and assign
textures, materials. We can generate and modify
terrains and any Blueprint. That I generated all
of this procedurally doesn't mean that you can't
modify any of this afterwards, right? We're not taking away any of
the jobs from a level designer. This could, for example, be a
really useful starting point where you first
generate your level and then a designer
can go in and delete any of the parts they don't
want or completely rearrange a level, or perhaps
even generate-- use a smaller tool which just
generates one single room and use that and place
it around manually. That's also a workflow
that someone could use. Next slide. So because it's
Houdini and we're dealing with parameters which
allow us to change properties of what it is we're
generating, we can generate tons and tons and
tons and tons of variations of a level automatically without
having to do all too much work. So in this case
what we could do, a useful approach might
be, is use something called PDG instead of Houdini which
allows us to create variations of a value we feed
into a parameter and tell it I want to have 1,000
variations of the parameter that controls the
width of the level or that controls the
layout of the level. And then we could
just render that to disk giving us these images. And then we can cherry pick
whatever level we want. So if we find one that
we particularly like, we just pick that one, take the
parameter values that Houdini used to generate that,
plug that into the system we have inside
of Unreal, and we will get that exact
level instead of Unreal. Let's go to the next slide. There we go. So to show you that I'm not
lying or making all of this up, here is that exact HDA or any
digital asset inside of Unreal. And like you saw,
I'm modifying the seed parameter for the layout,
and already it completely regenerates the entire level-- the materials, the geometry,
the Blueprints it instantiates, the geometry, the effects
emitters, the light placement, everything you see there. And to also show you how
that looks in Houdini, here is that exact output in
Houdini, so just once again, just that point cloud. To prove to you that
it is playable-- and I did not make that up-- we can just right click
and save Play From Here. And then we can already play
in that level that we have. So imagine how fast you
could prototype your levels using Houdini Engine. It makes it really,
really, really fast. So in this video I'll be
looking for some enemies, which takes a couple of seconds
because I got stuck. That's what the obstacles
are for, of course. And there we go. We have our enemies. And by clicking on our
screen, we can just kill them. I think this is the
top-down Blueprint example that Unreal ships. So there we go. That was that. Next slide-- and that's it. That's it for the first segment
of the livestream talking about SideFX Labs. And yes, that is indeed
me and Louise on screen. Shout out to Sasa Budimir for
creating this graphic for us. >>Victor: We'll play it
a little bit longer. Thank you, Paul. And we go ahead and stop
the screenshare here. All right, and we're back. Quite a lot covered. Let's see. Should we see if there
were any questions related to your presentation before
we move over to Mike? >>Paul: Of course. >>Victor: Let me just go
ahead and bring that up. I wasn't able-- I couldn't
leave focus on the presentation. That's when it stopped
playing so I didn't notice. Let's see. Will you be sharing
the slide deck online? >>Paul: Let's see. I don't think we have any
sensitive information in there. So I could share
those probably, yeah. >>Victor: OK. Cool. We can go over that
after the stream and get it up as
soon as possible. Sorry, I didn't have the chance
to look through the questions earlier. Did you have to
follow some guidelines for width and height, or will
it scale to fit the space given? >>Paul: I'm assuming that's
for the building generator. Yeah, so that's something
I did not cover. But you might have
seen it in the video. So that utility
node that I created when I imported the
module, that allows a user to specify what the width
and height is of a module. So if you want it to always
be a width of four units wide and three units tall, then
that is the dimensions that the building generator
will use to fit all the pieces. If you don't specify
that value, it will just automatically measure
that for you and make it fit, but no, there are no guidelines
on how wide or tall you want your modules to be. The tool will adjust it for you. So if a piece is,
for example, too tall and it doesn't fit
in the floor, it will automatically
scale it down so that it does fit,
which of course will increase the repetition
on the horizontal axis. But yes, that is possible. The tool does it for you. >>Victor: You mentioned that
there are some unique triangles per building. Are you using the
undocumented pack node to separate these into
different static meshes, or are they
collectively one mesh? >>Paul: Yes. So that's a trick you
can do with the engine where you can pack all the
different pieces of geometry into a unique packed
primitive in Houdini and then use the
Unreal split instance attribute, which you can
find in the documentation. And that will
automatically split it up into unique static
meshes, meaning they will get culled if
they're not on screen instead of as opposed to having one
giant mesh, which is not what you'd like. >>Victor: How applicable are
the workflows, et cetera, we learned with these
tutorials to version 2 of the UE for Houdini Engine plugin? >>Paul: So everything
you can do in version 1 you will be able
to do in version 2, plus a whole lot more. And what you can do more with that, I'd like to point you
towards to the forum post. >>Victor: Had another little bit
more specific question here. They were wondering, how
do we instantiate lights? I had no luck when trying it. >>Paul: Right. So for lights what
I would recommend is create a Blueprint out of it. So create a light,
place it in your scene, create a Blueprint of it, or
just create a empty Blueprint, place the light in there,
and instantiate that. But you might also be able to
instantiate just the component itself. But that's something you
can find in documentation. >>Victor: Awesome. I think they were all the
questions related to your part of the presentation. Mike, are you
ready to take over? >>Mike: Yeah. Let's see if we can figure
out how to make this work. Share my screen-- OK. Make sure we have
everything set up. >>Victor: Looks all
good over here. >>Mike: OK. So very quickly, probably
should have said this before I started
sharing screens, but I'm not going
to switch back. So basically what I'm
going to be showing today is kind of a high level
overview of the new Niagara-- Houdini Niagara data interface
and a bunch of examples in both Houdini and Unreal. And because it's
pretty high level, I have a feeling there's going
to be a number of questions. So please feel free to ask
the questions in the chat. I'll give some time
after one or two examples to check in with Victor to see
if there are any questions. And we'll also have some more
information at the end of this where you can find some
resources to kind of help you out with that. So that's the one thing
I want to mention. The other thing is as
you can see on screen, this is using 4.25. So in order to use the
new data interface, because a whole
bunch of cool stuff has happened on
the Niagara side, we're pretty much
4.25 and forward. And you will be able to
use preview 7 to test some of the stuff out as of today. And the other thing is this is
also Houdini 18 going forward because some of the tools, some
of the pieces to this workflow are in SideFX Labs, and
that's where you'll find them. And lastly, a lot of what
you're going to see today is program art. What Paul showed
was really pretty. What I'm showing is not
by any means as pretty. But hopefully the ideas
will capture your attention and we can kind of
take it from there. OK. So let's get into it. So at its most basic, the
Houdini Niagara data interface is a way to get point cache
data from Houdini into Niagara. Now, the previous version
that we showed of this was basically what you just
see in the bottom right, which was the static point cloud. And the idea was you could use
it for triggering, emitting particles for things like rigid
body destruction, simulations. But the number one request we
got was animated point caches. People wanted to do
fluid Sims and all sorts of other crazy stuff. And for the longest time I said,
no, that's a terrible idea. And eventually we gave in. So what you're
seeing on screen is four ways or four types of
data that you can bring across. The first one is a very
simple particle simulation. And the render node that
will write out this data will actually
automatically calculate the age and life
attributes, which is what the data interface means. But you can also
explicitly set that. So if you do want to control
that, you can add it. And then in the bottom
left hand corner, we have kind of this
all-in-one example. You can literally take a
point cloud animate it either as particles or procedurally,
add a bunch of attributes to that, and get all of that
information across into Unreal. And then lastly,
like I mentioned, is your static
point cloud example, which is very similar
to what you had before. And that still is
pretty much the same. So this is kind of just a very
quick look at what this is. Once I've exported
that data from Houdini, I can bring it into
Niagara as a Houdini point cloud cache, sample
that data, and then feed it into things like
position as well as color and a bunch of other attributes. The microphone keeps
changing on me. So this is kind of just
a high-level overview, but I'm going to
start breaking it down so we can kind of see
what this looks like. OK, so the first thing
I want to talk about is how we're exporting this
data, because in the past, we were using a CSV file,
which is like using XML. No one wants to do that. No one wants to use it. And so we're now using JSON as
well as a binary JSON format to export that data. And you can use it
for, like I said, static point clouds,
procedural animations, particle simulations, rigid body Sims,
any kind of point cloud data and you're good to go. This is a much easier process
than what you had to do before with the CSV workflow. And hopefully it means that you
don't have to think about this. You can just drop
down the ROP, write out your data, and
be right back into Unreal. So very boring code slide,
but very quickly, I just want to give you an overview. This is the data that's being
written out to the JSON file. We've gone through a format
that has a header in it. So you actually have some
information about how long the sequence is, what
attributes are in there, what types of
attributes are in there. And then it writes that all out. And we've designed
it in such a way that you can write
it out as ASCII, so if you want to debug it
and see what's going on. And then when you're ready,
you can also convert it to a binary JSON format. And the beauty of that is it
will export really, really quickly and it will import a
lot faster than if it was ASCII. So you can now do much
larger amounts of data and kind of give
that feedback as you jump between the two packages. So let's talk a little
bit about the modules. You'll see a little
bit more of how they're being used
as I go through some of these other slides. But basically,
what we've done is we've modified the
modules so that they're very similar to how other
Niagara sampling modules work. In other words, just like
with sampling a texture or sampling a skeletal
mesh, it will sample data and store that in a namespace. In this case, we have our
own Houdini namespace. But in order to apply that
to a position or a color of a particle, you would
then set that variable on the particle emitter. So the modules will both
sample through the update or also the spawn set up
for the particle emitter. See if I'm missing
anything over there. I think that's the basics of it. Something else that
I wanted to mention, because if you have used the
previous version of the Houdini Niagara data interface, the only
way to sample data from the CSV file was by using
the index, the column index of the attribute,
which was a pain because if you started writing
out in a different order or if you had a
lot of attributes, it was difficult to tell the
difference between the two. So what we now have
is you can actually use the name of the
attributes, the attribute that you wrote out
for Houdini, you can use that name in a node
inside one of the modules in Niagara, which makes it a
lot easier to find and call the data that you want
from those point caches. So starting with
our first example, this is the thing that everyone
asked for over and over and over again, which is, how
do you get a particle fluid sim into Niagara? So this is how you
go about doing it. So I'll talk through
this a little bit. [AUDIO OUT] >>Victor: Let's see. I think we lost you there. Are you still with us, Mike? >>Mike: I'm here. Can you hear me? >>Victor: Yes. >>Mike: OK. Yeah, something decided
to freak out on me. OK, let me share that again. Portion of the screen. There we go. We're back. Are we good? >>Victor: We're good. >>Mike: OK. So as I was saying,
the particle fluid sim is what you're seeing in
the top left hand corner. This is actually
using a tool that was created by
one of our interns and is part of SideFX Labs. It's the labs spatter tool. And once I've created
that simulation, I can write that data
out to a JSON file and then bring it in and
run it directly in Niagara. So this is actually controlling
when the particles are spawned, the position of the particles. And if you have a
look at the top left, you'll notice that the particles
are lying on the ground. But on the right they disappear. And they're actually not
passing through the floor. There is a kill attribute
on those particles. And that is being
read by Niagara. And it's killing the Niagara
particle in the simulation. So you can now
finally get it in. In terms of actually
meshing those particles, we don't have something
for you there. But I believe Epic is
working on something. I know that Ryan Brucks
posted something on Twitter a couple of weeks ago. So hopefully we see
some cool particle fluid meshing stuff happening in
real-time soon from the Epic side. I'll check now to see if there
are any questions, because I've just done a quick overview. If not, I'll start going
to some more examples. >>Victor: Let's see. Is the animated attribute
cache deterministic? Will multiple renders
look the same? >>Mike: Is the cache-- it is deterministic. So the simulation coming out
of Houdini is deterministic. And the particle IDs-- Niagara now does have a
deterministic ID per particle that you can turn on as a flag. And so that's
deterministic, as well. So in terms of the data
coming from Houdini, yes. In terms of how it
connects to Niagara, yes. What happens in terms of
Niagara's simulation stuff, that's a separate part to this. But yeah. >>Victor: I think that was it. There were a few more questions
related to the level generator, but we can take them at the
end of your presentation. >>Mike: OK. Cool. So let's go back to this. OK. So the next example
that I wanted to show is remember that
what we're doing here is we're just dealing
with point clouds. And once we have attributes
on those point clouds, we can do a number
of different things. So as an example, what
I did was I took a box and I used it multiple times
to build this simple bridge prototype. On the top left, you can
actually see each point that a box has been copied to. And then each point
has a scale attribute that then changes
the size of that box. And so what I get
out of that is what you see on the right, which
is kind of like this bridge prototype that I mentioned. So all I'm exporting to Niagara
in this case is a static point cloud. It's just a single frame
with position and scale and a type attribute
that says whether or not it's the floorboard
or the side pieces or the rail pieces
along the top. So what I can then do is I can
use those attributes in Niagara to basically use a mesh
renderer on the particle emitter and change each particle mesh
to have a different scale using that type attribute that
I mentioned earlier. So that's kind of what's
happening over here. And then from there, instead
of running a simulation from Houdini, we're
actually just using this as goal positions. So in this case, I've
spawned a bunch of particles within a box location and I'm
using those original positions of those points you saw
earlier as goal positions. So as the player gets
closer, we're just using the user position here
to then drive the simulation to move those pieces
to their goal position. So I think this is pretty
cool if you want to kind of do formation and destruction
simulations where now we're kind of just using those
points as underlying data to drive the Niagara simulation. OK. So kind of carrying
on on this idea, this is something else
that we came up with was remember I said
earlier that sometimes your simulations are not going
to be able to run in real-time. I mean, in Unreal,
you can use Niagara. You can use Chaos. But even then, if it's a
really complex simulation, you might start running
into performance issues. And so the idea
here is, well, what if you could take the
Houdini simulation data and combine that with
Niagara's simulation data? So this is a relatively
simple rigid body destruction simulation. The idea here is I've started
with some goal positions. I move these spheres
towards the center pillar. And then they all click into
place in this structure. And so that simulation,
that animation data I can write out and
bring into Niagara. But then I can do one
more thing with it. So I can-- first of all, I
can instance a different mesh to those. But I can also have it interact. So what you're seeing
here is initially it plays a Houdini animation. And then when the player
gets close enough, I trigger something that
then uses Niagara's physics to create an upward force, have
them collide with the ground, use gravity, all
that kind of stuff. So I think this is what I mean
by blending the two together, where you would do
some stuff in Houdini and then the other
stuff in Niagara to kind of create these more
complex types of setups. Let's do one more-- crowds. So this is something I
literally put together in the last couple of days. It wasn't going to be
in the original talk. But thankfully, it wasn't
too bad to put together. I've always wanted to be able
to do this because crowds can get quite expensive. It often requires an engineer
to kind of implement it. And sometimes you just need
kind of background crowds that are relatively cheap. So this is a crowd simulation
running in Houdini. And what I've got is three
different animations. I've got a standing
animation, like just standing in one position. I've got a walk animation. And then it actually converts
into a zombie walk animation. So we've got these three
different animations playing back. And then you'll actually--
you can see there that I've taken the
environment's geometry from Unreal and brought it in
and used that as a collider so that the crowd sim knows
not to go past those bounds. And this is it
running in Unreal. By the way, I really
shouldn't have done stat FPS on this build because
this is a Dev build and the numbers
do drop slightly. I checked it this morning
and it sits a, like, 120 FPS the whole way through. So just a little note there. But essentially
what I'm doing now is I'm running that crowd
simulation through Niagara by using particle meshes and
vertex animation textures. So now we get into
some interesting kind of blending of different
pieces of information. So the thing that's actually
just coming into Unreal from Houdini is
just these points, which are then-- we're
instancing the mesh. And the reason I go into
wireframe is because the mesh, in fact, is just this t-pose
that is then being changed by a vertex-- a raw position offset of a
vertex animation texture. So hopefully this makes sense. And if it doesn't, feel
free to ask questions and you can dig into
the example files, which we'll be providing, as well. So essentially I
have here my mesh. And I apply to that a vertex
animation texture material. And what that means is I have
written out the animation to the textures. That's what you can see in
the bottom right hand corner, the three different textures. And then I'm lerping
between those textures. So as I change my
clip blend value, it's driving the
animation of the mesh and so I can drive that
value from my Niagara sim, because I'm getting that
data from the Houdini crowd simulation. So a couple of caveats that
I want to point out here. I deliberately chose
three animations because it was
easy for me to lerp between three different values. You could add more
to that, but it would start getting a
little bit more complicated. But I think, like
I say, for kind of relatively straightforward-- I say relatively
straightforward-- crowd setups that you want to
bring into your game, I think this could
be a good way to go about it that doesn't
involve you having to build an entire AI logic
system directly within the game engine. OK, I'm going to stop
there before I carry on and see if there's any
questions about the stuff that I just showed. >>Victor: For sure. We have a couple here. Let's see. Does the simulation go
through Houdini Engine plugin or a different import situation? >>Mike: Yeah, a good question. So this is not using the
Houdini Engine plugin. This is a little
bit confusing, I'm sure, for someone who's not
as familiar with Houdini and how it works with Unreal. This is, in fact, a Niagara
data interface plugin. So this is a separate plugin
from the Houdini Engine plugin which you can
install with Unreal. And so that's how the
data is coming in. So it's being exported from
Houdini as a JSON file. And then you would import
that into Unreal the same way you would an FBX or a texture. And as long as you have the
Houdini Niagara data interface plugin installed,
it will pick up that JSON file which we've
named HJSON or HBJSON binary as a specific type of data. And then you can use that data
in your Niagara simulation. >>Victor: And there
was one question regarding the low frame rate. And I think we had a bit of a
combination of a GIF playing, as well as Mike's upload
speed just tanking right as we started your presentation. >>Mike: Yeah, so like I
said, I shouldn't have captured the numbers in there. I thought that would
be a good idea. But I was using a
developer build for that. And like I said, I was using
4.25 preview 7 this morning with the plugin and it was
running perfectly at 120 FPS. >>Victor: Yeah, we were definitely
not even seeing 30 FPS on our end here, unfortunately. >>Mike: Oh, really? >>Victor: Yeah. It was tanking quite
heavily due to the bit rate being a little bit higher. That's OK. Hopefully we can share
the presentations later and then you can get
a good look at it. Let's see. Someone is referring to the
formation thing looks good. Do you make it in
Houdini and do you simply transfer it to Unreal? I'm not sure what
the formation-- maybe the formation of the crowds? >>Mike: Maybe the rock formation? So yeah, that's
done in Houdini. And the idea there is it's
a rigid body destruction simulation that's directed. In other words, I'm
starting from a position. I give it a goal
position to go somewhere. When it gets there,
then go somewhere else. And so we're mixing kind of art
directed rigid body simulation. So that's all being
written out as a cache. That's brought into Niagara
and then played back just by moving the
position of the particle. >>Victor: What is the max points
to cache from the flip sim? >>Mike: That's a good
question, and I don't have a good answer for you. The reason for that is I
think this is something that we really kind of need
some feedback from the community about. We've done some tests
and it's run pretty well with thousands of points. And by the way,
the data interface will work with both CPU
and GPU particle emitters. So there's probably things you
can do for performance reasons. But I think this comes back
to kind of making games. You need to decide what
you want to focus on in terms of where you want to
let your performance be used up. And I think you could
probably push this pretty high for a cinematic or
something in the background. But it's really up to you to-- what your entire
setup is, rather than just looking at how
many points you can get in. I mean, the number of
points theoretically is as many as you want. >>Victor: Is the JSON Houdini
and Niagara exported live? >>Mike: Yes, it is live. So at the end of this,
I will be providing a link where you can go
and download the plugin, the example file. And I should have a content
plugin ready and available by the end of today. >>Victor: And I'll make sure to
add them to the announcement post, as well, under the
little resources section there so that
everyone can get it. Let's see, another
performance question, I guess. How many crowd characters
can be rendered on screen with ray tracing? >>Mike: Good question. So this is the interesting
thing, actually. Oh, I wonder if it works with
vertex animation textures? One of the beauties of--
from what I understand-- with Niagara right
now is that you can do mesh instancing
with ray tracing, and it's really, really fast. In fact, an example I'll
be showing in a second deliberately plays
into that example. So it's only a single
draw call because there's only one material. I guess the big question is
because it's a vertex animation texture, if that breaks the
performance attributes of ray tracing with Niagara
particles, that might be a question
for the Niagara team rather than myself. >>Victor: I have Wyeth
coming on in a couple of weeks. So perhaps he'll be
able to cover that. Are the vertex animation
textures for the zombie folk shared
per character or is it unique across
a giant mesh that has all the crowd Geo? >>Mike: So for this
example, I've actually written out three
separate textures. Each texture represents
an animation. But you can-- I think just given more time,
I would have actually put that in a single texture because each
animation is maybe 24 frames that loops. So that's 24 lines of a
texture, 24 pixels in y. So there's definitely
a lot of optimizations here we could have done. We would have had one
texture, one mesh, and played it all together. >>Paul: I think the question,
Mike, was not so much if you had a texture per animation. I think it was, did you
have all the geometry that the crowds combined saved
as that, or is that just one character and reinstantiate it? >>Mike: Yeah, thank you. Yeah, it's a single mesh. It's a single, static mesh
of a character in a t-pose. And that is being instanced
to each particle in Niagara, just with a mesh renderer. And I've just had confirmation
that vertex animation textures don't work with
RTX in terms of performance. >>Victor: OK. Let's see. So the crowd sim-- and I think it's worth
clarifying-- so the crowd sim is a static sim meant for
backgrounds without player interaction, question mark? >>Mike: I would say yes,
for the most part. It's something that
I wanted to do, I just kind of ran out of time. If you think of the rock
formation simulation, there there is a combination
there of the Houdini cache, and then also interaction
with the player. When the player gets
close, we can exert a force on those particles. Something that I wanted to
try with this crowd simulation is actually have the player
run in between the crowds because if I just check the
position from the particle to the player, then I can
figure out a direction and kind of divert them. So you could kind of do a-- like I say, a cheap
version where you might not get a different animation
playing on the crowd when the player gets close,
but you could at least have the player moving through the
crowd and keep-- the crowd would kind of keep
their distance. They would divert
around the player. >>Victor: How is motion blur
computed with crowd using WPO-- or position offset,
I think, is what-- >>Mike: It's a good question. I can't-- I believe-- I mean, I know Niagara now
does feed into the motion vectors for temporal AA. I'm not too sure. Yeah, I'd have to get
back to you on that one. >>Victor: Sounds good. That was all the
questions for now. >>Mike: OK, cool. Let's keep going. I think I've got
one more example but there's a reason I
wanted to keep this one. OK, so this is kind of a
combination of everything that I've shown so far
bundled into one idea. So the idea here was we
wanted to take an instanced mesh, static mesh, that we could
put onto the Niagara particles specifically for the uses
of ray tracing and RTX, which isn't in this demo,
but hopefully something you'll see in the near future. And so what I
wanted to do is have a wall of these objects
that would almost create like a panel of mirrors. So what I've got
here is a number of points with just an
orient attribute that orients each one of these
panels to those points. So that was kind of the
initial setup that we have. The other thing
that I have got here is a relatively complex
rigid body simulation. So there's a number of
things going on here. The balls that you see here
are those same mirror panels that I showed in the
previous example. I'm actually driving the
position of those balls by an animated separate mesh. Those are the light gray-- sorry, the dark gray templated
geometries that you can see. And then we've got this
really complex chain setup which is all running
through a rigid body simulation. And what I've also
done is I've created a constraint between two
pieces of these chains so that what I can do is I can
separate these out, drop them down, and that's going to be
like my starting position. But then I can animate
the constraints to pull them back together. So there's a lot of different
stuff going on here. And you can see here
it's playing back at 12 frames per second. That's not even
simulation speed. That's just playback speed. So this is the
kind of thing which would be really
difficult, I think, to try and do at
real-time because of the number of interactions
that are happening between those chain lengths
and various other things that are going on. So the idea here
is we want to go from that initial
wall, burst things out using the Niagara simulation
side of things, transition to something like
this, and then still be able to interact with these
because they are still particles in the world. And that's what this is. So there was that initial wall. Now we're seeing the particles
are following the player. Then they actually move into
their rest positions from that. And so now we're playing
back that animation but they're still interactive. So if I go in there, I can
add a force to those particles in order to shoot them apart. And then I also
added a trigger so I could have an event which
spawned some sparks, as well, for that system. But this is kind of what I
mean by there's a-- like, we can now combine a lot of
different things going on here. Ignore the stat FPS, by the way. But we're starting with
a Houdini rest position. We're then transitioning
into a Niagara simulation of moving the particles around
back into a Houdini cache simulation. And at the end there, we're
actually adding forces from the Niagara simulation. So I think this is the
kind of complex stuff that you can now do with
the combination of these two packages because of the
Houdini Niagara data interface. I'll let it play once
more because it's cool. >>Victor: It is cool. >>Mike: So imagine RTX
turned on with ray tracing and those mirrors actually
showing the player. That's the idea. Hopefully it will happen soon. OK. Yeah, and so that is everything
that I wanted to show for the data interface today. You can find more
information here. I will give Victor
the link, as well. That link will take you
to one of our forum posts, where there is a link to a
Windows version of the release for 4.25 Preview 7. There is also the
example file, the Houdini file that I used to show all
the stuff that I showed today, as well as a link
to the source code. So if you do want to build
from source with your engine, then you can do that, as well
if you're doing Mac or Linux or you just want to keep
up to date with Windows. And we'll be posting
more information at that link over the
next couple of days as we kind of flesh things out. And that's all I got today. >>Victor: Perfect. Thanks a lot, Mike. We had two more
questions come in. Someone was wondering, any
way to get example scenes you have from Houdini
and the JSON cache slash Unreal project? >>Mike: Yes. So that's the link that
I just showed on screen. And I don't know if
you're putting it in chat, or we'll put it on the
Epic forums, as well. If you follow that link,
you will get access to all of this stuff. You'll get access
to the Houdini file, the data interface
plugin, as well as the-- I'll be-- finishing up
a content plugin today which will be available as
well, which has pretty much most of this stuff in there. >>Victor: Sweet. And perhaps we can head over
to the forum announcement thread and update that there, as well,
as it becomes available so that everyone of you who
are watching this not live will be able to-- it might possibly be
live then already. That's great. Let's see. There were a few
questions that came in. I believe there
was one for Paul. Oh, this was a good
question, actually. On the dungeon
generator example, can it generate level
streaming instances that are dynamically
loaded slash unloaded instead of just static meshes? >>Paul: I think what
you're asking is you'd like for the
pieces of geometry to be instantiated in a
level that you can stream in and stream out. Currently with v1, no. With just a single
HDA, I mean, you could have multiple one at
each level and it generates that. That is one of the features
that V2 will allow you to do. So it will allow you
to directly write to a system that
world composition, you know, accepts, so
write to different levels. >>Victor: All right. I think that was it. Oh, so another
question in regards to the vertex animated crowds. Could you use this to create
a large crowd for a stadium? >>Mike: I think so. Yeah. There's nothing stopping
you from doing that. Houdini actually
does come packaged with a bunch of
crowd animations, so standing up, cheering,
booing, all that kind of thing. Pretty much what
I showed today you could write out those as
vertex animation textures. You could run the crowd
sim to have it in a stadium and then bring it into
Niagara to run it there. >>Victor: Sounds great. Lots of comments
from chat saying you guys did a great
presentation, so I just wanted to pass that
forward, since you might not be looking to that. Anything else
you'd like to leave our viewers with before
we leave the stream? >>Paul: If you're
using SideFX Labs, we now have a
Twitter @sidefxlabs. So if you're using Labs and you
can show us what you're doing, show us. We'd love to see. >>Mike: Yeah, it's a good point. We're always keen to
see the cool stuff that is being done by the community. We are often hard at
work making these tools and we don't always
get the benefit of seeing the great work that
everyone is doing with them. >>Victor: That's great. Well, with that said, I'd like
to thank both Paul and Mike for coming on the stream
today and spending quite a bit of time preparing
these presentations. Hopefully the quality came
through all right for everyone that was watching at home. And before we end I'd like
to mention that next week there will be no
Inside Unreal stream. However, tomorrow we're
starting a new initiative that's called our
Educator Livestreams. Now, these systems
are specifically tailored for educators
and students, but there's no reason
why you wouldn't be able to tune in if you're
just interested in what they are about. They're done by
the education team. And you will be able to see
them almost every Friday, I think now for a
couple of weeks forward. I posted the schedule down in
the Twitch channel description. A week after that, though, on
April 30 we will be covering highlights from 4.25
with James Golding. I'll have Marcus Wassmer and
Simon Tourangeau on there. I think they are inviting some
of the animators, as well, to go in depth in
that a little bit. With that said, I'd
like to thank everyone for watching today. And hopefully I'll have both
of you back at some point, perhaps once the V2
has been released and we can go over some of
the new features of that. I think that would be great. That said, it's
time to say goodbye. I hope you all have a
great rest of your week and we will see you all
back on Inside Unreal again in two weeks. Bye, everyone. >>Paul: Bye.