DARKO PRACIC: Hello. Hello. Hi, everyone. My name is Darko Pracic. I'm the Environment Art
Lead at Embark Studios. This presentation is about how we
create landscapes at Embark for UE5 using Houdini. Embark Studios was founded in
2018 with the explicit intent of revolutionizing how
both games are created and how they are
experienced by players. At this point in time,
we are four years old, and we've already
grown to 290 people. We're a studio that is filled
with curious developers who dare take big risks, and we all
strive for real breakthroughs when it comes to automation
and proceduralism, where we've made huge
progress already. Let's go through the agenda
for this presentation. We've already had the introduction. Next, we'll go into the
background of this talk. I'll talk about our general needs as
a studio, what the current landscape tools and workflows look like, and
list our implemented improvements for enabling any artist to create
realistic landscapes with ease. Then we go in depth. The first part is about
how we treat and work with LiDAR and real-world data. And the second part will be about
our everyday tools in Houdini that artists use when
they work on landscapes. And lastly, a short conclusion. So going into the background
section of this presentation, we need to see what our
needs are as a studio and what steps we should
take to get to a position where anyone can create
landscapes-- more specifically, anyone can create
realistic landscapes. First, we're making
free-to-play games as a service. This means that we want to give
our players frequent content drops and expand and deepen
our game over time. We should be able to reshape
several square kilometers in a short timespan. Second, our studio aims for
realistic high-end visuals. We love pushing quality
to the next level. And that will, of course,
influence production time. Our landscape workflows
have to encompass everything from grand vistas in ARC Raiders
to smaller parts in THE FINALS. And any artist should be able
to use those tools effectively and efficiently. Then we want to stay small. Our projects will max out
at around 100 to 120 people. Each individual should
be a T-shaped developer. In other, larger studios,
you'll have a specific team for landscape creation,
and one for vegetation, and yet another for architecture. At Embark, each artist
must be able to contribute in several of these fields. And lastly, iteration times
need to be as fast as possible. Our projects evolve
and expand quickly. Whichever workflow we
choose, it should easily adapt to design or
art direction changes and deliver updates post-haste. How have we been making
landscapes previously? Let's take a look at a few of the
traditional tools and workflows. First, we have the
landscape-specific DCC tools. They give artists a
fair amount of power, but are difficult to work with. The next point seems minor,
but matters from a UX point of view. Previous landscape workflows would
involve processes and filters on a two-dimensional heightfield. We believe that transforming
and shaping landscapes is more intuitive when done
in a 3D context with gizmos and handles that artists are used
to from other 3D applications. And then I'll talk about why
LiDAR and real-world data are key to fast production when
it comes to realistic landscapes and why it currently isn't as fast
as it should be for high-end games. The biggest factor is
the steep learning curve for highly technical
geospatial applications, which are often required when
processing high-resolution LiDAR and imagery. Let's go into some details when
it comes to current landscape DCC tools. Our landscape artists have been
using the traditional landscape tools for years, and they
are great in many aspects. They are powerful in what they do. They are often very well optimized. And in some cases,
you can even see real-time results while adjusting a
slider, which is great. You're often working
with large landscapes which, of course, is a lot of data. And being able to process that
quickly allows you to iterate more. Another nice aspect is
their erosion models. If you do become a master
at the tool, and you are an artist with a good
sense for what looks realistic, then you can achieve
satisfactory results. Yet another nice thing about some of the applications, like World Creator and Instant Terra, is that they also have Unreal Engine plugins that allow you to send
data directly into Unreal. But let's talk about why we
feel that we want to move away from those traditional methods. So landscape tools come
with a few problems. Attempts to recreate reality
through generative methods are likely to fail. Adding a landscape DCC to our toolset would add yet another piece of software that our artists have to master. We also don't want to
invest in narrow software. And their lack of scriptability doesn't mesh well with our goals for automation. So we strive for realism
in our games at Embark. And historically, it has proven
to be difficult to achieve a realistic and believable result
with traditional landscape tools. Here we have an example
of a piece made with Gaea. And it is believable at a
glance, but the artist probably had to understand a
multitude of node parameters, filters, and erosion settings. And they would still be fighting an
uphill battle trying to replicate something organic and natural. These tools get their output
strictly from a mixture of pseudorandomness like
noises and simulations like erosion. Using only those methods, it's,
of course, very difficult or perhaps even impossible to
fully capture what happens in reality
where landscapes are shaped by a myriad
of factors over millions or even billions of years
in highly intricate ways. One more reason for
wanting to move away from utilizing traditional
landscape tools is that we want to reduce
the amount of software that our artists have
to learn and master. Also, one of our passions is to
release our tools and findings to the public as open source. And that becomes trickier
when your process involves many licensed products. The third reason for moving away from
traditional landscape tools is that we'd like
our applications to fill as many
purposes as possible. Landscape tools are only
built for one thing. At Embark, two of our main
tools are Blender and Houdini. And those two applications combined
provide a large content production platform. They're used by 3D artists,
environment artists, VFX artists, character artists, and animators. Here's an example of why it's
nice to have a landscape tool that can do other tasks as well. We wanted to try out a new look for
our in-game map for ARC Raiders. And it was easy to send our gameplay
location from Unreal to Houdini and combine it with the heightfield
and then utilize Houdini's renderer to generate a nice orthographic image with Karma XPU. After which, one of our concept
artists could do their work and give it that old black and
white satellite image look. As another production example,
we could take the heightfield that we had in Houdini
and convert it into a mesh with baked textures that
is used in our frontend. Being able to convert our entire
landscape into a simple 3D prop complete with texture baking
all in the same application is very valuable. And the final important
reason for us to move away from landscape-specific DCCs
is their lack of scriptability. They're difficult
to make plugins for, and you can't easily
place them in any pipeline or workflow without friction. Automation is very important for us. We want to automate any
and every step that we can, since our artists should
be allowed to focus their time on being creative
and working on the things that matter to players. Next is the subject
of struggles artists face when working with
the two-dimensionality of traditional tools. Traditionally,
when artists work on landscapes, they are working in a 2D context. And how they manipulate
landscapes will often be by dragging sliders,
inputting numbers, or adjusting something in a top-down 2D view. What we want to do is give the
artists the same manipulation tools that they are used to from
other 3D applications-- a translation gizmo and a
rotation gizmo for starters. And anything extra that
we add should at least follow a common
language that they're used to from Blender or Unreal. Like I mentioned previously,
this might feel minor, but our artists were
able to shape and reshape our game world in ARC Raiders in a much more intuitive manner and do things that we previously had no idea how to solve. As soon as we added tools to
manipulate landscape assets as 3D objects, everything fell
into place, and we could shape the world as we pleased. I will showcase later in this presentation exactly how we shape landscapes in this 3D way. So probably the most important aspect, compared to previous workflows, when it comes to achieving realism is LiDAR and other real-world data. LiDAR is a technology used to create
high-resolution models of ground elevation. This technique has been
absolutely essential for us when striving for realism, just in
the same way that photogrammetry has been a standard practice
for studios and artists that want to create
realistic nature assets. When looking at
nature asset creation, photogrammetry is an obvious solution. You wouldn't sculpt a rock, would you? Both from the aspect of how hard
it is to match reality, but also from the aspect of how
long it would even take, even if you were the best sculptor
of realistic rocks in the world. The exact same logic
applies to landscapes. Trying to generate or
sculpt realistic features is a no-go for us. If we look at this
image, for instance, it's a section of a LiDAR data
set from the Grand Canyon. The terrain we see here is
believable right out of the box because it is the real thing. Now, of course,
looks aren't everything. We need to build landscapes that
are traversable and technically feasible. But a sensible workflow would be
to start with a real-world base. Then you do the
large-scale level design to make the landscape work for gameplay. And then when you have the
rough gameplay edits ready, you can replace those features
with matching real-world ones and get the best of both worlds. So let's look at what the
LiDAR data actually is. And then we'll talk about one
of the popular GIS applications that you'd use to
work with the data. Let's see. If we look at this
image, what you see are the points that
the LiDAR system will gather into a point cloud
representing the ranged surface. For geographical LiDAR,
it's often gathered with an airborne system, where you have an airplane with a laser scanner on the bottom that shoots a laser towards the ground, measuring how long it takes
the laser to bounce back. Depending on the intervals, you'll
get a dense or a sparse point cloud. And when we look for
data sets online, we try to find those that are
at least one point per meter. Here we see one of the products
that you can derive from LiDAR, which is a DEM-- a Digital Elevation Model. It's basically a grid of rows and
columns that build up a surface. In this image, we can see an example
of a Geographic Information System application or GIS for short. In this case,
it's an open-source application called QGIS, which we still
use in some rare cases. But I recently put some time
into automating away those steps as well by simple Python
and command line scripts that we can run from
Houdini directly. We don't want artists have
to learn this software, but the main benefits of
it is that you can easily layer different data sets
on top of each other, each having a different
map projection model. So let's look at the
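To give a concrete picture of that kind of automation, here is a minimal sketch of a Python wrapper around GDAL's gdalwarp that reprojects a DEM GeoTIFF; the file names, target EPSG code, and resolution are hypothetical placeholders, not values from our pipeline:

```python
import subprocess

def reproject_dem(src_tif, dst_tif, target_srs="EPSG:3006", resolution=1.0):
    """Reproject a DEM GeoTIFF to a target projection with gdalwarp.

    gdalwarp ships with GDAL and must be on PATH. The target SRS and
    the 1 m output resolution are placeholder values.
    """
    subprocess.run(
        [
            "gdalwarp",
            "-t_srs", target_srs,                     # target spatial reference
            "-tr", str(resolution), str(resolution),  # output pixel size
            "-r", "bilinear",                         # sensible resampling for elevation
            src_tif,
            dst_tif,
        ],
        check=True,  # raise if gdalwarp reports an error
    )

reproject_dem("dem_tile.tif", "dem_tile_reprojected.tif")
```

A wrapper like this can be called from a Houdini shelf tool or a PDG node, which is what makes it possible to hide the GIS step from artists entirely.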
So let's look at the required improvements. Here's a short list of what we have to cover if we want to reach our goals for realistic landscape creation. Processing LiDAR is
quite cumbersome, slow, and requires a bunch of
knowledge that you'll never need in any other aspect
of game development. The GIF shows you how easy
it is-- wait, I'm jumping. This should be fully automated. The only manual job the artist has
to do is to find the data they want. And of course, make sure that it's
under a Creative Commons license and free to use. The GIF shows you how easy it is
for someone to initiate a LiDAR conversion to usable assets. The artist clicks one shelf button, picks the folder of the LiDAR data set they want to process, and presses OK. Then a PDG process will automatically spawn and start cooking, converting the files into usable heightfield assets. More on this soon.
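As an illustration, a minimal sketch of what such a shelf-tool callback could look like in Houdini Python; the node path and parameter name are hypothetical stand-ins for our actual PDG setup:

```python
import hou

def process_lidar_folder():
    """Ask the artist for a LiDAR folder, then kick off the PDG cook."""
    # Let the artist pick the folder containing the LAS/LAZ files.
    folder = hou.ui.selectFile(
        title="Select LiDAR data set folder",
        file_type=hou.fileType.Directory,
    )
    if not folder:
        return  # artist cancelled the dialog

    # Point the TOP network at the chosen folder. The path and parm
    # name here are placeholders, not our real network.
    topnet = hou.node("/obj/lidar_processing/topnet")
    topnet.parm("lidar_folder").set(folder)

    # Cook all work items without blocking the UI; PDG then converts
    # each file into heightfield assets in the background.
    topnet.displayNode().cookWorkItems(block=False)

process_lidar_folder()
```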
Then we need the tools and utilities that enable artists to adjust, mix, remix, and manipulate LiDAR-based heightfield assets in Houdini without compromising the quality. One of them is
the heightfield lattice HDA that we've made,
which allows artists to transform their heightfields with ease. We also need a way to procedurally
generate believable colors. Previous workflows involved
a lot of manual work trying to calibrate the colors
on an 8-bit satellite image. And removing buildings, low vegetation, and shadows was a complete nightmare. So we need something
that is generated. We also need ways of creating
interesting masks for our landscape materials in Unreal Engine. Getting a wide range of
masking and distortion features will be critical to make masks that
don't pop out and look unnatural. And lastly, the export and import. Currently, we're not near
where we'd like to be. We'd like to have a direct
connection with Unreal where we can send and
receive data effortlessly through a plugin or a bridge. But what we have now is a simple export HDA which can split everything into tiles, and you select the layers that you want to export, like the height layer, the color maps, or the different material masks.
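To illustrate the tiling idea, here is a minimal NumPy sketch, assuming the heightfield is already in memory as an array; the tile size and file naming are hypothetical, and our actual export HDA does this work inside Houdini:

```python
import numpy as np
import imageio.v3 as iio

def export_height_tiles(height, tile_size=1009, out_pattern="tile_{x}_{y}.png"):
    """Split a heightfield into square tiles saved as 16-bit PNGs.

    1009x1009 is one of Unreal's recommended landscape resolutions;
    heights are remapped to the full uint16 range that the landscape
    importer expects.
    """
    lo, hi = float(height.min()), float(height.max())
    normalized = (height - lo) / max(hi - lo, 1e-6)        # 0..1
    as_uint16 = (normalized * 65535.0).astype(np.uint16)

    rows, cols = as_uint16.shape
    for ty in range(0, rows, tile_size):
        for tx in range(0, cols, tile_size):
            tile = as_uint16[ty:ty + tile_size, tx:tx + tile_size]
            iio.imwrite(out_pattern.format(x=tx // tile_size, y=ty // tile_size), tile)

# Example: a synthetic 2048x2048 heightfield.
export_height_tiles(np.random.rand(2048, 2048).astype(np.float32))
```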
So onto some actual Houdini tools. Our first main section will be about LiDAR processing and a first implementation of an asset browser. There are three main processes
that we need to automate. We've only finished one of them,
which is the one listed up top. And I'm currently
working on the other two. The first one is where
an artist has downloaded LAS or LAZ files and wants to have
the data ready to use in a Houdini work file in an easy manner. LAS and LAZ are common file types that you'll get when you download LiDAR data sets. So where we'd like to end up
with our LiDAR assets is this: we want artists to be able to just drag and drop LiDAR heightfields onto the landscape that they're working with. So we want to make the
steps from LiDAR source to a usable asset in our library
as automated and easy as possible. Gathering the data from sites
online can be automated as well. Most sites provide you with FTP servers and file lists from which you can batch-download everything with a script.
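As a sketch of that kind of batch download, using only Python's standard library; the host and directory are made-up placeholders:

```python
from ftplib import FTP
from pathlib import Path

def download_lidar_tiles(host="ftp.example-geodata.org",
                         remote_dir="/lidar/tiles", out_dir="lidar_download"):
    """Download every LAS/LAZ file found in a remote FTP directory."""
    Path(out_dir).mkdir(exist_ok=True)
    with FTP(host) as ftp:
        ftp.login()                      # anonymous login; many data portals allow it
        ftp.cwd(remote_dir)
        for name in ftp.nlst():          # list the remote directory
            if not name.lower().endswith((".las", ".laz")):
                continue
            local = Path(out_dir) / name
            with open(local, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)
            print(f"downloaded {name}")

download_lidar_tiles()
```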
The asset browser you see is just a Photoshop mockup. We haven't had time to build something like that; we have a simpler version. But this shouldn't be more than just writing a few PyQt5 prompts in ChatGPT, and-- [LAUGHTER] --it should finish the job. But I'm told you're not allowed to do that.
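In that spirit, a hand-written minimal sketch of such a browser: a PyQt5 list populated from a folder of processed assets. The network path is a placeholder, and a production version would add thumbnails and drag-and-drop into Houdini:

```python
import sys
from pathlib import Path
from PyQt5.QtWidgets import QApplication, QListWidget

ASSET_ROOT = Path(r"\\network\lidar_assets")  # placeholder network location

def make_browser():
    """Build a bare-bones list of the processed LiDAR assets on disk."""
    browser = QListWidget()
    browser.setWindowTitle("LiDAR Asset Browser (sketch)")
    for asset in sorted(ASSET_ROOT.glob("*.bgeo.sc")):
        browser.addItem(asset.stem)
    return browser

if __name__ == "__main__":
    app = QApplication(sys.argv)
    widget = make_browser()
    widget.show()
    sys.exit(app.exec_())
```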
But still, the main idea is for us to be able to grab a processed LiDAR asset and inject it into your work file like so. So imagine you've grabbed a feature, and you can just put it into your landscape. And that's basically how it
works today in our tools. Here we see the main PDG network
that will read the LiDAR files. The first steps convert the LiDAR files into high-res heightfield files in bgeo format. Then it will take those
files and resample them to a much lower resolution
and save those on disk. The reason for that is that
we need a way for artists to quickly preview things. Loading in high-res
heightfields is slow. So we output the low-res files. And then further down in the PDG
process, we import them, merge them, and output another heightfield that
is composed of the entire data set in a low-resolution format. Here's an example of what a complete data set output looks like at a reviewable resolution. And what I mean by reviewable is that you can navigate in the viewport, even though you're usually working with a lot of data. And this, I think,
is 16 million points. So you can rotate
around and find the area that you want to use
for your landscapes. The main node at the beginning of this process is the Lidar Import node. And I don't know when, but it has been updated a lot to allow you to select the different classification filters in your LiDAR points, like buildings, low vegetation, medium vegetation, and high vegetation. That can be super useful later when you want to create different material masks or mask away something that you don't like. Here we see the imported LiDAR points. And you transfer those into a heightfield using a Volume from Attrib SOP.
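For reference, those classification codes come from the ASPRS LAS standard (2 = ground, 3/4/5 = low/medium/high vegetation, 6 = building). Here is a minimal sketch of inspecting them outside Houdini with the open-source laspy library; this is not part of our pipeline, just an illustration:

```python
import laspy

# ASPRS standard LAS classification codes.
CLASS_NAMES = {
    2: "ground",
    3: "low vegetation",
    4: "medium vegetation",
    5: "high vegetation",
    6: "building",
}

las = laspy.read("tile.laz")  # placeholder file name

# Count how many points carry each classification we care about.
for code, name in CLASS_NAMES.items():
    count = int((las.classification == code).sum())
    print(f"{name:>17}: {count} points")

# Keep only the ground points, e.g. for a clean bare-earth heightfield.
ground = las.points[las.classification == 2]
print(f"ground subset: {len(ground)} points")
```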
So this is the first iteration of an asset browser that I mentioned previously for our LiDAR content. You click a shelf
button, and the list will be dynamically populated by
the available assets on a network location. So that went a bit fast,
but that was a shelf button click. So this whole preview mesh-- I lied before. It's 56 million points. And we can still
review it in real time. The entire Tenerife data
set is 2.5 billion points. I'm going to pause a bit, since that went by a bit fast. Being able to read the data through lower-resolution proxies is crucial. We don't want artists to be stuck in 30-minute load times and applications crashing on GPU memory. You should always be able to work fast in a lower-resolution mode and get the high-resolution content in loadable chunks when needed. There are even volume compression possibilities to reduce the memory cost of your loaded data, but they should be used carefully since they may introduce artifacts.
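The proxy idea itself is simple; here is a minimal NumPy sketch of it (our actual pipeline does the resampling with heightfield nodes inside the PDG graph):

```python
import numpy as np

def build_proxy(height, factor=8):
    """Downsample a heightfield by block-averaging factor x factor blocks.

    At factor 8, a 2.5-billion-point data set shrinks to roughly
    39 million samples, which is far friendlier to the viewport.
    """
    rows, cols = height.shape
    # Trim so both dimensions divide evenly by the factor.
    trimmed = height[: rows - rows % factor, : cols - cols % factor]
    blocks = trimmed.reshape(trimmed.shape[0] // factor, factor,
                             trimmed.shape[1] // factor, factor)
    return blocks.mean(axis=(1, 3))

full = np.random.rand(4096, 4096).astype(np.float32)  # stand-in heightfield
proxy = build_proxy(full)
print(full.shape, "->", proxy.shape)  # (4096, 4096) -> (512, 512)
```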
So here's an example of a few of the classifications on the LiDAR data set. We've transferred those as mask layers on each heightfield. The ones we see here are buildings, low vegetation, medium vegetation, and high vegetation. That was a brief look
into our LiDAR processing. Let's move on to our
heightfield tools in Houdini. So we're using the Gaea bridge. It is a useful utility, and it isn't something we've created: it's the Gaea Tor Processor that SideFX Labs has built, which acts as a bridge inside Houdini. I know I've talked about
how we try to stay away from traditional landscape
DCCs, but we still need a powerful erosion
model from which we can get nice looking deformation. And even more importantly,
we need different masks like erosion flow and sediment
deposits for our in-game landscapes as material masks. We will mostly rely
on real-world data, but there will be cases
when we need to fill the gaps with procedural solutions. And having a bridge to Gaea
in Houdini fills that gap. For the masking part,
we are using both standard Houdini heightfield masking nodes,
which offer a great range of feature-based options. To supplement that, we also
add some flow maps from Gaea. Using the erosion
in Gaea with a very short duration in the simulation will give you these nice
and thin flow lines. Two of our other favorite nodes are HeightField Distort by Noise and HeightField Distort by Layer. Especially using the Distort by Layer node on masks, with the height as a gradient, will give you a directionality in the mask.
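To make that directionality concrete, here is a rough NumPy/SciPy sketch of distorting a mask along the height gradient; it approximates the effect and is not the actual node's implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def distort_mask_by_height(mask, height, amount=6.0):
    """Smear a mask along the slope of a heightfield.

    Each mask pixel is re-sampled from a position offset by the local
    height gradient, which drags mask features downhill and gives the
    directional look described above.
    """
    gy, gx = np.gradient(height)                 # slope in row and column
    rows, cols = np.indices(mask.shape, dtype=np.float64)
    coords = np.array([rows + amount * gy, cols + amount * gx])
    return map_coordinates(mask, coords, order=1, mode="nearest")

# Stand-in data: a circular mask distorted by a noisy height ramp.
h = np.linspace(0.0, 50.0, 256)[:, None] + np.random.rand(256, 256) * 4.0
m = (np.hypot(*(np.indices((256, 256)) - 128)) < 60).astype(np.float64)
distorted = distort_mask_by_height(m, h)
```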
Here's a short demonstration of how we utilize the Gaea bridge for cases where the artist has to manually shape out a piece of the landscape. So I'm just using the nodes that we've set up for masking and adjusting
the landscape with a levels-ish node and distorting it a bit with noise. And then we add the Gaea
process, which applies erosion to the masked area like so. I mentioned that we need
ways to generate colors. Let's take a look at
our current solution. On this slide, we see a few
results from our heightfield color nodes, which procedurally generate colors for our landscapes. Our main goals have been to produce
both large-scale variations that feel natural but also a
high-frequency breakup, which is what the players will
see in their near proximity. Here's an overview of the HDA.
It relies on an input image that gets converted into a color ramp. This is made possible by an
HDA made by James Robinson. And it's available
on GitHub and Orbolt. After that, we generate a
bunch of masks with Gaea which will be used in several of
our coloring categories or layers. We have a base layer, a slope layer,
a plateau layer, and a flow layer. This is just the current state, but we feel that you can go infinitely deep with this and define a multitude of features for coloring. So we get the large-scale variations. The way to get the large-scale variation is to map a low-frequency noise or a large-scale occlusion mask onto the generated color ramp. In this case, I think the noise is set to a scale of 900 meters. To break that up, we map something that has a higher frequency of microgradients to a color ramp. And when we blend them together, we get these large variations with detailed breakup.
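Here is a small NumPy sketch of the ramp-mapping step itself: a scalar field (noise, occlusion, or height, faked here) is normalized and looked up in a color ramp sampled from an input image. The ramp colors are placeholders:

```python
import numpy as np

def map_to_ramp(field, ramp_colors):
    """Map a scalar field (noise, occlusion, height) through a color ramp.

    ramp_colors is an (N, 3) array of RGB stops sampled from an input
    image; each field value picks its color by linear interpolation.
    """
    t = (field - field.min()) / max(float(np.ptp(field)), 1e-6)  # normalize to 0..1
    positions = np.linspace(0.0, 1.0, len(ramp_colors))
    # Interpolate each RGB channel independently along the ramp.
    return np.stack(
        [np.interp(t, positions, ramp_colors[:, c]) for c in range(3)],
        axis=-1,
    )

# Placeholder ramp: dark soil -> grass -> rock -> snow.
ramp = np.array([[0.18, 0.13, 0.08],
                 [0.25, 0.35, 0.12],
                 [0.45, 0.42, 0.38],
                 [0.95, 0.95, 0.97]])

low_freq = np.random.rand(512, 512)       # stand-in for a 900 m scale noise
base_color = map_to_ramp(low_freq, ramp)  # (512, 512, 3) RGB image
```

Blending two such lookups, one low-frequency and one high-frequency, gives the large variation plus detailed breakup described above.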
So, align and lattice: onto a few of our tools that fall into the work-in-3D category that I mentioned earlier in the presentation, where I compared 2D tools versus 3D. I'm going to play a video that will probably be too fast again. First, we locate the area
that we'd like to improve. We're looking at about a square
kilometer of gameplay area here. We load some LiDAR
data that we intend to use as a replacement for
the heightfield in that area. And we crop out a specific
section that we need and load the high-res version of it. Next, we crop the section down a bit more. And then we align the LiDAR
data with our in-game landscape using our heightfield align node. And we then make the shapes fit
more neatly by using a lattice. This allows us full 3D
control of the landscape. So at the end there, I'm toggling between the before and after, which shows us the improvement that we get. And here's what it looks like in Unreal. That's a nice change.
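For the curious, one plausible way to automate that kind of alignment is phase correlation between the two heightfields; this is only a guess at an approach, not how our heightfield align HDA is actually implemented:

```python
import numpy as np

def estimate_xy_offset(reference, candidate):
    """Estimate the (row, col) shift that takes reference to candidate.

    Classic phase correlation: the peak of the inverse FFT of the
    normalized cross-power spectrum marks the translation.
    """
    f_ref = np.fft.fft2(reference)
    f_can = np.fft.fft2(candidate)
    cross_power = np.conj(f_ref) * f_can
    cross_power /= np.abs(cross_power) + 1e-9
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Offsets past the halfway point wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, correlation.shape))

# Sanity check with a synthetic heightfield shifted by (12, -7).
base = np.random.rand(256, 256)
moved = np.roll(base, (12, -7), axis=(0, 1))
print(estimate_xy_offset(base, moved))  # -> (12, -7)
```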
So onto Houdini Engine. We're using Houdini Engine in a few cases. And one of the HDAs that we have
is the heightfield cliffs HDA, which takes the landscape
in Unreal as input and then generates a mesh that
gets displaced by a tiling texture. And then from that
high-resolution displaced mesh, we create a low-resolution
mesh, and then we just apply the material
in Unreal on that. So let's look at the demo. It's a bit sped up. Trying to find a good sun
angle is the main work here. Here's another video example of the heightfield cliffs HDA. And we have another HDA that
we use quite frequently. We're using World
Composition in Unreal, and we have a 64-tiled landscape. And at a certain streaming
distance, those tiles will load their level LOD instead. And in that LOD level, we just keep a static mesh that is essentially a far LOD for the landscape, generated with this HDA that also just takes the landscape as input. It makes a nice low-poly model,
and it bakes down the normal map. We've also rebuilt a simpler version
of our entire landscape material with all the layers. So that gets applied in
Houdini as well when you input the landscape into this HDA. So here we see the landscape tile. And you just cook the HDA. And you get a low-resolution LOD. And just to say, Unreal has
built-in features for this, but we felt that we wanted
more detail and more tweaking. And it looks good at a distance. We rarely see any popping
or any differences when you're playing in the game. So we're already at the conclusion. This was way faster than I thought. So we have some nice
things in place today. It is much easier for
artists to build landscapes with our current tools,
but these are just the first steps of a bigger journey. We've gone from having work files
that take 30 minutes to sync in Perforce and 30
minutes to open to being able to reshape several
square kilometers with highly realistic data and to have
it in-game within 30 minutes. We've also gone from
a point where we would have to be satisfied
with the base LiDAR data and try not to alter it
because that could ruin the features, to a position where we can both alter it to our visual liking and adjust any area to gameplay requirements. And I'm going to end
this presentation with a video montage of the world
we've created for ARC Raiders. [SUSPENSEFUL MUSIC] [APPLAUSE] Thank you. I just wanted to say that my brilliant colleague, Erik Hallberg, and I will be presenting
tomorrow as well at SideFX. And those who come will get
a six-month indie license. That's all. Thank you. [APPLAUSE]