LUKA LAKIC: Hi. We wanted to make a
tutorial to help you edit the textures of your MetaHumans
once you have downloaded them. This workflow can be used
for adding tattoos, scars, or anything else residing
in the superficial skin layers. This tutorial is meant for those who
are 90% or more happy with what they can get out of the
MetaHuman Creator but still have a few specific
requests that would give their
character a final touch and make it unique. To do this, you will also need
access to texture-editing software
of your choice. Although you can already
easily edit the color maps provided in the source materials
in any image editing software, this is not the case
with the normal maps. To have complete control
of the normal maps, an artist would need to have a
high poly sculpt of the character so they could sculpt anything
they want on top of it. Once you have the base mesh from
the rig and the displacement, you can reconstruct any
character in ZBrush and have that complete control. Once we are inside Quixel Bridge, you
can download your custom MetaHuman. For this tutorial, I'm going to
use one of the MetaHuman presets. I chose the Sook-Ja character, as
she's one of the oldest available within the presets. The reason I chose an older
character rather than a younger one is that wrinkly
skin is harder to reproduce accurately due to the
many landmarks on an older face. Younger characters would not set as high
a benchmark for this demonstration, and it is fair to
assume that whatever works on wrinkle-rich characters
will also work on smooth ones. As you can see, I have already
downloaded this character. There are a couple of options to
keep in mind before exporting. Firstly, make sure to
have the right resolution. Apart from that, you can
select all the textures you might need in the Download Settings. Some of these, like the displacement
map, are currently unavailable, but you don't need to worry
about them for this tutorial. The next section in the Download
Settings is dedicated to models and here, you can choose which
source assets to download. Some of these settings
are just preferred options and again, you don't
need to worry about them. For this demonstration,
the most important of those would be the source ZTool. While we don't have the ZTool or
displacement as source files, we can recreate them using
the normal maps and the base mesh. After downloading the
character, make sure you have installed the appropriate
plugin for exporting. I'm using Maya, and the plugin
is already installed, so I can just hit Export and have a
fully rigged character open in Maya. For the source files, the
quickest way to access them is to right click on the
downloaded character in Bridge and select Go to Files
from the dropdown menu. In the Explorer window, you can
navigate to the Maps folder within the Source Assets folder where
all the character-specific textures live. Inside the Maps folder, we will
need the color and normal head maps. Both sets contain four maps: one base,
or neutral, map and three animated maps.
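If you want to script the gathering of these maps, a minimal sketch in Python could look like this; the library path and the loose name matching are assumptions, so confirm them against your own download with Go to Files.

```python
from pathlib import Path

# The library path below is an assumption -- use Go to Files in Bridge to
# confirm where your downloads live. File names vary per character, so we
# match loosely on "color" and "normal".
maps_dir = Path("C:/Megascans Library/Downloaded/MetaHumans/SookJa/SourceAssets/Maps")

color_maps  = sorted(p for p in maps_dir.iterdir() if "color" in p.name.lower())
normal_maps = sorted(p for p in maps_dir.iterdir() if "normal" in p.name.lower())

for p in color_maps + normal_maps:
    print(f"{p.name}  ({p.stat().st_size / 1e6:.1f} MB)")
```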
The first thing I'm going to demonstrate is editing on top of the color maps. This edit adds simple tattoos,
and it is straightforward, as it does not involve
editing any other maps; the tattoo information
lives only in the albedo, or color, maps. After loading the neutral head
mesh in Mari, the next thing to do is to load all the color maps as well. As you can see, these color
maps have 2K resolution even though we
checked 8K for export. As color maps do not contain
high-frequency information, we provide a lower
resolution for optimization. The normal maps, which do contain
high-frequency information, are indeed 8K, something you can
tell from the file size.
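You can quickly verify these resolutions yourself with Pillow, assuming the maps are in a format it can read:

```python
from pathlib import Path
from PIL import Image  # pip install pillow

maps_dir = Path("C:/Megascans Library/Downloaded/MetaHumans/SookJa/SourceAssets/Maps")
for p in sorted(maps_dir.iterdir()):
    try:
        with Image.open(p) as img:
            print(f"{p.name}: {img.size[0]}x{img.size[1]}")
    except OSError:
        pass  # skip anything Pillow cannot read
```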
Once I have imported and renamed all four color maps, I can start performing the edit. Here in the Image
Manager, I have already prepared the alphas for the tattoos
I'm going to add to this character. I've picked a simple tribal design
for the purpose of this tutorial. The reason for working
in Mari is the need to have these designs displayed
correctly, without any skewing or stretching. This would be hard to do
working only in UV space, as the UVs stretch more and more
the further you get from the center of the face. To avoid that, I'm using
projection in 3D space. The workflow assumes that any
edit done on the base color is going to be applicable in
the animated color maps as well, so I'm turning those
off for the time being. Once I'm done, they're all going
to be exported with the tattoo layer on top. Here I am using a flat color
to simulate ink color. In the mask stack I'm arranging
the prepared alpha projections. Once I have projected the masks,
I'm introducing a bit of blur to make it blend better and to
simulate the look of aging ink that has dissipated over
decades, especially if the tattoos were done with traditional tools. Finally, I need to adjust the blending
mode and color to fine-tune the end result. Now that I'm
happy with the result, I'm going to re-export all the color
maps with the tattoo layer on.
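For reference, here is a rough offline equivalent of this compositing step in NumPy and Pillow. It assumes you already have the tattoo alpha projected into UV space, which is what the Mari projection produces; the ink color, opacity, and file names are all illustrative.

```python
import numpy as np
from PIL import Image, ImageFilter

INK = np.array([0.10, 0.12, 0.16], np.float32)   # dark "aged ink" color (pick your own)

def apply_tattoo(color_path, mask_path, out_path, opacity=0.85, blur_px=4):
    # The mask is assumed to be the tattoo alpha already projected into
    # UV space -- which is exactly what the Mari projection step produces.
    base_img = Image.open(color_path).convert("RGB")
    mask_img = (Image.open(mask_path).convert("L")
                .resize(base_img.size)
                .filter(ImageFilter.GaussianBlur(blur_px)))   # aged, dissipated edges
    base = np.asarray(base_img, np.float32) / 255.0
    m = (np.asarray(mask_img, np.float32) / 255.0)[..., None] * opacity
    out = base * (1.0 - m) + (base * INK) * m     # multiply-style ink blend
    Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8)).save(out_path)

# The same layer goes on top of the neutral map and all three animated maps.
for name in ("neutral", "wm1", "wm2", "wm3"):     # hypothetical file names
    apply_tattoo(f"head_color_{name}.png", "tattoo_mask_uv.png",
                 f"head_color_{name}_tattoo.png")
```

The loop at the end mirrors the workflow assumption above: the same tattoo layer goes into the neutral map and all three animated maps.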
Having updated the color maps with these tattoos, there is no reason not to test
them right away, since this edit is independent of the normal maps. I'm turning off selection
highlighting for better visibility and bringing up Hypershade
to access the head shader. In the shader, I found the slot
for the base color, labeled diffuse map, and replaced the existing map
with the one I just prepared.
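This swap can also be scripted with maya.cmds; a minimal sketch, where the match on the old map name is an assumption you should verify in Hypershade first:

```python
# Find the file node feeding the head shader and point it at the edited map.
import maya.cmds as cmds

new_map = r"C:/path/to/edited/head_color_map.png"  # placeholder path

for file_node in cmds.ls(type="file"):
    old = cmds.getAttr(file_node + ".fileTextureName")
    if old and "color" in old.lower():             # crude match on the map name
        cmds.setAttr(file_node + ".fileTextureName", new_map, type="string")
        print("Updated", file_node)
```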
You can instantly see our change taking effect. However, this is not
enough, as it only influences the neutral expression. As soon as we activate
any other expression that takes effect around these
tattoos, you might notice the tattoos disappearing. This is because for
any given expression, a corresponding
animated map activates, and that map also needs to
have the change present in it. So after loading the
animated maps you can see everything
working as expected. For the scar reference, I googled
a couple of images and this is the one I liked
best for the purpose. To be time
efficient, I'm not going to show the sculpting
process of the scar. Instead, I already
prepared it and I'm just going to save it as a 3D brush. Once we have the character ready, I'm
just going to apply the scar on top and do small edits to
blend it with the skin. Before we can move on
to sculpting, we first need to generate displacement maps
from the provided normal maps. For that I'm using
Substance Designer, as I find it to be the most
efficient tool for the job. In Substance
Designer, I have already imported the four
normal maps as resources, and they are used in the
graph to be converted to height maps via Normal to Height HQ nodes. HQ stands for high
quality, and we should be using the HQ version, as it
will give us a more accurate reproduction of pores and wrinkles. The settings are the
same for all four nodes, and they are mostly defaults. The only thing differing from
the default node settings is the quality option, which
is switched from normal to high. If you wish to play with the
settings, feel free to do so; for example, Relief
Balance influences the contrast of the details. While testing, however, I
determined that the default settings give me the optimal outcome. When the height maps are ready,
I'm checking all four of them for export. In the Destination folder
I can find these maps which are going to be used
in ZBrush for reconstruction. There is one disclaimer to be
made regarding this process. As stated in the
Substance documentation, you can never get a 100% accurate
displacement from a normal map.
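If you don't have Substance Designer at hand, one standard alternative is Fourier-domain integration (Frankot-Chellappa). The sketch below makes no claim to match what the Normal to Height HQ node does internally; it is simply one common way to integrate a normal map into a relative height map, with placeholder file names.

```python
# Frankot-Chellappa integration of a tangent-space normal map into height.
# As noted above, the result can never be a 100% accurate displacement.
import numpy as np
from PIL import Image

def normal_to_height(path, flip_green=True):
    n = np.asarray(Image.open(path).convert("RGB"), np.float32) / 255.0 * 2.0 - 1.0
    nx, ny, nz = n[..., 0], n[..., 1], n[..., 2]
    if flip_green:                      # DirectX-style maps store -Y in green
        ny = -ny
    nz = np.maximum(nz, 1e-3)           # avoid division by zero at grazing normals
    p, q = -nx / nz, -ny / nz           # surface gradients dz/dx, dz/dy

    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                   # dummy value; the DC term is zeroed below

    H = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    H[0, 0] = 0.0                       # mean height is arbitrary
    height = np.real(np.fft.ifft2(H))
    height -= height.min()
    return height / max(height.max(), 1e-8)   # normalized to 0..1

# File names are placeholders; note that 8K maps make this memory-hungry.
hm = normal_to_height("head_wm_normal_neutral.png")
Image.fromarray((hm * 65535).astype(np.uint16)).save("head_height_neutral.png")
```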
Before I export the necessary OBJs from the rig, I will quickly adjust the shader
so that the detail in the face is more prominent. You can play with the
settings on your own, but for me, adjusting
the specular and occlusion usually does the trick. For the low poly OBJs, I need
to export at least four meshes-- one neutral and three meshes
corresponding to the animated maps. This image shows how the
maps are arranged universally on every MetaHuman character. The WM1 map contains brow raise,
blink, pucker, squint, chin raise, and jaw open. The WM2 map contains brow down, nose
wrinkle, and mouth and neck stretch. Unlike the first map, where some
expressions cannot be active simultaneously, WM2 does not
contain conflicting expressions. Likewise, the WM3 map
contains only the smile and cheek raise expressions, which usually
get activated together, so there is no conflict there either. I have already animated
the rig controls to form the exact expressions that I need. Here you can see what I mean
by conflicting expressions. We can store brow raise, blink,
and pucker in a single shape, but if I added jaw
open or chin raise, they would influence the pucker. Squint would influence
blink in a similar fashion, and these influences
would look unnatural. This is why I have
split the WM1 expressions into three different shapes. Depending on where the
edit is going to be placed, we are only going to use
the affected expressions. The WM2 and WM3 sets are
stored in a single shape each, which is quite convenient. Now that all these
shapes are prepared, we can go on and export them as OBJs.
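For those who prefer scripting the exports, a sketch with maya.cmds could look like this; the shape and file names are placeholders, and you would pose the rig and select the head mesh before each export:

```python
import maya.cmds as cmds

cmds.loadPlugin("objExport", quiet=True)   # Maya ships OBJ export as a plugin

def export_selected_obj(path):
    cmds.file(path, force=True, exportSelected=True, type="OBJexport",
              options="groups=1;ptgroups=1;materials=0;smoothing=1;normals=1")

# One neutral shape, WM1 split into three, plus WM2 and WM3.
for shape in ("neutral", "wm1_a", "wm1_b", "wm1_c", "wm2", "wm3"):
    # ...pose the rig controls for this shape here, then:
    export_selected_obj(f"C:/export/head_{shape}.obj")
```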
Here we have covered all possible scenarios. But if the edit you will be doing
is small and more localized, for example only on the eyelid,
then you would use only the neutral and blink shapes. Similarly, if the edit
just affects the forehead area, you would need the neutral and the two
forehead expressions, brow raise and brow down. Here in ZBrush I have
imported the neutral mesh and subdivided it six times. On a new layer, I am applying
the previously prepared displacement map for the neutral. One consequence
of deriving displacement from the normal maps is
that we need to eyeball the intensity. As mentioned earlier, this
conversion involves a bit of data loss, since normal maps contain
only the direction of the normals, not their intensity. This is why we need to use our
artistic judgment and reference images from the MetaHuman Creator
to get as close a match as possible. Now that we have recreated
the character in ZBrush, the next step would be to do all
the intended editing on a new layer. In this case, I'm going
to add the scar that I already prepared as a 3D brush. I'm placing the scar on a new layer
while having the underlying skin layer turned off so
it doesn't interfere. Once I turn both layers on,
I can use the morph brush to reveal or hide certain
portions of it to blend it nicely. Finally, when I'm happy with
the look of the edit on neutral, I can propagate the same edit
to the animated expressions that I already exported. Judging by the placement of
the scar and comparing it to the expression distribution
on the animated maps, it will only partially
affect the pucker expression from WM1; the nose wrinkle and,
slightly, the mouth stretch from WM2; and both expressions
from the WM3 map. This means we need to use
three of the exported OBJs, but not all of them, so I can
disregard the squint, chin raise, and jaw open. Before updating the ZTool with
the WM1 expression set, I'm turning off all
the existing layers. Once I have imported
the WM1 shape, I will repeat the same
process I did for the neutral, only this time using the
corresponding WM1 displacement map. Now we can just reuse
the edit layers. To keep the layer stack
orderly and easy to manipulate, I duplicated the scar layer so that
each WM set has its own associated scar layer, in case it needs to
be edited further depending on the specific wrinkling beneath it. In the layer stack, I'm using all-caps
names for the WM1, WM2, and WM3 sections for easier navigation, as they
are meant to be active only one at a time with their
associated layers. I am repeating the process
two more times for WM2 and WM3 just as I did for WM1. I'm doing a bit of additional
editing on the scar with the smile expression active. Scar
tissue is sclerotic and behaves differently
from ordinary skin, so I need to make it more
persistent on the face regardless of the heavy
wrinkling that surrounds it. Although the quickest way
to bake out these changes would be to do it directly
in ZBrush, those normal maps would differ a bit from
those in the source materials, as the source maps were
baked in xNormal. The reason is that
ZBrush uses a subdivision method while xNormal uses raycasting. To ensure the best results, I suggest
using xNormal or a similar raycast baker. To prepare these
sculpts for baking, we first need to export them
as high poly meshes. As this process may
take a long time given that each head has
24.5 million polygons, I sped up the process by isolating
only the relevant portion of the sculpt where
the change occurred. This process is repeated
for the three expression shapes in the same manner as
for the neutral. Once we have all the high and
low poly meshes prepared, it's time to feed them
to xNormal for baking. In xNormal, aside from feeding
the low and high poly meshes to it and specifying the output path, we
need to invert the green channel to negative Y, as this
corresponds to the DirectX convention. Since these baked maps
contain only the scar area, because we clipped away most of
the face in the high poly mesh, I'm combining these bakes with
the original texture in Photoshop using a mask.
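The masked combine itself is easy to reproduce outside Photoshop; here is a minimal NumPy sketch, with placeholder file names and the assumption that all three images share one resolution:

```python
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("head_wm_normal_neutral.png").convert("RGB"), np.float32)
bake = np.asarray(Image.open("xnormal_scar_bake.png").convert("RGB"), np.float32)
mask = np.asarray(Image.open("scar_mask.png").convert("L"), np.float32)[..., None] / 255.0

# If a bake came out OpenGL-style, flip green to the DirectX convention:
# bake[..., 1] = 255.0 - bake[..., 1]

# Paste the scar-region bake over the original map through the mask.
out = orig * (1.0 - mask) + bake * mask
Image.fromarray(out.astype(np.uint8)).save("head_wm_normal_neutral_scar.png")
```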
If we quickly compare the bake from ZBrush with the one from xNormal, you can see that the
xNormal bake looks much stronger. For that reason, the scar
will be more readable in Maya. The normal maps are ready,
but the scar change is still not reflected in the color maps. To do that quickly, I'm going
to derive height and curvature maps from an updated normal map that now
contains the scar information.
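For reference, height can be obtained with the integration sketch shown earlier, and curvature can be approximated as the divergence of the normal map's XY channels. This is a rough stand-in for a dedicated curvature baker, but it is good enough for a color-map overlay; the gain value is arbitrary:

```python
import numpy as np
from PIL import Image

n = np.asarray(Image.open("head_wm_normal_neutral_scar.png").convert("RGB"),
               np.float32) / 255.0 * 2.0 - 1.0
# Divergence of the XY directions; the minus accounts for DirectX green = -Y.
curv = np.gradient(n[..., 0], axis=1) + np.gradient(-n[..., 1], axis=0)

curv = np.clip(0.5 + curv * 4.0, 0.0, 1.0)     # re-center on mid-gray for blending
Image.fromarray((curv * 255).astype(np.uint8)).save("scar_curvature.png")
```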
These two maps are both mostly
gray, with value variation that can easily be utilized
on top of a color map using different blending modes. Now I'm just putting these
on top of a color map and isolating the
scar area with a mask. For both of these layers
I came to the conclusion that the soft light blend
mode works best. It already looks better, but it
still lacks a little bit of punch. To add that, I'm going
to use the same normal map I used to derive these two, and
I'm going to take one of its RGB channels to use in the same way. The red channel seems to
have nice scar detail in it. After pasting it on top of
everything with the same mask and soft light blend mode, I
decided to take a bit more from it. So I duplicated it and changed
the blend mode to color burn. After adjusting the opacity, I'm
happy with the color of the scar.
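For those curious about the math behind these blend modes, here is a small NumPy sketch using one common soft-light formula plus color burn; the exact curves vary slightly between tools, so treat it as an approximation:

```python
import numpy as np

def soft_light(base, blend):
    low  = 2.0 * base * blend + base ** 2 * (1.0 - 2.0 * blend)
    high = 2.0 * base * (1.0 - blend) + np.sqrt(base) * (2.0 * blend - 1.0)
    return np.where(blend < 0.5, low, high)

def color_burn(base, blend):
    return 1.0 - np.minimum(1.0, (1.0 - base) / np.maximum(blend, 1e-6))

def over(base, layer, mask, opacity):
    return base * (1.0 - mask * opacity) + layer * mask * opacity

# Stand-in data; in practice base = the albedo, red = the scarred normal
# map's red channel, mask = the scar-area mask, all float arrays in 0..1.
base = np.random.rand(4, 4, 3).astype(np.float32)
red  = np.random.rand(4, 4, 1).astype(np.float32)
mask = np.ones((4, 4, 1), np.float32)

out = over(base, soft_light(base, red), mask, 1.0)
out = over(out, color_burn(out, red), mask, 0.4)   # opacity to taste
```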
So we can group all these layers
together and just add the color maps beneath for export. I'm using the same
scar color information for all of them, as I don't expect the
color to change much in animation; this tissue
usually has fairly poor blood flow. Now that all editing is done, we are
updating color maps in the shader once more. Right away you can
see an updated shader with a much more prominent
scar on the cheek. This time around, it's enough
to change the texture's file version, as everything else is the same. So that will be it
regarding map editing. Before we wrap the video
I have one final tip. All the edits we have done so far
are contained within the color and normal maps exclusively. This means that the scar shape
is limited by the normal map. If an edit is too large and
changes the silhouette of the model, it will not be represented
by the normal map. This workflow is adequate
for small interventions that do not require changing the global mesh in
the rig in order to be visible, but there is a sweet
spot when it comes to the extent of the edits
you can make to the geometry before they start influencing
the rig in unpredictable ways. Since this scar does displace
the base mesh in the sculpt, because it is a fairly
large edit, there is a way of incorporating
that into the rig too. The way to do that is to export
the edited mesh from ZBrush at the lowest subdivision level
and add it on top of the rig as a blend shape that
will always be on. To do that, I need to find the
Blend Shapes node in the rig and copy its name. Then select the exported mesh
and the rig mesh, in that order, and in the Deform dropdown
menu in the Edit section, choose Blend Shape > Add
and open the option box. There, you need to specify the
node by pasting its name in the field below. After you click Apply
and Close, you will be able to find the new blend shape
added at the bottom of the stack. Turn it on, and now the
change is applied globally.
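The same steps can be scripted with maya.cmds. In this sketch, the mesh and node names are placeholders you would replace with the ones copied from your own rig:

```python
import maya.cmds as cmds

rig_mesh  = "head_lod0_mesh"     # the rig's head mesh
scar_mesh = "head_scar_shape"    # lowest-subdivision mesh exported from ZBrush
bs_node   = "head_blendShapes"   # the name copied from the rig's blendShape node

# Add the edited mesh as a new target at the next free index...
index = cmds.blendShape(bs_node, query=True, weightCount=True)
cmds.blendShape(bs_node, edit=True, target=(rig_mesh, index, scar_mesh, 1.0))

# ...and leave the new blend shape always on.
cmds.setAttr(f"{bs_node}.weight[{index}]", 1.0)
```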
Since our mesh from
ZBrush was reconstructed using displacement derived from the normal maps,
it may have some small vertex offsets throughout the entire mesh
relative to the original in the rig. To eliminate this, you
can just paint out all but the relevant scar
area in this edit blend shape, or you can prepare it
that way before you add it. If your specific edits
are larger than this and require propagating them
through multiple expression shapes with custom
intervention on many of them, then this global, one-size-fits-all
solution would not be appropriate. In the end, to show and compare
the results inside Unreal Engine, we have imported the original
MetaHuman alongside her double with edited textures. This concludes the tutorial. Thank you for watching.