Welcome to MetaHuman
for Unreal. This is a quick start
guide to show you how to convert your character
meshes into rigged MetaHumans. First, make sure that
the MetaHuman plugin is installed and enabled
inside your project. Keep in mind this is
an experimental plugin, and the assets it uses might
change between releases. Don't worry, though, the
MetaHumans it generates are tried and tested
rigged assets. Make sure you're logged in
to Quixel Bridge, that you have a MetaHuman Creator
account, and that you have accepted the terms
of the license agreement. The first thing
you want to do is import the mesh you want
to convert to a MetaHuman. If your model is made
of multiple meshes, make sure you tick
Combine Meshes, as the eyes are an important
component for the process. Static meshes and skeletal
meshes are both supported, and you can import
FBX or OBJ files. For dense meshes, say
over 200,000 vertices, we recommend OBJ for
quicker import times and static meshes
for simplicity. But any combination
you prefer will work.
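If you prefer to script the import instead of using the Content Browser dialog, here is a minimal sketch using the editor's Python API. It assumes an FBX brought in as a static mesh; the file path and destination folder are placeholders you would replace with your own.

import unreal

# Placeholder paths; point these at your own scan and content folder.
SCAN_FILE = "C:/Scans/head_scan.fbx"
DESTINATION = "/Game/Scans"

# FBX options: combine the sub-meshes (head, eyes, and so on) into one static mesh.
# This is the scripted equivalent of ticking Combine Meshes in the import dialog.
options = unreal.FbxImportUI()
options.import_mesh = True
options.import_as_skeletal = False
options.import_materials = False
options.import_textures = False
options.static_mesh_import_data.combine_meshes = True

task = unreal.AssetImportTask()
task.filename = SCAN_FILE
task.destination_path = DESTINATION
task.automated = True   # suppress the interactive import dialog
task.save = True
task.options = options

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

Importing a skeletal mesh or an OBJ works the same way through the dialog; the snippet simply mirrors the static-mesh FBX route described above.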
If your mesh doesn't come with a material, make sure
you create one and assign it. Import the albedo
texture for the skin and connect it in the Material
Editor to the Base Color input. Save it, and you're good
to inspect the mesh next.
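If you'd rather script that material setup too, a rough sketch with the MaterialEditingLibrary is below. The material name, folder, texture path, and mesh path are made up for the example, and it assumes the albedo texture has already been imported.

import unreal

# Placeholder content paths for this example.
MATERIAL_FOLDER = "/Game/Scans"
ALBEDO_PATH = "/Game/Scans/T_Head_Albedo"
MESH_PATH = "/Game/Scans/head_scan"   # the mesh imported earlier

# Create an empty material asset.
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
material = asset_tools.create_asset("M_HeadScan", MATERIAL_FOLDER,
                                    unreal.Material, unreal.MaterialFactoryNew())

# Add a texture sample node and point it at the albedo texture.
sample = unreal.MaterialEditingLibrary.create_material_expression(
    material, unreal.MaterialExpressionTextureSample, -384, 0)
sample.set_editor_property("texture", unreal.EditorAssetLibrary.load_asset(ALBEDO_PATH))

# Wire the sample's RGB output into the material's Base Color input.
unreal.MaterialEditingLibrary.connect_material_property(
    sample, "RGB", unreal.MaterialProperty.MP_BASE_COLOR)
unreal.MaterialEditingLibrary.recompile_material(material)
unreal.EditorAssetLibrary.save_loaded_asset(material)

# Assign the material to the first slot of the imported mesh.
mesh = unreal.EditorAssetLibrary.load_asset(MESH_PATH)
mesh.set_material(0, material)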
The mesh doesn't need
to be watertight. Holes are perfectly acceptable. But it needs to be manifold,
and overlapping vertices and shared edges
should be merged. We rely on the rendering
of the mesh for tracking, so it's important
that it doesn't have any visual artifacts.
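The cleanup itself (merging vertices, fixing non-manifold edges) is best done in your DCC tool, but as a quick sanity check you can confirm the import and see how dense the combined mesh is from the editor's Python console. The asset path is again a placeholder, and the helper shown assumes the Editor Scripting Utilities functions; newer engine versions expose the same call on StaticMeshEditorSubsystem.

import unreal

MESH_PATH = "/Game/Scans/head_scan"   # placeholder path to the imported mesh

mesh = unreal.EditorAssetLibrary.load_asset(MESH_PATH)
if mesh is None:
    raise RuntimeError("Mesh not found - check the asset path")

# LOD 0 vertex count, to see whether you're in the dense range
# (roughly 200,000+ vertices) discussed above.
verts = unreal.EditorStaticMeshLibrary.get_number_verts(mesh, 0)
unreal.log("LOD 0 vertex count: {}".format(verts))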
Next, you create a
MetaHuman Identity asset. It can be found in the
MetaHuman asset submenu. For the time being, you
can ignore the Capture Data asset in the same submenu. These assets
encapsulate the workflow to submit your mesh for rigging. Open it, and if you haven't
accepted the MetaHuman license agreement, you might be prompted
to accept it at this point. The GUI has several sections,
but for this tutorial, you only need to know
about some of them: the parts tree, very similar
to other component-based trees in UE; the guided workflow
toolbar, where tools become active and highlighted
depending on where you are in the
workflow; the promotion timeline, to manage the tracked
frames; and the viewport for the meshes and frames. For now, you can safely ignore
the markers outliner, which is part of a more
advanced workflow, and the asset detail panes. The asset provides
several components, but they tend to be the
same for most tasks. Components from Mesh
is a useful shortcut to set up everything by
simply selecting a mesh. The Neutral Pose is a reference
to the mesh you imported. It is also the selection
context that activates the frame promotion timeline. There are a few details
to a good neutral pose: an unobstructed view
of the facial features, with no hair or accessories
covering them; eyes open, but not overly wide;
the mouth closed with no teeth showing;
and relaxed facial features. Next, we need to promote
the frame for tracking. This effectively takes a
screenshot of your viewport and tracks some facial features. You must have the Neutral
Pose component selected, and we recommend you
respect a few guidelines. Use a long lens. An FOV of 20 or
less is recommended. Start from a frontal
view, with good symmetry, and presenting the
inside of the eyelids and an even view of the lips. Lastly, good frame
occupancy helps. Make sure the face
takes most of the frame, with little padding below
the chin or above the head. Once you have a
good frame, it can be promoted with the plus
button in the toolbar or the one in the
promotion timeline. Name it if you want,
and right-click it and set Autotracking to on. The very first time
you track the asset, you'll have to wait
a few seconds, maybe up to a minute. But after that, tracking
will be instantaneous when you release the mouse
button after navigating the camera view. You can make adjustments
to your framing, and once you're happy
with the results, right-click and
lock it to prevent accidental manipulation. Tracking is a 2D
process that works best with bright and even lighting. If you have an albedo
texture, unlit mode is ideal.
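If you want to preview the albedo unlit in the regular level viewport before setting anything up, the standard view-mode console command can also be scripted; note that this affects the level viewport, not the Identity asset editor.

import unreal

# Switch the active level viewport to unlit shading; "viewmode lit" restores it.
# This affects the level viewport only, not the Identity asset's own viewport.
unreal.SystemLibrary.execute_console_command(None, "viewmode unlit")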
Now that you have a frame
and it's tracked, the Identity Solve button on
the toolbar should be active. Press it, and the
template mesh will be fit to the volume of your scan. You can click the free-roaming
camera mode in the promotion timeline to unlock the
camera without compromising your existing promoted
frames, and toggle between frame buffers A
and B at the top center. You can set the contents
of each frame buffer independently to compare the
meshes by toggling quickly, or check the overlap if they are
both visible in the same frame buffer. This is the only
part of the process where the MetaHuman Identity
asset details are actually necessary. The very last thing
you need to do is visit the Body part
component, set the height and body preferences
from there, and then the Mesh to MetaHuman
button in the toolbar should become available
to submit your data. Our template, as it
was fit to your volume, will be submitted to the
backend, and in a few seconds to a few minutes, partly depending on
your connection, you should receive
a notification. At that point, your
mesh is now a MetaHuman. It will be immediately available
for you to inspect and further tweak in MetaHuman
Creator, and shortly after to download as an
asset from Quixel Bridge. We're done with the Unreal
part of this tutorial, and we have only one
optional step left. MetaHuman Creator
has received a lot of improvements and updates. We'll look at two that
are specific to the Mesh to MetaHuman workflow. MetaHumans generated
from Mesh to MetaHuman have a special icon
in their thumbnail. This indicates their provenance
and that they have an influence attribute. We introduced an
additive offset that can be blended through the influence
slider across the whole head and across regions
both symmetrically and asymmetrically. Mesh to MetaHuman
performs two functions. It finds and configures
the MetaHuman most similar to
your mesh, and then introduces the differences
between the specific volume of your submission and
the vanilla MetaHuman. Most of the time, these
differences are desirable. They're what makes
your mesh unique. But they might also
include undesired elements. In this demo's case,
the actress we scanned had the customary scanning
skullcap on her head, and you can see
it coming through in the volume of the forehead. When that happens, you can
simply select the region, choose to operate
symmetrically or not, and decide how much
of that difference you want to blend in. This can also help
when the model has very distinctive, strong,
or maybe even exaggerated features. By changing the
region's influence, you're in control of the
balance between the likeness of the volume and the proximity
to a standard MetaHuman, which determines the
cleanliness of the rig.