VICTOR: Hi, everyone.
Welcome back to the stream. I am here with a
couple of Mixed Reality developers from the Microsoft team. We are going to talk
about how to develop a HoloLens 2 application
using Unreal Engine and Microsoft Mixed Reality
UX tools here today, and to help me in this endeavor
here on the stream today, I have Vanessa Araujo,
software engineer at Microsoft as well as Carl McCaffrey, also software
engineer at Microsoft. With us, we also have Sajid Farooq, principal software
engineer at Microsoft, and last but not
least, Luis Valverde, senior software
engineer at Microsoft. Welcome to "Inside Unreal",
special episode today, I would say, since we are
technically doing it on Friday which we have never done before, and it's also the second
part of a two-part stream, I would say.
I am not going to talk anymore. I would like to hand it over to
Vanessa for her presentation. VANESSA: Hi, everyone.
I didn't say hello before. So this will be a quick
step-by-step guide on how to get started
with Mixed Reality where we will be creating
an app using Unreal Engine and the Microsoft Mixed
Reality UX Tools plugin. The main objective
for this tutorial is for you to get more familiar
with Mixed Reality development and also to know more about
any Microsoft resources that can help you
create some awesome Mixed Reality experiences
for games during the jam. We're going to start by showing you some Mixed Reality references, the UX Tools plugin and
the project requirements. We will go through how we
can configure the project to make your level
interactive in Mixed Reality. Then my colleagues, Carl and Sajid, will show you how to use
some cool UX components from the UX Tools plugin
and also give you some tips on how to improve and
optimize your experience. Then we'll finish with
a small Q&A session where we can clarify any
Mixed Reality concept that we are covering here today, and for this, we have our
colleague, Luis, with us. The UX Tools plugin
that I mentioned and that we are using today
is part of a set of components that together make the Microsoft Mixed
Reality toolkit for Unreal. Inside this Mixed
Reality toolkit plugin, you can find code,
Blueprint, samples, documentation and
the UX Tools plugin. It's basically a
bunch of UX controls made specifically
for Mixed Reality. Those controls are like 3D
buttons or grabbable objects that can be interacted with
or manipulated with your hands by using hand tracking or
simulated hand tracking. This will let you spend
your development time on the experience itself or
gameplay logic and level design rather than having to develop those UX controls and
interactions from scratch. So let's go through
some of the resources that you can find online and
that can help you get a quick start on Mixed Reality
development and design. So if you're new to
Mixed Reality development or even if you already
have some experience, you can find a lot
of useful information on the Microsoft Mixed
Reality documentation page, so everything that we
are covering here today can be found there, and there are some very important Mixed Reality experience
design concepts that are also covered together
with some mini tutorials on how to use Mixed
Reality services from Azure or other code stuff and features
that you can add to your project like QR codes,
image capture or anchor sharing. From the Microsoft Store, you can also download this app
called Designing Holograms. It explains the basic UX concepts and best practices on Mixed
Reality and experience design regarding things
like spatial awareness, hand tracking or head
and eye tracking. So this app can be installed
on your HoloLens device or if you don't have one,
you can use the HoloLens emulator. The Unreal documentation page also has some getting
started guides on HoloLens, and the content is very similar
to what we are covering here. You can find info on
how to use the emulator or deploy your
application on the device. Now let's go through the
tools that we need to install in order to develop our
HoloLens 2 app with Unreal. So this is what you
need to make your app. Basically, you will want to
have Unreal Engine 4.25 or later with the installed
HoloLens supporting files, a HoloLens 2 device
configured for development or an installed emulator
on your computer. You also will need
Windows 10, version 1809, installed on your machine
and Visual Studio 2019, the latest version with
some extra workloads. If you already have Unreal,
just go to the Epic launcher. Then go to the Library tab
and, under Launch > Options, make sure you have the HoloLens
2 toggle on and hit apply. You will need these
supporting files in order to deploy to the HoloLens. If you don't have a HoloLens
device, again, as I mentioned,
you can use the HoloLens emulator, and all information about
how to install the emulator can be found also on the Microsoft Mixed Reality documentation page. With the Visual
Studio 2019 installed from the Visual
Studio download page, you need to run the
Visual Studio installer once more to modify Visual Studio and add some extra workloads
that you're going to need. You can also add those workloads by the time you are
installing Visual Studio, so if you're modifying
your Visual Studio, just make sure you have
the workloads '.NET desktop development', 'Universal Windows
Platform development' and 'Desktop development
with C++' selected. And also under the
individual components, you will need to choose
the MSVC v142 ARM64 build tools. Just choose the
latest, and with this, you have Visual Studio
set up for the HoloLens 2. And this is pretty much
done with the requirements, and we can launch Unreal and get started with our
Mixed Reality project. Again, you can go through
this presentation afterwards and follow along. So on the Unreal project browser, we're going to select 'Games'
under the New Project Categories and just click Next. Select the Blank template
and again hit Next and for the Project Settings,
just make sure to select 'C++' as the UX Tools will need
this in order to build. Also select 'Scalable
3D', 'Mobile / Tablet' and 'No Starter Content'. Choose a directory and
a name for your project and click Create Project. So now with the Unreal Editor open, we will start by
enabling the plugins that will allow the Mixed
Reality development. So go ahead and open
plugins under the Edit tab and search for "Augmented Reality" under the Built-In plugins list, and then look for and
enable the HoloLens plugin. Just enable it. Now search for the
"Virtual Reality" category and then look for 'Microsoft
Windows Mixed Reality'. So both plugins are required
for HoloLens 2 development, and you will need to
restart the Editor. With the Editor open,
we'll need to create an ARSessionConfig asset, so go ahead and in
the Content Browser, click Add New >
Miscellaneous > Data Asset and select ARSessionConfig. Then you can name it
'ARSessionConfig'. Double click on it. Leave it as is and just hit Save. So this ARSessionConfig
is responsible for storing the augmented
reality session's specific configuration, like world alignment or occlusion settings. So let's create a basic
level for our app. Just choose New
Level > Empty Level, and drag a Player Start
actor to your scene, and reset its location. Then just go to
File > Save Current and name your level
'Main' and save it. Now we will edit this
main level Blueprint, so we can use the
ARSessionConfig asset we have created during runtime. Click the Blueprint dropdown menu and choose Open Level Blueprint. Drag the execution pin from
the Event BeginPlay node and search for the
Start AR Session node. Under the Session Config option, choose the ARSessionConfig asset, the one that we have created, and now you can add an Event
End Play node to the graph and drag the execution pin. Just search for Stop AR Session. Now you can Compile, Save, and the AR session will start
and end with the level play.
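For anyone who prefers working in code, here is a rough C++ equivalent of that level Blueprint. It's only a sketch: the stream does all of this in Blueprint, and the class name AMRLevelScript and the way the config asset is assigned are just illustrative.

```cpp
#include "Engine/LevelScriptActor.h"
#include "ARBlueprintLibrary.h"
#include "ARSessionConfig.h"

// In a real project this UCLASS lives in its own header with a .generated.h include.
UCLASS()
class AMRLevelScript : public ALevelScriptActor
{
    GENERATED_BODY()

public:
    // Point this at the ARSessionConfig data asset created in the Content Browser.
    UPROPERTY(EditAnywhere, Category = "AR")
    UARSessionConfig* SessionConfig = nullptr;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Equivalent of the Start AR Session node.
        UARBlueprintLibrary::StartARSession(SessionConfig);
    }

    virtual void EndPlay(const EEndPlayReason::Type EndPlayReason) override
    {
        // Equivalent of the Stop AR Session node.
        UARBlueprintLibrary::StopARSession();
        Super::EndPlay(EndPlayReason);
    }
};
```

Let's configure our pawn.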
Under Content Browser, click on Add New and
then Blueprint Class. Search for 'defaultpawn'
and choose the DefaultPawn. Now you're going
to name it 'MRPawn' and double click to open in edit. We're going to add a component. Just look for Camera component,
and once the camera is added, I want you to select
the CollisionComponent and change the Collision presets under the Details
tab to NoCollision. So we are doing this to
prevent any collisions between the user and any level
content in augmented reality. So do the same for
the MeshComponent and again, compile and save it. And now we're going to
create our game mode, so under Content Browser again, click on Add New > Blueprint
Class, and Game Mode Base. You're going to name it MRGameMode. Double click it to edit,
and on the Details tab, just choose the MRPawn
that we created. Add it to the Default Pawn Class. Then compile and save. Now just open Project Settings
again from the Edit tab and under Maps & Modes, expand the Default Modes dropdown and set our MRGameMode
to the Default GameMode. And now add our Main level
to both Editor Startup Map and Game Default Map. Then you can close the
Project Settings window.
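If you would rather set these up in C++, a rough sketch of the same pawn and game mode looks like the following. The class names AMRPawn and AMRGameMode just mirror the Blueprints we created, and in a real project each UCLASS goes in its own header with its .generated.h include; the Default GameMode and startup maps still get assigned in Project Settings as described above.

```cpp
#include "GameFramework/DefaultPawn.h"
#include "GameFramework/GameModeBase.h"
#include "Camera/CameraComponent.h"

UCLASS()
class AMRPawn : public ADefaultPawn
{
    GENERATED_BODY()

public:
    AMRPawn()
    {
        // The HoloLens drives the view, so the pawn only needs a camera.
        UCameraComponent* Camera =
            CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(RootComponent);

        // Disable collision so the user never collides with level content.
        GetCollisionComponent()->SetCollisionProfileName(TEXT("NoCollision"));
        if (GetMeshComponent())
        {
            GetMeshComponent()->SetCollisionProfileName(TEXT("NoCollision"));
        }
    }
};

UCLASS()
class AMRGameMode : public AGameModeBase
{
    GENERATED_BODY()

public:
    AMRGameMode()
    {
        // Same as picking MRPawn as the Default Pawn Class in the editor.
        DefaultPawnClass = AMRPawn::StaticClass();
    }
};
```

So now we have those basic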
assets and a very empty level. We're finally adding the UX
Tools plugin to the project. The UX Tools plugin can be downloaded
from the release page. I'll link it to the
UX Tools GitHub. I didn't post it here because
it's weird to read the link, but if you go to github/Microsoft and search for "UX Tools Unreal", you can open the UX
Tools GitHub page where you can find the
release documentation and how to install and
use the UX components that comes with the plugin.
So UX Tools is open-source, and although we're
not yet accepting any community contributions, you can add issues
through the GitHub page, and this will also
help us understand where to prioritize
features and bug fixes. So go ahead and look
for the release link on the right-side column. Open it and look
for UXTools.0.10.0, the ZIP file. This is our latest release,
and click to download. Just as a heads-up,
currently UX Tools only supports the
HoloLens 2 device, but the UX Tools team,
which is my team, has made a huge effort to put
together some work-in-progress support. It's very much a work in progress,
but it's there for anyone who wants to use UX
Tools with the Windows Mixed Reality headset or
the Oculus VR headset. So, still on the UX
Tools GitHub page, just look for the
feature/megajam branch, and you can work from there.
And this project should work on the latest Mixed
Reality headset. Again, this is not supported,
but it's a very good starting base. Now go ahead and create a new
folder under the project root, the project that you created,
and name it 'Plugins' and then copy the UXTools folder unzipped to this Plugins folder
and just restart the Editor. This should be enough to add the
UX Tools plugin to your project. And now it's time to make
your level more interesting. Let's check what UX Tools
has in the project. So we're going to start
by adding simulated hands to our level. The hands can be used for debugging the Mixed Reality interaction
or to manipulate objects, so double-click on
the MRPawn to open it, and from the Event BeginPlay node, just drag the execution pin to
a Spawn Actor from Class node. And then under Class,
select UXTHandInteractionActor and then expand the node to
make the Owner property visible. Now drag the execution pin and create a duplicate of
this node for the right hand. Again,
select UXTHandInteractionActor and expand the node and on the
Hand property choose Right. So now we have right
and left hands, and just organize the
graph as you like. Now I want you to add a Make
Transform node to the graph and link it to the
Spawn Transform property of both UXT Hand Interaction nodes. Also link a reference
to the MRPawn self node to the Owner property on both
UXT Hand Interaction Actors and just go ahead
and compile and save.
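In C++, the same BeginPlay logic on the pawn looks roughly like this. The actor class and its Hand property come from the UX Tools plugin; the exact header path and property spelling below are assumptions, so check the plugin source if you go this route.

```cpp
#include "HandInteraction/UxtHandInteractionActor.h" // path may differ in your UXT version

void AMRPawn::BeginPlay()
{
    Super::BeginPlay();

    FActorSpawnParameters SpawnParams;
    SpawnParams.Owner = this; // same as wiring "self" into the Owner pin

    // Spawn one hand interaction actor per hand, just like the two Spawn Actor nodes.
    for (EControllerHand Hand : { EControllerHand::Left, EControllerHand::Right })
    {
        AUxtHandInteractionActor* HandActor =
            GetWorld()->SpawnActor<AUxtHandInteractionActor>(
                FVector::ZeroVector, FRotator::ZeroRotator, SpawnParams);
        if (HandActor)
        {
            HandActor->Hand = Hand; // Left for one spawn, Right for the other
        }
    }
}
```

Now we will need some level content to interact with those hands,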
but the hands are ready to go. So go back to the level window
and add a cube to the level by dragging it from
the basic actors tab, and on the Details panel,
reset its transform. Change the Location to be 50
centimeters in front of the user and change its Scale to 10
by 10 by 10 centimeters, and let's just change the
cube Mobility to Movable. Now under Content Browser,
just add a new material and name it 'Red' just
to make it more visible. Double click the material and then from the color
property of the material node, just drag a link to a
Constant3Vector node and change the R value. We're going to change the
R value to 1 and hit OK and just save the asset. Now we can just add this
material to the cube that we made and also, we're going to add a
directional light to the level. Just place it above
the player start. Now let's make the
cube interactable. Let's select the cube,
and we're going to add a UXT Generic Manipulator
component to it, so this basically will
make this cube interactable and allow hand manipulation
with the hands that we created, so under the UXT Generic
Manipulator details, select Rotate About Grab Point on the One Hand
Rotation Mode property, and now we have an
interactive level.
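If you are building the level from code instead of the editor, the manipulator step amounts to adding the component and picking the rotation mode. This is only a sketch, and the component and enum names are taken from the UX Tools plugin, so verify them against the headers you downloaded.

```cpp
#include "Interactions/UxtGenericManipulatorComponent.h" // path may differ in your UXT version

void SetupCubeManipulation(AActor* CubeActor)
{
    // Same as adding a UXT Generic Manipulator component in the Details panel.
    UUxtGenericManipulatorComponent* Manipulator =
        NewObject<UUxtGenericManipulatorComponent>(CubeActor, TEXT("Manipulator"));
    Manipulator->RegisterComponent();

    // Same as choosing "Rotate About Grab Point" for the One Hand Rotation Mode property.
    Manipulator->OneHandRotationMode = EUxtOneHandRotationMode::RotateAboutGrabPoint;
}
```

Now go ahead and click Play. Let's try the level with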
the simulated hands, so you can move the
right hand by clicking, holding the left alt key
and moving the mouse. The mouse click will
trigger a hand pinch that you can use to grab
the cube and move it around, and you can also use the left
shift key for the left hand and then just use WASD for
controlling the player position, and you can see the far
interaction hand raise. Now let's test it on the
device by connecting our editor with the Holographic Remote
app on the HoloLens device. So first, we need to
enable Holographic Remote, so go to the Project
Settings under the Edit tab, and under Platforms, just search for
Windows Mixed Reality and then toggle the Enable
Remoting For Editor. Now you need to restart the
Editor to apply the change, and once the Editor reopens, you can go ahead and insert
the HoloLens 2 device -- the IP address of
the HoloLens there. Just remember that
the device and the PC, they must be in the same network. Just click Connect and now
expand the Play dropdown above the level window
and click on VR Preview. And then this should be ... Of course, you should see
your house and not mine, and now you can manipulate
the cube with your own hands by using the hand tracking
coming from the HoloLens, and there you go. We have this very basic
Mixed Reality application running on the HoloLens, but it is already very interactive. So I'm very happy with my cube, but things are going to start
to get more interesting. I'm going to pass it onto Carl, and he will show you some very cool UX Tools controls,
more fun than a rotating cube, that you can add to your
project to make it more awesome. Thank you guys. VICTOR: Thanks, Vanessa. All right.
Carl, do we have you on the line? CARL: You do.
I will get my screen share going. VICTOR: Perfect. CARL: Okay.
Are you able to see that? VICTOR: Yeah, looking good.
CARL: Perfect. So Vanessa walked you through how to set up your
Mixed Reality project for the HoloLens in
the Unreal Engine, so now we're going to look
at some of the controls that come with UX Tools
and how to use them. So UX Tools comes with a lot
of controls out of the box. The main ones you'll
be using are buttons. Buttons are the foundation
of most Mixed Reality apps and the primary way you'll drive
interactions with your users. Because of this,
UXT has a wide variety of HoloLens 2 themed
buttons included. These buttons are configurable.
You can set up different sizes, icons, labels and also
different variants. If you don't want the HoloLens
2 theming on your app, we expose all of the logic
as a separate component that you can use to
build your own buttons which we'll talk about
a little bit later. Another common control
would be sliders, so we also have a built-in
HoloLens 2 themed slider that is also customizable,
and similar to buttons, they also have their logic
in a separate component to let you build your
own themed sliders without relying on
customizing ours. Finally, for the sort of
primary interaction methods when it comes to UI,
we have touchable volumes. This one doesn't have a concrete
implementation within UXT. This is just a component
you can add to an actor, configure which parts of the actor you want to trigger an interaction, and then every time
your user touches it, it will fire off an event. Then as sort of helper
controls, two of the big ones you'll see a lot are UI
elements and toggle groups. So a UI element is a component
you can put on an actor that will allow you
to control that actor as part of a UI hierarchy, so what that means is that if you
hide, say, an actor, it will hide itself and all
of its child UI elements, and if you show that actor, it will restore the state
of the child UI elements, and the other components
visible here are toggle groups. A toggle group is essentially
just a group of toggle buttons. It limits only one of them
to be active at a time and also provides
an easier interface for querying which of
the buttons is toggled instead of having to query
them all individually, and when you combine them together, you can make things
like tabbed menus like you can see in that example. We also have native UMG support, so this is a very simple component that you can just
add to your actor, and then any of the UMG
widgets that are on that actor can be driven using the
hand controls from the HoloLens. And the last kind
of somewhat control but is usually used
around UI is tool tips, so we have tool tip actors that
can be static in the world, that can point to a
specific part of an object, or there's also a spawner component that can allow you to spawn them and hide them depending
on what the user is doing, and the tool tips
themselves are UMG, so you can put menus within them or anything you really want. So there are the
controls that we have. How do I actually use them? This is an example from
the lander Blueprint that you can find in
the UX Tools plugin. Here, we can see the lander itself. The slider powers its legs, and we have the event
graph beside it. Using the built-in
actor controls -- so our slider and our buttons are actors -- you
can take a reference. In this case, our lander Blueprint takes a reference
to a slider actor, and then we just
bind a custom event to the slider's on
update value event and then update the landing
legs in that custom event, so they're very
straightforward to use.
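In C++ the same hookup is just a delegate binding. Treat this as a sketch only: the slider component, its OnUpdateValue event, and the delegate signature below are assumptions based on the UX Tools slider, so double-check them in the plugin before relying on them.

```cpp
// In the lander's header: a reference you assign in the editor, mirroring the
// slider reference the Blueprint takes, plus HandleSliderValue declared as a UFUNCTION.
//   UPROPERTY(EditAnywhere) AActor* SliderActor = nullptr;

void ALander::BeginPlay()
{
    Super::BeginPlay();

    if (UUxtPinchSliderComponent* Slider = SliderActor
            ? SliderActor->FindComponentByClass<UUxtPinchSliderComponent>()
            : nullptr)
    {
        // Equivalent of "Bind Event to On Update Value" in the Blueprint.
        Slider->OnUpdateValue.AddDynamic(this, &ALander::HandleSliderValue);
    }
}

void ALander::HandleSliderValue(UUxtPinchSliderComponent* Slider, float NewValue)
{
    // Drive the landing legs from the slider's 0..1 value (hypothetical helper).
    SetLegExtension(NewValue);
}
```

But if you want to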
build your own controls that aren't HoloLens 2 themed,
you can do that, as I said, with just the kind
of logic components. So, this is another example
taken from the UX Tools plugin that you can look up later. This is the SimpleButton Blueprint, which is a very simple
button you can press. It's made up of three components: the button component for the logic; a base static mesh,
which is the gray circle around it; and the moving visual static mesh, which is the pink center, which is the bit of the
button that actually moves. And then you can see also, the various configuration options exposed on the button components. You can tweak your push distances, how far you depress it
before it activates, how fast it releases, its different recovery
speeds and so on and also configuring
your visuals components, so what that is, is you need
to point the button component to the bit of the button
that will actually move so that can create
its collision boxes for listening for
the players' hands and also so it knows
which part of the button to actually press when the
user interacts with the button. To use the events that
come with these components, you can just put them
into your event graph as you would with any
other component events using the plus button on that
component's list of events. And also, you can build
controls using both C++ and Blueprints using
these logic components. There are examples of both of
these in the UX Tools plugin. The Pressable Button actor, which is the HoloLens
2 themed buttons, is an example of a
C++ implementation, and the SimpleButton is
this example we see above. So we have our controls, and we want to put them
together into menus, so I'm just going to show two
of the most common methods of putting together menus, which are near menus
and hand menus. A near menu is better
for a longer lived UI. It kind of floats near the user. It can often be pinned into
the world so that it'll stay, and if the user walks
away, you might have it then catch back up to
them and stay with them. And hand menus are much
better for faster UI, where the user pulls up
their hand with the palm facing the camera or their head. The menu appears beside their
hand, follows their hand. They can press buttons on it, and then they put their hand down, and the menu goes away. So examples of both
of these could be if you had a painting app,
you would use a near menu, say, for your main menu,
selecting new image and so on, and a hand menu for
your color picker because the user just wants
to be able to bring it up, pick the next color,
and go back to painting. So now what we're going to do is
build a small basic hand menu. So you can see a demo of
what we'll be building here. It's a hand menu with a lander
that we can fly up and down. So we're going to
jump into the Engine. This is the level that
Vanessa just built. The only difference is
that I have imported the Lander Blueprint
from the UX Tools game. This was just done using the built-in Unreal
Asset Migration tool, and then I just kind of flattened the folder structure. So we'll start by plopping
a lander into the scene, and we can get
rid of this cube. So we have our lander.
We now want to build our menu. We want to add a
new Blueprint Actor, which can be our menu Actor,
and we can just call it BP_Menu because I'm not very
good at naming things. So we want to turn
this into a hand menu, and the first thing we're going
to want is the ability to show and hide that menu with the hand. And so as I mentioned earlier, UI elements are a good way of
controlling hierarchies of UI, so we're going to create
a UI element on this Actor and set that as the root component and then come over here to
our UI Visibility property and set that to Hide so
that the menu will be hidden when the user starts the game
because you don't want it floating in the middle of nowhere until they bring up their hand. And so that's the
show/hide part done. The next thing we
need is the ability to actually show the menu when
the user raises their hand. And conveniently UX Tools
also has a component that does exactly that, which is the UxtPalmUpConstraint. And so what this does is,
it watches the user's hands, and when they bring up their hand so that their palm
is facing the camera, it activates the constraint
and will bring the Actor that it's attached to over
to the part of the hand that's set up in its configuration. So if we look over at our
configuration options here, we don't need to do much. We want to set this
Require Flat Hand option because that will help
prevent false activations. It means that the user, if they hold their hand up in a
fist, it won't activate. It makes it a more
deliberate movement so they don't get
the menu appearing when they don't want it. And then coming down to
our Hand Constraint itself, this is almost good to go. It's set up to work
with both hands, the ulnar side of the hand,
which is the side along here, and it's set to make the
Actor face the camera. The only thing we'll want to change is to set the Goal Margin to 3 just to move our menu
slightly out from the hand. And there are all the components we will need to have the actual
hand-menu logic be applied. We do need to hook those
components together, though, so that when the constraint
activates, it will show the UI. And the way we do that is
using the PalmUp constraints -- On Constraint Activated event here. So we can just add one of
those to our Event Graph, and then from that,
we want to toggle or enable our UI visibility. So we can come in here
and get our UI visibility. Just note: you don't want to use
the *Rendering* Set Visibility. You want to use the
*UIElement* Set UI Visibility, because the rendering
visibility won't do things like disable collision on the Actor,
which means that the user will still actually be
able to press buttons, even though the UI isn't visible. So we can pull one of
those into our scene, and that is already
set up to do Show. And then we just want
to do the opposite when the constraint is deactivated, so we can take that,
duplicate this, and hook that up and set
this to Hide. Excellent.
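The same wiring in C++ is two delegate bindings. Again this is a sketch: the component classes, the activated/deactivated events, and SetUIVisibility come from UX Tools, and the exact spellings here are assumptions worth checking against the plugin.

```cpp
void ABPMenu::BeginPlay()
{
    Super::BeginPlay();

    UUxtPalmUpConstraintComponent* Constraint =
        FindComponentByClass<UUxtPalmUpConstraintComponent>();
    UIElement = FindComponentByClass<UUxtUIElementComponent>(); // stored as a member

    if (Constraint && UIElement)
    {
        // Show the menu when the palm-up constraint activates, hide it when it deactivates.
        // ShowMenu/HideMenu must be UFUNCTIONs to be bound with AddDynamic.
        Constraint->OnConstraintActivated.AddDynamic(this, &ABPMenu::ShowMenu);
        Constraint->OnConstraintDeactivated.AddDynamic(this, &ABPMenu::HideMenu);
    }
}

void ABPMenu::ShowMenu()
{
    // Use the UI element's visibility, not the rendering visibility,
    // so collision is toggled along with the visuals.
    UIElement->SetUIVisibility(EUxtUIElementVisibility::Show);
}

void ABPMenu::HideMenu()
{
    UIElement->SetUIVisibility(EUxtUIElementVisibility::Hide);
}
```

So that is our full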
menu logic done, so now we want to put
some actual buttons on it. And so the way this works is, as before,
our button Actors are Actors. They're not components,
so we need to reference them from our menu through a
child-Actor component, so we can add a child Actor
to our Blueprint here, and we can just call this UpButton, since it's going to
hold our up button. We will want to
select our Actor class to be the UXT buttons, and you can see the
variety of default HoloLens 2-style buttons
that we have here. We're just going to go
with the regular one. We don't need any toggle
logic or anything like that. So I can come over here and take
a closer look at our button. It doesn't do much to suggest
that it'll move our lander up, so we should go into our
button configuration here, and we can set up a new
icon under our Icon Brush. We can open our editor here and get a view of all
of the icons available, and I think this up
arrow will do nicely. VICTOR: Hey, Carl,
sorry for interrupting. CARL: Yeah. VICTOR: Bit rate is a
little low on the stream. Would you mind go ahead
and restarting real quick and see if we can increase that. CARL: Can do. VICTOR: It has been
the magical solution. Let's see. CARL: Yeah. How's it look, any better? VICTOR: A little bit, yeah.
A little bit. Let's hope it stays that way.
Thank you. CARL: Okay. Yeah.
It just so happens to be the day you need it that the
network goes a bit dodgy. VICTOR: That's how it
goes when we're live. CARL: Okay, so we'll be doing
the last bit again for our second button,
so I won't repeat it, if it was blurry,
so we're going to go ahead and set our label here to
just something like Up, so that's a nice button we
have for moving our lander up. We now want to add a down button, so we will want them to
sit just above each other, so we will move our
button up half its height. We can see here our
button's millimeter size is 32 millimeters. In Unreal units, that's 3.2,
so we want to move it up half of that, which is 1.6,
and for our down button, we can just duplicate that
and name it 'DownButton' and move it to negative 1.6. And we can go through back to
our icon brush editor here, select the down arrow,
and change the label to 'Down'. One thing you'll
notice on these buttons is that they have this kind of seam between them which
doesn't look very good. So what we can do to make
a kind of single slate that contains both of our
buttons, is remove these backplates
from these buttons, which can be done using this Is Plated toggle,
so that's nice and easy. We can just turn them both off and then go back up
here to our Actor and add a UxtBackPlate component, and this is essentially
just a Static Mesh that is the same as
the slate back plates. So we add that,
and we want to double its height so that it covers
both of our buttons, so double 3.2 is 6.4. And I think that looks good,
so that is our UI built. Now we just need to
hook that up to the lander to actually make it move. So we will go back
to our Event Graph, and we're going to start by needing a reference
to our lander. Now, we can just add a new
variable, call it 'Lander', and make it Actor Object Reference
since it doesn't really care whether it's actually
a lander or not. It just needs to be an Actor, and we also want
to make that public so we can configure
it from our viewport. Next, we will want to
get our button state and move the lander based
on the button state. So earlier,
the example with the lander and the slider had the lander taking the reference to the slider. We're just reversing that here,
and also, we were binding the interactions to the
events on the slider, which is also
possible with buttons, but we don't really
care about the moment the button was pressed. We just care about if the button
was pressed for this frame and if we should move the lander. So we're going to ignore BeginPlay and BeginOverlap and
go for Event Tick and just query the
button state every frame. So we're going to start
with our up button so we can get a reference
to our Up Button component. Because this is a
child-Actor component, we need to actually
get the Actor from it, so we want to call Get
Child Actor on that. From that child Actor,
which is now just a generic Actor, we want to get the button Actor from that,
so we're going to cast to UxtPressableButtonActor. And then now that
we have our button Actor instance, we can call Get
State on this button, which will give us the
current button state, be that focused,
pressed, or released. So from here, we can just do a
nice, easy equals and set our check against
the pressed state. Now we have a nice easy bool. We can just branch on if
the button is pressed, and if the button is pressed, we can do an AddActorWorldOffset. And we are going to then want
to manipulate our lander Actor, not our menu Actor. We don't want our menu
flying off into space when we press the button, so we can hook that
up to our lander. We're also going to
move the lander up by one on the z-axis every frame. In a real game,
you'd want to actually do this based on your delta
time, but for this demo, we're just going to do it
by a set amount each frame, and then we just hook
all of these up together, and that is our query
done for the Up Button. We can just take
this, duplicate it, drop it just below, and connect it, instead of to the Up
Button, to the Down Button, and instead of moving the
Actor up on the z-axis, we can move it down. And then we just need to connect
these up to our Tick Event, so I'm just going to
use a sequence for this, connect all these
together nice and messily, and that should work.
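Here is roughly the same Tick logic in C++, with the delta-time scaling Carl mentions instead of a fixed offset per frame. The UX Tools button actor, GetState, and the EUxtButtonState enum are assumptions based on the plugin's pressable button classes, so verify the names before using this.

```cpp
void ABPMenu::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    if (!Lander)
    {
        return;
    }

    const float SpeedUnitsPerSecond = 60.f;

    // Query each button every frame, as the Blueprint does from Event Tick.
    if (IsButtonPressed(UpButton))
    {
        Lander->AddActorWorldOffset(FVector(0.f, 0.f, SpeedUnitsPerSecond * DeltaSeconds));
    }
    if (IsButtonPressed(DownButton))
    {
        Lander->AddActorWorldOffset(FVector(0.f, 0.f, -SpeedUnitsPerSecond * DeltaSeconds));
    }
}

bool ABPMenu::IsButtonPressed(UChildActorComponent* ButtonComponent) const
{
    // Child-actor component -> child Actor -> UXT button actor -> current state.
    AUxtPressableButtonActor* Button = ButtonComponent
        ? Cast<AUxtPressableButtonActor>(ButtonComponent->GetChildActor())
        : nullptr;

    return Button && Button->GetButtonComponent()
        && Button->GetButtonComponent()->GetState() == EUxtButtonState::Pressed;
}
```

So we will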
compile our Blueprint, save it, and go back to our
main editor viewport and add an instance
of this to our scene. I'll just drop it slightly
above the lander for now. From here, we can see our
Lander variable that we exposed, and we want to set that to
our static lander Blueprint that we put into the world, and now we can see if what
we did actually works. So we can hit play to
enter the input simulation, Play In Editor mode.
We now have our hands here. As Vanessa explained earlier, you can use a left shift
to control the left hand, and if you want
flip the hand around so it looks at the camera with a flat palm to
activate the constraint, you can press the home key
while pressing the shift key. You can see we have
our nice menu here, and then we can use our
other hand to come over and press the button, too
far, and our lander flies, and that is our amazing
HoloLens 2 game. So now you have your game. How do you get it onto your device? There are a couple of things
you'll need to set up within the editor
to create a package that will actually go onto
the HoloLens device itself, and they're all in
your Project Settings. The first two are in
your project description. You will get warnings for
these if you forget them, so don't feel like you
need to remember it, but the first thing,
your project needs a name. Without a name, it doesn't know what to call your
app for the HoloLens, so I'm just going to
call it 'MyAwesomeGame', and then you will also
need, for the certificate used to sign your app, to set a
company-distinguished name. If you are using a real cert, this will need to match
the name on that cert, but for this, we're just going
to use a self-signed cert, and you can use self-signed
certs for your side loading and development, so there's no need to really
put anything important here, so I'm just going to put CN
equals 'My Game Company'. It will need that
CN in front of it. I can't remember the
name of the spec, but it needs to fit the distinguished
name specification. And the last thing we need
to do is actually get a cert, so if we go down to our
HoloLens platform settings here, you can see we have
this certificate option. You can set a cert here
if you have a real one, or you can press
the Generate button to just generate a
self-signed cert, and we can just select none.
We don't need a private key, and that's all we need. The app is ready to
go to be packaged. You can also see the other
options here for packaging it, like your tile backgrounds
and your splash screen colors. There is also the option for
building for the emulator or the actual device here, but we're just going
to build for device. So once this is set
up, you can just go in, package the project as you
would any other project, select where you
want to package it, and it will build.
I'm not going to build it here because that's going
to take too long, and I have a version
I built earlier. It's like a cooking show.
Here is one I prepared earlier. I'm going to boot up the HoloLens that's sitting beside me so that we can access
the device portal, so I'm not going to go over how you actually set up a
HoloLens for development. There are docs on
docs.microsoft.com. It mostly just involves
enabling developer mode and also enabling a check
box for the device portal. Then we can go through
to our Views > Apps, which shows us our
apps, installed apps and the running apps,
like your task manager. And if we want to actually
install our new package onto the device, we can go up here to Choose File. This has remembered where
I built the package. But you navigate to
where your package is. You select your appx here.
You press Open. Select an optional package,
you do want to check this, as well. Press Next. Choose the file and then choose
the ARM64 VCLibs to go with it. Select those. Hit Install.
It'll upload that to the device, and once it's finished installing, it will appear in your
apps list on the device, and you can just open it
as you would any other app. And that covers
deploying to the device, so I am going to
hand over to Sajid, who is going to cover
some best practices for when you're building your
HoloLens apps using Unreal. SAJID: Thank you so much.
Let me set up the presentation, and we are going to start. VICTOR: Thank you very much, Carl. SAJID: All right.
Can everyone see the presentation? VICTOR: Looking good.
SAJID: Okay. Excellent. Right. So first of all,
my name is Sajid Farooq. I'm a principal
engineer at Microsoft working in Mixed Reality. I particularly specialize
in Unreal Engine and how it works in
mixed-reality situations. I'm going to be talking
about the best practices for using UE4 in MR. Normally
when I give this presentation, I start with the Project Settings. However, by now, the Project Settings are
very well documented, so I'm going to skim over
them a bit to try to quickly get to the meat of the section,
which is more of those things which you may not
normally hear about. They're still documented
but not things that you would normally hear
about, so the Material editor. How do you use that
more effectively again, especially when it comes to MR?
What are some of the settings that will make Material
creation more performant? Then I will go to debugging
and profiling, again, one of those things that
is not mentioned anywhere, specifically because debugging
and profiling on the HoloLens with Unreal Engine is still so new and
still being developed. And then at the end, I want to talk about
a couple of tips, leave you with some
tips, a couple of things, a couple of patterns, if you will, of how to use Unreal Engine to
make making MR apps quicker, better, more reliable,
and more performant, right? And, of course, if you hear
any noise in the background, I think that's perfectly normal
when we're working from home, so you'll have to excuse me in
advance for that. All right. So let's start with the
recommended Project Settings, so first of all, the usual: We are trying to create something which is going to be on the
mobile, the tablet. It's going to be scalable 3D or 2D because it's a mobile device, and when you're creating something, we want to make sure
that we are using the Forward Shading renderer
rather than the deferred, and again, don't sort of think
of this as a binary choice where Forward Shading is always faster and deferred rendering
is always slower. It's not really like that. Forward Shading, by default,
is not necessarily faster. What Forward Shading
really does is, it provides you with
more flexibility, and that's really the idea that you can turn features
on and off individually. That's really what makes
the Forward Shading much more appealing to use when you're using it
for MR development because you can turn
off all of the features that you wouldn't otherwise use,
and so that's why we recommend going with the Forward
Shading renderer. So if you see,
now there are some features which you can then turn on and off, and again, the basic idea
is, you should turn off all of the features that
you don't really need. If you're not going to be
using any sort of fogging or vertex fogging,
just turn it off. If you're not going to be using
shadows, turn it off. Set an upper limit on the
number of cascaded-shadow maps. Generally,
shadowing is obviously slow, so avoid it if you can. Then there's the
thing of occlusion. Normally when you're
talking about a AAA game, if you're coming from that end, the first thing that
comes to your mind is, "Let's turn on occlusion. It's going to make my game faster", because occlusion basically
means if something is hidden behind something else,
you are not going to see it, and so the computer does
some extra calculations, computations,
in order to figure out, "What are the things
that I cannot see?" And then it doesn't draw them. The problem is, in AR,
really, there is a limit to how many virtual objects
you can display anyway, and generally speaking,
they're not that many that they're going to
be occluded anyway. Most likely, they're going
to be right in front of you, and so hardware occlusion,
generally on a limited GPU, on a mobile device is slow anyway, so in general, you'll see that
if you do turn on occlusion, especially if you don't have
things that can be occluded, it's actually going to
make your performance dip rather than improve, so look into turning
off hardware occlusion. An alternative if you
must use occlusion is to enable software
occlusion, right, and so that offloads some
of that work from your GPU, and again, but that's not really
going to make a big difference if you are specifically
GPU bound anyway, so just keep that in mind.
Another thing, as I said, some of those things which you
don't need, don't use them. One of the specific things
that I want to call out is translucency. It looks fun. It's interesting because translucency
is one of those things that probably we associate
most with holograms, right? They're translucent, ephemeral things that
you want to touch. However, unfortunately,
translucency is very slow, and it should be
avoided if you can. Again, the depth stencil,
if you turn it on ... So if you look at,
for example, this, you'll see there's a
custom depth stencil pass. Generally speaking,
I haven't really seen a use case, a strong use case
for needing to use a custom depth stencil pass for MR. And usually,
there are other ways of doing that, so if you don't need it,
which you most likely don't, then turn it off because a custom depth stencil
will require an extra pass. And even if it isn't
doing anything, it is going to slow
down your performance. Same thing with anything which is like a
post-processing effect. Again, if you're trying to do MR, if you're trying to
do augmented reality, one of the nicest things
you're thinking of are some of these
smoke-screen effects or particle effects or
whatever else it is. Any of those sort of effects, especially full-screen
post-processing effects, they should be kept to a minimum because they require an extra pass, so again,
if you absolutely need them, great. Otherwise,
you should just get rid of them. Now, one of the things that
you should be doing is, you should always be turning
on Mobile Multi-View, Instanced Stereo. Again, without going into the
details of what these are, we have two eyes,
and the way that we see, there's a lot of overlap
between what we see in stereo between the two eyes, and so what your
holograms normally, your devices have to do in
order to create VR or AR, is render twice,
differently for the different eye to provide a proper
three-dimensional effect so that your brain
is interpreting it as it's coming from the real world and different eyes are
seeing different things. However, most of the time,
as I said, there's a big overlap between what you're seeing
between two of your eyes, and so enabling Mobile
Multi-View and Instanced Stereo allows Unreal Engine to reuse
some of that information, and it makes a huge difference, so if you have something
that can be instanced, it can be reused, then Unreal Engine will
be able to perform much, much faster if you have
turned on Instanced Stereo and Mobile Multi-View. And then a little tidbit,
again, on the Project Settings, again, not super important, not particularly
related to performance, but if you're running into
lots of shader compilation ... you're waiting for your shader
to compile a lot of times ... there are certain
things that you can do, and one of those is Mobile
Shader Permutation Reduction. So if you know that you will
not have a light that will move independently of the camera ...
independently of the camera, then you can actually just
safely set this value to zero. And what that does is, it allows Unreal to cull away several
shader permutations, and that speeds up your
shader compilation a lot, so that will benefit. Now arguably the more
interesting part, so after we set up the project,
what I find is that there are a couple of things which
most people don't know, and I see a couple of
mistakes people make and especially when
it comes to Materials, particularly because Unreal Engine doesn't allow you to
create Materials easily ... it does, but not easily ...
directly using shader code, and so you will use the Material
editor, which is great, but it is important to understand what the Material editor is
actually doing behind the scenes in order to understand how you actually want
to optimize things. So the first thing is, and the most basic
thing that I see is, when you are doing things
which require UV computation, this is the sort of thing
that I see happening very, very commonly,
so there's a texture coordinate. You want to have it tile,
and so you'll do something like, you'll have a
constant four-by-four, and then you'll
multiply the text coord by that number,
pass it into the UV values, and then pass that into the normal or the base color or the specular. And again, this tip that I'm
giving is equally applicable to general non-VR-related
stuff as well. Now, what this does really is
that, remember on the GPU, there are two primary passes,
if you will, or modules. So there's the vertex shader, and then there is the
fragment shader, right? And so the vertex shader only
happens for every single vertex, and the fragment shader
happens for every single pixel. Now, generally
speaking, on the screen, especially for MR/VR,
if you have something like a cube, you have only eight vertices, but the number of pixels you
have, they're huge. And so ideally, a lot of calculations
like this one, you want to move them
from the pixel shader or the fragment shader
to the vertex shader. If you do it this way, if you do it the way that
you are normally doing, you are going to be computing
this per every single pixel, quite literally,
and so you don't want to do that. So an alternative way that
Unreal Engine provides is what are called customized UVs. So if you look at, for example,
I have enabled customized UVs here, so you can see
the regular base color, metallic, specular and so on, but you can see there's
these two extra pins that are normally not there. I'll show you how to get
those pins in a second, but you see that they are
there, right? And so what you want
to ideally do is, you want to take that texcoord and you want to multiply,
do whatever you're doing, but then you want to move
it into the customized UV. And that customized UV
is then used in the UV that you have here so that
when you then change this, everything else is going to be actually using the
customized UVs instead. And again, the benefit of that is that that goes through
the vertex shader, so it's going to be
much, much, much faster. Now, to show you how
to actually do this. What you want to do is,
your Num Customized UVs value that you can actually
see over here, under your Material settings, if you set that to more than zero, then you will see
it pop up over here. So if you set it to
one, you'll see one. If you set it to
two, you'll see two. There's a limit, I
believe, and then again, that is the value you want to actually
use, so a little tidbit. I have not seen too
many people use it. Most people just go ahead
and use this technique, which I strongly discourage using. Now, another thing is that
if you look at your materials and so on, you'll see that your light
map computation again can be dialed down. So your light map has
a lot of information. Other than your
flat-diffuse information, it also has directional
information. Now, again,
because you have the real world, that directional information
is probably not as noticeable, but it is taking up some
of your computation time, so you can actually turn that off. The other thing is that
you don't necessarily need full precision, right?
Full is 32 bits. You can go ahead and turn that
off to give you single precision which, again, most of the time
you're not even going to notice. Again, if you do notice,
then you can move it up. If you're working on medical
visualization, medical cases, sometimes you'll need that, but most of the time you can
get rid of it, and again, mobile GPUs by default will
have to do an instruction which will take twice
the amount of time if you're using full precision,
so just keep that in mind. Another main important
thing is shader switches. So if you are trying
to, in a shader, trying to turn something on and
off, you have a switch, and you can do things like a
Boolean, right? Now, what you will notice that
Unreal Engine has a prefix. It'll stay static
Boolean, static switch. So every time you
use some switches, you should prefer to
use static parameters instead of the nonstatic
ones, right? The dynamic ones. Why? Because when you are
actually running, even before running,
at compile time, the compiler, Unreal Engine, can create a permutation
of that particular shader with that particular
Boolean baked in. So it's not a dynamic value.
It's literally as if it creates two different versions
of that shader, one with the Boolean on,
one with the Boolean off. Right? And so depending on
what the static Boolean is set, it will just go through that without having to do a
conditional at runtime, so that will be much, much faster. The downside,
as I mentioned, right, is that it means that it
has to create a permutation. And so if you are worried about how many permutations
and shader compilations, so then your static parameters
are going to balloon that up. So again, in general, reduce the
number of parameters overall, but if you have to use parameters, then go for static parameters
rather than dynamic parameters. So I guess that's
sort of a balance, so reduce the number of parameters. Try to use static parameters
rather than dynamic parameters, unless you don't really worry
about shader compilation. All right. And then there's
material instances. So there are two types
of material instances. So if you have a material
instance, and ... Which is great.
So you have a base material, and then you apply your
different material instances, and what you want to do is, you want to just
vary the parameters so that you can use the same base. Remember that most likely
what you're looking for is a material instance
constant, MIC, right? Rather than an MID, which is a material
instance dynamic because most of the time when
you set that parameter value to whatever else it is, you're not really going to be
animating it most of the time. And so if you're not going to
be modifying it dynamically, then what you want to do is, you want to turn it into a
material instance constant. Now, if you're worried,
you're thinking, "How do I know if it's
a material instance constant that I'm using, or if I'm by mistake using a
material instance dynamic?" it's pretty easy to tell. If you had created
the material instance via the content browser by
right-clicking and saying, "Create Material Instance",
that is, by default, a material instance constant.
It is a constant. But if you actually went ahead
and you created it using code, then you will know if you
created a constant or a dynamic. Most likely, if you're creating
something via code, then you are creating a
material instance dynamic.
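For reference, this is the code path he is describing; the base material pointer, mesh component, and parameter name here are hypothetical, and in real code the members would be UPROPERTYs on the actor.

```cpp
#include "Materials/MaterialInstanceDynamic.h"

void AMyActor::BeginPlay()
{
    Super::BeginPlay();

    // Creating the instance in code gives you a Material Instance Dynamic (MID)...
    SnowMID = UMaterialInstanceDynamic::Create(BaseMaterial, this);
    MeshComponent->SetMaterial(0, SnowMID);
}

void AMyActor::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // ...which you only need if a parameter really changes at runtime.
    SnowAmount = FMath::Min(SnowAmount + 0.05f * DeltaSeconds, 1.f);
    SnowMID->SetScalarParameterValue(TEXT("SnowAmount"), SnowAmount);
}
```

Now, one of the things, again,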
if you do want to animate, so we talked about this, we said, "What if you want to animate? What if you want to do things
like have something get covered in snow over time or wave using
some vertex manipulation?" then that's fine. Then you will need a
material instance dynamic. That's perfectly logical. You may be able to actually even
use a material instance constant if you just need to
turn things on and off, but if you need to actually
do dynamic changes, then you might need a MID, right? However, if you are using a
material instance dynamic, even in general, I strongly suggest that you look into material
parameter collections. Sorry. This was one of those things
that I was talking about, that I don't really
have control over. So the material
parameter collections basically are an asset, and they store an arbitrary set
of scalar and vector parameters, which can be referenced
in any material. Right? So normally what you
would need to do is, you would need to set each
individual parameter value in your material instance,
and that is slow, right? If you're doing it individually, and so basically Unreal Engine
also has to do it individually. It doesn't know that these
things can be grouped together and that they can
be cached if needed. And so the material parameter
collection, in essence, is a way of forcing
that caching to say, "Okay, these are all together".
If I want to change something, I don't want to have to go
one by one and change them. I can take all of this
as a collection one time and then change it. Right?
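As a concrete example, updating a Material Parameter Collection from code takes a single call, and every material referencing the collection picks up the new value at once. The collection asset and parameter name below are hypothetical.

```cpp
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void AMyActor::UpdateSnowAmount(float Amount)
{
    // SnowCollection is a UMaterialParameterCollection* UPROPERTY pointing at the
    // collection asset. One call updates the value for every material that uses it,
    // instead of setting the parameter on each material instance individually.
    UKismetMaterialLibrary::SetScalarParameterValue(
        this, SnowCollection, TEXT("SnowAmount"), Amount);
}
```

And, of course, try to use things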
like animation paths and the timeline or the editor in order to actually do animation if you actually do need to do it. So that covers the materials, some of the things that
I think are important vis-a-vis materials. Now I want to quickly talk
about profiling and debugging. Again, this is relatively new because a lot of tools
are still being worked on, and so the primary tool, actually, to do profiling in Unreal Engine, if you want to figure out
how fast something is running or, more importantly,
where you're seeing slowdowns, you want to use Unreal Insights,
which is the built-in tool. Unfortunately, until 4.25,
Unreal Insights was not working. It is now available in Unreal 4.26, which was literally just released, so if you find any bugs,
please do let us know whether the release
version of Unreal Insights is actually working or not. So Unreal Insights
is an excellent tool. It basically provides you with
detailed timing information on both the CPU and the GPU. It tells you what the CPU is doing and when and what the
GPU is doing and when. And you can arbitrarily
record your own bookmarks to do time stamps.
You could say, "I want to know how long it is taking to
do exactly this and this", so you can put in a
bookmark, and you can see, and it'll show you that inside
the Unreal Insights window, and then you can see.
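If you want to drop your own markers in from code, the engine's trace macros are the easiest way. This is a minimal sketch, assuming you run the app with tracing enabled (for example via a -trace= command-line argument) so the events are actually captured.

```cpp
#include "ProfilingDebugging/CpuProfilerTrace.h"
#include "ProfilingDebugging/MiscTrace.h"

void AMyActor::RebuildHologram()
{
    // Shows up as a named scope on the CPU track in Unreal Insights.
    TRACE_CPUPROFILER_EVENT_SCOPE(AMyActor_RebuildHologram);

    // Shows up as a bookmark you can jump to on the timeline.
    TRACE_BOOKMARK(TEXT("RebuildHologram started"));

    // ... the work you want to measure ...
}
```

As you can see,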
it's a bit small here, but you can see how it's
actually telling you how much time each of these
different threads is taking. And if you look at this side, you'll see that it provides
you the full-blown call stack, and this one can actually show
you both the CPU and the GPU, and as you can see, it can show
you the integers and the floats. You can see how much,
what the count is, how much memory it
is taking and so on, so excellent, excellent thing.
If you want to go a bit beyond, if you need to because
Unreal Insights is not giving you
what you need to do, there are a couple of things which Unreal Insights
won't give you. On your device, on the HoloLens, there is a tool which allows
you to create ETL traces. So you can actually
create ETL traces, which are basically traces
from within Windows, and those traces provide
you with far more.... They're operating-system level. They provide you with far
more detailed information, and you can view those
ETL traces with tools like either GPUView,
which is over here, so you can actually
see the hardware queue, even, being presented,
or you can use Windows Performance Analyzer, WPA,
and both of these are free. You can download them.
WPA is part of the SDK, which you can
download, Windows SDK. GPUView, you can find online as
well, download it, and it will provide you
with far more information. And if you want to
do some debugging, then there are some
excellent tools. Pix, again,
part of the Windows SDK, but you can find it online as well. Pix will provide you information
to debug GPU commands. It works, by default,
with the HoloLens. You do have to keep in mind
that in 4.25 and before, it is primarily using DX11. The Unreal Engine
implementation is using DX11, so you'll need to use the
DX11 compatibility mode, and then on 4.26 onwards,
I believe, in OpenXR you can choose DirectX 12,
so that should be fine. And Pix, by default, is DirectX 12. You can also use ETL
Traces via GPUView or WPA for a complete view,
as I said earlier. Just remember to use
your debug symbols, so the PDB files,
you will need them. Normally, you ignore them, but you will need those PDB files in order to be able
to map the ETL Traces to the correct function
calls within Unreal Engine. Otherwise,
it's going to be very difficult. Another alternative which I've
sort of put as an alternative which a lot of people
love to use is RenderDoc. The reason I put it
as an alternative rather than one of the main things is because right now the plugin that comes for RenderDoc
inside of Unreal Engine only works for the
desktop, by default, and you can make it
work for the HoloLens, but it does require
jumping over some hoops which, again, for beginners
or even not for beginners, but in general it's a bit harder. So still,
I think even the desktop version is still very good for debugging because the pipeline is
still going to be the same, even if you're not
interested in the timing. The timing is going to be off because you're running
it on the desktop, but it still gives
you a very good idea of where your pipeline
is bugging out, where it's making mistakes.
All right. So now I'm going to move onto
the last section of my talk, which is basically some tips. So first of all, I say this often. I'll say it again. Use the built-in tools
as much as possible rather than reinventing the wheel. So if there's something
that you want to do and you have to write a lot of
code, remember, Unreal Engine is a
full-blown engine. Most of the time,
it'll have something there which can do what you're
trying to look for, and most often these
will be more optimized than what you would handcraft.
So to give you an idea, a lot of people don't know about
the Variant Manager, right? I'll talk about what
the Variant Manager is. A lot of people don't know
about the spline component. Now, I'm not talking about
the spline mesh component, which allows you to create
meshes from a spline. That's a different thing. I'm talking about
the spline component, just a spline with a curve which you can use as an input
for animation and stuff. And then there's a
timeline component as well, which a lot of
people do know about, so I won't talk about it too much, but again, the spline component
and the timeline component together work really
well hand in hand. So the Variant Manager: the Variant Manager
is basically a tool. A very common use case in AR
is that you want to see
different variations of the same thing. You might want to look at
how your car might look like with a certain color or with certain wheels
or certain customizations or, for example,
you're trying to buy furniture and you're trying to
buy, say, some appliances and you're looking
at a refrigerator and trying to swap in and
swap out different types. Interestingly, luckily, that can be done without
any coding whatsoever because Unreal Engine
has something called the Variant Manager which
means that the artist can preassemble a
lot of these assets, and you can even go
ahead and actually ... So let's say for example we
have this bike, and we have red, and then it has a front rim.
It has a gas tank. It has all of these
different things, and then you can
click the yellow one, and it'll have a different
set of actors, for example. And even those actors,
within the actors, you can have a different
set of properties, and then even those properties can be manipulated via
values or functions, even events so that when I go from, say, this one to this one,
it could even call a function to allow me to
prepare this properly. So all of this is super
performant in the sense that the Variant Manager
has a lot of information about what you're trying
to do at compile time because you actually set up
all of this at compile time and potentially optimize a lot of
it, right? But if you are doing
all of this by hand, if you are writing
these large Blueprints to allow you to switch, you're most likely
reinventing the wheel. Now, there are cases
when you don't want to use the Variant Manager. So you should use
the Variant Manager if your project has many
assets with different variants, if you just need to do simple in-place property swaps, or if your artist wants an easy way to use the GUI for configuring asset variants, which, again, Unreal provides. However, if you need to call complex functions and do a lot more than that, then generally speaking the Variant Manager is not really what you're looking for, because the performance benefit is probably not going to be as pronounced. You also may not necessarily want to use it if you have a lot of metadata for each variant, where it's not just properties but a lot of overhead of actor data that needs to be processed every time you switch to something. So again, Variant Manager, use it.
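To make the idea concrete, here is a minimal C++ sketch, assuming a Level Variant Sets asset has already been authored in the editor and placed in the level as a LevelVariantSetsActor; the variant set name "Paint", the variant names, and the helper function are hypothetical, and the VariantManagerContent module would need to be a dependency of your project:

```cpp
// Hypothetical helper, not part of UXT or the engine samples. Assumes the
// VariantManagerContent module is a dependency and that the artist has already
// authored a Level Variant Sets asset whose LevelVariantSetsActor sits in the
// level. "Paint" and the variant names must match that editor setup.
#include "Kismet/GameplayStatics.h"
#include "LevelVariantSetsActor.h"

void SwitchBikePaint(UWorld* World, const FString& VariantName /* e.g. "Yellow" */)
{
    // Find the runtime actor the Variant Manager works through.
    AActor* Found = UGameplayStatics::GetActorOfClass(World, ALevelVariantSetsActor::StaticClass());
    if (ALevelVariantSetsActor* Variants = Cast<ALevelVariantSetsActor>(Found))
    {
        // Activate the preassembled variant: all of the actor and property
        // swaps the artist configured are applied in one call.
        Variants->SwitchOnVariantByName(TEXT("Paint"), VariantName);
    }
}
```

The same switch should also be reachable from Blueprints, for example off a UX Tools button press, without any extra code.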
This goes hand in hand with Datasmith, which, again, is another thing I haven't mentioned here, but I strongly recommend you look at. So Datasmith is a great way of importing different things from different tools, like, say, Maya or 3D Studio Max, without having to go into your engine and then change the properties, because it's sort of a toll-free bridge.
The spline component, as I was talking about, basically allows you to visually create splines. Again, a fantastic way
of creating animation without really doing anything.
So you create an animation where you have something
that flies in or flies out or just like the lander demo you
saw where that was interactive, so you click it,
and it goes up and down. But if you knew that
it would always go up and down according to a
particular complex path, you could use that
lander demo modified so that the button
click would actually use the spline component as a
path and allow your lander to then move from the start to
the end of that path, right? And again,
the spline component can be created within the engine
very, very easily. So you could use it as an
interpolation value, as well. You can do things like changing
the smoothness of movement or the speed and
everything visually, so it's very, very artist friendly, so strongly recommended
that you use it. I mention this because I've seen some people go ahead and use Blueprints to sort of handcraft all of this and do some funky math in order to figure out how to get something to move, and in the end, it turns out they're just doing a move from A to B. Now, if you're thinking, "This doesn't work a lot of the time because you don't know where
A and B are going to be", remember, this is
very, very flexible. So you could have a spline
component where the starting and the ending are
driven by another actor so that it's the
child of that actor, and you can move it
wherever you want. And you can update it
dynamically as well, so do use these things.
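As a rough sketch of that lander-style setup, assuming a spline authored visually in the editor; the class, component, and property names here are hypothetical, and a Timeline component could replace the linear progress with an eased curve:

```cpp
// LanderActor.h -- a rough sketch only; class, component, and property names
// are hypothetical and not part of UXT or the lander demo shown earlier.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SplineComponent.h"
#include "Components/StaticMeshComponent.h"
#include "LanderActor.generated.h"

UCLASS()
class ALanderActor : public AActor
{
    GENERATED_BODY()

public:
    ALanderActor()
    {
        PrimaryActorTick.bCanEverTick = true;

        // The spline is authored visually in the level; the mesh travels along it.
        PathSpline = CreateDefaultSubobject<USplineComponent>(TEXT("PathSpline"));
        RootComponent = PathSpline;

        LanderMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("LanderMesh"));
        LanderMesh->SetupAttachment(PathSpline);
    }

    // Seconds to travel from the start of the spline to the end.
    UPROPERTY(EditAnywhere, Category = "Path")
    float TravelDuration = 3.0f;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        if (TravelDuration <= 0.0f)
        {
            return;
        }

        // Linear progress along the spline; a Timeline component could supply
        // an eased alpha here instead.
        Elapsed = FMath::Min(Elapsed + DeltaSeconds, TravelDuration);
        const float Distance = (Elapsed / TravelDuration) * PathSpline->GetSplineLength();
        LanderMesh->SetWorldLocation(
            PathSpline->GetLocationAtDistanceAlongSpline(Distance, ESplineCoordinateSpace::World));
    }

private:
    UPROPERTY(VisibleAnywhere, Category = "Path")
    USplineComponent* PathSpline = nullptr;

    UPROPERTY(VisibleAnywhere, Category = "Path")
    UStaticMeshComponent* LanderMesh = nullptr;

    float Elapsed = 0.0f;
};
```

Because the spline is just a component, its points can be edited in the level, parented to another actor, or updated at runtime, exactly as described above.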
The other tip is to do as much work at compile time or start-up caching as possible. A lot of people don't do that. They do things like a find or a get a lot of the time, and that can be very slow, so do not do that. There are things like, for
example, the OnBeginPlay, right? And remember,
that's part of the level Blueprint. I see people shying away
from the level Blueprint. I see people using the
actor Blueprint a lot, but I see very few people
using the level Blueprint. I think it's an opportunity missed. You can use a level Blueprint, especially its OnBeginPlay,
to cache a lot of the stuff that could then be used
by the actor Blueprint. It's a good catchall for that
entire level that you can use, so do use it.
Don't shy away from that. Yes, it's a global, but it can be used appropriately
to speed things up.
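A minimal sketch of that caching idea in C++, using a hypothetical level script class (the native parent of the Level Blueprint); the actor class being searched for and the member names are just placeholders:

```cpp
// MyLevelScript.h -- a hypothetical sketch; the class and member names are made
// up, and your Level Blueprint would need to be reparented to this class.
#pragma once

#include "CoreMinimal.h"
#include "Engine/LevelScriptActor.h"
#include "Engine/StaticMeshActor.h"
#include "Kismet/GameplayStatics.h"
#include "MyLevelScript.generated.h"

UCLASS()
class AMyLevelScript : public ALevelScriptActor
{
    GENERATED_BODY()

public:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // Expensive world search: do it once at start-up, not per frame or per interaction.
        UGameplayStatics::GetAllActorsOfClass(
            GetWorld(), AStaticMeshActor::StaticClass(), CachedHolograms);
    }

    // Actor Blueprints (or other code) read the cached list instead of searching again.
    const TArray<AActor*>& GetCachedHolograms() const { return CachedHolograms; }

private:
    UPROPERTY()
    TArray<AActor*> CachedHolograms;
};
```

The point is simply that the expensive lookup runs once at start-up and everything else reuses the cached result.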
And then another thing I see people sort of not using as much is the construction script. A construction script is
really, really useful. It can be used to do things like
run code to verify stuff, right? To say,
"Is this actually going to work? Is this going to look okay?
Is this in the right location?" And interestingly because
it's a construction script, it can do all of that
even at compile time, so that if you've written some code which relies on placing certain
things beside other things or at a correct location, putting it in a construction script allows you to run that in the editor, and then as you move stuff around, it'll update along with it, and it will always give
you immediate feedback on whether you are doing
something right or not.
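For reference, the C++ counterpart of the Blueprint construction script is AActor::OnConstruction, which re-runs in the editor whenever the actor is placed or moved; here is a minimal sketch with a made-up placement check:

```cpp
// PlacementCheckActor.h -- hypothetical example; the floor-plane rule is made up.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "PlacementCheckActor.generated.h"

UCLASS()
class APlacementCheckActor : public AActor
{
    GENERATED_BODY()

public:
    // Re-runs in the editor whenever the actor is placed, moved, or its
    // Blueprint is recompiled, so the check below gives immediate feedback.
    virtual void OnConstruction(const FTransform& Transform) override
    {
        Super::OnConstruction(Transform);

        if (Transform.GetLocation().Z < 0.0f)
        {
            UE_LOG(LogTemp, Warning,
                   TEXT("%s sits below the floor plane and may be hidden."), *GetName());
        }
    }
};
```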
All right. So that's my 30 minutes up, so thank you very much, and I am now going to
stop my presentation and basically pass it
on again to Victor. VICTOR: Thank you very much, Sajid. Plenty of good knowledge there
that doesn't only apply to HoloLens development,
but also in general when it comes to your
projects in Unreal Engine. I did link a tips-and-tricks video to the Variant Manager in the chat. I'll go ahead and paste that again in just a little bit here as well, and a lot of the stuff that
Sajid was talking about, if you're interested in
diving in a little bit deeper, like Unreal Insights, et cetera, there's plenty of documentation
on docs.unrealengine.com. Great. Let's see. I think Luis ... LUIS: Yes. Heard my name.
VICTOR: I was just making sure. Was there anything else
on the call that you ... Sorry. It's been a long day. I can't remember if you had
a presentation or not for us. LUIS: I don't mind. No, I don't. I'm just here to
answer any questions that my colleagues can't answer and I know how to answer,
which won't be a lot. VICTOR: Perfect.
We haven't received that many questions in that regard. I think, honestly, all your presentations were at such a great pace and so on point that they were pretty clear, and we haven't received any questions in regards to them. I also think people
out there are excited about being able to
receive a HoloLens 2 which, at the moment, is sort of just
becoming available for us, which is exciting. I, thankfully,
have been able to get one myself, and I'm excited about
diving into these topics. It's a whole new world, right?
Mixed Reality. A lot of us designers haven't
been able to experiment with this yet, right? We've done a lot of VR.
There's hand tracking involved, but the opportunity to actually
be able to see the real world and then augment it
with holograms is new, so there's a lot to experience
and experiment with out there. So maybe a couple
questions for you guys. As you've been playing
around with the technology, what are some of the things
that stand out to you most, perhaps something that
you didn't realize was going to be awesome
or strange or weird when it comes to actually
designing for the device but also using it? LUIS: I think for me, we've been
working with HoloLens for a while already, internally, so it is not as fresh as it is for other people; you kind of get used to the awesomeness. I have to remember when
I started with HoloLens the things that really struck me, and I think one of the
things that I really ... It's very silly, but one of the
things that impressed me a lot is the way that, when you have a ray shooting out of your hand, that ray hits the furniture in your room. That connection between what
the computer is generating and what exists in real life, that was the first time
that I could feel it, and that really blew my mind. It's very silly. It doesn't have
that much usefulness, let's say, except connecting both the
computer with the real world, and that was a moment
of inspiration for me. SAJID: For me, it's interesting. I've been doing graphics
for a very long time, and most of the time
you do things ... [CHILDREN'S VOICES
IN BACKGROUND] Sorry. This happens all the time.
Sorry about that. It's okay. All right.
So I was just saying that's what's interesting
is that you ... So I've been doing graphics
for a very long time, and I've seen all
sorts of experiments that we do with image processing, overlaying things on
top of each other, and most of the time it's
two-dimensional information because it's coming from a
camera that you're used to. And what struck me the first time that I used the HoloLens was,
"Oh, wait. It has depth". It was mind-blowing
because I was always used to doing things like tricks, right, to get something to appear
behind or to occlude, and you totally forget
that it's amazing that you have a
device that is mobile, that is quite
literally on your head, and it is doing all
of that in real time at 60 frames per second
along with the depth and then knowing what's in
front and what's behind it. And it opens up so much
in terms of possibilities because you don't really
realize until you start working with traditional tools
which don't have the depth of how much you
actually depend on it. So even doing things
like, for example, you were talking
about virtual surgery. We've seen people who are
overlaying tools on a body in order to teach other people,
other doctors about surgery, for example, students. But then if you don't
have that depth, it is very, very apparent, and then, as I said,
with the HoloLens it just works because you have it,
and it recognizes that. And we've seen people in our team, like Nick Klingensmith, would experiment with
even doing things like take information
from the real world in terms of lighting and
then integrating an object so that when you look at
it or when you rotate it, it's not just lit differently
from the rest of the world but that it's lighting actually
matches with the real world, and that's the information
that it's taking from. But again, as I said, to me,
it's just super fascinating that something can do that
at 60 frames per second, so for me that was mind-blowing. VICTOR: I remember the first time
I tried the "Mission AR" demo that was presented at SIGGRAPH, I almost felt like my
brain fills in the gap, and I feel like I'm
actually touching something when I'm grabbing and
interacting with the menus, even though it's not there. Right? And it's interesting
how your primitive brain is just sort of filling that in because that's what you're used to, and I think that comes
because of the fact that the depth is so real that
your reptile brain just goes, "Well, this is supposed to be here, and you should feel something when you're interacting with it". SAJID: Absolutely. Yeah. Julia Schwartz did a
presentation on that, and that was actually
one of our goals, to actually make it so that the
interactions that you're doing, they translate over from
what you would normally do, that it is just expected so
that you can actually grab stuff and do it rather than having
to switch your brain and say, "Oh, I'm in AR, so I have to
interact with it differently". And that was one of the
goals that I think, yeah, I think she said it
well that our brain interpolates a lot
of that information and because it's running
at such a fast rate and because there's
all of that information that our brain normally
requires anyway, we just fuse it so
that even though ... You're right. Even though
you're just holding something which doesn't exist,
you sort of feel, "I have something in my hand", even though you are not
really holding something. It's interesting, that feeling. LUIS: I think articulated hand
tracking was a game changer. It's something that when you
were wearing the HoloLens 1, you felt like instinctively
you wanted to touch things. You really felt like
it should be there, and I think having it in HoloLens 2 really changes the experience and makes it so
much more intuitive. It's brilliant. And I am looking forward to seeing hand tracking being used in other places as well. A big part of what we
do in UXT is create UI that can be driven by hands, which is something
that's relatively new, and it's really interesting, and I love the way I
can give HoloLens 2 to someone for the first time, and they're going to know
how to navigate the menus. They don't need any
sort of instructions. They can just go and do what they think they have to do. They just go poke
something, and it reacts, and I think that's a really
good part of the experience. VICTOR: That's a dream for
any UX designer, right? That people just understand
what they're supposed to do and you do not have to explain it. So, something that's been rather tricky when it comes to, I should say, 6DoF interaction design coming from motion controllers, or even 3DoF, is that when you're not just using your hands, letting the users know which buttons do what can be kind of tricky, especially if the person isn't used to the particular device that they are using, in this case, motion controllers. Whereas with your hands, that's a button, I click the button. It's very intuitive, and not a lot of
explanation is required. It's super exciting. Well, we are getting a little
bit short on time here. Oh, never mind.
There was actually a question here. Oh, right. So, Sajid,
you mentioned the opportunity to be able to gather the
real-world lighting data. Is there a function available as part of the plugin to get that information
from the HoloLens? SAJID: Not as far as I'm aware.
So I will provide that link. I'll have to find it,
but it has been done by, as I said,
Nick Klingensmith in our team, and I can't remember if
it's part of Stereo Kit, but basically he has a
tool called Stereo Kit which basically
allows you to create OpenXR software for the
HoloLens and any other device. It works with the Oculus and
all of those other devices as well very, very easily in C#. So Nick also created, as I
said, that lighting information. I will look for that and
then post it somewhere, but, yes, you can achieve that but as far as I'm
aware not natively. Luis, would you know? I don't think it's natively part of our tool kit, right? Yeah. No. LUIS: No, I don't think so. SAJID: Yeah, so not natively part
of the tool kit, but it is information
that you can obtain. The libraries are open-source,
and they are available, so I will try to
find a link to that. VICTOR: For any
future conversations in regards to the livestream, you can go ahead and go to
the forum announcement post if you have any questions or want to talk about the
specifics of this livestream. I pasted the link
in chat right now, and it will be in the
YouTube description as well. It's exciting. [Indistinct] asked, "So is there
a realtime-generated cube map for reflections and ambiance
for objects in the holo world?" SAJID: Again,
not natively as far as I'm aware. Again, Luis can correct me if I'm
wrong, but not natively, and these are, again, things which are
certainly possible, certainly doable, but they
don't fall under the native UXT or the native experience
that we need to provide, and there are lots of features. Everybody will have their
own idea of what they want and what they need. What we want to do in
general, and again, Luis, you can correct me,
but what we want to do is provide a very
performant small subset of what we think will be the
core of what everybody needs, and then all of these other
things are certainly things that people can add and
download and package. There's several examples out
there, several tool kits, a lot of them by us at Microsoft, some of them even by people
outside of Microsoft, but they do exist.
And more to your point, Victor, yes, I will keep that in
mind that there's a forum, and I'll try to post
them in the forums. I think that'll be
helpful for people who are even tuning in
later and watching this so that they can go to the
forum and find those links. VICTOR: Yes,
for those of you tuning in, the VOD of this
presentation from Microsoft will also be available on YouTube within the next 24 hours.
That's great. Is there anything else y'all
would like to leave chat with before we end the stream today? Go play with the Mixed
Reality UX Tools, and remember that the
special modifier ... Oh, I can't even talk right now. Special modifier category,
"Is This Real Life?", Microsoft is generally
sponsoring a HoloLens device as well as an Azure, sorry,
Kinect development kit for all of the winners
of that category, up to five people,
which is very exciting. I hope we get to see plenty
of XR submissions to the Jam. We'll go ahead and play them. And, yeah,
thank y'all for coming on and doing your
presentations here today. I must say they were very,
very, very well thought through. I'll make sure to get
this up on YouTube so that people can
enjoy this or learn as soon as they get their
hands on a HoloLens device. I have pasted some links on the
forum announcement post as well. You can find them under
the Resources section. So we have the Unreal
Development Overview on the Microsoft doc side as well as the Unreal sample apps. Are there any other
resources that you know exist that you would like to point to? SAJID: I just wanted to
quickly add not directly in terms of resources, but, I think Carl mentioned
this in his presentation, probably Vanessa as well, but you mentioned that
when you get your HoloLens, then you can experiment with these, but you don't
necessarily have to wait because we do have the emulator. The hands, you can see them,
the button clicks and so on, so I would say experiment with it, even with what you have. A lot of the times
you will see that when you do export it
to an actual device, it's a pretty painless experience, so the emulator does give
you a pretty good idea barring some small problems. In terms of resources,
the primary resources -- so these slides will
be available as well. Victor, do you know where
we can put the slides? VICTOR: If you go ahead
and send them to me, I will upload them and link them
in the forum announcement post. SAJID: Great, yeah, because the slides
I particularly have, and I'm pretty sure
Carl and Vanessa's too, they do have links,
and every time I mention something, there's an actual link
to where they can go and find more information
on that particular thing, so similar to what
you were talking about in terms of the Variant
Manager and so on. So the information right
now, the way that it works, it's a bit scattered between
Microsoft as well as Epic. We are trying to unify that effort so that people know where to
look for what information, but it has improved significantly, and over time, what we're trying to do is provide a very guided way of
finding information, like this is step number one,
step number two, step number three, and it will be at Epic's
and Microsoft's website, and you will be able to go
from there to other resources. But other than that,
this particular slide that Victor will publish
will also have at the end some links to some
really cool tutorials that I have found over on YouTube which I think are very relevant. So as I said,
one of the examples that I gave was the spline component, again,
not the spline-mesh component. Most people will
confuse that with it. The spline component,
there is very little information, but there are one or two really, really interesting tutorials on it, which I've found to be very useful for giving you an
idea of how to use it. The other thing that I think is
overlooked a lot is from Epic. Victor has probably mentioned
this so many times to people, but there's an actual showcases sort of package that
comes with Unreal that you can download which
shows you the samples, and it's so very well done
that it has these scenes to show you how to exactly use, as I said as an example,
the spline component but also things like
basic math nodes or Blueprints or the
terrain and whatnot. So if you need an actual example
of how to use a particular thing but you're not really finding
some information on it, I would just strongly
suggest that you go and make a habit of looking at
that, browsing through it. The other thing that I
would strongly recommend is once in a while looking
at the best practices. Again,
that link will be in my slides. Those best practices for Unreal are on the Microsoft website
for Unreal for the HoloLens, and they mention all of the
things that I mentioned, what your project settings
should be and so on, but they do get updated. Unreal Engine gets updated
very quickly obviously, so the best practices that we tell you about also get updated very quickly
in order to match that, so I would say don't just
look at it once and say, "Oh, I already know that," but try to keep an eye
out on it once in a while. So those would be the resources
that I would point to. VICTOR: Sajid is referring to
the content examples projects which you can download
from the Learn tab in the Epic Games Launcher under the Unreal Engine
category, and you're right -- they do get updated for
every engine release because tools do change, best practices improve,
and we do try to make sure that the content
examples follow along. It's a great resource.
It's kind of interesting. You explore the whole content examples as a third-person character. It's cool.
Make sure you check out the levels. That's the trick. You got to find the folder for
the maps and then go in there, and you can discover
everything from Niagara to Blueprints to math library. The math library is
particularly cool, and with that shout-out,
we did do a livestream with our principal
mathematician earlier this year, maybe last year. I don't know.
It's been a strange year. I'll go ahead and link that in chat as soon as we're done here as well. Awesome. Well, with that said, thank y'all again so much for
coming on the stream today. If you guys out there have been
watching from the beginning, thank you for joining us.
I know it's been long. We've sort of covered two very different but important topics.
2020 MegaJam kicked off. The theme is available
on the Itch.io page. You can go there and check it out, and thank you all for joining. I do have a couple
things I want to mention before we tune out here. Next week's livestream,
we're going to have Patrick Wambold on
talking about the new, improved DMX plugin that
shipped with Unreal Engine 4.26. If you're unaware,
DMX is sort of an interface to be able to control
real-world lighting. There's a plugin in Unreal that allows you to do
both of those topics. You can either go ahead
and control real-world lighting inside Unreal Engine, or you can go ahead
and use DMX controls to control virtual lights
inside Unreal Engine, and Patrick has prepared a
really cool presentation for us. We're going live Thursday
next week on our typical time and not Friday like
today, which is special. Make sure that you go ahead
and check out our forums. Communities.unrealengine.com is a great place to
find meet-up groups. Clearly no one is seeing each
other in real life right now, but the groups are now doing
virtual meet-ups primarily using Discord and some other event
platforms that are available. Make sure you follow
us on social media, and if you're watching
this on YouTube afterwards, make sure you hit
that notification bell in case you are interested
in all the things that we are producing on
the Unreal Engine channel. There are a lot more than
just our livestreams. We do produce other
kinds of tutorials, tips and tricks,
webinars, et cetera, and you'll find all of
that inside our playlist on the Unreal Engine YouTube page. With that said, once again, thank you all for coming
on the stream and for preparing your presentations. I will make sure to make
the slide decks available and the links in the
forum announcement post, and you can go ahead and
dive into all the resources that are available there. Cool. SAJID: Thank you very
much, everyone. Have a good rest of your day. VANESSA: Thank you, Victor.
LUIS: Thank you, Victor. CARL: Thanks. VICTOR: Bye to all of you and
bye to everyone out there. Have a great week.