[MUSIC PLAYING] [SINGING IN NON-ENGLISH] [SIGHING] [MUSIC PLAYING] [YELL] [MUSIC PLAYING] [NON-ENGLISH SPEECH] SPEAKER 1: Look at the fafa [LAUGHTER] [MUSIC PLAYING] [LAUGHTER] SPEAKER 2: Here
comes the fafa is what we'll say every single time. SPEAKER 3: Like
a kick in the guts. SPEAKER 4: I'd
run off and I'd cry. SPEAKER 3: Here comes young,
the fafa SPEAKER 4: When
kids in the village would sing that
in front of dad-- [NON-ENGLISH SPEECH] SPEAKER 2: He'd get so angry. [YELL] SPEAKER 4: Yeah? SPEAKER 5: Iopu! Son,
walk like a man. SPEAKER 3: Our sister
would catch us wearing her stuff. SPEAKER 4: But I
thought I looked pretty. A dancer's in the village--I'd want to put on the Tuiga headdress. [MUSIC PLAYING] SPEAKER 3: Then the
whole village would stare. People would laugh. SPEAKER 2: The light went dark,
and I felt nothing. [WHISPER] [MUSIC PLAYING] SPEAKER 4: Here comes the fafa Here comes Iopu the fafa!
Here comes the Fa’afafine SPEAKER 3: But I'd
go to the mountain. SPEAKER 4: And
bounce by the waterfall. SPEAKER 2: Being
free was a beautiful thing. SPEAKER 3: If I leave,
I'll take my roots with me. A piece of home to carry. SPEAKER 4: In my blood. SPEAKER 6: Hold on tight. SPEAKER 3: As
I shift in the wind, I will land and
plant these pieces. And I'll grow. Walk like a man, Iopu! That's what dad used to yell. I reckon I'll walk just like me. (SINGING) [YELL] [SINGING IN NON-ENGLISH] [APPLAUSE] [SINGING IN NON-ENGLISH] AMANDA: Hey, folks! Years of research were
spent to bring real-time global illumination and reflections to
Unreal Engine--which resulted in Lumen. In our latest video overview,
Lumen in UE5: Let there be light!,
we share exactly what it is, discuss its features,
and walk you through a high-level
overview of how it works. Head over to the Unreal Engine
YouTube channel to get started. Alright educators! Hold your students'
attention using one of the most popular
games in the world. With our free Fortnite
Creative lesson plans, you can explore science, design,
and history concepts using a dynamic game environment. Find these materials on
the Unreal Engine blog. Give yourself extra fuel with
our dedicated learning path for the 2021 Epic MegaJam,
our exciting seven-day game jam kicking off August 26. Featuring courses on getting
started in UE, packaging, and more, this series will help
you be ready--and you can earn a limited-time event badge for
completing it by September 16! Register now on
Itch.io and start the courses via the Unreal
Online Learning portal. How do you make an already
great game even better? The developers at Square Enix
share how they took advantage of PlayStation 5's extra
performance power to bolster FINAL FANTASY VII REMAKE's lighting,
framerate, VFX, load times, and more for
INTERGRADE--get the deets on the feed! Creating an animated short
film in less than three months with a very small team
while working from home is no small feat. But that's exactly what one
Cornish studio has achieved. Adapted from the wordless children's book Wylder--explore how Engine
House created their film with final pixels rendered
entirely in Unreal. When Scavengers Studio revealed
Season at the 2020 Game Awards, unsurprisingly, adoration
for its beautiful 2D illustrative art style poured in--and we
had to know what inspired them, and how they pulled it off. The Montreal-based team
shares a peek behind their stunning visuals on the
feed--don't miss it! Increasingly, the products you
see advertised in commercials are CG replicas--not
footage of the actual product. These digital twins
are becoming the norm. Steer over to the feed to
learn how the power of AWS cloud computing and
real-time rendering has been combined
in Mackevision's new render-on-demand system,
offering a way to create personalized,
photorealistic marketing assets in seconds. Using virtual production
and digital human tech, Madison Beer,
Sony Music and Magnopus have set a new bar
for virtual concerts, thrilling fans who had no
choice but to stay at home. The real-time techniques enabled
them to bring the same digital experiences to fans around the
world--go backstage and learn how they composed The Madison
Beer Immersive Reality Concert Experience. If you're ready to explore
the latest virtual production features in Unreal Engine 4.27,
join us on Wednesday,
August 25 for a free webinar! We'll reveal more about
how this release will unlock new potential
for the creation of both live-action
and animated content, and talk about how the
explosive growth of in-camera VFX and broadcast
virtual graphics inspired the
development of 4.27. Register today! And now over to this
week's top karma earners. Many, many thanks to: ClockworkOcean, Everynone,
GRIM.YA, LunaNelis, dannymozzer, FatalBreak,
Deathrey, Sakkash, LightCanadian, and Shadowriver. Our first community
spotlight this week is a video series
created in homage to Dragon Age from Leo Torres. These lovely scenes
served as a playground for trying out Lumen and
Nanite in UE5 Early Access. Go get more tech details
and give your feedback to Leo on the forums! Coming at you from Grimware Games,
Ghost Knight: A Dark Tale is a
2.5D action platformer, set in a cartoonish
dark fantasy world. As Ghost Knight,
a spirit bound to a suit of armor, traverse the dark land of undead,
Demons, Witches, and Beasts to stop the
Mad King's misled quest and free Lorentia. Wishlist Ghost
Knight now on Steam! In celebration of the 20th
anniversary of the game Golden Sun,
the Virtual Video Game Orchestra released an arrangement of
"Prologue" by Jonathan Shaw. Built in Unreal,
the video showcases musicians from all over the world
and allows the editor to quickly shape
the performance. Watch the full
event in the forums! Thanks for watching this week's
News and Community Spotlight! AMANDA: Hi! And welcome
to Inside Unreal, a weekly show where we learn, explore,
and celebrate everything Unreal. I'm your host,
Amanda Schade, and we're excited to get started today. We're going to talk about
Unreal Engine 4.27 and some of the exciting updates
that are coming. And to help me do so,
we have Andy Blondin, senior product manager. ANDY: Hello, everyone. Here at Epic,
I work with the virtual production team on some of the features. And today, we'll be
covering a bunch of the things that we've put into 4.27. So, I look forward to chatting. AMANDA: And Antoine Guillo, product specialist for AEC. ANTOINE: Hey, everyone. So I work on Datasmith,
Dataprep, and some other features like LiDAR,
doing some QA, and getting feedback from you guys. AMANDA: Awesome. Chris Kulla,
principal rendering manager. CHRIS: Hey, everyone. I work in the ray tracing
group here at Epic. I've been here about a year. I come from a film and
visual effects background. And I'll be talking about
some stuff I've been doing. AMANDA: Thanks. Ryan Mayeda,
senior technical product manager live from an LED stage. RYAN: Hello, everyone. I'm Ryan. I'm a Pisces, and I live in LA. So I'm one of the product
managers on virtual production. I work really closely with Andy. And I'm excited to talk about
the stuff we have coming in. AMANDA: Awesome. And Steve Smith, our XR lead. STEVE: Yeah, hi. I'm Steve Smith. I'm in the Seattle office. So a little north of LA and I
lead the XR team on the engine, doing all things AR
and VR in Unreal Engine. AMANDA: Awesome. All right. Well, first of all,
thank you all so much for coming. We have a lot of
really cool content to share today to give
the folks at home some-- we're going to
start off with Steve. He's got some XR updates for us,
and then he's got to dip out. So we'll let him take it
away and dive right in. STEVE: Awesome. Thank you, Amanda. OK. In the world of XR,
we've got three things to note for 4.27 that I
wanted to talk about. First,
OpenXR is now production ready; we've also added some
fixed foveation VRS support for desktop VR,
which is currently experimental. And we also added new
VR and AR templates. So first, OpenXR, for 4.27,
our OpenXR support is now officially
out of beta and is considered production ready. So historically, to support VR
and head-mounted AR in UE4, you'd need to enable a
bunch of different plugins for different runtimes. New devices would need to support existing runtimes, or a game would have to be ported to support a new runtime. With OpenXR,
we now have a single service to support all the things. So this is going to make
life simpler for you guys, for the developers,
since there's just one plugin now. And you've got
consistent behavior. And it's really a boon
for us on the XR team as well, since now,
we're going to have more bandwidth to focus on innovation. And speaking of innovation,
VRS is a pretty cool new feature. So we've added experimental
support for image-based VRS for 4.27. So currently,
that's only focused on XR use cases,
specifically fixed foveation in a VR headset. So you're probably wondering-- well,
you may be wondering, what is VRS? So the TL;DR is that we can basically provide an image to the GPU that describes how much detail to use in different parts of the frame buffer. By default, it's always a one-by-one shading rate--that's one invocation of a pixel shader per on-screen pixel. VRS allows us to define groupings of up to four by four pixels, so groups of 16 pixels per single pixel shader invocation, and various combinations below that.
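To make that shading-rate arithmetic concrete, here is a small Python sketch--purely illustrative, with made-up resolution and region fractions rather than real foveation data--that estimates how much pixel-shader work a frame needs once parts of it drop to 2x2 and 4x4 rates:

```python
# Toy estimate of pixel-shader invocation savings from variable rate shading.
# The resolution and region fractions below are invented illustration values.

FRAME_WIDTH = 2448   # hypothetical per-eye resolution
FRAME_HEIGHT = 2448

# Fraction of the frame buffer covered by each shading rate.
# A rate of (n, m) means one pixel-shader invocation per n x m pixel block.
regions = {
    (1, 1): 0.40,  # central area keeps full shading detail
    (2, 2): 0.35,  # mid-periphery
    (4, 4): 0.25,  # far periphery, hidden by the lens distortion
}

total_pixels = FRAME_WIDTH * FRAME_HEIGHT
baseline_invocations = total_pixels  # default 1x1: one invocation per pixel

vrs_invocations = sum(
    (fraction * total_pixels) / (n * m)
    for (n, m), fraction in regions.items()
)

print(f"baseline invocations: {baseline_invocations:,.0f}")
print(f"with VRS:             {vrs_invocations:,.0f}")
print(f"shading work:         {vrs_invocations / baseline_invocations:.1%} of baseline")
```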
So for the VR fixed foveation use case, we can reduce the shading rate towards the edges of the view with virtually no visual impact, thanks to the lens distortion. So on the slide there,
you can actually see the picture
of the motorbikes. The red shading there actually
shows what that fixed foveation attachment looks like. So we're actually seeing
significant performance improvements using this. In some experimentation I did,
I actually saw the base pass time on the GPU cut
in half when using aggressive fixed foveation. Currently,
this is only supported on DX12. But we've got a Vulkan
implementation in the works, probably for UE5. It's exposed
through a new setting under the renderer settings in
the engine, in the VR section. We're definitely
going to be continuing to refine and
improve on this tech, since the initial results
through experimentation have been so good. We're looking at
adding eye-tracking-based foveated rendering in a future release for headsets that support it. And we're also
going to be looking at extending this tech
beyond just VR use cases. So it's definitely worth
keeping an eye on. Also, we've added the
new VR and AR templates. So the new handheld AR template has a greatly improved UX design. It's a great starting point for
any handheld AR applications, and it runs both
on Android and iOS. And we've also added a
completely new VR template. So our old VR
template was definitely not the best example
of VR in Unreal Engine--it did not age well. Our new template is
significantly improved. It's all new content. It's designed specifically
around OpenXR as well. And it shows best
practices for room scale VR. So Victor Brodin
is actually going to be talking about
that in an upcoming livestream on September 2. So tune in for that
for more details. So that's everything
from the XR team for 4.27. I'm going to hand
off to Ryan who's going to go into virtual
production and in-camera VFX. AMANDA: Great. Thank you so much, Steve. STEVE: Thank you. RYAN: Also,
once again, I'm Ryan. So I'm coming to you looking
live here from the Epic Games large stage at NantStudios. So we call it the large stage,
but it's technically the Los Angeles R&D Stage. But a bunch of us have
been here making sure that all of the
features of 4.27 are going to work on real stages. And so that kind of starts with
a big production test we did. Next slide. So this has been posted already. Hopefully,
a lot of you have seen it. If you haven't, it should be
easy to find on our website. But over the past
several months, we were putting together a
big-- like a production test. We were shooting real shots, proving out all of the new features coming in 4.27,
just as a way of making sure that the next release is ready
to go for real productions that are going to go
to shoot with this. In addition, there's a bunch of
behind the scenes stuff talking with the different filmmakers
that were participating in it with us, as well as our
DEV team, the people that were building the features and
working hard behind the scenes to have our code
base ready to go. So I mean,
I guess with that in mind, also thought it might be
good to just maybe give a little bit of context on
in-camera VFX, just what it is. So next slide. Oh, sorry. I just skipped ahead. The last thing to mention
related to our production test is that we are sharing
sample content. So if you watch
the production test, you'll see that it goes through
a number of different scenes. And all of these
things are going to be available in this
sample project that's going to accompany the 4.27 release. So there's three new
scenes that are part of this. And then if you
also watch the piece, we're using the Memories
of Australia asset pack which is already available
in the Marketplace. So next up is maybe
a little bit of stuff I prepped just to give
people some background on what in-camera VFX is. So next slide shows-- this is just what the
stage looks like when it's populated with an environment. You get a sense for what is
physical and what is virtual. So everything that
you see on the wall--once we replace the grid with the real scene here--is a full real-time environment. And the other thing to
note about the stage space itself is that it's immersive. So in addition to being
what's in the background, it's also an
immersive space that also lights the characters,
gives real time reflections in the physical objects. So the next slide shows the
way that the camera works. So another thing to
note here is that we are using a live-tracked camera. And this in particular,
I think, it's a good way of showing how game
development is transforming the way that
movies and TV shows are made. Right? So when you're playing a game,
you're accustomed to being-- you're accustomed as the player
to controlling what the camera sees, where the camera goes. And in-camera VFX
is kind of the same concept. But the director,
cinematographer, camera operator, et cetera,
like they're the player, and they're live controlling
what the camera sees and where it goes physically. Right? So we're moving the
camera physically, and it's adjusting
what the camera sees. And this picture in picture's
always the correct perspective of the camera. So next slide
shows that in action. So this is sort of better
than just a 2D image or a video playing behind. Because no matter
where the camera goes, it's always seeing an
accurate perspective of what you're supposed to see. So inside of where the camera moves,
what the camera sees is always moving. And what's outside
of it remains still, because that's what shows
up in the lighting and reflections. The idea here is we're
providing the directors a more tactile way,
like shooting virtually in a way that feels
a lot more like how they're shooting if they're
actually at the location. So that all starts to segue
back into the actual features that are coming. So next slide. Yeah. So diving to the
features for 4.27, we feel really confident
about this collection of features that are all part of
the engine that are necessary for in-camera VFX. The idea here is that this
in-camera VFX workflow is now something that anybody can do. I think the big ticket
item to call out is multi-GPU support. So what this means is that you're able to use computers that have more than one GPU to drive the wall. This allows us to scale
better for large volumes. So this is a pretty big one. Some of the productions
that we're seeing are much larger
even to accommodate the types of epic scenes
that they want to produce. But with multi-GPU,
you're able to use one GPU that's dedicated to the
pixels that you see in-camera. And you use another
GPU to drive what's outside for lighting and reflections. This gives a lot
more flexibility for content creation. It's just basically a lot more graphics horsepower driving the content that you actually see. In 4.27, we've optimized
for two GPUs per machine. We're definitely exploring
other options to take this further. But this is kind of a big
jump from one to two. And it also allows us to reduce
the total number of machines, bringing the cost
down hopefully as well. And all of this is in
service of this idea that filmmakers can
turn around live creative changes during a shoot. So the scene is always live. You're able to make
real time changes, just like you do in the editor. And that's going to be reflected
in what you see on screen. So next slide is for
multiple cameras. So this is another thing
that we just want to call out, and the multi-GPU performance
that really helps with it. But it's very common to
shoot with multiple cameras. And you're able to do
this on the LED wall. This is a little example
that shows how it works. Right? So you have two cameras. You also get to [INAUDIBLE]
that are tracked to each camera. And with MGPU,
it just, again, leaves more of the graphics
horsepower available to drive more than one camera. We've also made a bunch
of optimizations and workflow improvements just to
make it easier to configure. So the idea is that,
with nDisplay, you're configuring all the numbers
of cameras you need. And you just turn them
on and off as you see fit. And then you're only
rendering the cameras that you have enabled. Next slide covers
traveling shots. So traveling shots is
a very common thing. And we're all accustomed
to seeing maybe some fake looking car driving shots. This is a really common
workflow that people are using the LED walls for. So we have some big
improvements on the rendering side to make this look
more realistic. The first thing has to do with
motion blur, so with the way that the background is moving. And so with this feature,
or with this release, we've introduced a
feature that allows you in nDisplay to disable
the camera motion blur. Because the idea here is
that the objects are moving. You want the object motion blur, but the camera motion blur is going to come from the physical camera as it's moving.
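As a purely conceptual illustration (this is not how nDisplay computes blur internally, and the numbers are invented), you can think of a pixel's blur vector as an object term plus a camera term, where the wall render keeps only the object term because the physical camera adds its own blur as it moves:

```python
import numpy as np

# Toy motion-vector decomposition for one pixel. All values are made up;
# this only illustrates which term gets dropped on the LED wall.
object_motion = np.array([0.8, 0.1])   # pixels/frame from the object moving in the scene
camera_motion = np.array([2.5, -0.3])  # pixels/frame induced by the tracked camera move

full_blur_vector = object_motion + camera_motion   # what a normal game render would blur by
wall_blur_vector = object_motion                   # camera term disabled for the wall

print("blur applied in a normal render:", full_blur_vector)
print("blur applied on the LED wall:   ", wall_blur_vector)
# The physical camera then contributes the camera-motion blur optically as it moves.
```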
So this really seems like a subtle thing, but it's what makes the
image look more realistic. So you're not getting
excessive motion blurred background
images and reflections. That's more related
to the 3D side of things. On the 2D side, our team has
done a bunch of optimizations for EXR playback. So sometimes people
are just playing plates. We want them to be able to
do that in the engine as well. So with EXR, there's a bunch
of performance improvements to allow that. Next up is the
nDisplay Config Editor. So for anyone who's tried
to configure nDisplay before, it's in the past been
a little bit challenging. With the configuration file,
it's something that is a
little bit error prone, a little bit opaque. And you just can't really
see what you're configuring. Right? When you show up to the wall,
you just want to see-- as you're configuring
the different screens, you want to be
able to see it live. So that's what the Config
Editor really sets out to achieve. The Config Editor's
like a new feature. It lets you visualize all of
the different display setups that you have. You can create your cluster,
arrange the viewports, and then map those viewports
to different GPU outputs. We've also consolidated
all of the different Blueprints. Now, it's a single Root Actor, so all the parameters
are together. They used to be scattered
apart and hard to find. For in-camera VFX specifically,
that stuff has been a big, big gain. Because the
operators on stage will be changing things in a
whole bunch of different places. And they could get lost
or not realize it's there. So now it's all
consolidated in one place, making it easier to
change and easier to address what the
director wants in the moment. And then last but not least,
you can actually see the display
cluster live in the level. So wherever you move
your nDisplay Actor, you can see what
the screens are going to see live and in the editor. It just takes the
guesswork out of positioning where you're going to go
and what you're going to see. Next up is color science. So color science is a
big deal for filmmakers. I mean,
I think it's a big deal for everyone. But our goal here
was to really make sure that what the
filmmakers get when they film a scene on
the LED wall is accurate. So the big ticket idea here is that,
when artists are creating the scenes in Unreal Engine,
they're looking at it on their
computer screen. What we want to do
is match what they see when they're creating the
scene at the desktop to what the camera sees when the
same scene goes through the LED wall. So we're doing this with
some tech called OpenColorIO. So we've had OpenColorIO
support for a while. But now, it's been
expanded to include nDisplay. And we've also done a lot of
research and experimentation on the process for
doing this calibration. So we'll be sharing a bunch of documentation on that with everyone, just to further the discourse on what the right way to approach this is and what the best practices are.
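As a rough illustration of the kind of view transform OpenColorIO manages--a simplified stand-in, not the stage's actual calibration pipeline--here is how a scene-linear pixel gets encoded for an sRGB monitor; an OCIO config chains transforms like this per display and view so the desktop and the LED-wall-through-camera path can be made to match:

```python
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    """Standard sRGB opto-electronic transfer function (piecewise encoding)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)

# A scene-linear pixel straight out of the renderer (made-up value).
scene_linear = np.array([0.18, 0.05, 0.30])

display_encoded = linear_to_srgb(scene_linear)
print("scene-linear:", scene_linear)
print("sRGB-encoded for the monitor:", np.round(display_encoded, 4))
# In practice OCIO also handles gamut conversion, tone mapping, and the LED wall /
# camera response, so both viewing paths land on the same appearance.
```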
Next up is Level Snapshots. So this is a little bit more of a workflow and pipeline tool that allows you
to save and restore the state of a given level. On stage,
this is used a lot to say, OK, we're shooting the
scene in this area. And then we're going to
shoot a bunch of shots there, and then we're going to
move over to that scene. And maybe we're doing
some floor shot changes. So this lets the operators keep
everything in the same level but have different
snapshots that are used for different
sort of set ups, so to speak. Level Snapshots
will basically take a snapshot of all the
actors present in the scene. And it'll also allow you to
selectively restore things. So oftentimes,
directors will be like, I want it to be like it was
last week--everything except for the trees. Leave those as they are. It just lets them be
more flexible with how they bring things back. More on the tech side, you can
define customizable rule sets. So if there's a common
frequently used set of filters for restoring,
you're able to do that. And we also support
adding and removing Actors. So if you take a snapshot
and add something, it'll let you remove it, and vice versa: if you remove something and then restore a snapshot, it'll help you bring it back.
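Conceptually--this is a toy Python model, not the actual Level Snapshots implementation--a snapshot is a capture of per-actor state plus a filter applied at restore time, which is what makes "everything except the trees" possible:

```python
import copy

# Toy stand-in for a level: actor name -> properties we care about.
level = {
    "Rock_01":  {"location": (0, 0, 0),   "hidden": False},
    "Tree_01":  {"location": (5, 2, 0),   "hidden": False},
    "Light_01": {"location": (0, 0, 300), "intensity": 5000},
}

snapshot = copy.deepcopy(level)          # "take a snapshot" of current actor state

# ... production continues, operators change things on stage ...
level["Tree_01"]["location"] = (9, 9, 0)
level["Light_01"]["intensity"] = 12000

def restore(level, snapshot, should_restore):
    """Selectively restore actors that pass the filter; leave the rest untouched."""
    for name, saved_props in snapshot.items():
        if name in level and should_restore(name):
            level[name] = copy.deepcopy(saved_props)

# "Like it was last week, but leave the trees as they are."
restore(level, snapshot, should_restore=lambda name: not name.startswith("Tree"))

print(level["Light_01"]["intensity"])  # back to 5000
print(level["Tree_01"]["location"])    # still (9, 9, 0)
```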
Next up is the Multi-User Editor. So Multi-User has also
been around for a while. It's kind of a
critical component of how all of the
machines talk to each other. It's how the desktop operators
can make changes that are then reflected on the LED wall. There's been a lot of stability
improvements for Multi-User, just so that it's just solid
and performing as expected. Right? So any change you make
is then propagated over to the LED wall. We've also had
some optimizations for creating large
assets in session. A lot of this has to do with
recordings in particular. We've improved save times. And we've also added
some additional nuances that are related to Take
Recorder, primarily the ability to initiate recording on another client from one client. That's kind of a big,
big workflow gain for the way that the operators
choose to lay out what happens on which machine. Next up is the Virtual Camera. So pivoting away
a little bit more from the LED
wall side of things, we have some updates to
the Virtual Camera setup. So the main thing here is a
dedicated iOS app, just better UX for how you
use the iPad VCAM. This is going to replace
the Unreal Remote2 app. We've also added
rebroadcast support for Live Link virtual subjects. This is really useful,
especially if you're using a
game controller to pilot the Virtual Camera. And we've also improved
things for multi-user support, just for who's controlling
the camera when in a session. And then last but
not least from my end is an update to
the Live Link Face. So this is another
one of our iOS apps. And the main feature
here is rest pose calibration. So this really improves the live capture quality. It allows you to account
for the individual performer. In a nutshell,
you can capture a rest pose that's specific to
the person that's actually doing the facial capture. And this is going to improve things, especially for lip-sync and closed-mouth expressions.
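To give a feel for the general idea--a toy illustration only, not Epic's actual algorithm, and the curve values are made up--rest-pose calibration amounts to capturing the performer's neutral face and removing that bias from the live blendshape curves:

```python
# Toy illustration of rest-pose calibration for facial blendshape curves.
# This is NOT the Live Link Face implementation, just the general idea:
# capture the performer's neutral face, then remove that bias from live values.

rest_pose = {"jawOpen": 0.06, "mouthClose": 0.12, "browInnerUp": 0.03}   # captured at rest
live_frame = {"jawOpen": 0.10, "mouthClose": 0.55, "browInnerUp": 0.40}  # incoming capture

def calibrate(live, rest):
    calibrated = {}
    for curve, value in live.items():
        bias = rest.get(curve, 0.0)
        # Remove the resting bias and rescale the remaining range back to 0..1.
        calibrated[curve] = max(0.0, value - bias) / max(1e-6, 1.0 - bias)
    return calibrated

print(calibrate(live_frame, rest_pose))
# Both the raw ("clean") and calibrated curves would be recorded, so either can be used later.
```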
We record both the clean and the calibrated data, so you can get back to either one if you want. And then we also have
added proper support for iPads. And so for those of you who
have an iPad but not an iPhone, we have a more
native experience. And the UI is just going
to be proportioned correctly for iPad. That's it for me. Andy's going to take over
and cover the rest of our virtual production stuff. ANDY: Yeah. Thanks so much, Ryan. Next up, we're going to touch
on the remote control web UI that we've built.
We have a full new kind of UX around this. The purpose of the web
UI is to give filmmakers the ability to easily extract the controls that they want from inside the level and put them onto an easy touch interface that's web based. With no scripting or anything required, you can attach properties to be controlled from an iPad inside the volume. But because it's
fully web based, you could actually run it on
any kind of device that you want. So you can see in
these examples here, you can put color
corrections on there. You can put
anything onto a slider that you want to control. So it makes it very
easy and tactile to control properties inside the scene--lighting direction, color changes,
all those kinds of things. And so we've built
corresponding web widgets that match all the properties. And I've shown some there,
which you can see on the left hand side. There's sliders,
buttons, toggles, all the kinds of things that
you'd want to be able to use. And it works fully
with Multi-User. So when you trigger
these properties, it also will trigger across
the whole cluster itself. And so remote control
has become a central piece for the filmmaker
to be able to control their stage in an easy fashion. Next slide.
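For anyone scripting against it, the same exposed properties can also be driven over plain HTTP. This is a minimal sketch assuming the Remote Control plugin's default web server port (30010) and its /remote/object/property route; the object path, property, and value are hypothetical placeholders for your own scene:

```python
# Minimal sketch of driving a Remote Control property over HTTP from any device.
# The object path, property name, and value below are placeholders.
import requests

payload = {
    "objectPath": "/Game/Stage/Stage.Stage:PersistentLevel.PointLight_1.LightComponent0",
    "access": "WRITE_TRANSACTION_ACCESS",
    "propertyName": "Intensity",
    "propertyValue": {"Intensity": 8000.0},
}

response = requests.put("http://localhost:30010/remote/object/property", json=payload)
response.raise_for_status()
print("Property updated:", response.status_code)
```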
So along with that, we've extended this remote control area to hardware devices as well. So we realize some people
are using DMX, OSC, and MIDI to control parts of the engine. And what we've done is
take this remote control plugin and allow you to also not
just use web technologies but also use those protocols. So with OSC, DMX,
and MIDI, a lot of times you have hardware
control devices. So you can see on screen
there I have a little music MIDI controller that has dials on it. And I can control, say,
the camera's focus and rack it back and forth, or I can bind it to the sun direction. And it allows somebody to
have a tactile feel and build out these control systems
in the way that they see fit. So all those changes,
again, replicate through both the Multi-User
and the nDisplay system. So you can control the
whole volume in this way. And we built in
some nice features that allow you to auto
bind to these properties. So you don't have
to be a tech artist. You don't have
to know Blueprint. You can just
expose the property, go to your MIDI device,
turn the dial. It binds to that,
click to let it know that you're done binding. And then off you go. You can set the ranges and the limits.
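Under the hood, binding a dial is essentially a range remap. Here is a generic sketch (not engine code) showing how a 7-bit MIDI control value maps onto whatever limits you set on the exposed property, with made-up sun-pitch limits as the example:

```python
def midi_to_property(cc_value: int, out_min: float, out_max: float) -> float:
    """Map a 7-bit MIDI control-change value (0-127) onto a property range."""
    cc_value = max(0, min(127, cc_value))
    t = cc_value / 127.0
    return out_min + t * (out_max - out_min)

# Example: a dial bound to the sun's pitch, limited to -10..60 degrees (made-up limits).
for cc in (0, 64, 127):
    print(cc, "->", round(midi_to_property(cc, -10.0, 60.0), 2), "degrees")
```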
It's a pretty cool feature, and I'm really excited to see where people take it with this next release. So next slide. Next up is lens distortion. So we've invested quite a
bit on getting a lens distortion pipeline up. This is our first pass
at putting in both a lens distortion shader and a process by which you can calibrate your own lenses. So you can see on this
example Trent is holding up a checkerboard there. So we use OpenCV, which is natively built into the engine now, for the image recognition to match those patterns, and you can go ahead and derive basically all the lens distortion coefficients.
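For reference, the same checkerboard technique can be reproduced standalone with OpenCV's Python bindings. This is a generic sketch of the method rather than the engine's internal code, and the board size and image folder are placeholders:

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed checkerboard (placeholder; match your board).
PATTERN = (9, 6)

# 3D reference points for the board corners on the z = 0 plane, in "square" units.
object_points_template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
object_points_template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

object_points, image_points = [], []
image_size = None
for path in glob.glob("grids/*.png"):            # captured frames (placeholder path)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        object_points.append(object_points_template)
        image_points.append(corners)

# Solve for the intrinsic matrix and the radial/tangential distortion coefficients.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None
)
print("reprojection RMS:", rms)
print("distortion coefficients (k1, k2, p1, p2, k3):", dist_coeffs.ravel())
```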
Along with that, we can map the encoders. So if you have your zoom
and your focus on your lens, we have pairs along those
so that your CG camera will match that physical camera. So it helps for depth
of field matching. There's a process
for calibrating the nodal offset as well
based on this image recognition. So it will
automatically calculate where the nodal
position of the lens is,
which is always really tricky. And then we've
also added the ability to import ST-Maps from Nuke. So a lot of the
filmmakers typically will go through the process
of shooting these grids, processing them in Nuke,
and then exporting something called an ST-Map. And this allows for that
same kind of workflow. So you can see
on the bottom there we're getting
results from the CG as we fade from the real life
white and black checkerboard to the black and red
checkerboard there and matching those as
accurately as we possibly can. So along with the tracking,
we've added support to
stream live values into the engine for
certain tracking providers, along with Preston and Master Lockit devices as well. And one of the small ones too
is this Free-D protocol support. So a lot of the robotic cameras,
pan tilt zoom cameras,
utilize the Free-D protocol, and that's now
native in the engine. OK. You can go to the next slide. Updates on Alembic: there have been some nice improvements for Alembic when it comes to motion blur. So if you're using geometry caches,
especially ones that have subframe samplings,
you can see here in the example of scrubbing through. You need subframe accuracy to provide the right kind of motion blur.
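As a simple picture of why those subframe samples matter--a conceptual sketch with invented positions, not the importer's code--per-vertex motion vectors come from finite differences between cache samples that bracket the frame:

```python
import numpy as np

# Two subframe samples of a tiny geometry cache (positions are made up).
t0, t1 = 0.9995, 1.0005                       # sample times bracketing frame 1.0
positions_t0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
positions_t1 = np.array([[0.0, 0.2, 0.0], [1.0, 0.0, 0.5]])

# Per-vertex velocity by finite difference; a renderer turns this into motion blur.
velocity = (positions_t1 - positions_t0) / (t1 - t0)
print(velocity)   # units per second if the sample times are in seconds
```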
So support for that has improved, along with support for simulations: if you have a fluid or an explosion that starts off from empty frames, that has definitely been improved, along with topologies that are not consistent, letting us import those motion vectors. So there's been some
nice improvements on the Alembic front. Next slide. So next up is USD. USD is a format that's
becoming more and more popular in the industry,
the Universal Scene Description that was put out by Pixar. In the filmmaking and visual effects world, it's one of those
formats everyone is looking at right now. We've had support in the
engine for quite a while. But with each release we're continuing to invest in it. In this release we focused
more on export capabilities to do interop between DCCs,
so export of levels and sublevels, landscapes,
foliage, and animations.
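To show what that interop looks like from the DCC side, here is a minimal sketch using Pixar's USD Python bindings (independent of Unreal; the file name and prim paths are placeholders):

```python
from pxr import Usd, UsdGeom

# Create a tiny USD stage that another DCC (or a USD importer) could open.
stage = Usd.Stage.CreateNew("exported_level.usda")   # placeholder file name
root = UsdGeom.Xform.Define(stage, "/World")

# A single cube standing in for exported level geometry.
cube = UsdGeom.Cube.Define(stage, "/World/Prop_Cube")
UsdGeom.XformCommonAPI(cube.GetPrim()).SetTranslate((0.0, 0.0, 50.0))

stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```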
There's better caching support in there now, allowing us to use
this with Multi-User workflows as Ryan was describing. So being able to have a session,
where you could have USD data in it,
and have that propagate across
all the other nodes that are in the session. We've also made improvements for materials, for MDL support, and for runtime support. And next slide. OK. So next up is DMX
Pixel Mapping. So if you're not
familiar with this feature, DMX is lighting data. It comes into the Unreal Engine
through a protocol called DMX. And what we can do
is sample a texture. So you can see at the bottom
there the rainbow pattern--and send that out to lighting fixtures, or, inside the engine, virtually capture the texture as a render target and then display and match it. So this comes into
play if we wanted to match the
lighting in the scene, let's say, inside the volume
that Ryan showed, matching the RGB values as close to reality as we can. You can use a feature like this to do the Pixel Mapping, capture that, and then drive real-world physical lights with those same values.
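As a rough picture of what pixel mapping boils down to--a generic sketch, not the engine's DMX plugin code, with a synthetic gradient standing in for the sampled render target and a made-up fixture layout--you sample one color per fixture and pack the RGB values into DMX channel data:

```python
import numpy as np

# Synthetic 64x64 "render target" standing in for the sampled texture.
height, width = 64, 64
texture = np.zeros((height, width, 3), dtype=np.float32)
texture[..., 0] = np.linspace(0, 1, width)[None, :]      # red ramps left to right
texture[..., 2] = np.linspace(0, 1, height)[:, None]     # blue ramps top to bottom

# Normalized UV positions of a few RGB fixtures on the wall (made-up layout).
fixtures = [(0.1, 0.5), (0.5, 0.5), (0.9, 0.5)]

dmx_channels = []
for u, v in fixtures:
    x, y = int(u * (width - 1)), int(v * (height - 1))
    r, g, b = texture[y, x]
    dmx_channels.extend(int(round(c * 255)) for c in (r, g, b))  # 8-bit DMX values

print(dmx_channels)   # channel data per fixture: [R, G, B, R, G, B, ...]
# A DMX output (e.g. Art-Net or sACN) would then send these channels to the physical lights.
```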
It's a really powerful feature, and we've seen a lot of performance
optimizations happening across the board with this and
better recording functionality. So it's a lot more production
ready in this release compared to prior ones. And I think that's it. If you want to advance,
and we can hand it over to Antoine. ANTOINE: Hey, everyone. Antoine here from Montreal. Let's have a look at the new
features for AEC and automotive. So in 4.27,
we have some improvements to the Movie Render Queue
tool used to render movies. And now,
we can set it up to render stills. So there's basically a tool that creates ready-to-render sequences from all the cameras in your scene, or just a selection. And you can set up
the stills to have presets. And you can
override those presets. So it could be using
CVars to activate effects. Basically, it's very handy
when you want to render stills. And it's complex today
to do it in the Sequencer. So next slide, please. So in 4.27,
we are adding new Datasmith exporter plugins. We are shipping a
reworked SketchUp exporter. We had a SketchUp exporter before, but we've rebuilt it from the ground up. And we also have
a Rhino exporter. Those two plugins
have Direct Link support compatible with UE4,
to some extent UE5. It's not been fully tested yet,
but you can try if you want. And also Twinmotion,
also the Revit plugin received a number of improvements,
and OK. Did I mention-- yeah. We have also ArchiCAD as well. Next slide, please. So on the CAD,
on the more mechanical CAD side, we have a new experimental
plugin for SolidWorks. This plugin supports geometry,
product structure, materials, metadata, model variants,
and is also Direct Link capable. And we look forward to seeing your assemblies in Unreal Engine. Next slide. So regarding the
Datasmith runtime plugin, we have a number of
improvements in 4.27, including some fixes,
some new features, like the ability to have
full scene hierarchy. Before, you were limited
to an Actor that was like a black box. You couldn't have access
to your full scene hierarchy. Now, you have access to everything,
to your components. And you can also import
your Datasmith scenes. With settings like
simplified hierarchy, you can change
the colliders as well. You can build collisions,
complex collisions. You can use simple collisions. And you can also import and
view metadata, which is very useful. We also, in this release,
added support for Direct Link PBR materials, and yep. That's it. Next slide, please. So regarding the-- we have
some improvements as well with the LiDAR plugin. There are performance improvements to streaming as well as to the editing tools. So you'll be able to change,
modify, your point clouds directly in Unreal. If you can-- OK. The video is already running. So here we have a video of a church,
which looks like a mesh,
but it's actually a very, very dense point cloud. And it renders perfectly. It's really smooth,
and it's very, very cool to see. In 4.27,
we also are adding master materials with Datasmith. These materials will be
included in the Datasmith content plugin. And the intent
with these materials is to have a material compatible
with the Datasmith format in Unreal and in
Twinmotion as well. So it's a very
easy way to set up your materials without having
to program a new shader. Everything you
need is probably in here. And it's very useful when
you are creating, for example, plugins using Datasmith's SDK. Next slide, please. So to conclude,
on this Unreal Engine release and the
Datasmith Ecosystem, we're unifying our code
base between Unreal Engine and Twinmotion to ensure full
compatibility between the two tools. These improvements enable
fluid design iteration with Direct Link. And users can collaborate and aggregate their content from a wide array of software, and easily and efficiently use and import Datasmith scenes in the editor or at runtime. So the Direct Link
feature right now is available only at runtime. Twinmotion is an example
of use of Datasmith's runtime. But obviously,
you can create your own applications as well, package applications. In this release, we are also
allowing third party developers to create their own
Datasmith plugin. So we have a bunch of them here. It's pretty sweet to
have every kind of DCC export the Datasmith file,
Datasmith scene, and import these
in UE or Twinmotion. To recap, the Revit, SketchUp, ArchiCAD, and Rhino plugins are Direct Link capable. And the others allow you
to export a Datasmith scene. Next slide, please. And regarding-- so
we have a feature called Visual Dataprep
in Unreal Engine. This allows you to prepare
your CAD and Datasmith files and to modify, for example,
change materials, build LODs, simplify meshes, swap assets. And this allows you to have
a very streamlined workflow and easy to just re-import
your data from DCCs and have everything
sorted out correctly. And in Visual Dataprep 4.27,
we have some new features. So we have filters like
the jacketing and volume filters, which are going to be very,
very useful. We also have some new operators,
like collision complexity, which has been requested for a while. So you can change the collision
complexity of your meshes from simple to complex
as well as access to actor components,
which is a new feature in 4.27. So you'll have access to,
for example, in your walls, you'll have access to mullions,
to your glass components and whatnot. And we also have a
small revamp of the UI. You can now group, collapse,
or resize Dataprep Actions. And this is very
useful to avoid clutter. And also,
the operator panel has been reworked. Now you have a new statistics window, which is useful for pinpointing potential assets causing performance issues. We have a lot of cases where data is imported from external software and runs to millions of polygons-- we've had toilets with
millions of polygons, for example, which is not very
good for real-time rendering. And we also have a new import
setting button for CAD files. So you'll be able to change
the import values for this. Next slide, please. And finally, we have
improved the project templates for AEC and automotive. So the collab viewer template
now has OpenXR support, has an integration with
Datasmith's runtime. And you also have new
tools like a section box, and you can set the scale ratio between the avatars and the world, which is very useful when you have massive worlds or very small objects. And we also are shipping
a new design configurator template, which is based on the old product configurator template. But this one is dedicated
to AEC projects. And yeah,
we're looking forward to seeing what you guys do with it. And I'll hand it over to Chris
for some rendering goodness. CHRIS: OK. Hi, everybody. So yeah. I'm going to talk to you
about the Path Tracer. In 4.27, we're rolling out
a number of improvements to the Path Tracer
rendering mode. This is something that was
introduced all the way back in Unreal 4.22. At the same time, we introduced
a lot of ray tracing features into the engine. But it always had a
number of limitations, mainly because the
team was just focused on polishing all the real
time ray tracing aspects. But with 4.27,
we've gone back to this project and implemented a good
chunk of the missing features. So just in case you
haven't played with it before, this is a rendering mode that's
purely based on ray tracing. So there's no
rasterization at all going on. And it does a full
brute force simulation of light transport in a scene,
all the possible ways that light can bounce around. So it's very accurate. It includes full-blown global illumination effects. It's not really intended to be real time at the moment. We think of it as our reference quality solution that we use to measure the quality of our real-time rendering modes against.
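To give a feel for the "brute force" part, here is a toy one-dimensional Monte Carlo estimator--nothing like a full renderer, but it shows why noise falls off as the sample count grows, which is exactly what you watch happen as the Path Tracer accumulates samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo estimator: integrate f(x) = x^2 over [0, 1] (true value = 1/3).
# A path tracer does the same kind of random-sample averaging over light paths,
# which is why more samples per pixel means less noise.
def estimate(num_samples: int) -> float:
    x = rng.random(num_samples)
    return float(np.mean(x ** 2))

for n in (16, 256, 4096, 65536):
    est = estimate(n)
    print(f"{n:6d} samples -> estimate {est:.5f}, error {abs(est - 1/3):.5f}")
# Error shrinks roughly as 1/sqrt(N), which is the noise you see resolve in the viewport.
```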
So in particular, the Path Tracer's been really useful, for example, in the development of Lumen in UE5 as a sanity check that it's computing the right answer. But of course,
we also think the Path Tracer's useful beyond just being a reference,
especially for folks that are in the enterprise space,
Archviz, automotive, manufacturing,
even film and TV, basically anybody that's looking
for final quality pixels without having to
leave the engine. So some of the
things we've improved, we've basically gone
through all the major material types, all the light types,
and most of the light parameters. This includes things like refraction,
blend modes. We now have a
reference quality random walk subsurface scattering,
just the kind you might find in
an offline renderer. We've made a number
of improvements to sampling overall, including
how many lights are managed. Previously,
the Path Tracer here had this kind of hard limit of 256 lights. That's mostly been lifted. We've also integrated
a denoiser that's specific to the Path Tracer. So this particular release,
we're just shipping with Intel's
Open Image Denoise Library. And it's really nice
for just removing that bit of noise
that's left over, even when you
have lots of samples. And we expect to grow that
list of supported denoisers in the future. And also, we've made sure
that the Path Tracer could be used from Movie Render Queue. So this allows you
to get your final pixels. It includes motion blur,
and yeah, it's a good way to do your final frame,
final quality frame rendering. Depth of field is
also supported. At the moment,
it's currently relying on Unreal's existing post
process depth of field. We'll be working on
that on future releases. So next slide. I just have an example of
what it looks like in the editor if you've let it accumulate
for a few seconds. So this is a really
simple example, where it shows off
the accurate soft shadowing and indirect lighting that you can get from the Path Tracer. And if you look closely,
you can see some of the settings that are
exposed in the post process volume. So there's intentionally
very few settings. We just have number of samples,
number of bounces. We set it up to do 32
bounces by default, but you can definitely do
more bounces if you want. There's some antialiasing
filtering quality settings. And that's basically it. Next slide. This is an example
of a nice indoor scene that's basically being lit
only by the sun and sky. So aside from the light
that's coming directly through the window,
everything else in the environment is indirect lighting. So you even get the proper
indirect lighting response through the mirror. So if you played a bit with a
real time mode, sometimes, you get things looking pretty
good in most of your scene, but the quality of the reflections needs additional tweaking. And so one of the nice things about the Path Tracer is that it removes all those technical steps and technical decisions you have to make on the quality-performance trade-off. Of course,
if you want real time performance, it's worth understanding
all that stuff. But if you just want
nice looking images and don't mind
waiting a few seconds, that's what the
Path Tracer is for. Next slide. I have a few more
Archviz interiors from some of our
internal testing at Epic. Again, these are mainly
lit with just sun and sky as the main light source. Most of the lighting in
these rooms is all indirect. Next slide. Path Tracer is also really
handy if you have a model, and you just
want to show it off. So here, one of our directors
brought in a ZBrush sculpt into the editor, just placed a
few lights and got this result. So if you're an
artist that works on just characters,
props, environments, the Path Tracer's a nice way
to just show off your models and do turn tables
without having to leave the engine at all. Next slide. We have an example
of an automotive test scene. This one was rendered
through a Movie Render Queue. So it has motion
blur on it as well. Car paint is one of the
supported shading models now. So there's glass for the
windshields and all that. So you can definitely
do some nice car renders. Next slide. This one is a more
complex environment the team at Quixel put
together for the virtual production shoot you saw a bit earlier. This scene was designed to
look great in real time mode. But just switching
to the Path Tracer essentially worked out of the
box with a few minor tweaks to the scene. And that was kind
of exciting to see. So of course, when you
look through the release notes, you'll see we're not done
yet with the Path Tracer. There's still a number
of limitations, things that we didn't have time
to implement just yet. Maybe we'll talk about
that in a Q&A later. That's why it's still
marked as a beta feature. But we hope to keep working
on these in the coming releases. Next slide. So related to the Path Tracer,
I just want to say a few words
about GPU Lightmass. This is our GPU accelerated
light baking solution. One of the key
improvements here was to basically bring all the
improvements made to the Path Tracer into this mode as well. So GPU Lightmass, essentially,
is running the same code as the Path Tracer. It's just a bit simplified
because it's designed just for baking diffuse illumination. But it benefits from all
the other improvements we've made, like supporting
all the light features, like rect light textures,
barn doors, IES profiles. Things like that, that are now
supported in the Path Tracer, are also working
in GPU Lightmass. It also supports the
different blend modes which means we can bake
colored translucent shadows now in GPU Lightmass. There's been a number
of other improvements, including better
support for LODs. We've implemented
multi-GPU support here. Related to the multi-GPU
support and the in-camera VFX workflows, sometimes you want
to bake as quickly as you can. And so if you have multiple
GPUs in your computer, that's really nice. And finally, there's been
a number of sampling improvements to Volumetric Lightmap baking. So those have better
quality in less time as well. So with that,
I'll hand it back to Ryan to talk about Pixel Streaming. RYAN: Cool. So I'm going to jump
into some quick words about Pixel Streaming. So the Pixel Streaming,
there's a few updates here. I think the main one is that
the team is considering it out of beta. We also have really
expanded the team that's working on the
Pixel Streaming features. There's actually a
whole UCS which is like the cloud services team. There's an upgrade to WebRTC,
like a lot of general quality of life improvements
and Linux support as well, which is obviously
really important, since there's so many more
Linux machines on the cloud. And then the last
thing to mention is that it's also
available as a container. I think this is a spoiler
alert for the next slide, which talks about containers. So developers that
are in the cloud world are very obsessed into
working with containers. So the team has gone
through a lot of effort to ship container
images of 4.27. It just makes it a lot easier
for them to work with Unreal. That's something they want
to deploy to any cloud set up. That's I guess all I
really have to say. I mean, I think,
to a large degree, the container stuff has
been a long time coming. And hopefully,
this unblocks a lot of people to leverage Unreal Engines
in their cloud deployments going forward. I think the last one will
be covered by Amanda. Right? AMANDA: Yep. That's me. So we just want to
make a few quick notes. We have a few things left. So we want to touch on the fact that Unreal Engine will be supported as a library. So you'll be able to
build the UE4 runtime and interact with it through
a very minimal built in API. So you know what that means. You can integrate a UE viewport
inside a Windows application. And then that gets
compiled as a DLL. So it'll be able to accept
command lines and run in a client window that the engine can send its output to. You can interact with
Windows messages for interprocess communications
and shut down the engine when you're done with it. So you, as the user,
can certainly expand the API
to suit your needs, whether that's exposing
existing or new functionality for your external use. Next slide, please. We're going to touch on some of
the new tools from RAD Game Tools. So first of all,
with Oodle Data compression, you get some of the fastest and highest-ratio compressors, which is really awesome. So there's four
different algorithms that you can select from to
best fit your project's needs. In 4.27,
these will be enabled by default. And it means you'll load
your packaged projects faster. So hopefully, that helps you
in your typical work sessions. For Oodle Texture,
these are, again, high-quality encoders for block-compressed BC1-BC7 textures. And they can make encodings of the same visual quality two to three times smaller than non-RDO encoders. This is also going to be
enabled by default. And you can-- the RDO encoding itself
is off by default, but you can enable it
in your project settings. And then with Oodle Network,
the idea here is that you're going to be significantly reducing the bandwidth required by game servers for your multiplayer games, which is just a great experience for everybody. So next slide. With RAD Bink Video,
it's a popular video codec. And again, it'll be built into
Unreal Engine like the Oodle compression. The cool thing for you devs is it'll decode up to 10 times faster. And it's using 8 to
16 times less memory. It could also be potentially
offloaded into GPU compute shaders for faster performance. And Bink Video is
totally self-contained. You don't have to
install anything else. And the plugin
interface is meant to be super simple,
really easy to use. So just a couple of notes there. That's all we have from us today. We are going to-- there are many more
features and improvements that will be coming in 4.27
that we haven't covered today. We will be doing future streams,
where we go in-depth and hands-on
with a variety of these features. So don't think this is the end
all be all of 4.27 information. We'll certainly be
getting more out to you. But if you have questions for
the folks on the stream today, we're going to
dive right into those. So I know we've already got
some that we'll start answering. But if you have
questions for the team, feel free to drop those in
chat preceded by "Question." I also think Steve's
going to join us again. So if you want to hop back on,
welcome back, Steve. All right. Diving into questions,
so I know some of you all have been asking,
what's the official recommendation for developing in the coming months,
since we've forked to UE5 and are marching there? We absolutely recommend it. Feel free to go ahead
and dive into 4.27. 4.27 will absolutely be
compatible with UE5. We recommend you
following that path. The caveat there is you
cannot take your 4.27 projects into UE5 Early Access
because of that fork. But you should
not feel concerned in any way of jumping to 4.27. That won't hurt you
going into Unreal Engine 5. Let's see. We've got one from Displace: is path tracing in 4.27 going to support decals, and possibly in the future
things like volumetrics? CHRIS: Yeah. Thanks. That's a good question. So we don't have decal
support there right now in 4.27. This is actually an area
that's being refactored a lot for Unreal Engine 5. So because those code
bases are diverging, we didn't want to risk
putting that to 4.27. But it's definitely high on the
list of things to bring to UE5. As far as volumetrics, that's
a little further down the line. But we're definitely planning
on supporting that in the Path Tracer at some point. AMANDA: Awesome. All right. From Mr. [INAUDIBLE],
I think this is for you, Steve. You had mentioned wanting to
do non-VR foveated rendering stuff. But they're very curious,
nice and cryptic, very curious what you're thinking of. STEVE: So that was
deliberately cryptic, because we don't
know completely yet. But I know there's
some prior work that The Coalition did
using edge detection filters with VRS. They got some
pretty good results. Applying this tech for VR
is a slam dunk just because of the lens distortion,
the option for foveated rendering with
eye tracking and all of that stuff. So that's an easy one,
and it's a pretty big win for not a huge amount
of thinking involved. But I think,
just given the amount of performance we can get back with that,
there's a ton of options. And over the coming
six to 12 months, we're going to start thinking
about what those will be, and really start digging in,
and seeing what we can get out of that. But I don't have anything
that's not cryptic yet. AMANDA: It's always the soon TM. Right? STEVE: The soon. Yeah. It'll happen though. It's worth it. It's good stuff. AMANDA: Excellent. Ryan, I think this is
while you were presenting. So from [INAUDIBLE],
they were asking, will it work with GTX
or is it only for Quadro-- I think that was in reference to
multi-GPU or nDisplay maybe? RYAN: Yeah. So multi-GPU
will work with both. For the purposes
of the LED wall, though you generally want and
need to use the Quadro cards for the Quadro Sync feature. That's what allows for
all the different screens to be synchronized and then
also synchronize with the frame rate of the camera. So for in-camera VFX purposes,
you pretty much need to go with the Quadro
cards for the LED wall. But for GPU Lightmass and GPU,
you can use either,
since the sync is not a factor. And that's actually
something that we do a lot when we're using
GPU Lightmass in relation to the stages. Sometimes we'll just
have a machine that's using the non-Quadro
cards to do the light bakes, and that works fine. AMANDA: Thank you. I know we've had-- in the chat,
I just want to address this. We've had quite a few
questions about water systems. I don't know that
anybody on this call is ready to address
the state of those. But if you-- feel
free if anybody is. But if not,
feel free to follow up on the forum post for the stream today. And we can chase down
the folks to maybe get some updates on that for you. Sorry. I'm not going to try and
pronounce your name, but they're asking
if there's going to be a white paper on the OCIO,
the OpenColorIO. I know, Andy, you believe we'll
have some more information coming with the release. Right? ANDY: Yeah, there's some
documentation that's coming. Ryan alluded to some of it
describing the process of color management for the LED walls. But in general,
more information about OpenColorIO, the touch points for where
it hits in the engine, so both in the nDisplay, Movie Render Queue,
and overall workflows. I think you're going to see
more of that in the future to come. RYAN: Yeah, specifically
on the white paper aspect, we have a document that
we're getting ready to go along with the release. And then we're also hoping to
do some additional supplemental material that just shows
the process that we did the color calibration,
and then potentially some of the data as well. That'll probably
come after the release, but it is something we're
getting in the works to share. It's a big topic,
and hopefully, that will help give people more
of a frame of reference for our process. AMANDA: Perfect, thank you. From This is CBC regarding
remote control protocol binds. They're wondering if this
is an editor only feature. And specifically,
would the created binds be accessible
in packaged sessions? ANDY: Sure. That's a great question. So we do focus on the editor and, actually, "-game" mode. There is some difference in packaged builds, so not all properties are supported with the same bindings. We're looking at it
for this next release, trying to make sure that we
can match as best we can. Currently,
you'll see at the left hand side of that special details
panel of the remote control preset, there'll be a
little warning that says, this will not display
in a packaged game. So you can definitely
go ahead and try it out. But we try to
warn you if there's something that's not
going to work correctly or is expected in
a packaged build. AMANDA: Perfect, thank you. From Y2K Matty: will UE4 have any option for the ACES color space? ANDY: I can answer that one too. So on the ACES side,
we do support, as I was mentioning, OpenColorIO,
and ACES is a color space that's used
throughout OpenColorIO. And so there are
certain points where we have access points for ACES. This person,
if you come from the VFX space, you may be thinking of
texture import and stuff like that follows the same kind
of color management process throughout the pipeline. I would say that we don't
have 100% full ACES in and out. But we do have the
display and render portions of that covered
now through OpenColorIO. But it's something
that we're continually-- it's a multi-year release
of getting in small features. So look for more. We're definitely
headed in that direction. But yeah, currently it's
focused on those two areas. AMANDA: From Clara Lima, will we have camera perspective
correction for Archviz in 4.27 or UE5? I think that's you, Antoine. ANTOINE: Yep. So this has been
requested for a while. This is not going to
be a feature for 4.27. But we are looking into
solutions and new tools, including perspective
correction in UE5. Probably not in the 5.0 release, but in one of the later releases. AMANDA: Awesome. We've had a couple of
questions around Live Link VCAM or the next
update to Live Link Face. I know you were asking about
the facial calibration update. And so I think, Ryan,
you were mentioning it'll be-- RYAN: Yeah. So both of those are meant to
go to the App Store, the iOS App Store day and date
with the 4.27 release. So we'll have those timed up. I saw there was another question
that was asking about Live Link Face and translation. I'm not quite understanding their question, so with a little bit more context on what you meant, we'll try to answer
that as well. AMANDA: Yeah. So Thomas Halpin, if you could
expand on what you mean by "will it include translation," we can follow up on that for you. Another from [INAUDIBLE]: will the Multi-User system still require Git, or can you just join a server and download everything there is on it? RYAN: Yeah. So you don't have to use
source control with Multi-User if you don't want to. There is a mode that you can
run that makes the checks that it does less strict. In fact, if you run the Multi-User session from the beginning and keep it going, then another client can join, and it'll just grab all the assets that you've created. In general, what you need to do is start from the same place. So if you start from an empty project and keep the session going, the other people who join will get the same stuff.
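To make that workflow concrete, here is a hypothetical sketch of the no-source-control setup: one machine starts the standalone Multi-User server, and each editor client launches with arguments that auto-join the same session from the same starting project. The install and project paths and the server/session names are placeholders, and the -CONCERT* switches are the ones I recall from the Multi-User documentation, so double-check them against your engine version.

```python
# Hypothetical sketch: Multi-User editing without source control.
# Paths, server name, and session name are placeholders.
import subprocess

ENGINE_BIN = r"C:\Program Files\Epic Games\UE_4.27\Engine\Binaries\Win64"  # assumed install location
PROJECT = r"C:\Projects\StageProject\StageProject.uproject"                # hypothetical project

# 1. Standalone Multi-User (Concert) server.
subprocess.Popen([ENGINE_BIN + r"\UnrealMultiUserServer.exe"])

# 2. Editor client that auto-connects to that server and session. A second
#    machine starting from the same empty project with the same arguments
#    would join the session and pull the assets created so far.
subprocess.Popen([
    ENGINE_BIN + r"\UE4Editor.exe", PROJECT,
    "-CONCERTAUTOCONNECT",         # join automatically on startup
    "-CONCERTSERVER=StageServer",  # server name (placeholder)
    "-CONCERTSESSION=ArtSession",  # session to create or join (placeholder)
])
```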
AMANDA: Awesome. Thanks. From Gregor [INAUDIBLE]: how is hair working with path tracing? CHRIS: It's not working today. It's not part of 4.27,
but that's definitely high on our priority list,
something to look into. So there's a lot of
components to that. Obviously, there's the geometric side, being able to intersect the hair with rays, and then supporting the actual hair shading model. But we definitely want the Path Tracer to support those going forward. So it's on the list. AMANDA: Great. From Shanathan Hansen,
is 4.27 going to be the last UE4 version before going to UE5? So that is our
current plan of record. I think that we don't
anticipate additional releases. So 4.27, get in,
get updated, and we'll begin the march to UE5. Obviously, if that ever changes,
we'll let you all know. So from Andy Fetic,
will there be an option for third party
renderers instead of the Path Tracer? CHRIS: Yeah. So I can say a few
words about that. There's basically nothing
preventing a third party renderer from trying to integrate. Actually, I think the V-Ray team
has a V-Ray for Unreal plugin already. So it definitely can be done, and we're definitely happy when people try to do that. But we'll keep
improving the Path Tracer as a built-in solution. And if any other vendors
want to look into adding support, that's great too. AMANDA: Thanks. Oh. Regarding GPU Lightmass,
is that playable or just for rendering? CHRIS: I'm not sure exactly what the question is asking. But basically, GPU Lightmass is meant for baking the lighting into the environment, so that when you play the level, the lighting is super fast to evaluate and basically real time. So it's actually one of the faster lighting solutions. The only compromise is that it's pre-baked and not dynamic. So yes, it's meant to be playable. The actual baking process,
of course, takes a few minutes,
depending on the size of your level. But once you've baked,
everything is playable. AMANDA: Perfect. And another one
around GPU Lightmass, is there support for
AMD cards as well? CHRIS: I believe there is. So everything ray tracing related in Unreal Engine is based on DXR, which is the cross-vendor solution on Windows. And there's nothing Nvidia-specific there. So yes, I believe everything
works on AMD cards. We do a lot of testing
on Nvidia first, generally. But AMD should be supported. AMANDA: Excellent. Thanks. From Thomas Franklin,
they've noticed that some of the templates have been removed, for UE5 Early Access at least. And they're asking if 4.27
will have limited templates too. And Antoine, you were
ready to jump in on this one. ANTOINE: Yeah. This is just for
UE5 early access. UE 4.27 is going to have
all the templates and even more like the ones I
mentioned earlier in the stream. AMANDA: From Zianim,
are these container images of the editor
or for built projects? RYAN: My understanding is that the options provided will facilitate both. AMANDA: And then on
the topic of containers, from Loreash,
do we support Windows, Linux, or both for containers? RYAN: Yeah,
and that one is both also.
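For anyone who wants to kick the tires on the container side, here's a rough sketch using the Docker SDK for Python. The registry location, tag names, and in-container engine path are all assumptions based on Epic's published container documentation, and pulling the images requires a GitHub account linked to your Epic Games account.

```python
# Rough sketch with the Docker SDK for Python (pip install docker).
# Registry, tag names, and the in-container path are assumptions; pulling
# requires a GitHub account linked to an Epic Games account.
import docker

client = docker.from_env()
REGISTRY = "ghcr.io/epicgames/unreal-engine"

client.images.pull(REGISTRY, tag="dev-4.27")  # development image (Editor and build toolchain)
client.images.pull(REGISTRY, tag="runtime")   # minimal Linux runtime image for packaged projects

# Smoke test: list the engine tree inside the development image.
logs = client.containers.run(f"{REGISTRY}:dev-4.27", ["ls", "/home/ue4/UnrealEngine"], remove=True)
print(logs.decode())
```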
AMANDA: OK. We did have a follow-up regarding Live Link Face and translation. So the comment was,
there is rotation. Can there be a switch
that controls motion in XYZ in space, like moving
the head forward, backward, and side to side? RYAN: That's a good question. AMANDA: Unsure? RYAN: I'm not totally sure, because we're basically
just passing along the data that you get from the ARKit. Actually,
Steve may know the answer for this, if you get this info from there. It seems like it would be possible to do. But I don't know; that's something we'll have to take back, and I'll probably ask Steve about it later. Because I think the facial
capture aspect of our ARKit integration is separate from the other
tracking features of ARKit. So I don't know
if that's something that we could do together. Although it is an
interesting idea. STEVE: Yeah. We should definitely
chat about that. AMANDA: All right. Now it's up for discussion. [LAUGHTER] From John Kensal, they're wondering about your thoughts on bringing
Datasmith to Blender. ANTOINE: So Blender is
not currently in the plans, but we are always evaluating
new DCC software to support. So keep an eye on the news, maybe at some point. AMANDA: Keep a lookout. From the Koala Wolf,
is Live Link Face going to add features
related to Vtube-ing? RYAN: Yeah. So I mean,
this is another one maybe that would be
good for a follow up. I would be very
interested to hear what you think are
the types of features that would help facilitate
you doing Vtube-ing better. I think that's something it would be awesome to see Live Link Face used for. I know Victor is
not on the screen. But it's something that he
and I have discussed in the past as well. So definitely
interested to hear what we could provide that would
make it easier for that type of work to be done. AMANDA: Let's see,
and from Captain Berner, is there going to be a
Direct Link for 3ds Max? ANTOINE: So there
is going to be one. We just started development; it's still very early. But we are working on a new
Datasmith plugin for 3ds Max that supports Direct Link. AMANDA: Awesome. That'll be exciting. ANTOINE: Yeah. No release date yet,
but we are working on it. AMANDA: Awesome. Let's see. I know I got a few more
questions coming in. I know you all are
asking about Chaos. Like with 4.26, I believe we're doing a separate, explicit 4.27 Chaos build. And we're not enabling
Chaos by default for 4.27. My understanding is
that will be coming in UE5. So keep checking it out. Keep exploring it. But at this point in time,
you'll be seeing it with
the UE5 release. And that was from Jacob. Oh gosh. [INAUDIBLE] Workflows,
can we expect to be able to visualize VDBs with the upcoming Path Tracer? And if yes, what could be the source, such as through USD directly from Houdini? CHRIS: Yeah. So VDBs would probably fall
in the category of volumetrics in the Path Tracer,
which, like I said, is on the list as something we're looking forward to doing, but we're not doing it just yet. The only other thing I'll mention with VDBs is that we're trying to keep the Path
Tracer at feature parity with the rest of the engine. So we don't want to just
add features that would only work in the Path Tracer. If we are going to add
support to something in the Path Tracer, we want it to
work throughout the engine. So yeah. I don't have any concrete
timelines on when direct VDB support will go in. But it's definitely something
that we're aware of and that we'll be
thinking about with UE5. AMANDA: Thank you. So from the Real Svane Man,
they're asking about Multi-User Editing over the internet; in their experience it sometimes has trouble connecting with VPN servers. Are there improvements
coming to remote workflows? RYAN: Yeah. So I guess the first
thing I would mention on that is, last summer,
we did create some additional documentation
to help people do that, especially since there was an uptick in people working from home trying to do this stuff. So maybe I'll pop
onto the Twitch stream and just post that doc. Because it does
at least guide you through a lot of the networking
settings and considerations that you have to make. Longer term, I think,
this is definitely something that we want to do
better with the feature set. And it's something that we're
working with the cloud services team on to better facilitate. So there's the potential
to more easily run the Multi-User
sessions in the cloud and be able to connect much more easily, without the need to get into a lot of these esoteric settings. So that's something that
we hope to improve more in the UE5 timeframe. But it's definitely on our minds
for Multi-User collaboration.
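Until those cloud-hosted sessions arrive, the knobs that usually matter for VPN setups are the UDP Messaging endpoints, which can be set in Project Settings or overridden on the command line. Below is a hypothetical sketch; the addresses, ports, and names are placeholders, and the -UDPMESSAGING_TRANSPORT_* switches are the ones I recall being used for this, so verify them against the networking doc Ryan mentions.

```python
# Hypothetical sketch: launch an editor client for Multi-User over a VPN by
# pinning the UDP Messaging endpoints on the command line. All addresses,
# ports, and names are placeholders; verify the switches against the
# Multi-User networking documentation.
import subprocess

EDITOR = r"C:\Program Files\Epic Games\UE_4.27\Engine\Binaries\Win64\UE4Editor.exe"  # assumed path
PROJECT = r"C:\Projects\StageProject\StageProject.uproject"                          # hypothetical

subprocess.Popen([
    EDITOR, PROJECT,
    "-CONCERTAUTOCONNECT",
    "-CONCERTSERVER=StageServer",                    # placeholder server name
    "-CONCERTSESSION=RemoteSession",                 # placeholder session name
    "-UDPMESSAGING_TRANSPORT_UNICAST=10.8.0.5:0",    # this machine's VPN address (placeholder)
    "-UDPMESSAGING_TRANSPORT_STATIC=10.8.0.1:6666",  # Multi-User server's VPN address:port (placeholder)
])
```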
AMANDA: Awesome. Well, I think those are all the questions we're going to be jumping on today. So first of all,
I wanted to absolutely thank you all for joining us. It's been a pleasure
having you walk through all these
awesome features that we're going to see in
the upcoming release of 4.27. We're really excited to
get them out in the wild, get all of you exploring
the new features, and seeing what you're
going to make with it. There's some really cool
stuff coming in this update. And you all always blow us away. We're watching all this
stuff even in preview, and you knock our socks off with your community projects. We love them so much. But as we'll be jumping off here shortly, just a couple of reminders. Do tune in next week; we'll have the team from Houdini joining us. They're going to talk about
Project Titan and some SideFX Labs updates for you all,
always really, really cool stuff. Follow us on all of
our social channels, Twitter, YouTube, Facebook,
whatever, for regular updates. And definitely
jump in the forums if you have questions,
or comments, or anything like that. As far as the topics today,
if there were additional questions,
or you think of stuff, we can always come back to the event forum thread; check it for any additional follow-ups there. And yeah, again, thank you all. We hope you have an
absolutely wonderful week. Thank you to all of
you who joined us today here on the show, Antoine,
Andy, Steve, Chris, and Ryan. Y'all are rock stars,
love seeing you. And yeah,
we'll see you all next week. Take care, everyone. [MUSIC PLAYING]