>>Amanda: Welcome to your weekly
news and community spotlight from Unreal Engine. First things first, the results
of the 2019 Rookie Awards are in! Of the thousands of
projects submitted, all three Game of
the Year winners in Console & PC Games, Mobile
Games, and Immersive Media, as well as the Rookie
of the Year winner in Game Design & Development,
were created in Unreal Engine. We’d love to offer up
a huge congratulations to the winning teams behind
Spire, Xploro, and Soundbender, to Juras Rodionovas,
and to all of the students that submitted their incredible
projects this year! Moving to our own contest,
we’d like to say thanks to all who participated in the
Cinematic Summer event! We loved watching
all the submissions. AFO Studio, James
Field, and TJATOMICA will each receive a custom
Unreal Engine-branded chair as part of our
DXRacer-sponsored sweepstakes. Check out our blog for a
few of our staff picks! During the Unreal Engine
User Group at SIGGRAPH 2019, we revealed a collaborative
project with Lux Machina, Magnopus, Profile
Studios, Quixel, ARRI, and cinematographer
Matt Workman, which put our latest virtual
production tools on display and demonstrated how this
rapidly evolving technology is radically changing the
way films are being made. In our blog, learn more
about how dynamic LED sets, paired with modern techniques,
are being used to transform film stages into living,
breathing worlds. Extended footage
will be coming soon and you can try out all
this tech for yourself in the latest Unreal
Engine preview! Earlier this week, we announced
a new update for Twinmotion, the easy-to-use real-time architectural
visualization solution. With this release, you can
get from SketchUp Pro to VR in just two clicks! Plus, there’s a great new
pack of high-quality grasses for more convincing
lawns and landscapes. You can download Twinmotion
for free through November. When IGN said that “[Assetto
Corsa] Competizione offers the best sim racing
experience on the market in terms of feeling,
immersion, and handling," we knew we had to
catch up with the team from KUNOS Simulazioni. The developers explain how
they were able to produce one of the most authentic
racing sims ever created, elaborate on how they’ve achieved
stunning best-in-class visuals, and delve into how they designed
a dynamic weather system that not only looks great but realistically
affects how cars drive. Korean studio Action Square
took a risk developing Gigantic X, their top-down,
sci-fi space shooter, a genre and theme typically not
popular in the Korean market. It was a risk that paid
off, however, as Gigantic X
launched with success in both Australia and
Singapore earlier this year. The secret? The team found that if they
combined the arcade action of console shooters with
the mobile platform, they could reach an audience
larger than ever before. Gigantic X is now available in 150 countries; read more about how Unreal
helped them bring it to the global playing field. Leading up to the
launch of ACE COMBAT 7, it had been 12 years
since the release of the last mainline
game in the series. Not only is ACE COMBAT
7 the first installment in the franchise to
use Unreal Engine 4, but it also marks the series’
first foray onto modern consoles. To see how Bandai Namco
Studios was able to reignite the success of Ace Combat, we interviewed Kazutoki Kono. The ACE COMBAT producer
elaborates on the great lengths that went into getting
the jets to feel authentic and the gameplay thrilling,
and discusses how the studio achieved arguably the best graphics for
a combat flight game to date. Plus, Kono also expounds
on the implementation of ACE COMBAT 7’s
critically-praised VR mode,
another first for the series. We are pleased to announce the
opening of our newest studio located in Cologne, Germany. Led by the founders of Factor 5,
creators of the Turrican and Star Wars: Rogue
Squadron franchises, Epic Games Cologne is joined
by F5’s principal members and serves as part of
the company’s expanding focus on emerging forms
of interactive media. We’d like to welcome
them to the team and remind you that we
have over 200 positions open across the globe,
including new roles in Cologne! Now for our weekly Karma earners.
We’d like to give a shoutout to: ClockworkOcean,
DonBusso, Shadowriver, MCX292, Jezcentral,
kevinottawa, Leomiranda5281, Rolffso, LoN.
and 6r0m. Thanks so much for helping
folks on AnswerHub! Now onto our weekly spotlights! This slightly haunting
scene from Sescha Grenda called Overgrown,
inspired by Andreas Rocha, is our first
spotlight this week. You can see a breakdown
of their progress, from blocking to final renders, which is a nice way to see how
a project like this unfolds. Great work and we hope
you'll share more with us in the future. Here we have Grimmstar, an
action-packed space combat simulator with Action RPG
and Fleet Management twists. Command the last fleet of
mankind while you defend the stranded remnants of
the human race with your fully modular fighter, growing your forces along
the way with the hope of eventually taking down the
planet-devouring behemoth, the Grimmstar. Last up here is Ashes of Oahu, an open world,
post-apocalyptic RPG shooter where you tap into the power
of the spirit world to liberate the Hawaiian island of Oahu
from the army that occupies it. It’s always exciting to
see what a small team, in this case five
folks, can accomplish. Thanks for tuning
in to this week's news and community spotlight. >>Victor: Hey everyone! Welcome
to the Unreal Engine Livestream. I'm your host, Victor Brodin,
and we're here today to talk about some of our
featured highlights that we will be releasing in 4.23. To help me with this
endeavor, I have Engineering Director for
Graphics, Marcus Wassmer, as well as Lead Physics
Programmer, Michael Lentine. Thankfully these are two of the
smartest minds on these topics, so I'm very happy you were able
to be here with me in the studio and do the livestream. Michael, this is the first time
you're on the stream with us, so I would like to ask what
you do here at Epic and explain to our viewers
a bit about your work. >>Michael: Sure, my name is Michael,
I lead the physics team here at Epic. The physics team
is largely responsible for all physics in the Engine. Our focus for the last while has
been on destruction technology and the Chaos Physics System and trying to get that
ready so everyone can start playing with it. >>Victor: We have definitely
been showing off some cool footage of the demo
we did at GDC and now also over at SIGGRAPH. And Marcus, you are the director
for our graphics team, right? >>Marcus: I am.
What that means is I oversee our four different rendering
teams, as well as the mobile team, the console team, and the Niagara Next
Gen VFX System team. >>Victor: Which we will
talk about briefly in a bit. >>Marcus: There's some
cool stuff in 4.23. >>Victor: Michael, would you
like to give us a rundown of some of the features that
will be coming out with 4.23? >>Michael: Sure. From the
physics side, this will be our first official release of
the Chaos Physics System with a large emphasis on
destruction technology. There's a number of
different features that we incorporated into the system. The first is the
notion of trying to basically have this many
movable bodies in a scene. For the demo that we came up
with, there were hundreds of thousands of bodies all
moving simultaneously. Traditionally, this would have
been fairly infeasible with the existing destruction
systems or with that many bodies in general,
so we had to introduce a new component type inside the
Engine, which we called Geometry Collections. Geometry
Collections are a way to group all these bodies together, but they can all still move
independently based on the rigid
transforms the destruction system will update them with
as the simulation progresses.
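To make that idea concrete, here is a minimal, engine-agnostic sketch of what a geometry collection boils down to: one component that owns many rigid pieces, with the solver handing back one rigid transform per piece every step. The type and member names below are illustrative assumptions, not the actual Chaos API.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch only -- not the real Chaos types. A geometry
// collection groups a large number of rigid pieces into one component;
// each simulation step the solver returns one rigid transform per piece,
// so the pieces move independently while being managed together.
struct RigidTransformSketch { float Position[3]; float Rotation[4]; };

struct GeometryCollectionSketch
{
    std::size_t NumPieces = 0;                         // can be very large
    std::vector<RigidTransformSketch> PieceTransforms; // one per piece

    // Called once per simulation step with the solver's results.
    void UpdateFromSolver(const std::vector<RigidTransformSketch>& Results)
    {
        PieceTransforms = Results; // pieces keep moving independently,
                                   // but render through a single component
    }
};
```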
We also obviously support a large number of features for breaking up bodies,
so you can take a building, break it up into potentially
hundreds of thousands of pieces, or however many pieces
you want it to break up on depending on how much
computational power is available. We did this by adding a
new mode inside the Editor, which gives a number of
different controls for how these things break up.
Different fracture patterns, things like that. As well as the idea of
Destructible Levels or what we colloquially refer
to as Destructible LODs. This allows an artist
to go in and author fracture setup
that can scale from a small number of bodies
on certain platforms, to a very large number of
bodies on other platforms with a single authored setup. That was a big focus for this
entire system in general, because we wanted to give
as much power as possible to our artists and game
designers, so they could come up with the most cool and
amazing ways to incorporate destruction into
their technology. A couple different other
controls that we provided-- Potentially the biggest
one is the Field System. The Field System is
essentially a region of space that can set various
physics parameters. You can use it to set the
strength that a building needs to exceed in order to break up,
we call that the strain. You can use it to set more
traditional physics parameters like friction or
coefficient of restitution or anything like that,
and it gives a way to control, essentially,
simulations of this scale. So you don't have to go through
and, say, select an individual body and set a whole bunch
of properties on that or have everything
tied to Materials. We incorporated a number
of other systems into the 4.23 release to try to help with the way that destruction
feeds into it. We believe that physics is something that's
potentially under-utilized and we want to make it
more of the forefront of trying to make games
and interactive worlds and using a heavy amount
of physics systems and a lot of that requires
integration with other aspects of the Engine. I mentioned Niagara
a little bit. One of the things we introduced
in 4.23 was the Niagara Chaos physics integration. This
allows you to do things like spawn dust effects
or smoke effects when a building
starts breaking up. Without this, obviously,
you get simulations that don't look very realistic. So
this is a key feature in order to drive sort of a holistic
destruction system. We've also incorporated
into other aspects, such as audio, to allow you to basically create the most
immersive destructible environment that
you possibly can. Finally,
we've added a whole bunch of sort of features for
programmatically controlling various things, like how a
building breaks up over time. This, again, has to do with
the scalability of this type of system. We really want
people to be able to run simulations, if enough
computational power is available, of hundreds of
thousands of objects. So this means different
paradigms in terms of authoring. One of those is we
call the Connection Graph. So that if you start breaking
connections, which happens when some force or some field violates, essentially,
the allowable strain inside the connection, certain parts
of the building break off and if you do this, say, across
an entire section of a building, the entire top would start
falling and collide with everything else,
which gives you a much more sort of organic interactive
looking simulation. Of course, if you don't want
this, you can also add controls to make it more
artistically directed. Then the other aspect of
this is cache playback. As we're aware, you can't always
have the computational power necessary to run
100,000 objects, but in some cases, you may not need interactions, and so what
this allows you to do is do something that you used to
be able to do in the Engine, or still able to do, which
is to bake out something from Houdini, import it in. Of
course, this creates a number of problems with iteration time,
having to go back and forth between packages. So we now
allow you to do this directly within the caching system in
the Engine, so you can set up your simulation, your
frames per second, record that out and then
play it back at runtime and we also allowed fields to
start activating bodies when they interact with those
bodies, which allows you to even if the building is
interacting with characters, you can effectively only
simulate the bodies that are doing those interactions and
allow the rest of the bodies to keep doing playback. This is sort of a high-level
overview of some of the features that we provided for destruction in Chaos in 4.23. >>Victor: We'll be providing
an example project as well, where some of the things that
Michael just mentioned will be there, broken down and
commented, so that you can dig into them and figure
out how to work with it. I was a little curious about
the fields that you mentioned. Are they similar in any
way to the Physics Volumes that we've had in
the Engine for a while? >>Michael: They're a
lot more comprehensive. Yeah, they're really kind
of a different thing. This is primarily
meant for control of, essentially, any
arbitrary parameter within the physics system. It's a really powerful way
that artists can use to control kind of everything.
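As a rough, engine-agnostic sketch of that idea: a field is just a function over space that returns a parameter value, and the solver samples it at each body's position. The helper below is an illustrative assumption, not the actual Field System API.

```cpp
#include <cmath>
#include <functional>

// Illustrative sketch only -- not the real Field System API. A scalar
// field maps a world position to a physics parameter (strain threshold,
// friction, gravity scale, ...), so one object can drive many thousands
// of bodies without setting properties on each of them.
using ScalarFieldSketch = std::function<float(float X, float Y, float Z)>;

// A radial falloff field: full magnitude at the center, zero at the radius.
ScalarFieldSketch MakeRadialFalloff(float CX, float CY, float CZ,
                                    float Radius, float Magnitude)
{
    return [=](float X, float Y, float Z) {
        const float Dist = std::sqrt((X - CX) * (X - CX) +
                                     (Y - CY) * (Y - CY) +
                                     (Z - CZ) * (Z - CZ));
        return (Dist >= Radius) ? 0.0f : Magnitude * (1.0f - Dist / Radius);
    };
}
// The solver would evaluate a field like this at each body's position,
// for example to decide whether the local strain threshold is exceeded.
```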
We envision this long term being able to apply to much
more than just physics. Right now, it's limited to Chaos
Objects and Niagara Particles, in terms of what things they can
affect, but we expect this to really grow and allow people
to have a different way of controlling things at scale. >>Victor: Will that mean
you'll be able to do stuff like flip gravity? >>Michael: Sure. You
could absolutely do that with the Field System. That's
basically a physics parameter. You can set any
physics parameter within the Field System. >>Victor: That's awesome. I know that here in
the preview, we've had several people go in and try
out some of the features. Just to let you guys know
that all of the ones that Michael is talking about, even
though not all of them have been available to use inside of the preview,
you will be able to use them all once 4.23 comes out. We also wanted to mention
that it is a beta release of our new Physics Engine, so go
ahead and play with it, try it out, check all of the things
and start planning for what you can do in production. The full release will happen
in a couple of Engine versions. Marcus, there are some pretty
cool new features coming-- or not new features, but
we've done improvements on quite a bit of the ray tracing
that we released in 4.22. >>Marcus: Yeah, there's
some pretty significant improvements there. One big
thing that we did this year was finally release the first beta version of ray tracing
in an official UE4 launch. The goal there being,
finally let everybody in the world start
playing with ray tracing. >>Victor: Yeah, in real-time. >>Marcus: Yeah, in real-time,
which is super nice. However, the 4.22 release,
you know, it was kind of what we considered like the-- kind of the basics of what
you needed to get started. Since 4.22, we've
just been working on stability improvements,
performance improvements, and then filling in
some of the gaps. So some major things
that are in 4.23 is support for Landscape
for ray tracing, support for Instanced
Static Meshes and Hierarchical
Instanced Static Meshes. Another really important one
is World Position Offset for Skeletal Meshes
and for Landscape. Not for Static Meshes
yet, unfortunately. But at least for Skeletal
Meshes and Landscape, World Position Offset
will be properly intersected with
the ray tests now. There were a few other
little visual improvements. For example, when you're
doing ray traced reflections, you can actually choose in
the Engine how many bounces your rays should go
through before terminating. And in some cases,
you really need more bounces than
you should be doing if you want to maintain a
reasonable real-time frame rate. In those cases now
in 4.23, when the ray trace reaches
the maximum number of ray bounces, it can fall back to one of
our static reflection captures. Just to fill in something,
so you don't end up with a weird black spot in your
reflections or anything like that. >>Victor: Does that mean that
you would set up the scene for both cases? And so using sphere reflection,
box reflection captures-- baking that as well as-- >>Marcus: That's
right and honestly, you would want to
do that anyway. Because another thing you need
to pay attention to when you're doing real-time ray tracing
for reflections at least, is that when a reflected ray
hits a very rough surface, so you would get, like, not
a sharp reflection at all. In fact, most things
in the environment are rough/diffuse. So when you hit those things,
you can configure the Engine to pick a roughness point
at which we terminate rays and fall back to the
reflection captures anyway. Because a very rough surface
requires a very large number of rays to properly converge. and that can be a performance
hazard you need to watch out for. So that's something you
would be doing anyway, in a lot of scenes. Certainly it's
something that we did in the reflections
demo two years ago. >>Victor: We had Sjoerd
and Juan Canada here and they actually-- Sjoerd set up a preview mode of
how you can see how expensive the bounce would be, depending
on the roughness value of the Material. So if
you're interested in that I'm going to drop a
link in chat here soon and you can go ahead
and watch that entire, I think, hour and a half of
Sjoerd going over a big ray tracing example
that he's working on. >>Marcus: I think those are
the big ray tracing features. There's a whole host of
probably a couple hundred small performance
and bug fixes and
>>Victor: Only a couple hundred? >>Marcus: Yeah, you know... One of the big things we're
doing here is turning, like, DirectX 12 into a, a big boy RHI. Right now,
most people default to DX 11. With the advent of ray tracing,
a lot of people are going to Ne moving to DX 12. So we're spending a lot
of time making sure that DX 12 is in a good state
and production ready and performant competitively
with other RHIs as well. >>Victor: You mentioned RHI.
Would you like to dig into what that means a little bit? >>Marcus: Sure, sorry.
RHI is an acronym for rendering hardware interface. It's just an abstraction layer
we have in the Engine so that high-level rendering programmers that are
writing features like SSAO or or depth of field or shadowing
techniques or whatever, don't have to write
them 20 times for every single API. There's this layer
in between and then we have an RHI
implementation for DirectX 11, DirectX 12,
Vulkan, the consoles, and any other platforms that
need their own specific graphics API implementation.
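The pattern itself is the familiar one of an abstract interface with one backend per graphics API. A stripped-down sketch, with invented class names rather than the Engine's actual RHI types, looks like this:

```cpp
// Illustrative sketch only -- not the Engine's actual RHI classes.
// High-level rendering features are written once against the interface;
// each graphics API supplies its own implementation underneath.
class RHISketch
{
public:
    virtual ~RHISketch() = default;
    virtual void CreateTexture(int Width, int Height) = 0;
    virtual void DrawIndexed(int IndexCount) = 0;
};

class D3D12RHISketch final : public RHISketch
{
public:
    void CreateTexture(int Width, int Height) override { /* D3D12 calls */ }
    void DrawIndexed(int IndexCount) override { /* D3D12 calls */ }
};

class VulkanRHISketch final : public RHISketch
{
public:
    void CreateTexture(int Width, int Height) override { /* Vulkan calls */ }
    void DrawIndexed(int IndexCount) override { /* Vulkan calls */ }
};

// A feature such as depth of field is written once against the interface
// and runs on whichever backend was created for the current platform.
void RenderDepthOfField(RHISketch& RHI) { RHI.DrawIndexed(/*IndexCount=*/6); }
```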
>>Victor: So it's an important part of making it a little-- so it's easier for some
programmers to do their job. >>Marcus: Exactly. It's also
just easier to maintain the Engine if you have a common
interface to write graphics effects against, so you do not
have to write something specifically for every platform
that we have to support. >>Victor: What would the
difference between an RHI and sort of the general term for
an API be? Are there any-- >>Marcus: An RHI is an API. It's just one that
we made up at Epic. >>Victor: That's cool. Moving forward,
I think we mentioned a little bit about some
of the features and virtual texturing will
be another big addition to the Engine coming
in 4.23, right? >>Marcus: Yes. That is
one of the ones I'm more excited about lately. It's
something we've been working on for a while and I'm really
excited to see what people will do with it. So
Virtual Texturing, if anyone doesn't know, is a way to to Texture Streaming but by only streaming
parts of the Texture. Let's imagine that you're an
artist or you have an artist on your team and they
want to put an 8k-- or a whole bunch of 8k
Texture Maps on a face so that when you get the camera
really close, there's still a whole lot of detail in there. That can look really good, the problem with a traditional
mip-based Texture streamer is that 8k Texture Map,
depending on its format, might be like 64 MB of memory. If they layer four or five
different of those Maps, your memory footprint
becomes enormous because when you get really
close to a surface, to get the detail of that 8k
Map, we have to stream in the whole 8k Map.
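As a back-of-the-envelope illustration of the numbers above, assuming a block-compressed format at roughly one byte per texel (the figures are illustrative, not measurements from the Engine):

```cpp
#include <cstdio>

// Rough arithmetic for the example above: a whole 8k map versus streaming
// only the 128 x 128 tiles that are actually visible. Assumes ~1 byte per
// texel, which is typical for common block-compressed formats.
int main()
{
    const double BytesPerTexel = 1.0;
    const double FullMap = 8192.0 * 8192.0 * BytesPerTexel; // whole 8k map
    const double OneTile = 128.0 * 128.0 * BytesPerTexel;   // one VT tile

    // Suppose roughly 300 tiles of that texture are visible this frame.
    const double VisibleTiles = 300.0;

    std::printf("Whole 8k map streamed in: %.1f MB\n",
                FullMap / (1024.0 * 1024.0));
    std::printf("300 visible tiles only:   %.1f MB\n",
                OneTile * VisibleTiles / (1024.0 * 1024.0));
    return 0; // prints ~64.0 MB versus ~4.7 MB
}
```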
Virtual Texturing is a little bit different in that we kind of break those very
large Textures up in to 128 by 128 Texture Tiles and instead of streaming the
whole Texture, we stream just the parts of the Texture that you actually need that are
actually visible on the screen. So if you get really
close to that face again, a lot of that 8k Texture is
probably parts of the face that you're not seeing,
and we only stream in the bits that you can see. That means you can have very
high resolution Textures, but your active runtime
memory footprint is more determined by
your screen resolution more so than the
resolution of the Textures. It's more about the visual
fidelity of what you can see and not what you can't.
So it's really good for artists that really want to
ship that high quality content but not pay for it
all of the time. It's really exciting. The other thing that I
think is going to really unlock a lot of power
with Virtual Texturing is, we're also shipping a thing
that we're calling, I think, Dynamic Runtime
Virtual Texturing. This is a case where
we don't actually have any data on the disc.
We don't have 8k or 16k Textures on the disc. What we're doing is procedurally
generating that data and the filling in a Virtual
Texture that we can sample on a Material. So the most canonical
use case for this is, imagine that you've
got a Landscape and imagine that you
wanted to put in some crazy number of layers on it,
like you wanted to have 32 layers on your
Landscape really finely put a lot of detail in and
a lot of transition areas and whatever. In the
traditional Landscape System, you would have to sample
all 32 of those Textures for every single pixel. The
more layers you stack on, the more impractical it gets
from a performance standpoint. With something like
Runtime Virtual Texturing, what you could do instead is go,
okay, I'm looking at this Landscape. Which parts
of this Landscape can I see? Let's
imagine that I've mapped a 16k Virtual Texture
onto it or some size, that's not really important.
And you can fill in those 128 by 128 tiles, the ones that are necessary
to draw your Landscape, and you can, at one time,
render all of those 32 layers and their transitions into
the tile, write the tile out, and so it's like
a caching system. So then the Landscape itself is
only doing one Texture sample for this runtime pre-cache collapsing of those layers. And you fill in some large
number of those every frame and the performance of
your Landscape is now a single Texture fetch
and you can kind of-- I don't want to say arbitrarily
have infinite of them but certainly more than you would
be able to do in the traditional type of system.
>>Victor: Is it similar to if you took, say, all
those 32 Textures and pre-blended them and just
took that out as one Texture? >>Marcus: Yes. You could
do something like that offline as well.
It's just that-- >>Victor: It's a lot more work.
>>Marcus: It's a lot more work and it's harder to
maintain and even, like, with this big bake step
that you'd have to-- >>Victor: Right. Every time you
make a change, now there's-- >>Marcus: You have to bake
the whole thing again. And changing the ordering
of the layers and stuff, it becomes a work flow issue. But
you're right, you could do that. >>Victor: Just trying to find a
similarity between a work flow that maybe someone has already
gone through that they might-- >>Marcus: You can absolutely
do that and you can use the traditional Streaming Virtual
Texture to do that as well if you ended up with some 16k
bake for your Landscape Texture. The Streaming Virtual Texturing
would still be a good way to deal with that content. >>Victor: By "streaming" it's
actually streaming from the hard drive, right? Not
from a cloud server? >>Marcus: Correct. >>Victor: That's exciting. Another cool thing that
will allow is that it's really useful for mobile, right?
Because you can reduce-- the memory footprint of
all the Textures will be greatly reduced, right? >>Marcus: That is the desire. As of today, in 4.23, the
Virtual Texturing isn't currently supported on mobile. But that is coming
in the future. We're actively working on
that because we think it's-- Specifically for Landscape,
we think it's important to maintain a consistent
work flow and consistent look across platform. But yeah, the hope is that with Virtual Texturing, yes,
you should be able to maintain a more consistent and smaller
memory footprint for your Textures and that should be
of benefit on mobile as well. >>Victor: Because we're limited
in memory more so on mobile devices than we are on--
>>Marcus: Yeah, it's a pretty significant limit.
Especially depending on what devices you're
targeting for your game. >>Victor: Most developers
want to target as big a range as possible.
>>Marcus: Yeah you want to hit as much stuff as possible
for the broadest audience. >>Victor: That can be hard.
I definitely had to either split the Landscape up to get high fidelity and try
to design it in a way that culls sections of it that are
just Static Meshes and then you lose the functionality
of being able to use the Landscape tools in
the Editor as well. A lot of cool features
around that tech. >>Marcus: I should mention, the
description of what I mentioned with the Landscape
Layering System is an example. That's
not shipping in 4.23. But it's something
we are working on. >>Victor: It's coming, like
a lot of exciting things. Was there anything else
around the ray tracing in Virtual Texturing that
you wanted to mention? >>Marcus: I think those are the
two major call-outs for those. >>Victor: Okay, let's continue
then. We are shipping native support for Hololens 2.
That's coming in 4.23. It's in beta but
you'll be able to use it natively in-Engine. We also have a couple of virtual
production pipeline improvements and some of this stuff
you might have come across on YouTube already or any
of our other channels. At SIGGRAPH, we demonstrated
this new kind of way of being able to record--
essentially doing essentially doing
CG in real-time in the set and the way we
do that is using an LED wall that is almost
encompassing sort of a 270 degree stage, essentially. Let's see if Amanda can
drop us a link there. I was unable to find the right
video to present for you all but it's on YouTube,
it's been showcased in plenty of places. >>Marcus: That thing
is really cool. I actually had the opportunity
to visit that stage while I was out at
SIGGRAPH and it's very neat. There's, as you
mentioned, the big 270 degree LED wall with a ceiling and
UE4 is driving the pixels on the whole wall, so you're like
loading your 3D scene in there. You got the actual real camera
being tracked in the space and it's feeding data
directly into the Engine to move the rendering camera so you get correct parallax and
perspective from camera movement onto the-- from the
wall, from the actor. And you get lighting
on the actor and the real props
that you have in there that matches the environment
that you want to film them in. Which is a huge improvement
over a traditional green screen where you have to redo all
the lighting in post and fix up any green reflections.
The thing that really struck me being on the stage is there was
a very shiny Indian motorcycle on stage and if you
actually walk up to it and look at the body work
or look in the mirrors, and forget that these
walls are around, it looks like it's really
there and it's a really good improvement for
people doing film content. >>Victor: There's another shot
there, I think when they're close up on the rider
riding the motorcycle and the screens
are sort of doing the motion, and all of those
reflections, you can see them in the bike and that
really helps sell the point that he's actually
riding it, when he's actually sitting completely still. >>Marcus: Yeah, it's really
cool and it's all in-camera, all of it's live, none of it's fixed up
in post or whatever. It's a really cool
piece of tech. >>Victor: It helps directors and everyone who is involved with
the photography of the set, if they're interested
in a change, they can on the spot just change it. >>Marcus: The whole thing is
live, driven by some people that are operating the
Editor off camera, so if your DP or something wants
to change some lighting or change the background
environment or move a rock or rotate anything, you can just
[snaps finger] do it right there,
right on the spot. >>Michael: If I understand
correctly, you can also do that within an iPad. >>Victor: Yes. We are
shipping support for-- I think it was called an HTTP
protocol that will allow you to build an interface in HTML 5 just any browser and
load that on any device because it's just a browser,
and you can use that to drive the settings and parameters
inside Unreal Engine. >>Marcus: We had a prototype
of that on the stage there and it was really cool. You
just hand someone the iPad and they could mess around
with the lighting or swap environment backgrounds or move light cards that were in the 3D environment and
project them to the wall to get some off-screen
extra area lighting-- >>Victor: Oh, so similar to what
we have here at the set, right? We've got big, strong lights-- >>Marcus: Exactly. Instead of
having to physically move the lights around, you just take
the iPad and move them around. Because the light is
on those very bright LED walls, it's like
you're moving a real light. >>Victor: They're very bright. That even comes through in the
footage, although I haven't been there myself.
You can see that yeah, it's pretty magical. You should
all go check out the little-- >>Marcus: Check the video
out, it's pretty cool. >>Victor: Yeah.
It's definitely-- We'll see a lot of
that moving forward. We had Jon Favreau talking about
some of the work he's doing at SIGGRAPH and he's a huge
supporter of the technology and just what it allows
directors and producers to do on set. I'm sure-- This is
obviously for, like, first of its kind and we've
just announced the technology, but slowly but surely,
I'm sure it will trickle out among studios. >>Marcus: I expect it's
going to be pretty common for television and film production
in the next couple years, for sure. >>Victor: If you're
curious about how people have solved
the problem between lighting on set versus what
we're rendering behind them, we did a showcase
for Welcome to Marwen which was a really cool example
of how they were able to do that previously, using
a green screen. And even though they sold it in a very, very
interesting fashion, and they were able to get almost
final imagery from the set. For me, it's just interesting
to go back and see what we had to deal with and now what
technology is bringing to the table and allows us to do
without having to go through some of those. >>Marcus: You should see
people that go see this thing, their eyes light up.
People that make-- >>Victor: Their eyes and
their faces because it's so-- >>Marcus: Yeah, they have a ton
of fun on the set. It's great. >>Victor: Yeah, it's exciting.
I'm hoping I get to see it at some point soon. What else? There's
quite a lot of-- I've been pushing a lot
of fixes on the forums that we provided for 4.23.
I'm not even sure what the change list is, but it's
a big release in general. Especially real-time ray
tracing coming to a point where it's more usable in production. You did see, one of
our spotlights earlier, right after the news is
from Grimmstar and they implemented RTX in
the entire game. Which, you know,
it's right there. >>Marcus: Yeah, it's
available now for anyone to integrate into their game if
they would like to and also, we've been
using it internally as well. Not on games but for Fortnite trailers
and cinematics and things our cinematics team has been trying ray tracing out too and
they've been really enjoying it. >>Victor: Yeah, it's
just-- RTX on, RTX off, seeing the difference is cool. It's neat that you can
do that all in real-time. Some other topics, I wanted
to mention that for both Chaos, next week, we are
actually having Wes Bunn as well as Jack Oakman here, to walk everyone
through how you can set Chaos up because currently
you are required to use a source build of the Engine. Until Chaos comes out of
beta that will be the case. So if you're unfamiliar
with how to compile the Engine from source, we
will touch briefly on that and then walk you through
how to use some of the topics that Michael
mentioned earlier of how you can
actually use the system. Then I believe I have something
scheduled for virtual production as well in the
next coming months. We are also shipping a new performance, or should
we say profiler would be more accurate. >>Marcus: This is one of my
favorite features of 4.23. I'm really excited about this.
It's called Unreal Insights. It's something our core
team has been working on for quite a while. Unreal Insights is a new CPU and GPU sampling timeline profiler. It's great. You can capture
massive amounts of data from your game. If you're familiar with
our existing stat system, it's essentially capturing
all of our stat system, as much of it as
you want to turn on, as well as GPU events and I'm not sure if this is
in 4.23 but it's planned to also put IO events
in there as well. It's written so it can handle
massive amounts of data because we battle test the thing
on large projects like Fortnite. So you're capturing all
this data and it can zoom in and out and display
all the performance data very well on a single timeline. On a single
timeline, you can see all the different CPU threads,
what's going on in the GPU, if not now, soon, what's
going on on the IO thread. It really gives you a good way
to analyze the performance of your game and find out where
you're getting hitches from, why aren't you making frame rate, what systems do
you need to look at. It's a dream for me. I'm
very excited about this tool. >>Victor: Rather than hopping
between the GPU profiler and-- >>Marcus: Exactly. Having it
all in one place so you can-- Sometimes you need to know the interaction between systems.
Sometimes the GPU frame is slow because the CPU is slow. If
you're not feeding the GPU fast enough then
the GPU goes idle. That's something that's kind
of hard to see from a lot of GPU timers but if you've
got the GPU timeline combined with the CPU
timeline, you can see, oh, there was a 100 millisecond CPU
frame and then the GPU was slow. You can connect those
dots really easily when it's all combined
in a single timeline. >>Victor: That's great. It's
also coming out in beta in 4.23 and we are, I
think, in two weeks, I have a livestream with some
of the subject matter experts on Unreal Insight and we'll
walk you through how to use it. Which will be exciting and it
will be a lesson for me because profiling and figuring out
where your bottlenecks are can take up quite a bit of time and sometimes cause you to make pretty big changes in the
way you are implementing new features or
your Asset pipeline. Being able to figure that
out a little bit easier, I think everyone
will be very happy. Some other topics
that we can mention briefly, obviously the
release notes will eventually make it out but this is sort of
a little preview for all those of you who are excited. We've got new nDisplay Warp
and Blend for Curved Surfaces. I believe this is an improvement
to our current nDisplay tech. >>Marcus: That's right. I think in 4.22 we
shipped the first version, or maybe it was 4.21,
I don't remember exactly. But for something like square
LED walls where you need to do projection from
multiple machines, to sync things up. With the Warp and Blend,
it's more for, like, curved surfaces or different
physical installations that you might be building.
Think like a planetarium dome or something
like that, that you need to project some content
onto that looks correct. It's handy for
that sort of thing. >>Victor: If you're curious
what that might look like, we also did a showcase on
the Childish Gambino concert and how they were
using that technology to do just that. It was a dome
and the stage was in the center and the audience was all around. It wasn't interactive
but it could be, if you were able to give
everyone some kind of hand tracking and they could
control things on the ceiling. Moving on-- Niagara
improvements. Niagara is still in beta
but there's a lot of work being done there. >>Marcus: There's been some
pretty significant improvements to Niagara. It's
definitely one of our-- Well, one of my favorite
new systems that we have. It's been around for
a little while but it's getting really, really good. This is something that's
allowing our tech artists to have as much freedom as-- Just
imagine when our artists first got the Material Graph. Any artist could make
their own shaders. Now it's like any artist
can make their own-- Any tech artist or
developer can make their own high performance, data driven data manipulation simulation in a graph interface.
It's really cool. For 4.23, some of
the cooler things, you can now sample Static
and Skinned Meshes in a GPU particle system. This is something that
we did on the Troll demo that we did for GDC. Where we had-- if anybody
saw it-- we had these fairy-type creatures
that would move around that were Skeletal Meshes. We spawned something like
600,000 particles a second from the surface of
their Skeletal Mesh, so it looked like they
were made of fire. When you're sampling from the
Mesh, you get something like, you get the position,
you get the UV, and you can use that
to sample Textures and manipulate what
you're spawning across the surface of the Meshes and it's on the GPU so
you can do lots of them.
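Conceptually, sampling the surface just means picking a triangle and a pair of barycentric weights, then interpolating position (and UVs) from the triangle's vertices. A minimal CPU-side sketch follows; Niagara does the equivalent on the GPU, and none of these helpers are the Niagara API.

```cpp
#include <cstdlib>
#include <cmath>

// Illustrative sketch only -- not the Niagara data-interface API.
struct Vec3 { float X, Y, Z; };

static float Rand01() { return static_cast<float>(std::rand()) / RAND_MAX; }

// Uniformly sample a point inside the triangle (A, B, C).
Vec3 SamplePointOnTriangle(const Vec3& A, const Vec3& B, const Vec3& C)
{
    float U = Rand01();
    float V = Rand01();
    if (U + V > 1.0f) { U = 1.0f - U; V = 1.0f - V; } // reflect into triangle
    const float W = 1.0f - U - V;
    return { W * A.X + U * B.X + V * C.X,
             W * A.Y + U * B.Y + V * C.Y,
             W * A.Z + U * B.Z + V * C.Z };
}
// The same weights interpolate UVs; for a skinned mesh you would use the
// post-skinning vertex positions each frame, which is why the spawned
// particles track the animation automatically.
```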
And because they're Skeletal Meshes, that means you can very easily
track things that are animating, so you can do any sort
of animation and do all sorts of crazy
particle effects and stuff. >>Victor: In terms of the Troll
demo, the fairies were sort of flying and lifting
the crown and moving. That was an actual animation
that was being played back? >>Marcus: Correct.
>>Victor: And then Niagara was basically rendering? >>Marcus: Niagara was
dynamically running and simulating the particles
based on the positions of those animations. None of
the particle was pre-baked. It was run live in Niagara. >>Victor: You said 6,000? >>Marcus: 600,000.
>>Victor: 600,000. >>Marcus: Per second spawned.
Yeah. >>Victor: That's a
ridiculous number. >>Marcus: It was
pretty cool looking. Some other cool things there--
Definitely the integration with Chaos that Michael
mentioned earlier, so that instead of having to somehow pre-calculate where all your
impact effects and stuff from Destruction
Events would go off, you can get that data directly
from the Chaos simulation. However it simulates, you
can get your dust effects and explosion effects
in the right place. It's really important
distilling those things. >>Victor: Michael,
question for you then. That would make it entirely
possible to do the cache simulations and then tie
Niagara to that, right? >>Michael: Yes. This
works through a form of Event system and as part
of the caching simulations, if you use cache in
Chaos inside the Engine, it will also cache those Events. Then when you play it back, you
can essentially play back those events which will then
trigger Niagara simulations. If you're set up
to listen for them. >>Victor: Is there a-- similar
to some kind of timeline there that you work on? Of being able to
notify when those are supposed to be spawned? >>Michael: They happen when the
simulation says they happen. The playback can happen a
number of different ways. The most common one,
which is what we used for the demos that we showed,
it's also what we have for the content that we're going
to release is essentially set up through sequencer. What you can do is use
Sequencer to play it back which gives you some
amount of control over how you're playing
those back and then when those Events happen
through Sequencer playback it'll trigger whatever
Events you have hooked up. >>Victor: You dropped a little
bit there, but I think your sentence ended and
I'm going to continue. Before we move on, I'd like
to go through some of the questions that we've received
on the topics already. Let's see. Going to take a
quick second to see here-- They're asking for Chaos, when is it fully
available for production? >>Michael: Not right
now, is the short answer. It's our first release
and there's going to be a lot of rough edges. We're excited to see what
people can figure out, but it is definitely a risk to try
to ship anything with it. All we've effectively
shipped with it internally is demos and content and things
like that. We have not shipped full games internally
at this point. So there will be issues
that come up that way. We expect over the
next couple of releases to ship Chaos in its
entirety, out of beta. But as with everything, timelines are a little
nebulous, but we are working very
hard and really believe that Chaos and physics in general
will be a large cornerstone for the future of the Engine,
creating interactive worlds that everybody can play
with, so our entire focus is to try to get that out. >>Victor: Even then, being
able to touch on the tools and play with them and see
what they're able to do, is important in
any pre-production, no matter if you've
used the tools before or if they're brand new, so being
able to start using them and planning what you might be
able to do in the future, it's as important going
through that process as it is to actually producing
the content itself. I hope everyone is excited
to get to play with it. Almost everyone loves
things breaking, right? It's difficult to not smile and be happy when you
see big blocks of things exploding. And it's all
virtual, so no harm is done. A lot better than some of the
other ways you can do that. "On the topic of
Virtual Textures, is there a quick way to
measure the performance aspect of traditional streaming
versus virtual texturing?" >>Marcus: It depends on
what type of performance you're trying to measure. If you're sampling
a Virtual Texture, there is a performance cost compared to sampling
a regular Texture. Because the way virtual
texturing works here, you're basically doing
two Texture fetches, because you have to
fetch to a smaller indirection Texture
that tells you where the smaller tile is in
a larger physical Texture. I don't need to go into it. For measuring that overhead, I would recommend
doing something like taking one of your Materials
you're worried about the performance of,
putting it on a card, and making it fill the screen. Then maybe having a static
switch where you can turn on and off going down the
virtual texturing path versus a traditional
texturing path. You can kind of A/B there
on your target platform. Filling the screen is
a good way to make sure you get the maximum, like, what
is the worst case of this one Material filling every possible
pixel that I have to shade? If you're talking about
the IO performance, it's really going
to depend on your content and what
kinds of Textures you wanted to stream. I'm trying
to think of a good way to-- >>Victor: You want to explain
what IO is and what it means? >>Marcus: IO is
just input/output. Streaming speed off a
disc, like from disc to hard drive, because part of the
problem with the old streamer is that you're really streaming
in a lot of data that you don't need, but it does
have some positives in that it makes a prediction of
what you're going to need and can start sooner. It might be bringing
in too much data, but it's bringing it in sooner. Whereas virtual texturing
works based off of exact GPU feedback of
exactly what the GPU wanted. It's always kind of a
just too late system. Like one frame the GPU requests
this thing and you go, oh man, I need this tile, time
to read it off a disc. It streams tiles very quickly,
so you usually don't notice that too much, particularly
if a fair amount of it's already in memory and the actual
visual change isn't too much. But it's a tricky
thing to really A/B. I think you need to pick which metric you're trying
to understand and measure, and then set up a
test case for it. >>Victor: I did see one quick
example of just enabling streaming virtual
texturing and it will show-- So when you're inside
the Texture Editor or viewer in the Engine, you turn on virtual texturing
and you can actually see the reduction in memory footprint
that the Texture will have when it's loaded. So that
will give you an idea. >>Marcus: That'll give you an
idea of the memory footprint changes, yes, but not
runtime performance or streaming performance. >>Victor: Of the cost
of doing that itself. This is another question
for you, Michael. "Are you going to support soft
bodies and fluid simulation in the future?" >>Michael: At some point in the
future, I would say yes. But our focus is really
to build the foundation, get Chaos in its current
form production ready and up and running so it can
be used for anything, but as I mentioned before, our focus
is really, on the large scale, trying to make worlds
more interactive and dynamic, so pretty much everything physics related
falls under that scope. Including soft bodies. >>Victor: Another
question regarding Chaos. "Will Mesh Tessellation and
Chaos play well together?" >>Michael: They
certainly can play well. Mesh Tessellation is independent
of the destructible LOD system that we've talked about.
You can, at any one of those levels, add Mesh
Tessellation on top of it. You can do it to the extent
that you want to do it. But there are separate
systems, so you won't have automatic Mesh Tessellation
based on destructible LODs or something like that.
You'll have to set it up on that particular Level
for that particular Mesh. If you want it for all Levels,
you have to set it up for all Levels. >>Victor: I'm assuming
just like normal that the collision of
the simulated Meshes will not be able to take
tessellation into account, right? They're still some
form of primitive? >>Michael: We have a number
of different types that can be used for collision
inside of Chaos. But yes, Mesh Tessellation
will not affect that at all. Currently, our primary
collision types are, spheres and boxes
and Level sets. That will be expanded over time. >>Victor: Question for you,
Marcus. They want to know what the four rendering teams here at Epic are and
what their main focuses are. >>Marcus: I thought I
might get that question. We have a ray tracing team. There is an RHI team, like I
described the RHIs earlier. Because we need people to
actually implement all those different APIs. The rendering team right now
is one of our more distributed teams at Epic. So, for lack of a better name,
we have a features team east and a features team west. And the teams don't always
fall within those buckets but that's just a moniker. Most
people on the east team are here in North Carolina or in Stockholm or the
UK or something. And most people on
the west team are in our San Francisco office or Vancouver or-- I feel like I'm forgetting
someone. But there's also some people in Korea as well. >>Victor: So it's sort of east
and west from the center-- >>Marcus: Yeah from the center
of the US, just as a way to break it up so that we
don't have one giant 15 or 20 man team. >>Victor: Let's see.
They were curious-- "How about custom shaders
for ray tracing? For example, colorizing a ray depending
on incident angle." >>Marcus: We don't have custom shaders for ray tracing just yet, at least not in the
Material Graph where you could choose to make a closest hit
shader or something like that that's custom. If you are in a position where you can write Plugins or
modify the Engine code, you can create your own custom,
like, ray generation shaders, as like a global shader Plugin. Or you can-- if you modify the Engine,
you can write the other ray tracing shader
types as well. It's just not something that we really exposed yet
to the high level. It's something we're
thinking about doing and probably will do in
the future. It's just-- We're kind of laying the
foundations here for what good ray tracing support
looks like out of the box and then at some point
we'll go, okay, we've done most of the basics and start
to open up the toy box to tech artists and stuff. >>Victor: We gather
all the ingredients and now it's time to add some
spices, essentially. That could be the
metaphor I would use. There's another question here,
if there will be any livestreams dedicated to virtual texturing,
and yes, there will be. I'm currently planning
one on the 19th. Hopefully that will come through
and then we'll walk you through as much as we can. "Does ray tracing
work with Vulkan?" >>Marcus: Ray tracing does not
currently work with Vulkan right now. Currently
we only support DXR. Support for Vulkan is planned eventually,
basically when Vulkan has-- if and when Vulkan has
an official ray tracing specification, then we
will work to support that. >>Victor: Does virtual texturing
also affect lightmaps rendered with light mass?" >>Marcus: It can, yes. I would I guess want to double
check that it's still in 4.23. But I believe that the option to
have virtual textured lightmaps is still there. The only thing with that
is, yes, you can use virtual texturing to lower your
lightmap footprint and you can theoretically increase the size
and resolution of your lightmaps much more easily. It's just, of
course, the larger you make your lightmaps, the more bake
time you're going to grab. So the idea of, maybe I make 16k
lightmaps everywhere, you could do, but you might
end up waiting a while to bake those. >>Victor: "Virtual
texturing-- Does that mean we can now have 64k+
Textures in Unreal?" [Laughter] >>Marcus: Right now, you can't.
That's actually something we went and tested. Support for that is planned,
it's just in testing, I think after we
got over 16k or 32k, there were some bugs
that started to show up in various places
where there were some-- some placers that were using 32s
that needed to be a little bit bigger size and whatnot.
So for the moment, I think 16k is a safe bet. We will be fixing
those paths and making it possible to
have bigger Textures in the future. >>Victor: That would
be a very high res face-- >>Marcus: Very high res face.
Or Google image search data. There are uses for this stuff that aren't just like a
super high quality character. >>Victor: They're asking,
"RHI on multi-GPU?" >>Marcus: RHI on multi-GPU. >>Victor: There were
some follow-up questions that might help. "Non SLI mode, option to
select GPU for rendering?" >>Marcus: Here's the state
of multi-GPU in the Engine: We have RHI support
for it, which means you can write high level
rendering code that can-- at least for DirectX 12 and we
only support this on DirectX 12-- you can write rendering
command lists and choose which GPU they're
going to render on. There's primitives for
transferring data between GPUs. So there's synchronization,
command list, and then there's resource control of
where resources live. If you were willing to modify the Engine, you
could customize the renderer for your work load, to
move certain work off to other GPUs and bring it back. At the moment, we don't
have a turn-key solution, like if you just plop
another GPU in there, it's really not going to do
anything out of the box. The reason is, we found,
over the course of the last two years of doing the
Reflections demo and the Porsche Speed of Light demo, that the work loads for
different content are so different that
hard coding those paths into the renderer is just not
that valuable for anyone. But all the RHI code is there. If you want to write
multi-GPU code, you can. It just doesn't
come out of the box. As far as the second
question, for GPU selection, I'm 99 percent sure
there is a command line-- like a command line option
to choose a particular GPU to use for the instance of
the Engine you're launching. If that person
would, I don't know, ask it on AnswerHub
or something, make a post. It's in the-- If you look in the device
creation code in DX 12, device.cpp, you can see it
there if you look for it. >>Victor: "Support for
Volumetric Global Lights in Light Function Material?" >>Marcus: Yeah, that's a
request we get sometimes. The problem with those
kind of lights is that our fogging solution which is
mainly the froxel fog we have, is a fairly low
resolution type effect because it was mainly built for
environmental fog and stuff. And you can use it for
those kind of lights, but the resolution is really low
and if you crank that up so that it looks good,
which you can do, it raises your hardware bar because it gets pretty
expensive at that point. So there's nothing
extra planned right now for light functions
for those things. But we do plan on improving
our volumetric support overall, next year. >>Victor: Is froxel a fog voxel? >>Marcus: No, froxel
is a frustum voxel. Sorry, graphics programmer here.
But if you imagine your trapezoidal looking
camera frustum, like little boxes of
varying sizes that
get bigger in the back. Each one of those has
some information about the fogging system and the
smaller those boxes get, the better the resolution you
have and the more fine-grained of an effect you can
have with that system.
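A froxel grid is easy to picture in code: the view frustum is divided into screen-aligned tiles times depth slices, with the slices spaced so the boxes get bigger toward the far plane. A small indexing sketch, not the Engine's actual volumetric fog implementation:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative sketch only -- not the Engine's volumetric fog code.
// GridX x GridY screen tiles times GridZ depth slices; depth slices are
// distributed exponentially so froxels grow toward the far plane.
struct FroxelGridSketch
{
    int   GridX = 16, GridY = 8, GridZ = 64;
    float NearZ = 10.0f, FarZ = 10000.0f;

    // Map a screen-space UV (0..1) and a view-space depth to a froxel index.
    int FroxelIndex(float U, float V, float ViewDepth) const
    {
        const int X = std::min(GridX - 1, static_cast<int>(U * GridX));
        const int Y = std::min(GridY - 1, static_cast<int>(V * GridY));
        const float T = std::log(ViewDepth / NearZ) / std::log(FarZ / NearZ);
        const int Z = std::clamp(static_cast<int>(T * GridZ), 0, GridZ - 1);
        return (Z * GridY + Y) * GridX + X; // finer grids = finer-grained fog
    }
};
```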
>>Victor: I learned a new word today. >>Marcus: That's the existing
volumetric fog system that's in the Engine. That's
just how it's implemented. >>Victor: "Does the streaming
Textures get mipped to?" >>Marcus: Sorry,
say that again? >>Victor: "Does the streaming
Textures get mipped to?" I think they're asking if the
streaming Textures are mipped. >>Marcus: I'm assuming the
question is for virtual texturing and the answer is
yes, it has MIPS. I think they only go down to the tile size. 128 by 128. Don't 100 percent quote me on
that though. We will have a technical spec, but I'm pretty
sure the MIPS go down to the tile size of the
Virtual Texture. >>Victor: Another
question for you, Michael. "Is Chaos only meant for
destruction, or is it a full replacement of the
Physics Engine?" >>Michael: In its form in 4.23,
it's a destruction-only system. There is early support for general rigid body dynamics, but
there are a number of features that are still absent. We are working on
filling those features in and it will be more of a
full-fledged system in future releases. >>Victor: One step
at a time, right? A question you might be
able to answer, Marcus. "How are things progressing
with support for USD?" For those who don't know,
USD is a new file format. >>Marcus: I'm not keeping up
100 percent with the USD support but it is progressing.
We do have people actively working on it. Though I
couldn't personally tell you exactly what the status is.
It works. You can import USD things, there just
may be some features that don't quite work yet.
For example, I think you can put MaterialX
definitions into USD or MDL definitions or something
like that, I'm not sure that we import those yet. We might.
The tools team would know. If you get Matt K. here on a
livestream or something. >>Victor: Matt K.? I'll
write that down, see if I can tell him that Marcus told me
you should come talk about this. They're a little curious about Chaos and networking,
and I assume they mean network games. >>Michael: There isn't a special
solution for networking with Chaos right now. It pretty much
piggybacks off the existing network solutions. Things
like destruction at scale, if it's dynamic, there are
potential bandwidth problems that you will run into if
you're trying to do too much. If you're doing a limited
amount, that should work more smoothly. But
it is something that falls under the category of
things that are being worked on to come up with a more
efficient solution for large scale destruction when
there are bandwidth limitations. >>Victor: So it's pretty
much, you get to the same problem or limitation where we can't have the server
tell all the clients-- so the server runs
the simulation, then the server
would have to send a huge amount of data of where
each and every little simulated piece goes, right? >>Michael: Right. >>Victor: That would
still cause a fundamental bandwidth issue, right? That
there's just a lot of data that needs to be transferred. >>Marcus: You're probably better
off just running the simulation locally, right? >>Michael: Yes, that's correct. We have a number of ideas and
things that we're currently experimenting with, so I
would expect improvements in future releases, but
in its current state, it's pretty much
what you described. >>Victor: They're
also curious when Niagara will make
it out of beta. >>Marcus: We are trying
hard to make sure that before we take it out of beta
and say it's production ready, we've actually
battle tested it hard. The short answer
to the question is that either late this year
or early next year is when we expect it
to come out of beta. To do that we're kind of
dogfooding it internally on a very large project. >>Victor: If you're unfamiliar
with the term dogfooding, here's a little anecdote of
where the term comes from. I believe it was the CEO of a dog
food company who would actually eat the dog food. >>Marcus: Oh... okay. That's a grosser anecdote
than I thought it was. >>Victor: Sorry, I apologize.
As far as I know, that's what I've been told is
where the term comes from. That he was dogfooding. I'm not
sure if he had his employees eat it as well. I hope not. He was eating the dog
food to make sure that it was okay for his pets. >>Marcus: That said,
Niagara in its current state, in 4.23, we think is pretty stable and
pretty feature-rich. I encourage people
to try it out and send us any feedback,
because we're coming into the last stages of bringing
it out of beta. >>Victor: I saw a really cool
tweet from one of my friends, he was doing windshield wipers--
Just essentially, in the Particle System, he was
making it look like there were two windshield wipers brushing
all the rain away that was coming down on the windshield. That's essentially--
It's not only a better looking or more performant system; the
actual tools for making particles have also been greatly improved
over something like Cascade. >>Marcus: It's a huge
improvement over Cascade. We try very hard to make sure
things are rock solid before we take the beta tag
off and tell people, yes, this stuff is ready to go, ship your games with this. Often we do that by shipping
our own games with it.
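As a quick, hedged illustration of hooking Niagara up from game code (the asset and the "User.RainIntensity" parameter below are invented for the example), spawning a system at runtime looks roughly like this; the Niagara module has to be listed in the project's Build.cs dependencies:

    // Illustrative sketch: spawn a Niagara system (say, a rain effect) at an actor's location.
    #include "GameFramework/Actor.h"
    #include "NiagaraFunctionLibrary.h"
    #include "NiagaraComponent.h"
    #include "NiagaraSystem.h"

    void SpawnRainEffect(AActor* OwnerActor, UNiagaraSystem* RainSystem)
    {
        if (!OwnerActor || !RainSystem)
        {
            return;
        }

        // Spawns the system in the actor's world; by default the component
        // cleans itself up when the effect finishes.
        UNiagaraComponent* Effect = UNiagaraFunctionLibrary::SpawnSystemAtLocation(
            OwnerActor,
            RainSystem,
            OwnerActor->GetActorLocation(),
            OwnerActor->GetActorRotation());

        if (Effect)
        {
            // "User.RainIntensity" is a hypothetical user parameter exposed on the system.
            Effect->SetNiagaraVariableFloat(TEXT("User.RainIntensity"), 2.0f);
        }
    }

The actual behavior -- the wipers, the rain -- would still be authored in the Niagara editor itself; this only shows the code-side hookup.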
>>Victor: I'm so into this discussion, and so
happy to be able to ask you all these questions and get
amazing answers, that I don't really have time to read the incoming questions. There are a few
general questions about some of our other features. "Did we get RTX Landscapes or
improvements to GI for RTC?" >>Marcus: You did get support
for Landscape in ray tracing. There were some improvements
to ray traced GI. I wouldn't say
it's production ready or super performant yet, but
there were improvements made. I'm not familiar with RTC. I
don't think I know that acronym. >>Victor: "Plans for full
Vulkan support for PC?" >>Marcus: Yeah. We do ship Vulkan on Android,
for a few devices on Fortnite. And we're intending
to make it our primary RHI on Linux for Linux
desktop rendering. So that kind of
coincides with PC Vulkan. We haven't shipped something
with it, but it's in a pretty reasonable state. If you were someone
who wanted to ship on a single RHI for a Linux,
Windows, and mobile game, I think you could do it
on Vulkan today in 4.23. There are still some minor bugs
and a few minor features to do. But I think when we tear
the band-aid off and make it the default on our-- like, fully remove OpenGL 4 and just have Vulkan on Linux,
that'll be the day when we say, yeah, it's definitely
good to go for everyone, for all purposes. >>Victor: Would that help you
know what your game will look like on other devices when
you're iterating on PC as well? >>Marcus: Generally speaking,
we try to make sure that things look the same no
matter which RHI you're on. As long as you're
using the same-- one of our three renderers,
like, we have a deferred renderer, and the
VR forward renderer, and the mobile renderer. If
you're using the same renderer, things will look the same no
matter which RHI you're using. It's more about
your testing matrix. You're not testing DX12
and DX11 and Vulkan in all these places. Maybe
you're a company that wants to test just Vulkan, to
reduce your testing burden.
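A small practical note for anyone who wants to experiment with this: assuming Vulkan drivers are installed and the Vulkan shader formats are enabled for the project, you can usually force the Vulkan RHI on a desktop build just by passing a command-line switch, for example:

    UE4Editor.exe MyProject.uproject -vulkan

MyProject is a placeholder here; for a packaged game the Vulkan shader format also has to be included when you cook.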
>>Victor: They were wondering, "Is virtual texturing only
for terrains, or can it be used on models
and Characters?" It's for everything, right? >>Marcus: It's for everything.
It's not just for terrain. For Streaming Virtual Textures
that stream from disk,
plop into any Material you want that can sample a Virtual
Texture and you can apply it to anything you would
want to apply a Material to. And the system I
was talking about, the decal system on the
terrain using the runtime virtual texturing
generation caching system, is not available yet.
It will be soon. That's just an example of how
you might use that system. The system we have
is pretty generic. There's just an API
you can hook into to put data into a Virtual Texture. And it's up to you what
you want to use that for. The Landscape example is just
an easy example to talk about. >>Victor: Yeah, and probably
a place where the benefit of runtime virtual texturing
will be the greatest. "How is the progress coming
along with glTF support?" >>Marcus: glTF
support-- I'm not sure. That's maybe one for
the Datasmith team. >>Victor: I think there's
another question here that might be for Datasmith. "Is V-Ray being fully
integrated into 4.23?" >>Marcus: The V-RAY guys do
their own implementations. We don't have anything to
do with the development of their Plugin. So
I'm actually not sure. If there's any information, it
should be on the V-Ray website. >>Victor: Okay, we'll see. "What is the status of Light
Propagation Volumes?" >>Marcus: The status is
pretty much the status quo. We're not doing
active development or improvements to LPVs. So they're in the state they're
in. It works for some people. It can be a reasonable
outdoor GI solution, if you don't care too much
about the leaking artifacts and and you can afford
the performance. But we're not doing anything
further with them right now. >>Victor: Another
question for you, Michael. They're curious about, "Perhaps--" They're being nice-- "Perhaps a time frame or some
insight into vehicle physics changes, like wheel colliders." >>Michael: The vehicles
are something we don't currently cover with
Chaos, so the solution that we currently have is
the existing physics vehicle system. I'm not familiar with
their timelines, but we expect, over
time, to expand Chaos to also cover
vehicle-type systems, and we plan a pretty comprehensive set of controls that can be
used to do that, as well as other rigid body
dynamics with it. >>Victor: They
were curious about support for colored
translucent shadows with a dynamic light source. >>Marcus: Like in a... ray tracing context or... >>Victor: Maybe?
Not entirely sure. That wasn't part
of the question. >>Marcus: Yeah, I-- Well-- I often get that question in
the ray tracing context and assuming that's what
the question is about, the ray tracing team
is-- that's an active area of research. One of the
problems specifically with translucency and real-time
ray tracing is that it's kind of a performance hog. Because you've got to do ray intersection with
a translucent surface, you've got to let
the ray go through, you might have to
intersect multiple levels of translucency
depending on how the ray reflects
against the surface. It's just very complex and a lot
of rays could end up being thrown. That whole thing is kind
of an active area of, what can we do and
still keep it real-time? That may come in the
future, but right now I don't think you
can get good colored translucent
shadows. >>Victor: There are some
other things I can see there because today, if you're trying
to do something that's similar to real life, we don't
have single-pane windows anymore, right? They're multi-pane windows
that will technically reflect between each other. So there's quite a bit there to
actually get it looking real. Someone is asking-- I'm not
familiar with this acronym but maybe you are. "What about SVOGI for
real-time lighting?" >>Marcus: SVOGI, yeah, we had an implementation
of that a while ago. We don't really
think it's viable as a tech right now. SVOGI has some problems that are fundamental
to the technique, mainly light leaking, and the grid of voxelization
making certain Level layouts impractical or
causing extreme artifacting if things aren't laid
out a particular way. It's been researched here
before, but it's not something we're really pursuing, although if you're interested,
I believe NVIDIA has a fork of the Engine with one of their
implementations of that that you can try out. >>Victor: I think those
were all the questions that we will be able to ask
here. I'm just going to make sure that we-- if
there's anything else that we would like to touch on. Oh,
there's a new cool feature coming out for
Animation Blueprints, where you'll be able to
take an Animation Blueprint and essentially encapsulate
it within another Animation Blueprint. The biggest benefit I see is
that you'll be able to have several designers working
on and checking out separate files, essentially unblocking each other
when you're working in a source control situation. >>Marcus: That, and also
just a large, complex Animation Blueprint
will become-- you can encapsulate little bits
of it and make the high level logic flow a
little more understandable, once you've got something
sufficiently complex. >>Victor: Right, you can say
something like the upper body, right? And divide it from the
lower body, and just know, if something
breaks, you can also divvy it up a little bit. >>Marcus: Even as an
organizational technique, it'll be a useful feature. >>Victor: We also have
some new sound features. Although, I'm going to try to
get maybe Aaron or Dan on here and they will talk a little bit
more about those, so stay tuned. Then I believe we have
this new tool called CSV to SVG. >>Marcus: Right.
CSV to SVG tool. I don't know if we can
bring up a little thing, but there is a feature
in the Engine called CSV Stats, meaning
comma-separated values. This is a very
lightweight stat system. It dumps key-value
pairs of named stats out to a file
very, very quickly, with very low runtime overhead. Then this tool,
which is CSV to SVG-- SVG is a little
vector graphics format. It basically just makes a
chart for you, over time. So you can do something like-- We used this a lot
on the Troll demo when we were doing
performance runs every day. We would play the cinematic, dump out all the stats
that we cared about, like, GPU time, CPU time, maybe
number of draw calls or whatever it is that
you want to chart, and then you run this tool and
it makes a nice little graph over time for you,
and you can just see where you're
spiking and on which-- on the timeline, where
you're having a problem. You can use it to
track and chart memory. It's just a nice visualization
tool for your performance data.
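For anyone wanting to try the workflow described above, here is a rough sketch of the code side. The category and stat names are made up for the example; the macros come from the Engine's CSV profiler header, and a capture can be started and stopped with the CsvProfile Start and CsvProfile Stop console commands. The resulting .csv is what you would then feed to the CSV-to-SVG tool.

    // Illustrative sketch: emitting custom CSV stats from game code.
    #include "ProfilingDebugging/CsvProfiler.h"

    // Declare a CSV category for our stats (second argument: enabled by default).
    CSV_DEFINE_CATEGORY(MyGameStats, true);

    void UpdateCrowd(float DeltaTime, int32 NumActiveAgents)
    {
        // Times this scope and writes it out under the MyGameStats category.
        CSV_SCOPED_TIMING_STAT(MyGameStats, UpdateCrowd);

        // Record a custom value for this frame (Set overwrites, Accumulate sums).
        CSV_CUSTOM_STAT(MyGameStats, ActiveAgents, NumActiveAgents, ECsvCustomStatOp::Set);

        // ... actual crowd update work would go here ...
    }

Each stat shows up as a column in the dumped .csv, which the CSV-to-SVG tool then turns into a per-frame chart.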
>>Victor: An easier way to grasp what's going on. >>Marcus: Yeah. We use it all
the time on our cinematics and tracking
performance on Fortnite. >>Victor: I think we might
be running up on the end of the stream here. We've
been touching on quite a bit-- Is there anything else you
would like to mention, Michael? Maybe some thought hit you
while we were sitting here? >>Michael: I pretty
much covered everything. We're very excited to see what
people are able to do with the existing system but, again as a
caveat, we don't expect it to be used
in a production setting at this point, but we hope
people can play with it and give us some feedback
as we continue to build it. >>Victor: Yeah, I've seen a
couple of cubes breaking already on Twitter and the forums. Hopefully there will
be some more grand destruction coming through
our channels in a bit. I'm definitely excited for it. As always, there's a
whole slew of fixes, improvements, and various other aspects of the Engine that
we've been working on. Another thing we can
point out real quick, which I am planning a livestream
for, because I know I might be getting that question:
the multi-user editing improvements that are
coming to the Engine. This is in lieu-- or sorry,
that's the wrong term-- It's alongside the
VR Scouting tools that are coming as part of our
virtual production improvements. I think we touched on it
briefly but, directors and people who are responsible
for the virtual production set, they can, together,
in real-time, enter their virtual
world, move around, look around, and
figure out where they should have the set. They
have a virtual camera-- This is all in VR-- A virtual camera that they can
move around in VR, and sort of have a very
collaborative experience that's much easier to explain, because you can
literally point at things in the virtual world and share
that experience together. I'm definitely hoping that
I'll be able to get the subject matter expert on
that on the stream here and maybe we'll be able to
set up a way that we can display that for you. That would
be a little bit of a hardware-- a bit of a different hardware
setup but maybe we can get a couple of Vives in here
and do a livestream in VR. It might be possible,
we will see. But we're at least going to show
you how you can use the tool and all the benefits
that come with it. Then, the last point, I think, of what we're
shipping in 4.23 is Disaster Recovery, which I'm actually not
entirely sure what it's about, so maybe I will just leave that. >>Marcus: The idea here is that the way the multi-user
system works is that it's basically a little server
of the Engine that's running and it's doing transactional recording
of what's going on, because if somebody moves
a thing, it has to record that somebody moved a thing and
then send that out to everyone else. So since you already have
this separate process that's recording everything
that's going on in the Editor, if the Editor crashes, now you have a record
of everything you did since the last
time that you saved and then maybe use that to
recreate your data and recover from the crash. I'm not an expert on exactly how feature complete
that is, as far as what kinds of things
it can recover from, but that's the basic idea.
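To make the transaction-log idea concrete, here is a tiny, generic toy sketch of crash recovery by journaling. It is in no way the actual Multi-User or Disaster Recovery implementation, just the pattern being described: append each edit to a durable log as it happens, and replay the log on the next startup if the session did not end cleanly.

    // Toy journal: not Unreal code, just the general idea of transactional recording.
    #include <fstream>
    #include <functional>
    #include <string>

    class EditJournal
    {
    public:
        explicit EditJournal(const std::string& Path)
            : LogFile(Path, std::ios::app)   // append so earlier records survive
        {
        }

        // Record one edit ("transaction") and flush immediately so it survives a crash.
        void Record(const std::string& SerializedEdit)
        {
            LogFile << SerializedEdit << '\n';
            LogFile.flush();
        }

        // After a crash, replay every recorded edit since the last clean save.
        static void Replay(const std::string& Path,
                           const std::function<void(const std::string&)>& ApplyEdit)
        {
            std::ifstream In(Path);
            std::string Line;
            while (std::getline(In, Line))
            {
                ApplyEdit(Line);
            }
        }

    private:
        std::ofstream LogFile;
    };

A clean save would normally truncate or archive the journal; the real feature presumably does something far more structured on top of the Multi-User transaction stream.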
>>Victor: I believe it's experimental. >>Marcus: It is experimental. >>Victor: Yeah,
the very first version of the feature. That's really neat though. I don't know if that was
planned or if it just came out-- >>Marcus: I think it was
like a happy accident. Someone realized, oh, we have
this server that's recording everything, why don't we use
it to bring your work back, if you happen to crash? >>Victor: It's writing to
disk at all times, right? >>Marcus: I'm not sure. I'm not
sure how much it caches to disk versus how much
it caches in RAM. I've never worked on
it, so I don't know exactly how it operates. >>Victor: Yeah, we'll definitely
be providing you a lot more information on all these
features in our release notes,
which will be out soon. We don't have an exact date yet
for when 4.23 will be out, but it is very soon. As of right now, you can
check out Preview 6. That is available on the
Launcher as well as GitHub. And remember, if you
want to try out our new Chaos Physics Engine, you
should head over to GitHub. I wrote up a little section
on the forums about how you can get started, because there
is a line you need to change in the Editor target file.
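For convenience, and going from memory of that forum write-up rather than anything authoritative, the edit is made in the UE4Editor target file of a GitHub source build, and the added lines look roughly like this (double-check the exact property names against the forum post for your engine version):

    // UE4Editor.Target.cs -- enabling Chaos in a 4.23 source build (illustrative)
    BuildEnvironment = TargetBuildEnvironment.Unique;
    bCompileChaos = true;
    bUseChaos = true;

After changing the target file, you rebuild the Editor for the settings to take effect.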
With that said, I think I want to thank Marcus for coming on, taking some time out of his
day, as well as Michael, who is in Chicago, if anyone was
curious where he is calling in from. A beautiful, very large city
that I've had the fortune of visiting several times.
I'm sure Michael likes it over there, and that's
why he spends his time there. With that said, as always,
if you do what we do, which is streaming on Twitch,
make sure you add the category of Unreal Engine,
so we can tune in. I was watching a couple of the
people who were streaming the game jam, and it's always
exciting to go in and see what they're working on. If you happen to have a Meetup
group, also referred to as a User Group, in your city, go to
unrealengine.com/usergroups. I really should
get that link down. I'm sure it's in
my notes somewhere. I don't visit it a lot
because it's in my favorites, so I just click a button. But you should go check them
out. It's always exciting to share your projects with the
developers that are local to you. If you don't happen
to have a Meetup, you can contact us at
community@unrealengine.com, and if you're excited
or interested in hosting and organizing one of these, let
us know and we will get you a bit of information about what
it entails and how you can possibly
put all that together. We will make sure
to help you out. Other than that, make sure you
subscribe on our channels and sign up for all the
news that's coming out, which will, in just a bit,
be some of the blog posts coming out around the 4.23 release. And all the other cool tech
blogs and various things that we are releasing
through our channels. I think, with that said-- Anything else we should
mention before we end? Marcus, do you have anything? >>Marcus: I think we
covered a lot of ground. >>Victor: I think so too.
Michael is saying goodbye from Chicago, and we will
see you all next week. Bye everyone!
>>Marcus: Thanks everyone!