Epic Games just unveiled a slew of
advancements for Unreal Engine 5.4 and beyond in its upcoming roadmap for 2024. In this video, we'll delve into eight AI-powered feature updates that are set to transform the virtual world as we know it, including support for Nanite and Lumen, a next-generation terrain solution, procedural content generation updates, a new Unreal cloud solution, and much more. So here's the list of the most exciting Unreal Engine features coming in 2024.

One of the standout features coming to Unreal Engine is Nanite dynamic displacement. This feature allows Nanite meshes to be modified at runtime using a displacement map or procedural material, which means that users will be able to create material-driven and animated displacement effects, as well as incredibly detailed landscapes. With the growing use of Nanite at runtime, dynamic displacement is a game changer for creating dynamic and detailed environments.
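To make this more concrete, here's a minimal C++ sketch of a material-driven displacement effect being animated at runtime, using Unreal's existing dynamic material instance API. The "DisplacementStrength" parameter and the displacement-enabled material are assumptions for illustration only, not the confirmed 5.4 workflow.

```cpp
// Hypothetical sketch: animating a material-driven displacement effect at runtime.
// Assumes the material exposes a scalar parameter named "DisplacementStrength".
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Materials/MaterialInterface.h"

void ApplyAnimatedDisplacement(UStaticMeshComponent* MeshComp,
                               UMaterialInterface* DisplacementMaterial,
                               float TimeSeconds)
{
    if (!MeshComp || !DisplacementMaterial)
    {
        return;
    }

    // Create (or reuse) a dynamic instance so the parameter can change every frame.
    UMaterialInstanceDynamic* MID =
        MeshComp->CreateDynamicMaterialInstance(0, DisplacementMaterial);
    if (!MID)
    {
        return;
    }

    // Drive the hypothetical displacement strength over time, e.g. from Tick().
    const float Strength = 0.5f + 0.5f * FMath::Sin(TimeSeconds);
    MID->SetScalarParameterValue(TEXT("DisplacementStrength"), Strength);
}
```

Called every frame with the running game time, this would produce a simple pulsing displacement, the kind of material-driven, animated effect the roadmap describes.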
Number two, another exciting update is the support for Nanite spline meshes, currently an experimental feature in Unreal Engine 5.3. This update will bring performance improvements, optimizations, and fixes to allow for better usage of Nanite spline meshes, enhancing the rendering capabilities of the engine and making it easier than ever to create stunning visuals.
Number three, on the rendering side, Unreal Engine is introducing improvements such as large world coordinates on GPUs. This update increases the maximum world size from roughly 21 km to over 88 million km, allowing for the creation of massive and detailed worlds by using a refined tiled representation. It also improves precision and eliminates jittering near tile boundaries, resulting in a more realistic and seamless experience for users.
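To see why a tiled representation matters, here's a small standalone sketch (plain C++, not Unreal code) showing how single-precision floats lose resolution far from the origin, while a coarse tile index plus a small local offset keeps precision high everywhere. The tile layout and numbers are illustrative assumptions, not Unreal's internals.

```cpp
// Standalone illustration of the idea behind a tiled/offset coordinate scheme:
// floats lose resolution far from the origin, so store a coarse tile index plus
// a small local offset and the local precision stays tiny everywhere.
#include <cmath>
#include <cstdint>
#include <cstdio>

struct TiledPosition
{
    std::int64_t TileIndex;   // coarse tile index along one axis
    float        LocalOffset; // fine offset within the tile, in centimetres
};

int main()
{
    // Gap between adjacent float values at a coordinate ~21 km from the origin
    // (2,097,152 cm): about 0.25 cm, enough to cause visible jitter.
    const float FarCoordinate = 2097152.0f;
    std::printf("float step ~21 km out:    %f cm\n",
                std::nextafterf(FarCoordinate, 1.0e9f) - FarCoordinate);

    // The same world position expressed as tile + local offset: the offset stays
    // small, so the float step stays tiny no matter how far out the tile is.
    const TiledPosition Pos{2, 100.0f}; // two tiles out, one metre into the tile
    std::printf("float step inside a tile: %.9f cm\n",
                std::nextafterf(Pos.LocalOffset, 1.0e9f) - Pos.LocalOffset);
    return 0;
}
```

The takeaway is that keeping per-object coordinates small is what removes the jitter that used to appear near the old world edge.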
Number four, Unreal Engine is also striving to improve the performance and capabilities of its rendering features, with updates to Lumen hardware ray tracing. The goal is to achieve four milliseconds per frame in typical scenes, matching the performance of Lumen software ray tracing running at 60 frames per second; for context, a 60 fps frame has a total budget of roughly 16.7 ms, so a 4 ms lighting cost leaves room for everything else. This will allow for high-quality, real-time ray tracing on next-gen consoles, further enhancing the visual fidelity of Unreal Engine projects.
Number five, in addition to rendering improvements, Unreal Engine is also focusing on enhancing its procedural content generation system, with updates such as runtime hierarchical generation, GPU generation, and attribute set arrays. Procedural content generation will give developers more flexibility and control over generating dynamic environments. Additionally, these updates will enable the creation of richer and larger procedural worlds, improving iteration workflows and allowing for more complex logic and asset generation.
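For a rough sense of what generating content at runtime looks like, here's a minimal C++ sketch that scatters mesh instances procedurally using an instanced static mesh component. This is not the PCG framework's own API (the roadmap doesn't detail it); the mesh component, area size, and seed are assumptions for the example.

```cpp
// Minimal runtime procedural scattering sketch using instanced static meshes.
// NOT the PCG framework API; it only illustrates runtime content generation.
#include "Components/InstancedStaticMeshComponent.h"
#include "Math/RandomStream.h"

void ScatterMeshesAtRuntime(UInstancedStaticMeshComponent* InstancedMeshes,
                            int32 Count, float AreaSizeCm, int32 RandomSeed)
{
    if (!InstancedMeshes)
    {
        return;
    }

    // A seeded stream keeps the generated layout reproducible between runs.
    FRandomStream Stream(RandomSeed);

    for (int32 Index = 0; Index < Count; ++Index)
    {
        // Pick a random point in a square area and a random yaw rotation.
        const FVector Location(Stream.FRandRange(-AreaSizeCm, AreaSizeCm),
                               Stream.FRandRange(-AreaSizeCm, AreaSizeCm),
                               0.0f);
        const FRotator Rotation(0.0f, Stream.FRandRange(0.0f, 360.0f), 0.0f);

        InstancedMeshes->AddInstance(FTransform(Rotation, Location));
    }
}
```

The roadmap's runtime hierarchical and GPU generation features are aimed at making exactly this kind of workflow scale far beyond a simple loop like this.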
Number six, Unreal Engine is also addressing the performance bottlenecks that developers face, with updates to shader cook times, renderer parallelization, and cooking optimizations. The engine is aiming to reduce cooking times and improve overall performance. These updates will benefit developers working on large projects and save them time and effort during the development process.
Number seven, character and animation tools are also receiving significant updates in Unreal Engine. Features such as a modular rigging framework, an improved Skeletal Editor, and enhanced animation gizmos are aimed at providing better control and ease of use for character and animation workflows. These updates will enhance the animation pipeline, making it more efficient and intuitive for creators.

Number eight, Unreal Engine is not neglecting
audio and user interface improvements either. The engine is introducing a new Audio
Insights debugger and profiler, allowing creators to optimize and
debug audio more effectively. The user interface is also
getting a makeover, with improvements to the Content Browser,
viewport toolbar, and a quick Unreal Motion Graphics UI Designer Preview
feature, enabling faster iteration and easier creation of user
interfaces. And these are just a few of the many
exciting new features coming to Unreal Engine in 2024. With its ambitious roadmap, Epic Games is
pushing the boundaries of what is possible with artificial intelligence,
meaning that whether you're a game developer, a filmmaker, or an AI enthusiast, the future
of Unreal Engine will continue to empower you to bring your
visions to life.

Meanwhile, another slew of AI breakthroughs
took place recently as Adobe unveiled a gamut of new features
across its suite at the MAX conference. But what caught everyone's eye was the company's introduction of generative AI for video, which it unveiled as Firefly Video. But what is Firefly Video? Generative AI is no stranger to Adobe, starting with Firefly for images, a precursor to Firefly Video, which enabled users to generate or modify visuals by typing a simple text prompt. But transitioning this tech into the realm of video was the next obvious step, albeit a challenging one. This is because with videos, it's not just about creating a solitary frame, but weaving together at least 24 frames per second. Despite these complexities, Adobe's audacious attempt has proved promising: when prompted with "a blue ocean surrounded by rocks," Firefly Video conjured video clips mirroring
the dynamism of ocean waves colliding with rocks. Although the initial rendition showcased a
lower resolution, it's important to remember that Rome wasn't built
in a day. Thus, Firefly Video is still in its infancy
and will become more robust very quickly.

But that's just the beginning, because Adobe also offered a tantalizing glimpse into the future during its Adobe Sneaks session. In a groundbreaking demonstration, generative AI was employed on a video clip featuring a woman amidst a scenic backdrop. But the real surprise was that Adobe's AI was able to effortlessly remove passing individuals from the entire video. While some might argue that similar features like Content-Aware Fill already exist, this new Fast Fill feature is undeniably a leap forward from Adobe's previous tech. On top of all of this, Adobe showcased the
application of this new tech on footage of a model walking in fluctuating
lighting conditions. Within moments, Adobe's generative AI was
able to seamlessly adorn the model with a tie that moved and adjusted to the light, exemplifying impeccable integration and realism.

Furthermore, another phenomenal stride in
Firefly Video's capabilities is its ability to animate still images. In a fascinating demo, a static image of an elephant was transformed into a boomerang-esque moving visual. This feature's practical applications are
vast for filmmakers. Just imagine being able to effortlessly
convert every still image in a documentary into a moving visual, elevating
viewer engagement substantially. But Adobe's repertoire of
video enhancements doesn't stop with Firefly. Project Res Up is the company's experimental initiative that uses AI to upscale videos, and demonstrations depicted substantial improvement in upscaling low-res footage to HD, making it a promising tool for filmmakers in the era of HD and 4K. Still, it's important to note that film isn't just about visuals; audio plays an equally pivotal role. Adobe's Project Dub Dub Dub is set to be a
game changer. This AI-powered feature can dub videos into other
languages while preserving the original speaker's voice, tonality, and
texture. A boon for global content creators, this
tool could potentially obviate the need for multiple channels in different
languages.

With the advent of Firefly Video and its
associated features, Adobe stands at the cusp of an AI revolution in
video production. And as these tools evolve and integrate
further into Adobe's software suite, filmmakers and content creators worldwide
stand to gain immensely, unlocking a world of possibilities that were previously far too expensive to be attainable for the masses.