I can’t believe that this is happening.
And this. And this. And this too. Yes, games and virtual worlds are about to get a
heck of a lot better in terms of visual quality. So what just happened? First, this paper took
NeRFs to the next level. What are NeRFs? Well, NeRFs are Neural Radiance Fields, a tool for us to take photos or video footage of something in the real world, and then make a copy of it as a virtual world that we can play in.
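If you are curious what the core of that looks like in code, here is a minimal sketch of the volume rendering step at the heart of a NeRF: marching along a camera ray, querying a learned field for density and color, and compositing them. The `field` function here is a hypothetical placeholder, not a trained network.

```python
# Minimal sketch of NeRF-style volume rendering along one camera ray.
# `field` is a hypothetical stand-in for the trained neural network.
import numpy as np

def field(points):
    # Placeholder radiance field: a reddish fog ball around the origin.
    # A real NeRF would evaluate its trained network here.
    density = np.maximum(0.0, 1.0 - np.linalg.norm(points, axis=-1))
    color = np.broadcast_to([1.0, 0.3, 0.2], points.shape)
    return density, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    # Sample points along the ray and query density and color there.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = field(points)

    # Standard volume rendering: turn density into alpha, then composite
    # front to back, weighted by the accumulated transmittance.
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha
    return (weights[:, None] * color).sum(axis=0)

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # the RGB color this ray contributes to the image
```

Training the field so that these rendered rays match the input photos is the expensive part, and that is exactly where the speed race begins.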
But through an amazing paper called Instant Neural Graphics Primitives, we can do this in a matter of minutes, often even seconds. So what’s new here? Why
write another paper on it? Are we done here? Dear Fellow Scholars, this is Two Minute
Papers with Dr. Károly Zsolnai-Fehér. The answer is nope. No sir, we are
not even close to being done. You see, the speed is satisfactory with Instant NGP, but the quality, hmm… not so much. Look. And here is the new
technique. Wow, look at the hair. The hair is now not a blurry blob, but it is a
crisp piece of geometry. Loving it. So, this new technique finally gives us higher quality
virtual worlds. Worlds that are worth playing in. So, high quality. What does that mean?
Being slower, that’s what it means. Well, most of the time, yes, but does it also mean
this is slow too? Let’s have a look. And…oh my goodness. Are you seeing what I am seeing? This is
easily twice as fast as Instant NGP, and at the same time it is also of much
higher quality. And it gets better. Now hold on to your papers Fellow Scholars, and
look. In some cases, it is up to 20x faster. Wow. That is absolutely incredible. This kind of
improvement just one more paper down the line, I can hardly believe it. Yes, Fellow Scholars,
this is the corner of the internet where we look at research papers, black ink on white paper, and flip out together. And, did you notice it? Have you found it? Look
again. These scenes have tons of hairy things, and fabric, and even more hair, and then
even more hair. Wait a minute. That, dear Fellow Scholars, is impossible.
NeRFs are not good at modeling thin structures at all, right? But this one
is excellent. How is that even possible? Well, here is where shells come into the picture. Let’s pop the hood, and, oh yes, first, this technique works very much like a NeRF-based technique would, but then, here is where the magic happens.
From a set of colorful points in 3D space, it constructs a shell. And from that shell, we get
a triangle mesh. Hoo boy - are you thinking what I am thinking? Oh yes, using triangle meshes means
that we can use our amazing computer graphics and physics simulation algorithms to deform it. Yes,
so from now on, from your cameras, we get not only a virtual world that we can play in, but
we can even run our physics simulations in it. I absolutely love this idea. Yummy!
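To make that shell-to-mesh step a bit more concrete, here is a minimal sketch of one common way to pull a triangle mesh out of a volumetric field with marching cubes. The `density_fn` here is an assumed placeholder, not the paper’s actual field, and the paper’s shell construction is more sophisticated than a single level set.

```python
# Minimal sketch of extracting a triangle mesh from a density field.
# `density_fn` is a hypothetical placeholder for a trained NeRF field.
import numpy as np
from skimage import measure  # pip install scikit-image

def density_fn(points):
    # Placeholder density: a sphere of radius 0.5 around the origin.
    # A real pipeline would query the trained radiance field instead.
    return 10.0 * (0.5 - np.linalg.norm(points, axis=-1))

# Sample the density on a regular 3D grid around the scene.
n = 64
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sigma = density_fn(grid.reshape(-1, 3)).reshape(n, n, n)

# Marching cubes turns the chosen level set of the field into a mesh:
# every place the density crosses the level becomes surface geometry.
verts, faces, normals, _ = measure.marching_cubes(sigma, level=0.0)

print(f"{len(verts)} vertices, {len(faces)} triangles")
# `verts` and `faces` can now be handed to standard mesh deformers or
# physics engines, which is exactly why this representation is so useful.
```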
But wait, this is AI research we are talking about, and oh my… what is this? Can it be that? Yes!
A separate, specialized paper for physics simulations within NeRF-based worlds is already
available. And this one is a real treat. We can simulate elastodynamic motion within these worlds, and all this happens in real time. And we are still not done yet.
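For a taste of what “elastodynamic motion” means, here is a toy sketch: a single mesh vertex on a spring, stepped with semi-implicit Euler. This is a hand-rolled illustration of the general idea, not the paper’s actual real-time solver, and all the constants are made up.

```python
# Toy flavor of elastodynamics: semi-implicit Euler on a single spring.
# A hand-rolled illustration, not the paper's solver; constants invented.
import numpy as np

x = np.array([1.2, 0.0, 0.0])     # current position of a mesh vertex
v = np.zeros(3)                   # its velocity
rest = np.array([1.0, 0.0, 0.0])  # rest position it is attached to
k, damping, mass, dt = 50.0, 0.98, 1.0, 1.0 / 60.0  # 60 steps per second

for step in range(240):           # four seconds of simulated motion
    force = -k * (x - rest)       # Hookean elastic force toward rest shape
    v = damping * (v + dt * force / mass)  # semi-implicit Euler velocity
    x = x + dt * v                # positions follow the new velocity

print(x)  # the vertex has oscillated and settled near its rest position
```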
Here is another really cool innovation for improving neural radiance fields.
If these are virtual worlds, that means that we likely wish to move around
in them. When we get close to these objects, the quality is usually not quite there. However, with this new technique, we can get closer and just zoom and zoom and zoom, and the quality is still there. Now, there is also a new, different school
of thought for creating virtual worlds, and that happens through Gaussian Splatting. This
is a technique where the scene is cheaply stored as a collection of these little lumps that we call Gaussians, but that’s not what you see on the screen. What you see is a reconstruction of these lumps, which looks like this. And this representation also lends itself to physics simulations.
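To see what one of these lumps is, here is a minimal sketch that splats a single anisotropic 2D Gaussian into an image, assuming we already have its screen-space parameters. The `splat_gaussian` function is an illustrative toy, not the actual 3D Gaussian Splatting renderer, which projects and depth-sorts millions of 3D Gaussians.

```python
# Minimal sketch of one "lump": an anisotropic 2D Gaussian alpha-blended
# into an image. Illustrative toy, not the real 3DGS renderer.
import numpy as np

def splat_gaussian(image, mean, cov, color, opacity):
    """Alpha-blend a single 2D Gaussian into `image` (H x W x 3)."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.stack([xs - mean[0], ys - mean[1]], axis=-1)  # offsets to center
    cov_inv = np.linalg.inv(cov)
    # Gaussian falloff: exp(-0.5 * d^T * Sigma^-1 * d) at every pixel.
    exponent = -0.5 * np.einsum("...i,ij,...j->...", d, cov_inv, d)
    alpha = opacity * np.exp(exponent)[..., None]
    return alpha * color + (1.0 - alpha) * image

# One cheap-to-store lump: a center, a 2x2 covariance, a color, an opacity.
img = np.zeros((128, 128, 3))
img = splat_gaussian(img, mean=(64, 64),
                     cov=np.array([[400.0, 150.0], [150.0, 100.0]]),
                     color=np.array([1.0, 0.5, 0.2]), opacity=0.8)
```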
This one is going to be an absolute treat: we can spread these questionable materials on a piece of virtual bread, and, my favorite, even tear it. Oh yeah. OG Fellow Scholars, remember this legendary paper? You might
soon be able to have this kind of quality with any object in a virtual world that you create by
recording your own home or anywhere on the planet. Wow. This kind of work used to require simulation papers that took a long time to compute. Up to about one hour for every second of footage that you see
here for the baking paper, and Fellow Scholars, about four hours for every second of footage here
for the tearing paper. Yes, these are measured not in frames per second, but in seconds per frame.
And we haven’t even talked about all the integral calculus and human ingenuity that is required
for this. Yes, these were handcrafted techniques. No AI anywhere to be seen, just the ingenuity of
computer graphics researchers. And in the future, they might run in our virtual worlds and video
games in real time. I would absolutely love that. I wanted to make sure here that we honor
the great works out there and compare to them. So, these four papers truly are a dream come true
for virtual worlds. And as good Fellow Scholars, we should always invoke The First Law of
Papers. The First Law of Papers says that research is a process. Do not look at where we
are, look at where we will be two more papers down the line. And just imagine what we will
be capable of two more papers down the line. My goodness. If you have some ideas, let me know
in the comments below. What a time to be alive!