Dear Fellow Scholars, this is Two Minute Papers
with Dr. Károly Zsolnai-Fehér. If you have been watching this series for
a while, you know very well that I love learning algorithms and fluid simulations. But do you know what I like even better? Learning algorithms applied to fluid simulations,
so I couldn’t be happier with today’s paper. We can create wondrous fluid simulations like
the ones you see here by studying the laws of fluid motion from physics and writing a
computer program that contains these laws. As you see, the amount of detail we can simulate
with these programs is nothing short of amazing.
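To give you a rough idea of what such a hand-written simulator looks like, here is a toy sketch in Python. Everything in it is my own minimal construction, not code from any paper: it only pushes a few particles around with gravity and a simple floor, which is a far cry from a real fluid solver with pressure and viscosity, but the overall structure, a state plus a physics-based update rule, is the same.

```python
import numpy as np

# Toy hand-written "simulator": particles under gravity with a bouncy floor.
# A real fluid solver would also handle pressure, viscosity and incompressibility,
# but the structure is the same: a state, plus an update rule taken from physics.

GRAVITY = np.array([0.0, -9.81])  # acceleration in m/s^2
DT = 0.01                         # time step in seconds

def step(positions, velocities):
    """Advance all particles by one time step with explicit Euler integration."""
    velocities = velocities + DT * GRAVITY   # apply gravity
    positions = positions + DT * velocities  # move the particles

    # Simple floor at y = 0: clamp the height and bounce with some energy loss.
    below = positions[:, 1] < 0.0
    positions[below, 1] = 0.0
    velocities[below, 1] *= -0.5
    return positions, velocities

# Drop 100 particles from random heights and simulate one second of motion.
rng = np.random.default_rng(0)
pos = rng.uniform(low=[0.0, 1.0], high=[1.0, 2.0], size=(100, 2))
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = step(pos, vel)
```

A real solver replaces those two physics lines with the full machinery of the Navier-Stokes equations, which is exactly where all that detail comes from, and also where all the computation time goes.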
However, I just mentioned learning algorithms. If we can write a simulator that runs the
laws of physics to create these simulations, why would we need learning-based algorithms? The answer is in a paper that we discussed
about 300 episodes ago. The goal was to show a neural network video
footage of lots and lots of fluid and smoke simulations, and have it learn how the dynamics
work, to the point that it can continue and guess how the behavior of a smoke puff would
change over time. We stop the video, and it would learn how to continue it, if you will.
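As a rough illustration of that idea, here is a minimal sketch in Python. It is not the architecture from that earlier paper; the placeholder random data, the state size, and the small network are all made up just to show the general recipe: train a network to map the current simulation state to the next one, then let it roll a simulation forward on its own.

```python
import torch
import torch.nn as nn

# Minimal "learn to continue the simulation" sketch, not the paper's actual model.
# Random placeholder data stands in for states exported from a classical simulator.

STATE_DIM = 64                          # e.g. a flattened, downsampled smoke field
steps = torch.randn(1000, STATE_DIM)    # fake trajectory of 1000 consecutive states
current, nxt = steps[:-1], steps[1:]    # training pairs (state_t, state_t+1)

model = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, STATE_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: done once, and admittedly long and arduous for real data.
for epoch in range(200):
    prediction = model(current)
    loss = loss_fn(prediction, nxt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: "stop the video" at some state and let the network continue it
# by feeding its own predictions back in, one quick forward pass per step.
state = steps[:1]
with torch.no_grad():
    for _ in range(100):
        state = model(state)
```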
This is definitely an interesting take, because normally we use neural networks to solve problems that are otherwise close to impossible to tackle. For instance, it is very hard, if not impossible,
to create a handcrafted algorithm that detects cats reliably because we cannot really write
down the mathematical description of a cat. However, these days, we can easily teach a
neural network to do that. But this task is fundamentally different. Here, the neural networks are applied to solve
something that we already know how to solve. What’s more, if we use a neural network to perform this task, we first have to train it, which is a long and arduous process. I hope to have convinced you that this is
a bad, bad idea. Why would anyone bother to do that? Does this make any sense? Well, it does make a lot of sense! And the reason for that is that this training
step only has to be done once, and afterwards, querying the neural network, that is, predicting
what happens next in the simulation, runs almost immediately. This takes way less time than calculating all the forces and pressures in the simulation, while still retaining high-quality results. So, we suddenly went from thinking that this idea is useless to finding it amazing. What are the weaknesses of the approach? Generalization. You see, these techniques, including a newer
variant that you see here, can give us detailed simulations in real time or close to real time, but if we present them with something far outside of the cases they have seen in the training domain, they will fail. This does not happen with our handcrafted techniques, only with the AI-based methods. So, onwards to this new technique, and you
will see in just a moment that a key differentiator here is that its generalization capabilities
are just astounding. Look here. The predicted results match the true simulation
quite well. Let’s look at it in slow motion too so we
can evaluate it a little better. Looking great. But we were talking about superior generalization,
so what about that? Well, it can also handle sand and goop simulations,
so that’s a great step beyond just water and smoke. And now, have a look at this one. This is a scene with the boxes it has been
trained on. And now, let’s ask it to try to simulate
the evolution of significantly different shapes. Wow. It not only does well with these previously
unseen shapes, but it also handles their interactions really well. But there is more! We can also train it on a tiny domain with
only a few particles, and then, it is able to learn general concepts that we can reuse
to simulate a much bigger domain, and also, with more particles. Fantastic! But there is even more! We can train it by showing it how water behaves on these water ramps. Then, let’s remove the ramps and see if it understands what it has to do with all these particles. Yes, it does! Now, let’s give it something more difficult. I want more ramps! Yes! And now, even more ramps! Yes! I love it! Let’s see if it can do it with sand too. Here is the ramp for training, and let’s
try an hourglass now. Absolute witchcraft. And we are even being paid to do this. I can hardly believe it! The reason you see so many particles in many of these views is that, if we look under the hood, the paper proposes a really cool graph-based method: the particles are the nodes of a graph, and they can pass messages to each other over the connections between them. This leads to a simple, general, and accurate model that truly is a force to be reckoned with.
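To make that a little more concrete, here is a heavily simplified sketch in Python of one message-passing step over such a particle graph. The connectivity radius, the placeholder weight matrices, and the tanh nonlinearities are all my own stand-ins: in the actual method the message and update functions are learned neural networks, and this sketch only shows the data flow of one round of messages.

```python
import numpy as np

# One simplified message-passing step over a particle graph.
# The "learned" functions are placeholder linear maps here; in the real model
# they are neural networks trained end to end.

RADIUS = 0.1
rng = np.random.default_rng(0)

positions = rng.uniform(size=(200, 2))   # 200 particles in 2D
features = rng.normal(size=(200, 8))     # per-particle state (e.g. recent velocities)

# 1) Build the graph: connect every pair of particles closer than RADIUS.
diff = positions[:, None, :] - positions[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
senders, receivers = np.nonzero((dist < RADIUS) & (dist > 0.0))

# Placeholder "learned" weights (message and node-update functions).
W_msg = rng.normal(size=(8 + 2, 8))      # input: sender state + relative position
W_upd = rng.normal(size=(8 + 8, 8))      # input: old state + aggregated messages

# 2) Compute one message per edge, from the sender's state and relative position.
edge_inputs = np.concatenate([features[senders], diff[senders, receivers]], axis=-1)
messages = np.tanh(edge_inputs @ W_msg)

# 3) Each particle sums up the messages arriving from its neighbours.
aggregated = np.zeros_like(features)
np.add.at(aggregated, receivers, messages)

# 4) Update every particle's state from its old state and what its neighbours said.
features = np.tanh(np.concatenate([features, aggregated], axis=-1) @ W_upd)
```

Because each particle only exchanges messages with its nearby neighbours, the same learned rules apply no matter how many particles there are or how large the domain is, which fits nicely with the generalization results we just saw.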
Now, this is a great leap in neural network-based
physics simulations, but of course, not everything is perfect here. Its generalization capabilities have their
limits. For instance, over longer timeframes, solids
may get incorrectly deformed. However, I will quietly note that during my
college years, I also studied the beautiful Navier-Stokes equations, and even as a highly motivated student, it took me several months to understand the theory and write my first
fluid simulator. You can check out the thesis and the source
code in the video description if you are interested. And to see that these neural networks could
learn something very similar in a matter of days… every time I think about this, shivers run
down my spine. Absolutely amazing. What a time to be alive! This episode has been supported by Lambda. If you're a researcher or a startup looking
for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations
in other videos and am happy to tell you that they're offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to
93% accuracy for less than $19! Lambda's web-based IDE lets you easily access your instance right
in your browser. And finally, hold on to your papers, because
the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com/papers and
sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better
videos for you. Thanks for watching and for your generous
support, and I'll see you next time!