- Hi, my name is Adrian Meyer. And in this presentation I'm gonna talk about fractals and other procedural madness that I created for my 360 VR experience called "Strands of Mind". So, Strands of Mind is a 12-minute cinematic 360 VR experience. It's kind of a psychedelic journey that explores a world beyond our normal human perception, and the nature of existence. It is currently running at many international festivals and venues, and it's gonna be released in VR stores next year. It is pre-rendered in 4K, in stereoscopic 360. And I of course thought about
doing it in a game engine, in six degrees of freedom and real time. But as I wanted the
visuals to be very organic and highly visually complex, I quickly came to the conclusion that this wasn't possible in real time, at least when I started the production, but I fear it still wouldn't be. Though I am hoping that I might do a future project in a game engine. Also, I didn't really
want any interactivity. I wanted it more to be a passive trip that you can fall into
and not be disturbed by any interactive elements. So, in this case, this
wasn't a big sacrifice. But with that, I really had the ambition to do a very high quality 360, with high quality stereo, and also with a concept that doesn't let you feel the constraint of a three-degrees-of-freedom experience, as opposed to a full six-degrees-of-freedom experience. It also has full Ambisonic spatial audio that makes it much more immersive. The whole thing is almost
entirely created procedurally in Houdini, and compositing
is done in Nuke. So, a huge part of the film consists of these kinds of dark fractal scapes, and they are all rendered
with Arnold from Houdini, as the entire film is. And I'm gonna talk specifically about the fractal pipeline later on. It also features a psychedelic forest with lots of high-res assets, and very dynamic shading
for the luminescence. And for that, I created quite
an extensive custom asset, set dressing and shading pipeline, which I'm also gonna talk about later. And finally, there's of course a luminous baby floating beneath a black hole in outer space, which internally also has a lot of procedural geometry. Throughout the film there's lots of procedurally generated stuff, which I'm gonna talk about later. But before going into the
nerdy tech behind all that, let's watch the trailer. (intense music plays) (sounds of birds) (ominous music) (crackling static) - [Adrian Meyer] And as a quick overview, I'll show you a short VFX breakdown. (intense music) - Alright, so let me give
you a pipeline overview. As I mainly created the film alone, I really had to come up
with a smart pipeline, and automate many, many tasks to tackle the quite heavy workload. Especially the 360 stereo
VR required many tools to be reinvented, because there aren't a lot of out-of-the-box solutions. During the talk I'm only gonna focus on some aspects of the pipeline, of course, because it would be a
bit too much otherwise. So, the pipeline constantly
evolved during the project, basically almost right until the end. And I ended up with 140
custom Houdini digital assets, and 125 Houdini shelf tools. And these tools covered many production aspects, like 360 tools, file cache and asset management, set dressing tools, a custom SpeedTree pipeline, lots of render farm and PDG tools, and also a very important video encoding pipeline via PDG. So, with 360, you end up encoding hundreds and hundreds of videos, with quite special and extensive FFmpeg settings. So it was really important and a lifesaver to automate that with PDG.
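Just to give a rough idea of what that encoding automation looked like conceptually, here is a minimal sketch, not the actual production tool; the file patterns, codec settings and the idea of calling it from a Python Script TOP are assumptions, and the real 360 stereo settings were considerably more elaborate (the 360/stereo projection metadata is typically injected in a separate step as well):

```python
# Minimal sketch: build and run one FFmpeg encode per image sequence.
# In production this kind of function could be called per work item,
# e.g. from a Python Script TOP, so PDG fans the encodes out in parallel.
import subprocess

def encode(seq_pattern, out_path, fps=60):
    cmd = [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", seq_pattern,          # e.g. "shot010_left.%04d.png" (made-up pattern)
        "-c:v", "libx265",
        "-crf", "18",
        "-pix_fmt", "yuv420p",
        out_path,
    ]
    subprocess.run(cmd, check=True)

encode("shot010_left.%04d.png", "shot010_left.mp4")
```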
Not really the focus here, but especially in Nuke and compositing, I needed many custom 360 and stereo tools that weren't really present, and I ended up with 50 custom new gizmos just for the production; I had some help from a fellow TD student with that. In total, I ended up with over 14,000 lines of Python code that were mainly Houdini specific, or at least Houdini related, since a lot of the code was authored or triggered from Houdini. As the film is produced in 4K stereo at 60 FPS in 360, this led to a pretty crazy render load for a short film in a
university environment. So I ended up with over
90 million gigahertz hours of frame rendering and simulation, which equals roughly 580 years of rendering on a single i7 quad-core PC, and all in all, I generated around 400 terabytes of data
during the production. So I was quite lucky that
my university environment could handle such a crazy workload. All right, let's talk a bit about procedural growth and procedural geometry. So, as I mentioned, there is just procedural
geometry everywhere, which is kind of obvious if
you're working with Houdini, like these vein systems in the embryo that were fluorescent, or also things like this organic tunnel, which was basically completely created procedurally with a vein system as a base, and then some other procedural layers on top. But I also had much more complex systems that had longer simulation times. And at that time PDG
and TOPs were brand new; that was around three years ago. So I started using PDG for creating these simulation variations, but during that I faced some challenges and limitations that motivated me to build a toolset that facilitates creating and managing large variation runs for simulations. And I called this tool PDG Mutagen. Firstly, the tool provides a nice UI for reviewing the results
returned from the PDG farm directly in the Houdini UI. And it is possible to select the results and preview them in RV
or the file browser. You can also directly select the 3D geometry in the viewport via the video thumbnails and explore the actual 3D result, which is very handy. And it provides a one-click solution to convert take-based wedging to TOPs-based wedging. So I'm still a big fan of the old take-based wedging system in Houdini, and at least at that time it wasn't possible to convert take-based wedging, or to have take-based wedging, in PDG. So I built a tool to quickly convert that. All that was wrapped up with nice shelf tools that quickly and automatically set up the TOP networks for you. So you were very quickly set up to go with quite complex wedging setups. Finally, it allows you to
select your desired results and use them as a base for new generations and mutations. So you could choose the results you liked the most, and from these parameter sets as a base, new parameter sets that spread out the parameter space from there could be set up very easily, and that created mutations; hence the name PDG Mutagen.
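To make the mutation idea concrete, here is a hedged sketch of the concept rather than the actual PDG Mutagen code; the function and parameter names are made up:

```python
# Sketch: spawn a new "generation" of wedge parameter sets by jittering the
# parameter sets of the selected results within the original wedge ranges.
import random

def mutate(selected, param_ranges, children_per_parent=10, spread=0.15):
    """selected: list of dicts {param_name: value} for the wedges you picked.
    param_ranges: dict {param_name: (min, max)} of the original wedge ranges."""
    new_generation = []
    for parent in selected:
        for _ in range(children_per_parent):
            child = {}
            for name, value in parent.items():
                lo, hi = param_ranges[name]
                jitter = (hi - lo) * spread * random.uniform(-1.0, 1.0)
                child[name] = min(max(value + jitter, lo), hi)  # stay inside the original range
            new_generation.append(child)
    return new_generation
```

Each dictionary in the result can then be fed back to the wedging setup as a new work item.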
Alright, let's talk about the fractal pipeline, which was quite an essential part of the production. So in the R&D phase, I of course did some research
on different approaches that existed so far on how to tackle fractals in VFX. And I wanted to check if
I could make my life easy and maybe go a similar way. So I first stumbled
across Disney's approach. They had these nice volumetric fractal variations for the film Big Hero 6, and I think it looks super, super cool, but the approach wasn't so suited for me, because it was largely static fractals and it still involved a lot of manual set dressing. So layout artists could set dress with proxy fractals that were then later on addressed by the effects department. Also, I didn't want that
cloudy milky look all the time, even though it's very cool. Also Animal Logic: they had a point cloud rendering approach, and they built the fractals in Houdini for Guardians of the Galaxy. So these kinds of columns here internally have fractal point cloud geometry, but they are contained within kind of bounding boxes and are sitting in traditional geometry. They were also completely static, and I needed full fractal environments and not just fractal parts, so it wasn't completely suited for me. And Weta Digital, here on the lower left, they had a quite unexpected approach. They actually used the
software called Mandelbulb 3D. So this is a dedicated fractal software that is only made for creating fractals, and it creates kind of nice images, but the problem is it's
completely contained in itself. It's limited. You cannot
export any point clouds, any geometry, any camera
animation whatsoever. Animal Logic also used it for previs and look dev, but as the fractals were kind of trapped in the software, they created the point clouds in Houdini. Weta, however, actually came up with the idea to render turntables from Mandelbulb 3D, then feed these turntables into a classical photogrammetry pipeline and get fractal meshes in the end. So here you see these fractal meshes that could then be used to kind of manually
set dress the scenes, which is also really cool,
but again, it's static. And to me it seemed also a
bit cumbersome and inexact. So I really had different needs. I needed a full 360 fractal environment. So the other approaches, they had classical VFX or CG sets and they wanted to include
some fractal elements. For me, it was the other way around. I wanted to have fractal environments and include some classical 3D geometry. Also I wanted to have
the kind of typical deep, endless zoom into the fractals. And I wanted to have
the possibility to have the fractals morphing and fully in motion in themselves and not being static. So finally it was really
important to me to have a very flexible and fast
design exploration possibility, because I wasn't an army of artists and I needed it to be very fast and intuitive. So the first approach was to build a tool called the Visual Effects Fractal Toolkit. I teamed up with a fellow student who worked on that, and it's a node-based, OpenCL-based fractal point cloud generation system in Houdini. So here you can see it in action. You have these fractal nodes, and you could combine different fractal nodes to create kind of hybrid fractals. You had fractal parameters exposed in the UI and could play around with them. And yeah, it was quite
a promising approach. Unfortunately at some point we realized the tool doesn't work
out for the production due to multiple issues, but mainly because Houdini has a 32-bit precision limitation. And that meant that the resolution degraded more and more as you dove deeper and deeper into the fractal, and at some point you were basically unable to navigate at all. So this was really a pity, because the tool was very promising.
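To illustrate why 32-bit precision breaks deep zooms, here is a tiny numeric example (not from the tool itself): around coordinates of unit scale, a 32-bit float can only resolve steps of about 1e-7, so once you have zoomed in far enough, neighboring sample positions collapse onto the same value and both the geometry and the navigation fall apart.

```python
import numpy as np

x = 1.234567  # a coordinate on the fractal, roughly at unit scale

# Smallest representable step around that value in each precision.
print(np.spacing(np.float32(x)))  # ~1.2e-07
print(np.spacing(np.float64(x)))  # ~2.2e-16
```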
And then also my fellow TD student graduated, so I had to come up with a different solution on my own. I had used another fractal software called Mandelbulber for prototyping all my fractals before.
And it's kind of a similar software to Mandelbulb 3D, just a bit more modern, and open source. And I really liked the
results geometry wise, and it was very fast and
intuitive to use the tool, again because it's only
built for this purpose, it's built for creating fractals, but from a VFX standpoint, it is very limited. You don't really have proper
lighting or PBR shading, and you cannot really combine
it with 3D objects. So I really wanted to bring the fractals I already had to the VFX world, to combine them with
proper volumetric shading, particles, geometry destruction, so to combine the fractals with
shattered fractal elements. So I decided to build a pipeline bridge from Mandelbulber to Houdini, and do all the fractal exploration, layout and camera animation in Mandelbulber, and then hand that over to Houdini for the 3D integration, lighting, shading and rendering. So here you see Mandelbulber in action. I'm exploring some sets of mine here. This is real time, so you'll see it's quite fast; it's GPU based. So I teamed up with the
Mandelbulber developers on GitHub, and together we
implemented some features that were needed for this Houdini bridge, because again, the software
is meant for doing fractals. It produces kind of nice images, but you cannot export anything. No geometry, no point
clouds, no camera animation. So first of all, we did some
360 stereo enhancements, which were important for me. We implemented the position and world-normal passes, which was very important, and the possibility to include camera metadata in the EXR images that were exported. And finally, a custom
render farm integration because still there was quite a bit of rendering from Mandelbulber. So finally I could send
fractal layout scenes as utility passes to the farm, and then in Houdini, with a shelf tool, with an easy-to-use, one-click solution basically, I could import the camera animation and the fractal point cloud. That was internally processed by quite a complex fractal HDA that turned the utility passes into a full point cloud that worked for 360 stereo rendering.
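The core idea of that point cloud import is fairly simple, since the position pass already stores a world-space position per pixel. Here is a hedged sketch of that step only, assuming a Python SOP and hou.Geometry.createPoints; read_exr_layer is a made-up placeholder for whatever EXR reader is available (OpenImageIO, for example), and the file path is just an example:

```python
import numpy as np
import hou

node = hou.pwd()
geo = node.geometry()

# Placeholder helper: should return a (height, width, 3) float array of
# world-space positions, one per pixel of the Mandelbulber position pass.
positions = read_exr_layer("/path/to/fractal_layout.0001.exr", "position")

mask = np.isfinite(positions).all(axis=-1)   # drop background / invalid pixels
pts = positions[mask]

# One Houdini point per valid pixel.
geo.createPoints([hou.Vector3(*p) for p in pts.tolist()])
```

The real HDA did a lot more on top of this (stereo handling, LODs, accumulation over frames), but this is the mathematical core.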
It had lots of options like viewport LODs, and you could choose whether it was a per-frame, camera-based point cloud, or accumulate all the different frames and camera angles into a more watertight static mesh or point cloud. You also had the option to
import different utility layers like fractal pattern masks,
ambient occlusion passes that were exported from Mandelbulber. I could also import layout
lights from Mandelbulber with one click and use
them as a lighting base. So sometimes I just did a pre-lighting in Mandelbulber to get a general idea of lighting directions, and then just transferred that to Arnold with one click, basically. For destruction and additional manual placement of fractal elements, these sparse, camera-dependent
point clouds weren't enough. So I needed full fractal
object scans and yeah, Weta used the photogrammetry
workflow for that. But again, for me, it seemed
a bit inexact and cumbersome, but with the workflow I
already had developed, it was very easy to take
turntable renders, or just renders from different angles of parts of interest in the fractals I liked, and then use these utility passes with the exact camera data baked into the EXRs and convert that into dense, watertight point clouds or meshes in Houdini, and they were a hundred percent mathematically correct, basically. And then I could shatter them, place them manually, instance them, et cetera. So that was quite fun actually. For the morphing fractals
that had motion in themselves, I needed a velocity attribute
of the fractal surface to drive particle simulations. As you know, with changing topology, generating velocity attributes is quite difficult, and it gets even more difficult when it's a fractal 360 point cloud like this. But with optical flow and some VEX math, I luckily managed to really get the velocity attributes and drive some secondary simulations off of that.
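For the curious, here is one way to think about that velocity reconstruction; this is a rough numpy sketch of the math, not the production VEX, and it assumes you have already loaded the world-position passes of two consecutive frames plus the optical flow between the corresponding renders:

```python
import numpy as np

def world_velocity(pos_t, pos_t1, flow, fps=60.0):
    """pos_t, pos_t1: (h, w, 3) world-position passes of frames t and t+1.
    flow: (h, w, 2) optical flow in pixels from frame t to frame t+1."""
    h, w, _ = pos_t.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow each pixel along the flow into the next frame, clamped to the image.
    xs1 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys1 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    # World-space displacement of the surface point seen through this pixel.
    disp = pos_t1[ys1, xs1] - pos_t
    return disp * fps  # units per second, ready to bind as a v attribute
```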
Not directly related to the fractal pipeline itself, but I had the idea to let the Google Deep Dream algorithm dream on the fractal renders. And this was for one
very trippy stroboscopic sequence in the film. But then I had to think
about how to approach this in 360 and especially in stereo. So first I thought, okay, I would want to let the algorithm match the left and right eyes, so features are sitting in the same place and the same depth, but this didn't really work out. It kind of killed the
depth effect a little bit. So then I thought, okay, let's just try to let the algorithm dream on the left and
right eyes separately, which is completely wrong, but it actually gave really
cool and trippy results because you had that proper
stereo depth from the fractals. And then you had these oddly
placed psychedelic elements throughout the depths of space. So yeah, that was really cool. All right, let's talk about
the SpeedTree pipeline. So there were lots of trees, and I worked kind of alone. So for this, again, I needed a pipeline to have this very automated. So my SpeedTree pipeline could set up a custom SpeedTree HDA that automatically sets up groups, materials and textures, and even displacement scales based on the tree size. So it first normalizes and centers the displacement values from the displacement texture, reading the values from a JSON texture library. It sets preview thumbnails in the network editor, and a lot of nice stuff like this.
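As a purely illustrative sketch of the displacement part, and not the actual pipeline code, something along these lines is what is meant; the JSON layout and the shader parameter names are made up:

```python
import json
import hou

def setup_displacement(material_node, tree_height, library_path, texture_set):
    # Assumed JSON "texture library" layout, storing the raw value range of each
    # displacement map, e.g. {"birch_bark": {"disp_min": 0.1, "disp_max": 0.9}}.
    with open(library_path) as f:
        library = json.load(f)
    entry = library[texture_set]
    mid = 0.5 * (entry["disp_min"] + entry["disp_max"])

    # Hypothetical parameter names on the uber-shader: center the map around
    # zero, then scale the displacement relative to the tree size so a small
    # bush and a tall tree get comparable relative displacement.
    material_node.parm("disp_offset").set(-mid)
    material_node.parm("disp_scale").set(0.01 * tree_height)
```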
It also sets up a custom production uber-shader that features material blending, so you could have different materials on the top and bottom of the tree, plus options for procedural moss, mold, per-instance color variations,
luminescence, iridescence, and much more fun stuff like this. You had culling and LOD
options and many controls over the geometry attribute creation like curvature, ambient
occlusion, thickness, et cetera. As SpeedTree doesn't export looped animation, the animation was automatically looped in Houdini, and you had controls over that, plus an integrated geometry cache farm submission. So with the whole pipeline, basically within one click I could set up a fully shaded and animated tree, and with another click I could send multiple turntables of that to the farm. So that was very, very handy and helped me a lot in the end. When I started the production, I used Houdini 16.5, and later moved to Houdini 17. So there wasn't any
Solaris yet, unfortunately, and also not really any sophisticated scattering and instancing pipeline or tools, which was a bit of a pity to me, because to be honest, if I were to do the project now, I would probably approach this in a different way and use Solaris. But back then, I really had to come up with my own solution. So for efficient set dressing and scattering and instancing
of animated objects, and also to control all of
these instances with attributes to make them glow or to make
them move more in the wind and stuff like this, I built quite an extensive modular system. So I had many set dressing HDAs for procedural scattering, painting, drawing and manual placement, plus a population wizard where I could just choose some assets and randomize them, with control over how many of which asset I wanted to scatter, controls over density, noise, scale and orientation, and also a very fast collision detection.
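As a small illustration of how such scattered points can carry the per-instance controls mentioned above, here is a hedged Python SOP sketch; the attribute and variant names are examples, not the production setup:

```python
import random
import hou

node = hou.pwd()
geo = node.geometry()

# Tag every scatter point with a variant and some per-instance attributes
# that instancing and shading can pick up later (glow amount, wind multiplier).
variants = ["mushroom_a", "mushroom_b", "fern_a"]
geo.addAttrib(hou.attribType.Point, "variant", "")
geo.addAttrib(hou.attribType.Point, "glow", 0.0)
geo.addAttrib(hou.attribType.Point, "windmult", 1.0)

random.seed(7)
for pt in geo.points():
    pt.setAttribValue("variant", random.choice(variants))
    pt.setAttribValue("glow", random.random())
    pt.setAttribValue("windmult", random.uniform(0.5, 1.5))
```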
I could also easily extract and unpack instances for things like collisions with the river simulation, which was ID based and non-destructive. And I also had a nice sub-instance system, so that I could easily create packs of instances and scatter or paint small instances on other instances, for instance mushrooms on trees or moss on trees, and this enabled me to have
a very efficient handling of almost endless
variations in the forest. There was also a plant motion HDA and lots of global animation settings. So I could offset the animations, but also quantize the offsets to really reduce the number of unique instances in the scene, which optimized rendering quite a bit.
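The quantization trick is simple but effective; here is a sketch of the idea, not the production HDA, with made-up numbers:

```python
import numpy as np

def quantized_offsets(num_instances, max_offset=2.0, steps=8, seed=0):
    """Random per-instance time offsets snapped to a small number of steps,
    so the renderer only sees `steps` unique animated caches instead of
    one per instance."""
    rng = np.random.default_rng(seed)
    raw = rng.uniform(0.0, max_offset, num_instances)  # offsets in seconds
    step = max_offset / steps
    return np.round(raw / step) * step
```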
And finally, I built a custom node preset manager, where I could easily save and load presets for nodes. It had an override and an append mode, and it was JSON based. The new Houdini preset manager is also JSON based, but it still doesn't have an append option, where you could already have a preset applied and then append another preset on top, and also some other advanced features. So this is why I built my own preset manager.
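Conceptually, the preset manager boils down to something like the following; this is a simplified sketch (it ignores ramps and multiparms, and the append semantics shown here are just one possible interpretation), not the actual tool:

```python
import json
import hou

def save_preset(node, path):
    # Store only parameters that differ from their defaults.
    data = {p.name(): p.eval() for p in node.parms() if not p.isAtDefault()}
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

def load_preset(node, path, append=False):
    with open(path) as f:
        data = json.load(f)
    for name, value in data.items():
        parm = node.parm(name)
        if parm is None:
            continue
        # Append mode: only touch parms still at their default, so a preset
        # that was already applied is not overridden.
        if append and not parm.isAtDefault():
            continue
        parm.set(value)
```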
All right, so those were all the pipeline parts I wanted to talk about. All in all, it was quite a journey creating the film, technically and creatively, and Houdini was really the only option I could ever imagine to create something like this, due to its procedural nature, which also conceptually fitted the film very well. So it was a lot of fun
building it with Houdini. So with that, thank you for
your attention and goodbye.