Create mind-blowing AI RENDERINGS of your 3D animations! [Free Blender + SDXL]

Video Statistics and Information

Captions
AI is the future of rendering, and to prove it I'm going to develop a workflow that lets you render any 3D scene in any style. It will also offer full control of the final image, so you can create separate prompts for all the different objects in your scene. Finally, I want to put this workflow to the test: a few months ago I created a fully AI-generated 3D short film. Everything from the modeling to the texturing to the animation was done with AI, except for the rendering. So today I want to fix that. Can I take this pretty ugly-looking scene and turn it into something that actually looks cool with this AI workflow? Yes, yes I can. Let me show you how.

First we need a 3D scene. For beginners I prepared an example scene with all my settings and a super extended bonus tutorial on my Patreon, but I want to reuse some scenes that I created a while ago. Let's start with this Zelda fan animation I created for TikTok. I delete the image sequence, make the scene 16:9, and create a super simple 3D environment with just a plane and some cubes. I also delete all the materials and lights from the scene, because all of that will be handled by the AI. Before we render the full sequence, let's do some tests on a single frame to find the right prompt, and this is the frame that I want to work with.

Now we need a way to communicate to the AI what to generate, and for that we're going to use render passes. In traditional VFX workflows we render all the layers that the renderer uses to create the final image separately, and this way we can control every aspect of the image, for example how reflective it should be, without having to re-render everything when we want to make changes. But we can also use these render passes to control AI image generation with a little thing called ControlNet. Let's say we have an image and we really like the composition: we can use an AI pre-processor to estimate, for example, the depth of this image, and generate a new one based on this depth information. These pre-processors are getting better and better, but especially for image sequences they can still flicker a lot, because it's just AI estimation after all. We don't need to estimate anything, though, because we have a 3D scene; we have all this information.

So in my Blender scene I go to View Layer Properties, activate Z, and render out an image. In the Compositing tab I click Use Nodes and connect a Viewer node to the Depth output. Now the depth information is in there, but we can't see it yet because it's all over the place, and we need it between zero and one for black and white. To get it there I add a Map Range node and play around with the values until I see a black-and-white gradient of my whole scene. We also need to invert these values so that black pixels are far away and white pixels are close to us. Then I use a Curves node to adjust the values a bit more and try to separate the parts of the image. I'm just eyeballing this, and it's not really the correct way to do it, but it works and it's super fast. Finally I add a File Output node at the end, set my output path, and double-check that the color management of my scene is set to Standard, which is really important.

With the depth pass alone we should be able to generate some really amazing images, but to be safe I want to generate another pass. One of the most commonly used ControlNets is the Canny one, or any ControlNet that uses lines in the image to guide image generation. Again, you would normally use a pre-processor to find the edges in an image or video and then use them to guide your image, and as you can see they flicker a lot. But since we have the 3D geometry, we don't need a pre-processor; instead I'll activate the Freestyle tool, which will create outlines based on the 3D geometry. To export them I go to View Layer > Freestyle, check As Render Pass, set the color to white, reduce the thickness just a little bit, render an image, and create this node setup in the Compositing tab.

Finally, we need a pass for masking out the individual areas for our prompts. There is a render pass called Cryptomatte that does exactly that, but unfortunately it does not work with the AI tools yet, so we have to create our own simplified version. Let's say I want to create separate prompts for this Guardian, the ground, the background, and these cubes: I just assign simple emission shaders to them and choose some random colors. Later we can assign each color an individual prompt. And that's it, we're done; these are all our passes.

So now I go over to ComfyUI, a node-based interface for Stable Diffusion. It's super easy to install, and I created a free step-by-step guide for that, including links to all the models that you'll need and information on where to put them; you'll find the link in the video description. Once you have everything set up, you can download my free SDXL image workflow and just drag and drop it into the ComfyUI interface. This is our full image workflow, and we're going to work from left to right. On the left you import your images and set your scene resolution, and you can see that the first one, the mask pass, goes straight into the extract-mask setup. Here you don't have to do much, but you do have to put in the hex codes for all the different colors, and you do that for all the masks. To see if it worked, I'm going to deactivate the last part by pressing Ctrl+B and click Queue Prompt, and you can see how fast that was.

Next to these masks, in the previews, you find the corresponding prompts. Up here is a master prompt; this one is added to all the other prompts, to the whole image, and I would put the general style in there and the light that you want. I already put in some regional prompts: I want to create a squid, like an octopus, standing in the ocean with some white marble coming out of it, and also thunderstorm clouds, epic landscape, vast ocean, a very epic atmosphere. Down here is my negative prompt, which is also added to the whole image.

Next, our two other passes are loaded into these ControlNets. The first one is the depth ControlNet, which I'm going to use at full strength, maybe reduced just a little bit to give it a bit more freedom, and the Canny one I use at a much lower strength. Finally we can activate these nodes again by pressing Ctrl+B and click Queue Prompt to generate our image. After a few seconds we have our final image, and this one is very, very, very creepy. Oh my god. Let's try something less creepy. Actually, let's try another creepy version, but make it more dystopian: an alien creature standing on an alien planet surface, these blocks become buildings and metal structures, construction site stuff, and in the background I want city ruins and skyscrapers, something that this creature destroyed. Let's click Queue Prompt. Oh yeah, this looks pretty nice. Maybe we can make it a bit more epic by changing the lighting a little, so I'm going to add thunderstorm, rain, wet surfaces, and because we added rain and wet surfaces here, this will be added to all the other prompts, so we don't have to put it there again. But I'm going to add puddles to the floor, to the ground plane. Let's click Queue Prompt. Yeah, that's epic, look at these surfaces. This guy is still very creepy, though, so let's do one final image: a foggy, mystical atmosphere, the Guardian becomes a futuristic robot, and we're trying to make these cubes into trees, or at least wood, in a deep rainforest. Let's see how that goes. And here is our final image. Okay, it created these wooden houses, which is okay, but let's change them into rock formations instead. Yeah, I prefer that, that looks really cool. I love how Stable Diffusion, even though we have prompts for all the different areas, really understands the whole image and its geometry: the light is coming from the correct direction, the robot is casting shadows here, we have reflections. It looks so good.

But let's say we like this image and we just want to change the composition. That's not a problem: I just go back into Blender, make the changes that I want, switch out my passes in ComfyUI, and render the image again. You can see that since we used the same seed and prompt to generate this image, it's pretty similar to the other one. Using this technique we could generate consistent concept art or even storyboards for a movie, for example. We could also project these generated images back onto the geometry in our Blender scene to texture it; I've covered this workflow in one of my older videos, so make sure to check that out.

But let's take it one step further and create some animation. The preparation is pretty much the same as for the image workflow, but instead of a single image we render out the whole sequence. I import my free 3D rendering video workflow, and as you can see it looks very similar to the image workflow, because it is. I copy the path to each of my image sequences and put them in here, and repeat that for the depth pass and the line art pass. Here you should set the maximum number of frames to save some time. I also want to generate only every second frame and then interpolate between them; that's why I chose "select every nth" here, and right before the output there is this RIFE node that will interpolate the frames so we get back the original 24 frames per second. Now you can put in the prompt, and I'll start with the squid one that we tested before. Queue Prompt. Wow, this looks really cool. Okay, it still has this AI weirdness going on, but what I really love about this workflow is that it not only textures things, it also animates things, like this ocean. Look at these waves! Now let's try a few more prompts. Okay, this scene has a lot of camera movement and is really hard with all the tentacle things, so let's try another scene and maybe go for a more stylized look this time. The next scene is still an octopus, though. I don't know, I'm sorry, that was just a coincidence. I created my render passes just like before, and now let's try a few prompts. Let's just try "octopus snorkeling in space". That looks cool! It's still a bit creepy, though, but I have an idea how we can make it more beautiful: let's turn it into an animated painting. I just downloaded this painting LoRA, put it in my ComfyUI folder structure, and added it here at the beginning of the node tree. Let's put the strength to one and change the master prompt to something that will amplify this painterly style. Oh, I love this, that's so cool. Let's try some more prompts. Okay, I think you can see how amazingly customizable this workflow is; you can basically turn your renderings into any style you want.

But now I really want to put this workflow to the test. A few months ago I generated a whole CG short film with AI: I used ChatGPT to help me with the story, generated the models with Luma Labs Genie, animated them with Mootion, and generated the facial animation with NVIDIA Audio2Face. So pretty much the whole film was generated with AI, but it was not rendered by AI. With this new workflow, let's try to fix that. Let's try this shot here: Linguini is running away from the goulash. Here are my render passes, these are my prompts, and let's render the scene. Look how it was able to transform the style into something that looks so much more like a Pixar movie than the original rendering. Sure, we still have some weird AI stuff going on, especially in the background, but that probably won't be an issue for much longer, because we are using a very early version of the motion model here and it's only trained on eight frames. We can already improve the consistency a lot by using an IP-Adapter, and you can find the setup in my advanced workflow. An IP-Adapter takes an image or a sequence and turns it into a sort of prompt that guides image generation, and I want to load in this rendering with textures to help guide the new images. This way the workflow becomes more of a filter for our original rendering, but I honestly prefer the renderings without any visual information in them, and you're always more flexible this way. Let's say you want to change the kitchen from this modern aluminum kitchen into a cottage kitchen with wood and brass, or you could decide that this chase scene should happen in a forest at night. In the end you have to figure out what works for your scene, and don't be afraid to play around with the models and the prompts and really make this workflow your own. If you want to help me make these kinds of videos and gain access to the advanced versions, extra example files, and an additional in-depth tutorial, consider supporting me on Patreon. Thank you very much for watching to the end, and thank you to my lovely Patreon supporters who make these videos possible.
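The Map Range and invert steps described for the depth pass boil down to a linear remap. A minimal sketch in plain Python (the function and parameter names are illustrative stand-ins for the node's From Min / From Max settings, not Blender API calls):

```python
def depth_to_mask(z, from_min, from_max):
    """Remap a raw Z depth into 0..1 (what the Map Range node does),
    clamp it, then invert it so nearby geometry is white and distant
    geometry is black, as the depth ControlNet expects."""
    t = (z - from_min) / (from_max - from_min)  # linear remap
    t = min(max(t, 0.0), 1.0)                   # clamp to displayable range
    return 1.0 - t                              # invert near/far

# A point halfway between from_min and from_max lands at mid-gray:
print(depth_to_mask(5.5, 1.0, 10.0))  # 0.5
```

Eyeballing the gradient in the Viewer node, as in the transcript, amounts to tuning `from_min` and `from_max` until the scene's depth range fills the full black-to-white span.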
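The extract-mask setup that turns the flat emission colors into per-region masks can be sketched as a color match against each hex code. This is a hypothetical stand-in, not ComfyUI's actual node code; a real implementation would use NumPy on full-resolution images, and the `tolerance` parameter is an assumption to absorb anti-aliasing at region edges:

```python
def hex_to_rgb(hex_code):
    """'#FF0000' -> (255, 0, 0)."""
    hex_code = hex_code.lstrip("#")
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

def extract_mask(pixels, hex_code, tolerance=8):
    """Binary mask from an RGB image given as rows of (r, g, b) tuples:
    1 where the pixel matches the region's emission color, 0 elsewhere."""
    target = hex_to_rgb(hex_code)
    return [[1 if all(abs(c - t) <= tolerance for c, t in zip(px, target)) else 0
             for px in row]
            for row in pixels]
```

This is also why the transcript insists on Standard color management: with Filmic or AgX, the rendered pixel values would no longer match the hex codes you typed in.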
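The master-prompt behavior, where the global style and lighting text is appended to every regional prompt so you never have to repeat it per region, can be sketched like this (the function name is made up for illustration):

```python
def build_regional_prompts(master_prompt, regional_prompts):
    """Append the shared master prompt (style, lighting) to each regional
    prompt, mirroring how the workflow adds the master text both to the
    whole image and to every masked area."""
    return {region: f"{text}, {master_prompt}"
            for region, text in regional_prompts.items()}

prompts = build_regional_prompts(
    "thunderstorm, rain, wet surfaces",
    {"guardian": "futuristic robot", "ground": "puddles"},
)
```

This is why adding "rain, wet surfaces" to the master prompt in the transcript immediately affected the Guardian, the ground, and the background without editing each regional prompt.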
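The "select every nth" plus RIFE step in the video workflow halves the number of diffused frames and then rebuilds the in-betweens. A toy sketch, with each frame reduced to a single brightness value and a 50/50 blend standing in for RIFE's learned flow-based interpolation:

```python
def select_every_nth(frames, n=2):
    """Keep every nth frame so the diffusion pass does half the work
    (n=2 keeps frames 0, 2, 4, ...)."""
    return frames[::n]

def interpolate_back(frames):
    """Rebuild skipped in-between frames by blending neighbours, restoring
    the original frame count so playback is 24 fps again. A real setup
    uses the RIFE node; this average is only a conceptual stand-in."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a + b) / 2)  # synthetic in-between frame
    out.append(frames[-1])
    return out
```

Diffusing fewer frames also tends to reduce flicker, since there are fewer independently generated images to disagree with each other.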
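The LoRA strength slider set to one in the painting example scales how strongly the fine-tuned style deltas are merged into the base SDXL weights. A one-line conceptual sketch, with scalars standing in for weight tensors (the function name is hypothetical):

```python
def apply_lora_weight(base_weight, lora_delta, strength=1.0):
    """Conceptual LoRA merge: the style fine-tune's low-rank delta is
    scaled by the strength slider and added onto the base model weight.
    strength=0 disables the LoRA; strength=1 applies it fully."""
    return base_weight + strength * lora_delta
```

At intermediate strengths you get a blend of the base model's look and the painterly style, which is why dialing the slider is a quick way to tune how strong the effect reads.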
Info
Channel: Mickmumpitz
Views: 130,337
Id: mu3JEfx3PHM
Length: 12min 50sec (770 seconds)
Published: Mon Mar 18 2024