Animation with ControlNet, almost perfect!

Captions
Hello there. In this video I want to cover ControlNet and Stable Diffusion, and primarily how we can make animations by utilizing this new extension. If you're not familiar with ControlNet, I'll explain what it is as well. If you're interested in installing it, I'll provide a link below to a great video by Aitrepreneur on how to install ControlNet on your local Stable Diffusion; you can also run it on Google Colab if you need to.

We've been able to do AI animations for some time now. For a long time we could use a text prompt, specify the animation we wanted, and have the AI create it. The problem is the limitations of creating animations this way: it's fine to create panoramic shots, move the camera and so on, and we can specify to some degree what we want to see, but if we want a person to actually do something, we can't really control their motion. In that case we can use GIF-to-GIF. In an earlier video, which I'll also link, I covered how you can take a GIF animation and apply a style to it to create your own AI animations. It actually works reasonably well; here's a GIF, and here's what happens when we apply AI stylization. But it has its limitations, mainly because if we apply too much style we end up with this kind of nonsense, even with strong prompts, negative prompts and all of that. When the denoising is too high, things start to fall apart, and you can see that even though it tries to catch the motion, it doesn't work very well.

This is where the ControlNet extension helps us a lot, because it takes a different approach. Here's an example of the same stylization done with ControlNet; let me open it again, there we go, and you can see how much cleaner this animation is and how much better the motion is defined based on our input video. This motion comes from the same GIF file, and we can also use video. Besides the stylization problems, GIF-to-GIF has additional limitations: it's limited in size, in resolution, and a few other things. With ControlNet there's effectively no limit on how long our video can be.

So let me show you how to do this. First, make sure you've installed ControlNet on your local Stable Diffusion installation or in Colab; as I said before, I'll link Aitrepreneur's video below showing how to do that. After you install it, I highly recommend verifying that you have the latest update. Go to your Stable Diffusion, open Extensions (by the way, I'm running the Automatic1111 installation), and right there in Extensions you can see the ControlNet web UI extension is installed. Just check for updates, because I've noticed it updates quite often, and I want to be sure you have the latest version installed. After this we can go to img2img, and if you scroll down you'll notice you now have a ControlNet panel you can use.
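As a side note, if you launch the web UI with the `--api` flag you can also confirm from a script that the extension is active. This is just a minimal sketch, assuming a default local install at `127.0.0.1:7860` and that your ControlNet version exposes the `/controlnet/model_list` endpoint; the port and endpoint may differ on your setup.

```python
import requests

BASE_URL = "http://127.0.0.1:7860"  # assumed default; change if you launched the UI elsewhere

# Ask the ControlNet extension which models it can see.
# A 404 here usually means the extension is missing, outdated, or --api was not set.
resp = requests.get(f"{BASE_URL}/controlnet/model_list", timeout=30)
resp.raise_for_status()
models = resp.json().get("model_list", [])

print(f"ControlNet models found: {len(models)}")
for name in models:
    print(" -", name)
```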
So what does ControlNet actually give us? Let's put an image in and type a simple prompt, something like "one girl in a field of flowers" (my English is not so great, so let's keep it simple). Render it and see what happens. This is basic img2img, and we get an image that's somewhat based on the colours of the input. At this point we want to set the scale properly, for a couple of reasons, one being that ControlNet will more or less require it as well. Let's render again and notice what happens. I'm using a denoising strength of 0.75, which is quite high; if we used 0.75 with GIF-to-GIF, the pose would be gone completely. We'll leave the rest at the defaults; maybe drop the denoising a little lower, but I think it's okay, and 20 steps should be fine. This is what we usually do, but notice the pose is a bit different; it doesn't really match the input.

Now let's scroll down and enable ControlNet. Click to expand the panel; notice it's not enabled yet, so tick Enable. We want to be sure our sizes match, so I'll take the width from above, 512, and put it here so everything lines up. The important part is selecting the preprocessor and the model we want to use. There are actually quite a few nice models, and I'll look at them in another video, where we'll go over which model to use for which purpose. For this purpose, OpenPose works very well; openpose_hand will also define the hands, which I'll show later, but for now choose openpose as the preprocessor and the openpose model underneath it; those are the two things we need to select. Next, the guidance strength: notice it's at the maximum of 1, and we'll leave it at that default. By the way, if your guidance strength goes up to 2, you're probably on an old version and should update.

Now click Generate. The first time you click Generate, the model gets preloaded on the server, so it may take a little while, but after the first time it should be fine. And notice what happens: we get a precise pose. Look at the position of the hands, the position of the head, everything, but applied in the environment we specified. This is very basic, a raw prompt, not tweaked at all, because I didn't specify a dress or anything else. We could add, for example, a dress, soft lighting and so on; you can add all these extra options and details. Right now it's kept simple, and keep in mind that because it's so simple and I'm not specifying a lot of detail, deviations can happen. But look at what shows up in the background: there's a great second image, and this second image represents the pose as the detected bone system, with the position of the head, the eyes, the body, the legs, everything. This is exactly what we need, because with plain GIF-to-GIF this is where the disconnect happens.
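For anyone who prefers to script this single-frame test, here is a minimal sketch of the same idea over the Automatic1111 API (again assuming the `--api` flag). The prompt, the model name `control_sd15_openpose` and the file names are placeholders, and the exact argument names for the ControlNet unit (`input_image` vs `image`, for example) have changed between extension versions, so treat this as a starting point rather than a definitive recipe.

```python
import base64
import requests

BASE_URL = "http://127.0.0.1:7860"  # assumed default web UI address

def b64(path: str) -> str:
    """Read an image file and return it as a base64 string for the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

frame = b64("frame_0001.png")  # placeholder input frame

payload = {
    "init_images": [frame],
    "prompt": "one girl in a field of flowers",   # raw prompt, as in the video
    "denoising_strength": 0.75,                    # quite high, as in the tutorial
    "steps": 20,
    "width": 512,
    "height": 512,
    "seed": 1234567890,                            # fix a seed so frames stay consistent
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": frame,              # may be called "image" on newer versions
                "module": "openpose",              # preprocessor
                "model": "control_sd15_openpose",  # placeholder; use a model you actually have
                "weight": 1.0,                     # guidance strength
            }]
        }
    },
}

resp = requests.post(f"{BASE_URL}/sdapi/v1/img2img", json=payload, timeout=600)
resp.raise_for_status()

# The first image returned is the stylized frame; save it next to the input.
out_b64 = resp.json()["images"][0]
with open("frame_0001_styled.png", "wb") as f:
    f.write(base64.b64decode(out_b64))
```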
Now that we've defined which model we want, how do we actually process a whole animation? We don't want to drag and drop every single image, so we'll use the batch processing system, which lets us take the individual frames and process them one by one. You have a couple of options for getting frames. If you're going from GIF to GIF, you can go to a site like ezgif.com, upload the file there and extract the frames directly. If you're working from video or any other application, you're probably using a video editor, and most of them have a frame output; you just need to save the frames as individual images. Once you've saved them as individual frames, like right here, put them all in the same directory so they're in one place, and copy the path of that location. Next, go into the Batch tab of img2img and paste that path into the input directory; that's where the frames will be read from.
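If you'd rather script the GIF-splitting step than use ezgif.com or a video editor, here is a minimal sketch with Pillow; the source file name and the `frames` directory are placeholders. It also prints the frame size, which is handy for the width and height step that comes next.

```python
from pathlib import Path
from PIL import Image, ImageSequence

SRC = "dance.gif"         # placeholder source animation
OUT_DIR = Path("frames")  # placeholder directory used as the img2img batch input
OUT_DIR.mkdir(exist_ok=True)

with Image.open(SRC) as gif:
    print("Frame size:", gif.size)  # use these values for width/height in img2img
    for i, frame in enumerate(ImageSequence.Iterator(gif)):
        # Convert to RGB so every frame saves cleanly as a PNG.
        frame.convert("RGB").save(OUT_DIR / f"frame_{i:04d}.png")

print("Frames written to", OUT_DIR.resolve())
```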
One other thing I've found helpful: before running the batch, go back to img2img and load one single frame, because we need to set the width and height properly for these frames. Adjust it to what we need, like right there, copy the dimensions from the frame and put them into width and height. So that's set. The next step is to define the style we want to create. For example, if I leave the prompt as it is and click Generate, it produces an image I can judge: do I like how it looks or not? Or I can generate a batch instead, say six images; let's generate them now, and I can look at all the different variations (not different styles exactly, just different looks) and pick the one I like. So if I say, you know what, I like this variation, the important thing is to go down and click the little recycle icon so we take the seed of the image we like. This is actually important, because if we don't fix a seed and start processing the batch, the frames will deviate from one another quite a bit, so we want to be sure we use the same seed throughout. And, just so we don't drift too far from the poses, we can drop that down to maybe five, like right there, but five or six is fine.

So at this point we're ready. We've set our width and height; we've set our prompt (again, it's a very raw prompt, so I guarantee there will be some problems, because really we'd want to define the background, the colours, the dress and so on a bit better, but it will do for now); we've set the denoising strength, which controls how closely the result follows our prompt; we've set the seed; inside ControlNet we've set the same canvas width and height, ControlNet is enabled, and the guidance strength is 1, and the higher the guidance, the closer the result sticks to the bone positions it creates. I think we're ready. Go to the Batch tab and select what we need. Don't worry about the sampling method at this point; you can switch to a different one, for example one of the DPM++ Karras samplers, whichever sampler you like, it doesn't really matter here, and Euler actually processes quite a bit faster, so we'll just leave it as is. Don't worry about Restore Faces or any of the other settings at this point either. The reason I say you want to set everything up in regular img2img first is that those properties carry over, so when you go to Batch they're already set, ready to go, and all your experimenting is done. Let's also switch the batch count back to one, so we don't process too many at a time.

When we're done with all that, we just click Generate, and here's what happens: it takes each image from that directory, each frame, and batch processes them, creating one after another. Let's go to the output directory, and you can see from this point, right here, where it actually started generating. Let me close this one; look at what it's done. Notice the background: it tries to create something similar to what we had before. Again, it's a raw prompt, so I don't expect too much in this case, but you can see that from frame to frame, because we're using the same seed, everything looks very similar and builds a consistent animation. This is, in a way, how I created that animation. I think it's an interesting way to do it; it definitely needs more tweaking and playing with the prompts before you get the wow effect, but even at this point, with the animation matching the motion so well, it's a very nice starting point.

After it's done, you'll end up with a bunch of frames. At this point you use whatever application you like, for me it's Camtasia, but you can use any other; just import those frames and assemble them into a video. One big plus of creating the animation this way, frame by frame, is that you can go to any specific frame. For example, right here you can notice how this frame is turned backwards instead of facing forward. You can take that specific frame and render it individually, outside the batch: put that image back into img2img with all your settings, even use a different batch count, and regenerate just that single frame that needs fixing. I'd say that's a better way to handle the framing, and afterwards you just take the frames and assemble the video.

So from my experiments I find this works very well for creating animations, to the point where I feel comfortable working on some more interesting animation projects that will probably be coming out soon. In the next video we're going to look at some of these preprocessors and models, what they're good for and what to use them for, because, for example, the depth map is very impressive and is great for photographers; I'll show you how and where you can use them as well. If you liked this video, give it a thumbs up, subscribe, and share it; please help my channel grow. I greatly appreciate all your support. Have a great day, and go create your work.
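As a closing note on the assembly step mentioned above: if you prefer to script it instead of using Camtasia or another editor, here is a minimal sketch that stitches the processed frames back into an animated GIF with Pillow. The directory name, output file and frame duration are assumptions; adjust them to wherever your batch run wrote its images.

```python
from pathlib import Path
from PIL import Image

FRAME_DIR = Path("stylized_frames")  # placeholder: the batch output directory
OUT_FILE = "animation.gif"           # placeholder output name
FRAME_MS = 100                       # ~10 fps; tune to taste

# Sort by file name, so keep the zero-padded numbering from the extraction step.
paths = sorted(FRAME_DIR.glob("*.png"))
frames = [Image.open(p).convert("RGB") for p in paths]

# Save the first frame and append the rest as an animated GIF.
frames[0].save(
    OUT_FILE,
    save_all=True,
    append_images=frames[1:],
    duration=FRAME_MS,
    loop=0,
)
print(f"Wrote {OUT_FILE} from {len(paths)} frames")
```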
Info
Channel: Vladimir Chopine [GeekatPlay]
Views: 227,337
Keywords: Geekatplay Studio, Vladimir Chopine, Digital art, How to, How to do compositing, AI art, MidJourney, Stable Diffusion, Dreambooth, Dall-e, Free resources, Free learning, Digital art for begginers, Free tutorials, artificial intelligence, Digital Photography, controlnet, animation
Id: EAXUInT70TA
Length: 15min 33sec (933 seconds)
Published: Sun Feb 19 2023