AI Animation in Stable Diffusion

Video Statistics and Information

Captions
Hey everyone, it's Sebastian Torres. I'm so glad to have you here. It's been an exciting week, and I'm going to show you how to use these LCM LoRAs to create these amazing animations with your own footage. This is the advancement we've been waiting a whole year for: animation is finally here in Stable Diffusion. Stick around to the end and I'll show you how to level up your Stable Diffusion game using DaVinci Resolve.

First up, let's add our prompt. Like always, I've got the name of a woman, and we've got the LCM LoRA already applied in this prompt. This is a picture I actually made in Blender, using the Human Generator add-on, I believe. Now we'll go into our settings. What I've been doing up until now is using Euler a or DPM++ Karras, but I actually installed the LCM sampler, which does not come stock with AUTOMATIC1111; I'll leave directions on how you can add it to your AUTOMATIC1111. It's just a winner, it's made for this sort of thing. With the sampling steps we'll go down to eight, which is great because it means we'll be able to generate things a lot faster, and that's really going to come in handy with this sort of image.

Now, I'm actually only interested in this portion of the image. Occlusion is a big problem with Stable Diffusion: if I don't include this part of the costume, it will think these yellow stripes are her hands for some reason, and you may even find that some generations actually turn the yellow stripes into her hands. The picture is actually 1920 x 1401, so we'll change our resolution to 1920 by 1401. I understand some people have computers that cannot generate images this large without it taking 45 minutes. For CFG scale we can go between 1 and 2, and our denoising strength, as always, is 1.

Now into ControlNet. For the first unit we'll enable Pixel Perfect, click on Tile/Blur, get rid of the preprocessor, and set "ControlNet is more important". We'll skip to the second
one: enable Pixel Perfect, and for this one we're actually going to choose TemporalNet, again with "ControlNet is more important". Now for the third ControlNet, ControlNet Unit 2: enable Pixel Perfect, go with SoftEdge, leave the preprocessor as it is, and "ControlNet is more important". I'm going to generate this and leave it running so we can see in real time how fast the generation is. Again, keep in mind that because I am doing such a large image, it will take a little bit longer than most other generations. Most people generate 512 by 768 images, so it tends to be a lot quicker; because I'm making VFX assets I need it to be a lot larger than 512, but if you're doing smaller images it will be a lot faster than this. Let's see what we get: 33.4 seconds. The leather actually looks pretty good, I like the detail in the skin, and the hair looks pretty decent; it's actually improved the image quite a bit. And you'll notice that it didn't change these into hands, which is fantastic. If I'm happy with that, I'll come down here and click the recycle button, which reuses the actual seed number from this image.

Right, so we'll go to Batch, copy our input path in here along with our output directory, and generate.

Now, in this second example I'm going to show you how to create the animated look you saw at the beginning of the video and in the teaser. We're going to need the Dark Gold Max model. This one isn't available on Civitai anymore, but there will be a link to where you can find it in the description. Along with that, I like to use the MoistMix VAE, which really heightens all the colors, and we'll use Clip Skip 2. We're going to need our prompt for this, and along with the prompt I also like to add these two statements; I find they help me get that cartoony, flatter look. We'll add our LCM LoRA (the 1.5 version), and we'll drop in
our image again. Let's come down to LCM, set the sampling steps to six, and we'll need 1920 by 1401. We'll reduce the CFG down to 1.5 and go to 1 for our denoising. We don't need a seed, because this one is going to give us a completely different result than what we had before. We'll come to our ControlNets: Tile with no preprocessor, "ControlNet is more important", Pixel Perfect; the second one TemporalNet, "ControlNet is more important", Pixel Perfect; then SoftEdge, "ControlNet is more important". And we'll generate, and now we'll be able to see the amount of time it takes to generate this one as opposed to the realistic one. This one was 29 seconds, so it's slightly faster because it has a lot less detail to work with, but the resolution is still pretty big. As you can see, at 1920 by 1401 it's actually pretty decent. As I was saying before, I'm not going to use the entire image, I just want this section here, but the good thing is that it gives me the ability later on, if I do this with a green-screen background, to actually reposition my character in the shot, which is great.

As I was saying, it does take quite a bit of time because I am rendering such large images. If you're doing smaller animations there's a good chance this will be a lot faster; it also depends on your computer. You can also go the route of doing smaller images and then upscaling and inpainting them to get better quality, but I'm finding that I actually get better results if I go straight off the bat at 1920 x 1080. Yes, it takes a little bit longer to render, but because I use 3D applications I know how long those renders take; if I have to wait 30 seconds for a frame, that's actually pretty quick, almost in the territory of using Eevee in Blender.

Now, we do have quite a bit of flickering in the white suit. I'm assuming that's partially because I
haven't actually trained a model with this character yet, but it could also have something to do with the character's clothing: it is kind of shiny, so I probably should have gotten rid of that shine. Still, I'm actually happy with the consistency; it's looking pretty good. There are obviously certain tricks we can use in DaVinci Resolve to deflicker these things, but keep in mind that if you don't have the paid version, you won't be able to use them.

So this is the original image. We have the character taking a helmet off, and I've actually gone in frame by frame and had to paint the face back in, because it was kind of destroyed; you can see how it glitches a little. So I figured a better way to do it would be to render out the same animation in Blender without the helmet. As you can see, the face is far more consistent than in the other one. And the good thing about Blender is that it can actually give me this as well, and we can use that to our advantage.

We'll take all three of them and create a Fusion clip, go into Fusion, and deactivate them one by one to see which one's which, because they're all just called "Media". Okay, this one is without the helmet, and this one is with the helmet. The one with the helmet, we actually want to plug in here. We'll come in here and turn the channel to luminance, so as you can see it's just giving us the helmet now. We can get rid of one of these Merges, we don't need both; we'll throw this on top and the other one beneath it, and now, as you can see, we have the full image. So when we go across, you'll notice that the face doesn't change. We do have quite a bit of glitching in the helmet, but that's purely because I didn't train a model with the helmet, so it's probably a good idea to do that. Go to our Inspector and you'll see we've got that wide image I originally wanted anyway; that way we've got that cinematic sort
of look. Now, being a visual effects artist, I'm used to working in layers, so I'm going to be using Stable Diffusion purely to give me the layers I need to create the final image. The idea that we can just click a button in Stable Diffusion and it'll give us exactly what we want? Unfortunately, we're not there yet. If you haven't already, don't forget to smash that subscribe button, because we're really going to be delving into this topic going forward. So guys, what do you think of this new method? I reckon it's still early days, but we have a lot to explore with it. Let me know in the comments down below: what are you eager to use it for? I want to use this for some live-action-looking stuff. I like the cartoons, don't get me wrong, but the live-action stuff, that's where I think this is really going to shine. If you like this video, stick around for the next one.
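The img2img settings walked through above (LCM sampler, 8 steps, low CFG, denoising 1, and three ControlNet units) can also be driven from script. This is a minimal sketch of the equivalent AUTOMATIC1111 `/sdapi/v1/img2img` payload, assuming the ControlNet extension is installed; the model and module strings below are illustrative, not the exact names from the video — check your own install's `/sdapi/v1/samplers` and `/controlnet/model_list` endpoints for the real values.

```python
import base64
import json

def controlnet_unit(module, model):
    """One ControlNet unit with the settings used throughout the video."""
    return {
        "module": module,           # preprocessor ("none" = no preprocessor)
        "model": model,
        "pixel_perfect": True,
        "control_mode": "ControlNet is more important",
    }

def build_payload(init_image_b64):
    return {
        "init_images": [init_image_b64],
        # LCM LoRA applied directly in the prompt, as in the video
        "prompt": "photo of a woman <lora:lcm-lora-sdv1-5:1>",
        "sampler_name": "LCM",      # added sampler; not stock in older A1111 builds
        "steps": 8,                 # LCM needs very few steps
        "width": 1920,
        "height": 1401,
        "cfg_scale": 1.5,           # the video stays between 1 and 2 with LCM
        "denoising_strength": 1.0,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    controlnet_unit("none", "control_v11f1e_sd15_tile"),
                    controlnet_unit("none", "temporalnet"),
                    controlnet_unit("softedge_pidinet", "control_v11p_sd15_softedge"),
                ]
            }
        },
    }

payload = build_payload(base64.b64encode(b"fake-image-bytes").decode())
print(json.dumps(payload["alwayson_scripts"], indent=2))
# To actually run it against a local web UI started with --api:
#   requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```

Batch mode in the UI does the same thing per frame; a script would simply loop this call over every frame in the input directory, reusing the recycled seed for the realistic look.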
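The "30 seconds a frame is actually pretty quick" remark is easy to sanity-check with some back-of-the-envelope arithmetic; the frame rate and shot length here are assumptions for illustration, not figures from the video.

```python
# Rough render-budget arithmetic for per-frame diffusion rendering.
seconds_per_frame = 30   # observed ~29-33 s per 1920x1401 frame in the video
fps = 24                 # assumed cinematic frame rate
shot_seconds = 10        # hypothetical shot length

frames = fps * shot_seconds
render_minutes = frames * seconds_per_frame / 60
print(f"{frames} frames -> {render_minutes:.0f} minutes of rendering")
# 240 frames -> 120 minutes of rendering
```

Two hours for a ten-second shot is indeed in the same ballpark as a modest Eevee render queue, which is the comparison being made.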
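The Fusion trick described above (switching the helmet clip's channel to luminance and merging it over the helmet-less render) is a standard luma-matte composite. Here is a small NumPy sketch of the same operation, assuming float RGB frames in [0, 1]; the random arrays stand in for the actual rendered passes, and the Rec. 709 luma weights are one common choice for a luminance channel.

```python
import numpy as np

def luminance(rgb):
    # Rec. 709 luma weights; bright pixels become opaque in the matte
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def merge_with_luma_matte(foreground, background):
    """Merge fg over bg using fg's own luminance as the alpha channel."""
    alpha = np.clip(luminance(foreground), 0.0, 1.0)[..., None]
    return foreground * alpha + background * (1.0 - alpha)

rng = np.random.default_rng(0)
helmet_pass = rng.random((4, 4, 3))   # render with helmet (bright = kept)
face_pass = rng.random((4, 4, 3))     # render without helmet (shows through)
out = merge_with_luma_matte(helmet_pass, face_pass)
print(out.shape)  # (4, 4, 3)
```

Wherever the helmet pass is black, the composite falls back to the clean, consistent face render, which is exactly why the face stops glitching after the merge.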
Info
Channel: Sebastian Torres
Views: 61,530
Keywords: stable diffusion, ai animation, ai art, stable diffusion tutorial, stable diffusion img2img, stable diffusion controlnet, stable diffusion animation, ai animation generator, how to animate with ai, ai videos, ai video editing, blender animation, stable video diffusion, stable diffusion anime to realistic, stable diffusion realistic vision, stable diffusion action scenes, ai action video, ai actors, ai artist, ai filmmaking, ai film industry, ai filmmaker
Id: 2mUEeeoA6G8
Length: 9min 45sec (585 seconds)
Published: Sun Nov 26 2023