AI Animation Tutorial: Animate your AI images with a consistent character

Reddit Comments

Awesome results, you're doing great work!

— u/Mocorn, Dec 14 2022

wonderful, thanks!

— u/illuminatiman, Dec 14 2022

Really cool guide, and very useful for just setting up SD on Colab in general. You've gained a subscriber.

— u/ErikT738, Dec 15 2022
Captions
Today we are not happy with static AI images: we are going to animate them. What's more, we're going to animate them on a trained model, which means you can create your own AI actor or actress and make your own music videos, or create your own TikTok influencer. But let's not get ahead of ourselves; this technology is very new and evolving, so it might not look perfect yet. We're right at the very start.

It all started with this humble Midjourney render. I took it into something called DreamBooth, which created a trained model, meaning I can now put this character into any pose or position. And we don't want to stop there: we want to animate this character, so that the consistency as well as the poses are a lot more dynamic.

To do this tutorial you need a driving video, which is a video of you performing some actions. You will also need a trained model. If you don't have one, you can just use the default Stable Diffusion, or you can use my trained model, which is available to download from my website for free. You know I'm good to you. I also have an alternative method of animation, which will be in the next video, but I want to show you both ways because they're really cool.

In this tutorial I'm going to be using Google Colab Pro, which lets me use a remote GPU that is far superior to my rubbish computer. What's also great about this method is that I can connect from my iPad and start creating animations from absolutely anywhere. This tutorial is also available in written format on the Prompt Muse website for free; all my resources are free. What I do ask is that you subscribe to this channel, like, and ring the notification bell; that helps me out massively.

The first method I'm going to show you is image to image. We're going to be using the AUTOMATIC1111 web UI, and you've probably seen a lot of these tutorials online.
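The tutorial assumes your driving video has already been split into individual frames. One common way to do that (a sketch, assuming ffmpeg is installed; the file and folder names here are placeholders):

```python
import subprocess

def extract_frames_cmd(video_path: str, out_dir: str, fps: int = 12) -> list[str]:
    """Build an ffmpeg command that splits a driving video into numbered PNG frames."""
    return [
        "ffmpeg",
        "-i", video_path,             # input driving video
        "-vf", f"fps={fps}",          # sample at a fixed frame rate
        f"{out_dir}/frame_%04d.png",  # zero-padded names keep frames in order
    ]

cmd = extract_frames_cmd("driving_video.mp4", "frames", fps=12)
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Note the frame rate you choose here: you'll want to reuse the same rate when stitching the output frames back into a video.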
Whereas most of those do it locally, I'm going to be doing it remotely, so let's get ready and do the first tutorial.

When you open the Google Colab notebook, this is what it looks like. The first thing you want to do is connect your Google Drive and log in. We connect our Google Drive by running this first cell here; when I say "run", you are just clicking this play button. It will ask you to connect to your Google Drive: click "Run anyway", connect to Google Drive, and log in. This connects your Google Drive into the file structure over here. If you click this file icon, refresh, and go to "content", you will see something called "gdrive": that's your Google Drive, and under "MyDrive" are all my saved files. I'm just going to close that for the time being. It has successfully connected, because I have a green tick.

Once you've got a green tick, you can move on to the next cell and just click play. This installs the AUTOMATIC1111 repo; it's essentially just installing everything you'll need to run. It is not installing anything on your PC; it's all remote, and once this session is over your Google Drive will disconnect and all this information will disappear. Once you've got your green tick, we move on to the requirements cell; again, just play that cell, and it will take a few seconds.

We move down to the model download section, and before we run this we just want to make a couple of changes. If you have not created a model and you don't have a file to upload, do not worry: we can just run Stable Diffusion as normal. You can use 1.5, or press the dropdown to select the latest version, 2.1. With version 2.1 you have different resolutions, 512 and 768, so pick whichever suits your project best.
Now, if you do have a model, or you're using my redhead .ckpt file, come down here to where it says "Path to ckpt". This is where we load in our redhead model file, which currently sits on my Google Drive; you can save yours to your Google Drive as well. Just click on this folder icon, navigate back to your Google Drive, and find the redhead .ckpt model file. If you're very neat with your folder structure, you could put it in your AI folder under "models", where they should technically all live, but I'm quite lazy with my hierarchy; shoot me. Press the three dots here, choose "Copy path", and paste that path in. You don't need to touch anything else; that's good to go. I'm going to hit run on that cell, and that will load in our model. Once it has run successfully, you'll get this text down here saying it's using the trained model, which is great.

The next section is "Start Stable Diffusion", and it's the last section before our UI is ready. I'm going to leave it on model version Stable Diffusion 1.5, and I'm going to use the Gradio server, so I'll check this checkbox here. That's it; we just hit play on that cell. One word of warning: this cell will continue to run, because it's the engine for our UI. Do not close this browser tab, because that will stop your UI running. Since the cell runs continuously, you will not get a green tick; what you will get down here, when it's finished loading, is a link to your local path or to the Gradio app where you'll be running the UI from. This takes a few minutes to complete, so go and grab a cup of tea, come back, and it will be ready. Once it's complete you'll get two links: you can run it on your local URL or on a public URL. Click on either link (I'm running it on the Gradio app) and it will load up your UI.
You might have seen this UI when people run it locally; it's pretty much the same. In the top left-hand corner we can see the model we're using: the redhead .ckpt is loaded in nicely. If you're not using a custom model, it will show Stable Diffusion 1.5 or 2.1, whichever you chose.

We're not going to be using text to image; we're actually using the second tab along, which is image to image, so click on that. Here is where we write our prompt, i.e. what stylization we want on our animation. First I'm going to load in the first frame of our animation; we're using our video split out into frames, so I'm just going to click here and select the first frame, which is this one. Then I'll write in my prompt. I've just written any old prompt here, but one of the most important parts is that I've put "painting of zwx person": zwx is the trigger for my model to create the redhead character I trained it on. Without that, it won't give me such a consistent character. You can put whatever you want in the prompt; if you're using a trained model, just remember the instance word you trained it on way back in DreamBooth.

The negative prompt is anything I don't want to see in the animation. I've just put the usual: blurry, blown out, dust, and blood. You can add maximalism or whatever else you don't want to see; pop it in here. It's already a negative, so don't write "no", just put the words you don't want to see.

So we've got our first frame. If we come down quickly and look at our parameters: first, the sampling steps. That controls how long each frame takes to render and how much quality and detail you get; the higher the value, the more detail and quality per frame, but the longer it takes to render that frame.
I like to go up to about 100, because I'm using a remote GPU and it can handle that, so let's go for 100. The sampling method is how your image is decoded; I personally like Euler a. You can have a go yourself and try different ones, but for this tutorial I'm using Euler a. The width and height set the dimensions of your output file. My input file is 448 by 768, if my memory serves, so that will also be the size of my output; they'll match, so there won't be any distortion.

Restore faces: I'm going to check the "Restore faces" box. If you go to the Settings tab at the top and click on it, in the middle column under "Face restoration" you can select a different facial restorer or load your own in. You can use GFPGAN, CodeFormer, or none at all, and you can control the weight of the facial restoration: zero being maximum effect, one being minimal effect. Sometimes the face restorers, especially on a trained model, can make the output look less like the model, so you want to find a nice balance. Click "Apply settings", then go back to your image to image tab and we'll continue with the parameters.

The batch count is how many images you generate in this batch; I'm going to create just one, though you can create multiple. The CFG scale is how much you want the output image to conform to the prompt: the higher the number, the more it conforms; the lower the number, the more creative the results. Denoising strength is another very important parameter. If you set it to zero, nothing changes: your output will look exactly like your input, and we don't want that. You want a nice medium; I think 0.5 is usually right. You can go a bit lower, but if you go too high, I think it takes away from the animation.
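For reference, the same settings can be driven programmatically: the AUTOMATIC1111 web UI exposes an HTTP API when launched with the `--api` flag. A minimal sketch of an img2img request mirroring the parameters above (values not stated in the video, such as the CFG scale, are illustrative; the image placeholder and local URL are assumptions):

```python
import json
import urllib.request

def build_img2img_payload(frame_b64: str, seed: int = -1) -> dict:
    """Mirror the tutorial's UI settings as an img2img API payload."""
    return {
        "init_images": [frame_b64],          # base64-encoded input frame
        "prompt": "painting of zwx person",  # zwx = DreamBooth trigger token
        "negative_prompt": "blurry, blown out",
        "steps": 100,                        # sampling steps
        "sampler_name": "Euler a",
        "width": 448,
        "height": 768,
        "cfg_scale": 7,                      # prompt adherence (illustrative value)
        "denoising_strength": 0.5,           # 0 = copy input, 1 = ignore input
        "restore_faces": True,
        "seed": seed,                        # -1 = random; fix it for consistency
    }

payload = build_img2img_payload("<base64 frame>", seed=1234)
# req = urllib.request.Request("http://127.0.0.1:7860/sdapi/v1/img2img",
#                              data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# result = json.load(urllib.request.urlopen(req))  # result["images"][0] is the output
```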
I find 0.5 is a nice balance here, but have a play around and see what you like; the denoising combines a little of the input and merges it with your model as well as your prompt.

Now we come down to the seed. Minus one means it will re-roll a new seed each time; if you've got a seed you want to use, you can put it in here, but it doesn't matter yet, because first we're just going to see if we can get an image we like. Once we generate a first frame we like, we'll save that seed and reuse it, either with this button or by copying and pasting it in.

With all that done, we generate one frame and see if we like the results. This is the result of our parameters and our prompt, and it looks quite good. Down here you can see the seed, and this button will reuse it, popping the seed for that image into the field. If you hit generate again, it will generate the same image, which is what we want for our animation. If you don't like the image, change your prompt or parameters, set the seed back to -1, and regenerate. What I'm going to do now is load in another frame and make sure the result is consistent. I'll click on another frame (this is not a very dynamic animation; I'm sure yours will be a lot better) and hit generate again. That uses the same seed, so hypothetically it should match, and it does; it looks great and very consistent with the first frame. Pick a couple more frames and try them out.

Once you're happy with the overall output, head over to the batch image to image tab. This is where we set up the input and output of our animation. The input directory is the folder of frames we're feeding in.
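Conceptually, the batch tab just repeats the single-frame step over a folder, in order, with the same seed and prompt. A sketch of that loop (the `process` callback stands in for whatever sends one frame through img2img; directory names are hypothetical):

```python
from pathlib import Path

def batch_frames(input_dir: str, output_dir: str, process) -> list[str]:
    """Apply `process` to every frame in input_dir, in filename order,
    and save the results under output_dir with the same names.
    Sorted order matters: frames must stay in sequence to remain an animation."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for frame in sorted(Path(input_dir).glob("*.png")):
        result = process(frame.read_bytes())  # same seed/prompt on every call
        (out / frame.name).write_bytes(result)
        written.append(frame.name)
    return written
```

In the UI, the fixed seed you saved earlier is what keeps the frames consistent; here, `process` would embed that seed in each request.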
your   first stable diffusion tab over on your browser  let's open up the Google drive to get our input   files I've already made a folder on my Google  Drive with my frames in it so I'm just going   to expand that and these are all my frames I'm  going to press the three dots and copy path and   come back to my stable diffusion and then just  paste that path into the input directory so it   knows where to look for those frames now if you  want to create an output folder go back to my   Google Drive and let's say I'm just going to put  it in out and then click on the free dots copy   path and then go back to to your stable diffusion  and paste that into your output folder super easy   and your settings are all carried across from your  previous image to image and all you need to do now   is press generate it will now save those framed  into your Google Drive so I just took my output   files and imported them into after effects  and compiled everything together and remove   the background and this is what I got and then  the next test I did was a low resolution about   lighting video of my face just to see what the  model looked like and I guess when you guys come   around to it you would have a much better setup  than I did so you can see what is achievable in   a few minutes worth of work it's pretty cool so my  conclusion to this video is using my technique of   using a model and then putting it through image to  image and controlling it with prompt and specific   parameters you get a really nice animation  now there are a few artifacts and I've got a   way to get rid of them you may have heard of this  program called EB synth where you can simply run   the first frame of your input which was this Frame  and then the first frame of your output which is   this Frame and run it through EB Sim you get rid  of those artifacts in the animation now you can   cop this all together in After Effects and get  a really really good outcome and I would love to   see what you 
I would love to see what you all create, because you're going to do something way more creative than I have. Thank you so much for watching this video, and yeah, that will do it. Until next time, goodbye!
Info
Channel: Prompt Muse
Views: 447,474
Id: 3wQBsFftbv8
Length: 14min 40sec (880 seconds)
Published: Mon Dec 12 2022