FREE AnimateDiff Video2Video Workflow!

Video Statistics and Information

Captions
Making AI-styled videos can be fun, and with the power of my free workflow for ComfyUI, you too can make videos like these, all in just a few minutes. Before I get stuck in: if you need help with things such as installing ComfyUI or custom nodes, do check out the installation and more basic videos first. Those will get you used to the essentials of ComfyUI, and as usual the links are down in the video description. There's even a whole playlist for ComfyUI which takes you from zero to hero.

Right then, what sort of things can you do with this workflow? Basically anything, from plain face swaps to fully styled, uncanny-valley creations. You've got video input along with IP-Adapter styling, plus a bunch of settings to keep things controlled, or set free.

Let's start with a quick overview of what you get, as there are quite a lot of options. If I zoom out a bit, you'll see all the various coloured groups, most of which can be bypassed if you don't want to run that particular group. As for inputs, there are a few of them. Let's start here with the video input. This is the video you want to change, nicely placed here in the video input group. It's a good idea to test a small number of frames to start with, as it can take 5 or 10 minutes to render an entire video, and you can do that by setting a frame load cap. There I've got the value set to 16, which should give you a decent draft output just to see what it looks like. If you set that down to zero, it will render the entire video. Beneath that we've got the frame rate. That has a maximum of 25, which is due to the Load Video node above it, but you can go lower if you fancy. And of course, there we've got the image scale. That scales it to roughly Stable Diffusion 1.5 sizes. To run this with AnimateDiff you will need around 12 GB of VRAM. You can lower that a little bit and maybe reduce your VRAM requirements slightly, but obviously there is a lower limit.

Next to that you've got the Stable Diffusion checkpoint and VAE loaders. I've only tested this workflow with 1.5 models, so that's your best bet if you want to avoid issues.

For the IP-Adapter input group, you can pretty much pair whatever model you want alongside the Stable Diffusion 1.5 CLIP Vision model (not the SDXL one, of course; remember, I've only tested this with 1.5). As you can see, you've got a variety to choose from: Plus, the normal one, Light, or Face, which is the default here. Pick whichever one of those you want and you will get a different output in your video. Here are some examples using the IP-Adapter Face model. They're often weird and creepy, which is excellent, and just in time for Halloween. As the Face adapter focuses more on faces, it's good at making people in your source video take on aspects of your input image: expressions, and style such as 3D, anime, photorealism, cartoon, etc. All of those aspects will be incorporated into your output.

Now, down here I've got it bypassed, but there is a face cropping node as well, with a variety of different options to choose from for the face detection. So if you are using a face in there, make sure that this image crop gives you a proper face in that preview, because sometimes it doesn't quite work. For example, if you've got an anime face in there, you may need to select the anime face model for the detector to pick it up and crop it properly. You don't want to carry on running if you do get an incorrect crop, so what you can do is View Queue. Let's turn on the anime face model and run this, and we'll see if it gives me a little cropped-face preview that's any good. No, it's not. So what you do then is click cancel to stop it running, and change the model to something which will actually work. Or, of course, you can just bypass it altogether. Bypassing it altogether is very handy when you're not using the Face model, so if you're using the Plus, the normal, or the Light one. I've found the Light one is very good for things like styles and patterns, and you can use it for backgrounds as well. For example, maybe you'd want a waterfall in the background, so you put a waterfall in there, bypass that node (because it obviously hasn't got a face in it), and change the IP-Adapter to something like IP-Adapter Plus, the normal one, or maybe Light if you just want to style. So there are lots and lots of options you can get in there: use faces, patterns, styles, backgrounds, and just see what comes out.

The penultimate input is the AnimateDiff group. This is just like all the other AnimateDiff videos I've done, and chances are you'll not want to change any of these settings from the defaults.

The final input is also optional, as you can only use it for research purposes: the ReActor section. This will do a very basic, low-quality, face-only swap on your video, so it won't do any styling. You may wish to use this either before or after your original video, but do note the licence restrictions.

Up the top here are the ControlNets, and over to the right there's a background replacement section as well. If you're low on VRAM, you can disable these. I've also added an auxiliary ControlNet here, which is slightly different to that one in that you can pick from a variety of different preprocessors. So obviously make sure your preprocessor and model match up, whatever you're picking in there, if you do have it enabled. Remember, you can just bypass it by clicking bypass.

The optional background removal group can come in handy if you've got a green-screen video as your input and you want to use a different background later; maybe you've got a video editor and you're doing fancy stuff in that. It will work without a chroma-colour background as well, but of course the removal quality will vary. Here you can see some examples of removing the background. You could also maybe tweak it a little bit to output a green screen. It's all up to you as to what you want to achieve.

I've also given you some notes down here as to what you can expect, and a reminder of some of the things I've just been through. Basically, like it says down there, if you lower some of the settings then you can have it do more of its own thing, or you can tighten some of the settings and get it closer to the input video. So as you can see there, with a lower denoise, something like around 0.5, the output will be much, much closer to your input video, whereas if you set the denoise to 1 it will almost completely ignore that video. That is, unless you're applying the ControlNets. With a lower ControlNet strength it will ignore the video more and give you a better representation of the image you've provided over there in your IP-Adapter; up here you've got the ControlNet strength settings, and you can make those quite strong. Basically, the more you let it do its own thing, the more influence the input image and your prompts will have; the more you try to control it, the more like your input video the output will be.

Now, I've tended not to use any prompts at all in these. I'm just using the IP-Adapter, or perhaps single words to describe the image, such as waterfall, man, dog, or rodent. Obviously you would never use cat in your prompts. Just as an example, there it's got a denoise of 1, so that is nothing like the input video and it has just created kind of your standard AnimateDiff thing.

As a picture is said to paint a thousand words, I've created this little set of examples. Some more closely match the source video, and some go over 9,000. As you can see, it's entirely up to you how far you want to go. I know this is a fairly basic workflow, but hopefully it helps to inspire you just a little bit; maybe add some prompt scheduling in there, that sort of thing. Eventually I'll put this up on Comfy Workflows too, so when you click the Manager and go to the ComfyUI Workflow Gallery, search for Nerdy Rodent or something and it will be up there too. Remember, you can find links down in the video description, and if you're stuck on the basics, then do check out this next Nerdy Rodent video.
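On the image-scale step: the workflow resizes frames to "roughly Stable Diffusion 1.5 sizes". A minimal sketch of what such a resize typically involves, assuming a target short side of 512 and the usual multiple-of-8 dimension constraint for the VAE; `sd15_scale` is an illustrative helper, not a node from the workflow, and the workflow's exact scaling maths may differ:

```python
def sd15_scale(width, height, target_short=512, multiple=8):
    """Compute output dimensions so the shorter side lands near
    target_short, with both sides rounded to a multiple of 8
    (SD 1.5's VAE downsamples by 8, so dimensions must divide evenly)."""
    scale = target_short / min(width, height)
    new_w = round(width * scale / multiple) * multiple
    new_h = round(height * scale / multiple) * multiple
    return new_w, new_h

# A 1080p frame comes out at 912x512 under these assumptions.
print(sd15_scale(1920, 1080))
```

Keeping the aspect ratio while snapping to multiples of 8 is why outputs are only "roughly" SD 1.5 sized rather than exactly 512x512.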
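On the denoise setting: in img2img-style sampling, denoise effectively decides how much of the sampler's schedule actually runs on top of the input frame, which is why around 0.5 stays close to the input video while 1.0 almost completely ignores it. A toy illustration of that relationship; the helper name and the simple rounding are assumptions, not the sampler's exact internals:

```python
def steps_actually_run(total_steps, denoise):
    """Approximate how many sampler steps apply when denoise < 1.
    denoise=1.0 -> all steps run, the input frame is mostly discarded;
    denoise=0.5 -> only the last half run, so the output stays close
    to the input video."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(total_steps * denoise)

# With 20 sampling steps, denoise 0.5 leaves only 10 steps of change.
print(steps_actually_run(20, 0.5))
```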
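For the green-screen case, a crude chroma-key mask conveys the basic idea of background removal. Note this is only for intuition: the workflow's background-removal group is model-based (which is why it also works without a chroma-colour background), and `chroma_key_mask` with its colour-distance threshold is a hypothetical stand-in:

```python
import numpy as np

def chroma_key_mask(rgb, key=(0, 255, 0), threshold=120):
    """Return a boolean foreground mask for an HxWx3 uint8 frame.
    Pixels whose Euclidean distance from the key colour exceeds the
    threshold are treated as foreground; the rest (the green screen)
    are background to be replaced."""
    diff = rgb.astype(np.int32) - np.array(key, dtype=np.int32)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist > threshold
```

To "output a green screen" instead, as mentioned above, you would invert the idea: keep the foreground pixels and paint everything else pure green.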
Info
Channel: Nerdy Rodent
Views: 12,101
Keywords: nerdy rodent, ai, ai art, comfyui vid2vid, ComfyUI Video to video, ComfyUI video styler, How to style a video with COmfyUI, ComfyUI IPAdapter video, IPAdpater video2video, ComfyUI, Generative AI, How to make AI generated videos, AI video tutorial, guide to making AI generated videos, Ai animation how to, Ai animation Howto, AnimateDiff vid to vid, Vid2vid, AnimateDiff video, AnimateDiff, ControlNet, StableDiffusion AnimateDiff
Id: kJp8JzA2aVU
Length: 9min 20sec (560 seconds)
Published: Sat Oct 28 2023