Stable Warpfusion Tutorial: Turn Your Video to an AI Animation

Video Statistics and Information

Captions
The videos you're looking at were all created with an AI tool called WarpFusion, and here is how it works: you give WarpFusion a regular video, tweak some settings, tell it what you're going for, and just like that it spits out a stylized output. Of course, it's a bit more complex than that, which is why in this tutorial I will show you how to use WarpFusion to stylize your own videos. I will share the key settings you need to change, as well as some tips and tricks to get really good results, and even though these videos look different, the main steps are pretty much the same.

Before we dive in, I wanted to mention that WarpFusion is a paid product, and I will leave a link below. Because it's still in beta, some settings may change, so before deciding which version works best for you, make sure you carefully read the update logs. In this video we will be using version 0.14, and I will leave a link to it down below. When you get there, click on the attached file to download the notebook, then navigate to Google Colab to run Stable WarpFusion: select File, choose Upload notebook, and select the file you just downloaded.

We're going to go over the settings in just a bit, but first it's important to note that you can run WarpFusion locally with your own hardware. It's a simpler option, and I have included a complete guide below. It's recommended to have an NVIDIA GPU with at least 16 gigabytes of VRAM. You can check yours by opening the Run command, launching dxdiag, and navigating to the Display tab. As you can see here, I have 8 gigabytes, and I know that this is not enough because I tried and kept getting out-of-memory errors, which is why I think it's better for me to go with the online method instead.

Let's save the notebook to our Google Drive: create a copy, give it a new name, then click here and select "Connect to a hosted runtime". By the way, if you're willing to put a little more money into this online option instead of upgrading your computer, you might want to consider getting a Pro membership; that way you'll have access to more resources.

To transform a video, click here to upload it. Keep in mind that the quality of your outputs will depend heavily on the video you choose: make sure the main subject is sharp and clearly separated from the background, and avoid videos with heavy motion blur. On top of that, videos with movement, patterns, and textures will generate more interesting elements in your animation, so keep that in mind. Both vertical and landscape videos should work just fine as input. I found a pretty good stock video that I would like to use for this tutorial, and I will link it down below.

We will also use an AI model to determine the look and style of our output. A very helpful blog post on Stablecog can teach you more about the best diffusion models, so be sure to check it out. For this project I will use a model called DreamShaper, which you can download through the link below. I have already created a folder on my Google Drive called "AI models", where I have uploaded DreamShaper along with some of my other favorite models. You can follow the same steps, and later I will show you how to load a model into WarpFusion.

Now let's continue setting up the notebook. Under Settings you can change the batch name. Look for the animation dimensions just below and make sure the aspect ratio matches that of your original video. For example, my video was 1080 by 1920, but I will use 720 by 1280: it has the same ratio at a smaller resolution, which will greatly reduce the processing time.
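If you want to sanity-check the resolution math for your own clip, here is a minimal standalone sketch (not part of the WarpFusion notebook) that scales a source resolution down while keeping its aspect ratio:

```python
def scaled_size(src_w: int, src_h: int, target_long_side: int = 1280, multiple: int = 8) -> tuple:
    """Return a smaller render size with the same aspect ratio as the source.

    Each side is rounded down to a multiple of `multiple`; Stable Diffusion
    models work on latents downsampled by a factor of 8, so dimensions are
    normally kept divisible by 8 (some notebooks prefer 64).
    """
    scale = target_long_side / max(src_w, src_h)
    w = int(src_w * scale) // multiple * multiple
    h = int(src_h * scale) // multiple * multiple
    return w, h

# The clip from this tutorial: 1080x1920 portrait, rendered at a smaller size.
print(scaled_size(1080, 1920))  # (720, 1280)
```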
Scroll down to the video input settings. Here you need to specify where your video is located, so right-click on your original footage, select Copy path, and paste it into the video init path field. Right below that, you can change the "extract nth frame" setting to 2 to make the AI process every other frame; this will create a jittery animation look, but it will also cut processing time in half. However, I want to keep my output smooth, so I will stick with 1. For video masking, enable the extract background mask option; this will allow you to choose later on whether to keep or remove the stylized look from the background.

Before we continue, I would like to say thank you to today's sponsor, Skillshare, for supporting this video. Skillshare offers thousands of classes for creative individuals, covering a wide range of topics such as AI, photography, and freelancing. I personally became interested in Skillshare because of its career-focused classes: as a full-time YouTuber, I need to stay on top of my productivity, time management, and personal branding game to keep up with a fast-paced industry. I took Ali Abdaal's productivity masterclass, Principles and Tools to Boost Your Productivity; drawing from his experience running multiple businesses, Ali's tips are valuable for professionals and students alike. He also shares book recommendations and thought experiments to help establish good habits, and I highly recommend this course for anyone looking to increase productivity. If you're working towards a big goal, like starting your freelance career or breaking into a new industry, it can be intimidating to figure out where to begin; starting small can take some of the pressure off, and Skillshare teachers can walk you through all the steps you need to hit those goals. I'm looking forward to watching more videos on Skillshare and have already added several courses to my watch list. I really think Skillshare is the best place to dive into new topics and expand your knowledge, and if you use the link below, you can explore their entire library for free for a whole month.

Next, scroll down to the "generate optical flow and consistency maps" cell and enable force flow generation. The model path is used to point WarpFusion to a specific checkpoint file; you can stick with the default one or point it to the DreamShaper model. Go back up to the "prepare folders" cell and click the play button to run it. You will be prompted to connect to your Google Drive; select your account, click Allow, and wait a few seconds for it to finish running. Once it's done, click here to refresh the folders list and you should see your own Google Drive files. Go to drive, then MyDrive, and find the "AI models" folder. Right-click on the DreamShaper checkpoint, copy the path, and paste it into the model path field.

Next, scroll down to the non-GUI cell. This is where you can change the default prompts, which are a series of words or phrases describing what the output video should look like. This is a very important step, and I have a video on Patreon about how to find really good prompts, so if you're subscribed, make sure you check it out. I've already written my prompts, which are meant to transform the dancer into the Statue of Liberty, and the negative prompts are there to tell the AI what kinds of elements or flaws to stay away from during the generative process.
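To give you an idea of the format, prompts in these notebooks are usually entered as a frame-indexed dictionary, where the key is the frame number the prompt starts applying from. The variable names and wording below are only an illustration, not the exact cell contents or prompts used in this video:

```python
# Illustration only: variable names differ between WarpFusion versions, and
# these prompts are example wording, not the exact ones used in the video.
# The key is the frame number the prompt takes effect from; 0 = first frame.
text_prompts = {
    0: ["a marble statue of liberty dancing, greek sculpture, intricate details, studio lighting"],
}

negative_prompts = {
    0: ["blurry, low quality, deformed hands, extra limbs, text, watermark"],
}
```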
Let's move down to the GUI cell. Once we run the notebook later, this cell will reveal a bunch of settings. You can choose a difficulty level here, and the harder it is, the more settings you will have to play with; for now, I'm going to stick with the default level. You can also use this path to link to a previously created settings file. A settings file is generated and saved to your Google Drive with every single run, and you can load it back in to reuse the same settings if needed. I will stick with the default settings for now and show you how to modify some of them manually in just a minute.

Right below, you will find the diffusion section. If you want to specify the exact frames that WarpFusion should start or stop at, you can use the frame range setting, and in case you run into any issues with your connection or accidentally stop the process, you can use the resume run feature to pick up exactly where you left off.

Last but not least is the video creation cell. One of my favorite settings here is "use background mask video": enabling this tells WarpFusion to apply the style to the subject only. You can choose to keep the original background, use an image, or change it to a solid color. This feature is super helpful if you want to try out different iterations on the subject and the background. Another awesome thing you can do is increase the resolution of your video by upscaling it after processing; for instance, you can take a 720p render to a higher resolution by setting the upscaling ratio to 2.

Now, as I mentioned earlier, there are a few more important settings that we need to tweak, and I'll get back to that in a minute. At this stage, however, we are ready to run the AI. To do so, go to Runtime and click "Run all" to execute all the cells. You will be able to see the cells running one by one before the video frames start processing. Keep in mind that some cells may take several minutes, so be patient and wait while it runs. In some cases you may encounter errors, and I have included some resources in the pinned comment below that may help you resolve such issues.

Once the run reaches the GUI cell, you will notice that a new list of settings has been loaded into an easy-to-use interface. From there, you can adjust your target prompts and other settings, and I will guide you through the steps to ensure that your changes are applied correctly. Right below that we have the "do the run" cell. Here you can preview your frames and decide whether you want to continue; if you're not happy with what you're seeing, you can always click here to stop the process. To make changes, simply head back to the settings UI. One tip I have is to add more keywords and negative prompts to give the AI a better idea of what you're looking for, and I've included the prompts I used in the description below for you to try.

Now I want you to pay close attention to the next settings: the style strength schedule and the CFG scale schedule have a significant impact on the output. Style strength determines how much the AI changes each frame compared to the original video, while CFG scale tells the AI how closely to follow the text prompt. After experimenting with these settings, I've come to the conclusion that using a scheduling format produces better results. For example, I encountered an issue where the prompt's presence in my video increased over time until the video became distorted, and I found that scheduling the CFG scale to decrease by three to five points between the first and second frames gives me better results. I had success with combinations such as 15 to 10 and 12 to 8, but keep in mind that this is not a magic fix; I recommend experimenting with these and other settings until you find a happy medium, as the sweet spot will depend a lot on your input.
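As a rough illustration of that scheduling idea (the exact syntax and setting names vary between WarpFusion versions, and the numbers here are placeholders, so check the notebook's own examples), a frame-indexed schedule might look like this:

```python
# Placeholder values for illustration -- the exact schedule syntax and
# parameter names depend on the WarpFusion version you are running.
# Idea: follow the prompt strongly on the first frame, then ease off so the
# prompt's influence doesn't keep compounding and distorting later frames.
cfg_scale_schedule = {
    0: 15,   # first frame: strong prompt guidance
    1: 10,   # frame 1 onward: roughly 3-5 points lower
}

style_strength_schedule = {
    0: 1.0,  # fully stylize the very first frame
    1: 0.6,  # later frames change less relative to the warped previous frame
}
```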
I usually enable mask guidance and set the seed to -1. I typically disable both fixed seed and fixed code, although I've had success with both enabled in certain cases, so again, make sure you experiment to find what works best for your video. One more suggestion is to try decreasing the flow blend schedule to 0.5. Once you've finished with that, you can go ahead and rerun this cell; just wait for the preview, and if you like what you see, simply let it run.

If you're wondering where to find the stylized output, head over to your Google Drive. You will see that an AI folder has been created for you; open it up and navigate to the Stable WarpFusion folder, where you will find another folder labeled images_out. This is where all of the batches you create will be stored. I named mine "tutorial", and here it is. Once you open your batch folder, you will find all of the stylized video frames inside, and every time you launch a new run, a run number is assigned and increments by one: the first run is labeled zero, the second one is one, and so on.

Once the run is complete, you can use the video cell to create the output animation. To speed things up, I will reset the upscaling ratio to 1 and then run the cell. Once it's finished, simply head over to your batch folder on Google Drive, open up the video folder, and here you will see the video that you just created. Now let's give the video creation another go, but this time let's enable the background mask. Give it a minute to process, and as you will see on your Google Drive, the style was only applied to the subject while the background remained untouched.

If you end up using WarpFusion to stylize your own videos, I'd love it if you tag me on Instagram or share them with us on Discord; you can use the "share your work" channel to show off your creations or get some feedback. You can get even more creative by combining WarpFusion with other tools like Luma AI to create videos like this one; if you're interested, check out my Luma AI tutorial to learn how to use it. Stay creative, and I'll see you guys in the next video. Peace!
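As a practical footnote on those exported frames: if you ever want to rebuild the animation yourself instead of rerunning the video cell, you can stitch the frame images together with ffmpeg, which is available in Colab. The folder path and frame pattern below are hypothetical placeholders, not WarpFusion's exact output names, so adjust them to whatever you see in your own batch folder:

```python
import subprocess
from pathlib import Path

# Hypothetical paths: replace with your actual batch folder on Google Drive
# and the frame filename pattern your run produced.
frames_dir = Path("/content/drive/MyDrive/AI/StableWarpFusion/images_out/tutorial")
output_path = frames_dir / "tutorial_manual.mp4"

# Assemble the PNG frames into an MP4.
subprocess.run(
    [
        "ffmpeg",
        "-y",                           # overwrite the output if it exists
        "-framerate", "30",             # match your source video's frame rate
        "-pattern_type", "glob",
        "-i", str(frames_dir / "*.png"),
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",          # widest player compatibility
        str(output_path),
    ],
    check=True,
)
```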
Info
Channel: MDMZ
Views: 300,962
Keywords: ai prompts, ai greek statue, roman statue dance, ai dance animation, how to create ai video, video to ai, warpfusion statue dance, warpfusion greek dance, warpfusion explained, stable diffusion, how to use warpfusion, controlnet, temporalnet, ebsynth, ai google colab, ai animation tutorial, ai animation tiktok, ai animation app, kaiber ai, ai marble sculpture
Id: tUHCtQaBWCw
Length: 13min 19sec (799 seconds)
Published: Sat Jun 17 2023