Learn AI Animation from Scratch: Detailed ComfyUI AnimateDiff Workflow Tutorial (AnimateDiff V3, Flicker-Free AI Video Restyling, Silky-Smooth Animation)

Captions
Today I will introduce the most complete and detailed AnimateDiff workflow. Even if you are a novice, just follow along with me step by step and you can get AnimateDiff figured out. The video is twenty-odd minutes long, so I suggest you save it first; that way you can pull it up again if you forget something. AnimateDiff has also released its latest v3 version in the past few days; you can check the author's notes on GitHub and then jump over to Hugging Face to download it. Here are all the workflows I have built for AnimateDiff, and I will share the complete workflow file with everyone. Next, let's look at how this workflow is built, piece by piece.

Following our usual habit, first we add a KSampler. From the checkpoint's CLIP output we pull out a positive prompt encoder (CLIP Text Encode) and then a negative one; positive and negative prompts are the same units we commonly use in the WebUI. We connect the checkpoint's MODEL output to the KSampler's model input. Then we pull out an Empty Latent Image, which holds the size settings, and add a VAE Decode; for the VAE we load the default vae-ft-mse-840000 model. On the image output we attach a Preview Image node. With that, the basic text-to-image workflow is built. We paste a prompt into the positive window and a basic negative prompt into the negative window, then run a single picture first. OK, the picture looks fine.

With the basic text-to-image output working, we start adding our AnimateDiff model. We know that all motion modules load between our big checkpoint model and the sampler. So we right-click, find AnimateDiff, and see there is an AnimateDiff Loader inside. We load it, connect the model line from the checkpoint into the AnimateDiff loader, and then from the loader on to the KSampler. That gives us the basic connection. I color the new node red; the remaining three input slots on it will be discussed later. Here we switch the motion model to our latest v3 model. Because we are making an AnimateDiff animation, we set the latent batch size to 16. Then under Video Helper Suite we find and add a Video Combine node. The frame rate in Video Combine is 8, and the total frame count we gave is 16, so the video AnimateDiff generates is a 2-second clip. Let's run it and take a look. The video has been generated: the whole motion effect is there, and the image quality is still quite good.
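For readers following along outside the GUI, here is a minimal sketch of this same graph in ComfyUI's API format, submitted to a local server over its /prompt endpoint. The core node classes (CheckpointLoaderSimple, CLIPTextEncode, EmptyLatentImage, KSampler, VAEDecode) are standard ComfyUI; the ADE_* and VHS_* class names, the exact input names on them, and every model filename are assumptions based on my own installs of AnimateDiff-Evolved and Video Helper Suite, so verify them against yours. The tutorial itself builds all of this in the GUI.

```python
# Minimal sketch, assuming a local ComfyUI server and the custom-node class
# names noted above. Filenames are placeholders; swap in your own models.
import json
import urllib.request

graph = {
    # Checkpoint -> MODEL / CLIP / VAE
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_checkpoint.safetensors"}},
    # Positive / negative prompts, both encoded with the checkpoint's CLIP
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "1girl, masterpiece, best quality", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "(worst quality, low quality:1.4)", "clip": ["1", 1]}},
    # batch_size is the frame count: 16 frames at 8 fps = a 2-second clip
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 16}},
    # AnimateDiff motion module sits between the checkpoint and the KSampler;
    # inputs trimmed to the essentials, your install may expose more slots
    "5": {"class_type": "ADE_AnimateDiffLoaderWithContext",
          "inputs": {"model": ["1", 0],
                     "model_name": "v3_sd15_mm.ckpt",
                     "beta_schedule": "sqrt_linear (AnimateDiff)"}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["5", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    # Video Helper Suite: assemble the 16 decoded frames into one video
    "8": {"class_type": "VHS_VideoCombine",
          "inputs": {"images": ["7", 0], "frame_rate": 8, "loop_count": 0,
                     "filename_prefix": "animatediff", "format": "video/h264-mp4",
                     "pingpong": False, "save_output": True}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```

Changing EmptyLatentImage's batch_size is what moves the frame count from 16 to 48 in the next step of the video.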
Then let's set this 16 to 48 and run again to look at 48 frames; that should give a video of 6 seconds in total. But it reports an error: the current motion module only supports 24 or 32 frames, and we gave it 48, so it can't handle that. Is there a hard limit on length? Actually it can be solved. Let's look at the second slot: we pull out a Uniform Context Options node and drop it onto the loader. Then we run the 48-frame render again. Because the run takes a while, I sped the recording up, roughly 6x. You can see in the background, watching GPU performance, that VRAM usage is only 6.3 GB; it's said that with just 8 GB of VRAM you can run AnimateDiff. We also color this Context Options node red. OK, now our 48-frame animation is out. You can see there is real movement across the whole clip, and the effect is quite strong. From here on, to keep the demonstration quick, I set everything back to 16 frames, just two seconds of video, so renders finish faster.

Let's look at the third slot. This one is for motion: a Motion LoRA loader. It has eight motions in total: pan up, down, left, and right; zoom in and zoom out; and rotation, clockwise and counterclockwise. You can also control their strength. OK, let's run again; here I gave it a zoom-out LoRA at 16 frames, and we can see the result: the whole zoom-out motion comes through. Here I also changed the output format to MP4, because the GIF format shows some color banding. And I colored the Motion LoRA loader red as well. So now we have a basic text-to-video workflow; the AnimateDiff workflow is ready. Make a group and give it a name; we'll call it "text-to-video animation", OK.

We hold Ctrl and box-select the group, Ctrl+C, then go to a blank space and Ctrl+V to copy it, and drag the whole thing into place. Pay attention here: I removed all of the shared connection points, so the several workflows won't all run at the same time in the same place. It makes things look a little messier, but it's not a big deal.

Now let me show you how to connect an LCM model or other LoRA models. Load a LoRA Loader node and connect it between the checkpoint and AnimateDiff, and at the same time route the CLIP through the LoRA. Then we give it a LoRA model, chosen at random, and lower the strength; this is how we load a normal, everyday LoRA. And here, because I want to use LCM, I load another LoRA node for the LCM SD1.5 LoRA and choose lcm-lora-sdv1-5; this is a very, very fast model. We connect that node on to our AnimateDiff loader, adjust the CLIP connections, and tidy the ordering of the whole workflow so it reads more clearly. OK, because we are using LCM models now, we have to go into the KSampler and set the steps to 4 to 8 and the CFG to 1.4 or 1.5. Let's run and see the effect: a 4-step render is obviously much faster than 20 steps, and you can try it in actual use. The animation comes out, but the quality is not great, a bit broken; you can try adjusting the big model, the LoRA model, and the related parameters, and you can still get a better result, so I won't demonstrate that here. We group this second workflow as well, then right-click the group, find the title option, and change the name to LCM. OK, this is our second workflow today; everyone can try it out themselves. We also reconnect the output connection point.
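Continuing the API-format sketch from above, the LoRA chain would look something like this. LoraLoader is a core ComfyUI node; the two LoRA filenames and strength values are placeholders, and switching sampler_name to "lcm" is my own assumption rather than something shown in the video, which only changes steps and CFG.

```python
# LoRA section only, continuing the graph sketched earlier. Two chained core
# LoraLoader nodes: an ordinary style LoRA first, then the LCM LoRA for SD1.5.
lora_nodes = {
    "10": {"class_type": "LoraLoader",            # everyday style LoRA, strength lowered
           "inputs": {"model": ["1", 0], "clip": ["1", 1],
                      "lora_name": "my_style_lora.safetensors",    # placeholder
                      "strength_model": 0.6, "strength_clip": 0.6}},
    "11": {"class_type": "LoraLoader",            # LCM LoRA stacked on top
           "inputs": {"model": ["10", 0], "clip": ["10", 1],
                      "lora_name": "lcm-lora-sdv1-5.safetensors",  # placeholder
                      "strength_model": 1.0, "strength_clip": 1.0}},
}
graph.update(lora_nodes)
graph["5"]["inputs"]["model"] = ["11", 0]   # AnimateDiff now loads the LoRA'd model
graph["2"]["inputs"]["clip"] = ["10", 1]    # prompts use the LoRA'd CLIP, as in the video
graph["3"]["inputs"]["clip"] = ["10", 1]
# LCM sampling settings from the video: 4-8 steps, CFG around 1.4-1.5.
# The "lcm" sampler choice is my own habit, not stated in the tutorial.
graph["6"]["inputs"].update({"steps": 4, "cfg": 1.5, "sampler_name": "lcm"})
```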
OK, let's copy the first basic workflow, paste it, and activate it again, and let me introduce everyone to the time travel function (prompt travel). We double-click a blank area of the canvas and type "batch prompt"; we find the Batch Prompt Schedule node and open it. This is the node that does time travel. We delete the positive prompt encoder directly and replace it with this node, connect it to the positive side, and leave the negative side alone. Here I also delete the motion LoRA, so we get a quicker, cleaner render.

How do we use this time travel function? Let's take a look. First I turn this node green. Here I will use 36 frames to demonstrate, so change the frame count directly to 36 and delete all the extra example keyframes. When using it, we only need to put the instruction we want at each time keyframe, that is, type in the prompt for that frame. In the first (base) prompt window I add the normal prompt. Then in the time travel schedule: at frame 0 we give it "closed eyes"; at frame 12, "open eyes"; at frame 24, "head up"; and at the last keyframe, frame 36, "head down". With that, our time travel prompts are all entered. We are still using the latest v3 model. We still have to reconnect the video output here. And I forgot to adjust the latent frame count to 36; it was still using the earlier 16. Looking at the result, you can see it does have an open-and-closed-eyes effect, but with only the 16 frames from before there is just a single blink, so we adjust the frame count to 36. We can also see the GPU is still only at 6.3 GB. Now AnimateDiff runs out a clip whose overall action is quite rich, though the eyes opening and closing are not as distinct. You can try this time travel function yourselves; I will share the whole workflow, and you can experiment with it later. Then we build a new group and give it a name; we'll call this one Time Travel.
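For reference, here are the four keyframes above written out in the schedule syntax that, I believe, FizzNodes' Batch Prompt Schedule expects: quoted frame numbers mapped to prompts, comma-separated. Treat the exact syntax as an assumption and compare it with the node's default placeholder text.

```python
# Prompt-travel schedule from the video, in (what I understand to be) the
# FizzNodes Batch Prompt Schedule syntax. Shared tags such as quality
# keywords go in the node's pre_text field so every keyframe inherits them.
schedule_text = '''
"0":  "closed eyes",
"12": "open eyes",
"24": "head up",
"36": "head down"
'''
# Remember to set the node's max_frames and the Empty Latent Image batch_size
# to 36 as well; with only 16 frames, just the first keyframes render
# (in the video this shows up as a single blink).
```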
OK, let's turn the output node off first, copy and paste the text-to-video workflow again, and give it a group. This is our fourth workflow today; let's change the name to Upscale. That is, this is about what to do after our video comes out: how to enlarge it. It is actually similar to the text-to-image case, and there are many ways to upscale; here I will introduce my own way. Pay attention: inside the image category (the image being the final decoded output) we find Upscale, and then this SD Upscale node. First we give it the pixel input. We also find another Video Combine; this is the video output after upscaling, and we set its output format to MP4. As for the model, we connect the last AnimateDiff model. Then from the image side we pull out another node and add a Load ControlNet Model node, and give it the sd15 tile model; this "tile" model is essentially a patch-by-patch upscaling companion. To its right we connect an Apply ControlNet node: the image input of the ControlNet connects to our decoded frames, and the conditioning output on the right connects to our upscaler. The VAE we still connect to the previous VAE node, and the image decoded from that VAE also connects to the ControlNet's image input below. Finally, from an Upscale Model loader we give it a model; here I will use a 4x model. With that, the whole upscale chain is connected. The point is that this upscale model is used together with the tile ControlNet, so that the final result stays consistent with our original image. We can see the frames we get are all the same character, with exactly the same angle and motion, but the overall color of the picture and its fine detail have both improved. We group the whole upscale section and give it a color to distinguish it. OK, this is our upscaling workflow, using the upscale model and the tile ControlNet at the same time.

OK, let's copy the base workflow one more time. This one is ControlNet full control. We change the group name to ControlNet Full Control. First we take out the shared connection points, and we also delete the output nodes for the time being. Then we right-click to find ControlNet and add a Load ControlNet Model node; the model we give it is a lineart model. Then we right-click and connect an Apply ControlNet node. From the KSampler side we handle the conditioning: we connect the positive and negative outputs of the text encoders into the ControlNet apply node, and its positive and negative outputs back into the sampler. Then we make a connection between the ControlNet's image input and the frames of the video we load. The model input of the KSampler connects to our previous AnimateDiff model. What about the latent? We pull out a node (I use a reroute point here) and connect it back to the video frames we loaded in front. Then we load a VAE; for the VAE we still connect the previous 840000 model. The last node after the ControlNet chain is again a Video Combine. Then on this ControlNet node there is a strength ("full weight") control; with this value we can adjust how softly or strictly the lines constrain our whole output. OK, let's run and try. Here I turned the whole ControlNet group green. OK, we see the video produced through the ControlNet: the whole expression and action and the character's shape are the same as the source, just re-rendered through that line softening into a somewhat different look. You can also swap the ControlNet model yourselves, for example an HED (soft edge) model or an OpenPose model, to do a video redraw.
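Here is a sketch of just this ControlNet branch in the same API format as before. ControlNetLoader and ControlNetApplyAdvanced are core ComfyUI nodes; the VHS_LoadVideo class and its input names are my assumption from Video Helper Suite and may differ by version, and the filenames are placeholders.

```python
# ControlNet branch for the vid2vid ("full control") workflow, continuing the
# earlier graph. Assumes VHS_LoadVideo from Video Helper Suite; verify its
# required inputs against your install.
controlnet_nodes = {
    "30": {"class_type": "VHS_LoadVideo",      # source video to redraw (placeholder name)
           "inputs": {"video": "input.mp4", "force_rate": 0,
                      "frame_load_cap": 16, "skip_first_frames": 0,
                      "select_every_nth": 1}},
    "31": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_lineart.safetensors"}},
    # strength is the "full weight" dial from the video: lower it to soften
    # how strictly the lineart constrains the output
    "32": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                      "control_net": ["31", 0], "image": ["30", 0],
                      "strength": 0.8, "start_percent": 0.0, "end_percent": 1.0}},
}
graph.update(controlnet_nodes)
graph["6"]["inputs"]["positive"] = ["32", 0]   # sampler now takes the controlled conditioning
graph["6"]["inputs"]["negative"] = ["32", 1]
```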
Next, let me introduce the sixth AnimateDiff workflow. This one has also been very popular recently: the IPAdapter workflow. Let's combine IPAdapter with AnimateDiff and look at its effect; it is actually a bit like a video-restyle result. Let's take a look. Here we first connect the output node, Video Combine. Then we reconnect a model line from the checkpoint, search for IPAdapter, load an Apply IPAdapter node, and make a connection with our AnimateDiff model. Then we pull out its first input and add the IPAdapter model itself; there is a full model and a face model, and let's use the first one. Then we add a CLIP Vision loader and select the default model first. Then we load an image with a Load Image node; this image could actually be pulled straight out of a text-to-image workflow, but here we directly use a good image we already chose. So our IPAdapter workflow is built. Let's run it and take a look. Because this one is also on the slow side, I will speed up the recording here. You can see that the AnimateDiff video and our loaded image are still very similar in overall style, the same grand feeling for the character. Because I am using a cartoon model here while the loaded image is a picture of a real person, there will be some differences, but you can see the overall character is still very similar. Of course, you can adjust the IPAdapter model and try changes yourself, and adjusting the big checkpoint model can make some changes too.

OK, those are the six AnimateDiff workflows I mainly brought you today: the basic text-to-video, LoRA and LCM usage, the time travel node, upscaling, ControlNet full control, and the IPAdapter node. I believe that if you follow along and build them, you will find AI animation really is that simple. The models and plugins used today can be downloaded via the link in the video description below, and the entire AnimateDiff workflow file will also be shared with everyone. If this video helps you, please give it a like, comment, and subscribe. See you next time.
Info
Channel: AI Artistry
Views: 2,201
Keywords: aivideo, aianimation, Stable Video Diffusion, Stable Diffusion, Stable Diffusion Animation, SVD, webui, comfyui, ComfyUI, animatediff, Animate-diff, Fooocus, Deforum, Ebsynth, TemporalKit, mov2mov, XTTS, controlnet, lora, LCM, Krita, AI cover, faceswap, roop, rope, facefusion, Reactor, deepfake, AI重绘, AI转绘, AI实时, AI动画, 动画, 转绘, 重绘, tutorial, IPAdapterplus, IPAdapter, SDXL, so vits svc
Id: -4lrAALLdnI
Length: 22min 54sec (1374 seconds)
Published: Fri Jan 12 2024