Creating an uber-realistic video animation from an avatar with Stable Diffusion

Video Statistics and Information

Captions
Guys, today I want to show you how to create a 3D avatar and use Stable Diffusion (the AUTOMATIC1111 web UI) to turn it into a hyper-realistic video animation. We will also need Mixamo and Blender, but everything will be explained in detail in this video. Okay, now let's jump right into it.

There's a free website called Ready Player Me where you can create your avatar. Just create a free account and log in, and after a few seconds you will see the starting page. I've already created a few avatars before, but I'm going to guide you through the process of creating a new one. Click on "Create new avatar", then again "Create a new avatar", choose the body type, and then you can take a photo of yourself, pick a file, or just continue without one. So let's quickly take a photo with our webcam and hit accept; after a few seconds your new avatar will be ready. Well, it surely looks better and younger than I am, but overall it's quite okay, so let's use it. You can change any aspect of the avatar, but I will just give him some different clothing. Okay, that looks good to me. Now press "Enter Hub" and wait a few seconds until your avatar has been created. Still, I think I'm going with the girl, because it will be a dancing animation and she looks much nicer. Click on the three dots and choose "Download avatar .glb", a 3D format that can be used in many different kinds of 3D software. We are going to use Blender, which is free, to work with the model.

Now let's rename the model to make things easier, then switch over to Blender and start a new project. Let me quickly turn on Screencast Keys so you can see what I'm typing on my keyboard. Left-click on the default cube and hit X to delete it, then import the .glb file with our avatar: File > Import > glTF 2.0 (.glb/.gltf), go to your Downloads folder and open the file. Here we are. Let's switch to rendered mode, and as you can see, the model has all its colors and materials.

In the next step we need to export this file as an FBX, load that FBX into Mixamo, and create our dancing animation. As the path mode choose "Copy" and click on the little box next to it (embed textures), then save the file into the Downloads folder. Also save the Blender file and keep it open, because we will need it again later.

Now open the Mixamo website (link down below), create a new account if you haven't got one, and log in. Next, upload the FBX file we just exported from Blender: click "Upload Character", select the female FBX file and open it. When the file has loaded you will see a quick preview; click "Next" and your avatar will be available in Mixamo for animation. For some reason the textures and colors are missing, but we will fix that later in Blender. Now let's search for a dancing animation; I think I'll go with the hip-hop one. Hit the download button, export as FBX with skin, and choose 24 frames per second, because that means a little less rendering time. Then hit download.

Back in Blender, go to File > Import > FBX, open the Downloads folder and import the hip-hop dancing file. Hide the old avatar by shift-clicking the eye icon, and then let's fix the missing materials. First rename the animation to "female", then click on the little triangle left of the name so you can see the meshes. Click on the first mesh and open the material properties below: you can see the material name is the "Wolf3D" material with a ".001" suffix. Change it to the matching original "Wolf3D" material, which is the material of our original avatar, and repeat this process for every single mesh.
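If you'd rather script the Blender part of this step than click through the menus, here is a minimal bpy sketch of the same glTF-in, FBX-out roundtrip, written against the Blender 3.x Python API. The file paths are placeholders, so adjust them to your own Downloads folder:

```python
# Run from Blender's Scripting workspace (Blender 3.x API).
import bpy

# Delete the default cube (the click-and-X step)
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# File > Import > glTF 2.0: bring in the Ready Player Me avatar
bpy.ops.import_scene.gltf(filepath="/Users/me/Downloads/female.glb")  # placeholder path

# File > Export > FBX with path mode "Copy" and embedded textures,
# which is what Mixamo needs for auto-rigging
bpy.ops.export_scene.fbx(
    filepath="/Users/me/Downloads/female.fbx",  # placeholder path
    path_mode='COPY',
    embed_textures=True,
)
```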
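The per-mesh material fix can also be scripted. This is a sketch under the assumption that the re-imported Mixamo meshes received duplicate materials with a ".001" suffix while the originals from the .glb avatar are still present in the file; check the actual material names in your own scene before running it:

```python
import bpy

# Swap every duplicated ".001" material slot back to the original
# material that came in with the .glb avatar.
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    for slot in obj.material_slots:
        mat = slot.material
        if mat and mat.name.endswith(".001"):
            original = bpy.data.materials.get(mat.name[:-4])  # strip ".001"
            if original:
                slot.material = original
```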
Now shift-click the eye icon next to the original avatar, right-click on it and select "Delete Hierarchy", which deletes the old avatar, because we don't need it anymore. Drag up the window with the timeline in the lower part and select the new avatar; you will see the animation down below. The animation is about 522 frames long, so let's write 522 into the end frame of our render range. Save the file again and hit play to preview the animation.

Now we are done with this part, but we still need some background to guide our Stable Diffusion render later on. The easiest way is to download a free 3D model, and I recommend Sketchfab, where they have tons of 3D models free for download; I will leave a link down below. You may need to create a free account before downloading. Let's search for, say, a city scene: click "Downloadable" and choose any free license, so just uncheck "Standard" and "Editorial". There's a nice city scene, a small French village with a little café. Let's download that one: click "Download 3D model" and choose glTF as the format. Once it's downloaded, go to the Downloads folder and double-click the zip file to unpack it.

Back in Blender, import the scene: File > Import > glTF 2.0, open the city scene folder and click on the scene.gltf file. You can see it's way too small, so we have to scale it up: hit S on your keyboard and move the mouse until it has the right size. Use the buttons on the left side for moving, resizing, and rotating; I'm just going to speed up the process a bit until I have the right perspective. Okay, I think that's good.

You can see that we now have two cameras in the scene, so let's delete the second one. Then split the main screen by moving the mouse between the editors until you see a plus sign and dragging it to the left side. The reason we do this is that I want to set up some camera movements before rendering the scene: one window will show the main view, and in the second window we go to the camera view by clicking on the camera icon.

Now let's drag the camera to its initial position. Click on the little camera icon on the right of the screen so you can see the camera settings. Zoom out from the main view a bit until you can see the camera, then drag and rotate it until you have the position you want; you can see the camera view in the left window. Click on the little arrow on the right side of the main window to open the Item tab. Hover over the location values and hit the I key on your keyboard, and do the same for rotation. This inserts a keyframe for location and rotation at the current frame of our animation; you can see a little yellow diamond in the timeline down below. Now drag the playhead in the timeline to frame 100, move the camera to a new position, and insert another keyframe for location and rotation by hitting the I key. You can repeat this process at other points in the timeline until you're happy with your camera movements.

The last thing to do in Blender is rendering the scene. Select the little printer icon on the right side (Output Properties), scroll down and choose an output folder; let's just call it "blender render" in our case, and this will be the folder where our animation is saved. Now click on the little camera icon (Render Properties), turn on ambient occlusion, bloom, and screen space reflections, save the file just to be sure, and then go to Render > Render Animation. This will take a little while, and I'll be back when the render is finished.
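The timeline, camera keyframe, and render steps can be condensed into another bpy sketch. The camera name, positions, and rotations below are placeholders for whatever you set up interactively, and the Eevee toggles match Blender 3.x property names:

```python
import bpy
from math import radians

scene = bpy.context.scene
scene.frame_end = 522  # length of the Mixamo hip-hop animation

# Keyframe the camera at frame 0 and frame 100 (the I-key steps)
cam = bpy.data.objects["Camera"]  # assumed name of the remaining camera
cam.location = (4.0, -6.0, 1.7)                       # placeholder start position
cam.rotation_euler = (radians(85), 0.0, radians(35))  # placeholder start rotation
cam.keyframe_insert(data_path="location", frame=0)
cam.keyframe_insert(data_path="rotation_euler", frame=0)

cam.location = (2.0, -4.0, 1.7)                       # placeholder second position
cam.keyframe_insert(data_path="location", frame=100)
cam.keyframe_insert(data_path="rotation_euler", frame=100)

# Render Properties toggles from the video
scene.eevee.use_gtao = True    # ambient occlusion
scene.eevee.use_bloom = True   # bloom
scene.eevee.use_ssr = True     # screen space reflections

# Output folder, then Render > Render Animation
scene.render.filepath = "//blender render/"  # relative to the .blend file
bpy.ops.render.render(animation=True)
```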
Okay, it's done. Go to the "blender render" folder and you will see the image sequence created by Blender, which we will use as input to Stable Diffusion. So let's open the Stable Diffusion web UI (AUTOMATIC1111). You might notice that I'm not using the standard Stable Diffusion model but the Uber Realistic Porn Merge model. Now don't worry, we are not creating any pornographic content, of course; it just produces highly realistic outputs. Go to Civitai and find the model: search for "uber realistic", click on the checkpoint (not the LoRA file), then download the model, which will take some time because it's nearly 4 GB. Once downloaded, move it into your stable-diffusion-webui folder under models/Stable-diffusion. Now back in the web UI, hit the refresh button to refresh the model list and select the model we just downloaded.

Let's go to img2img and upload the first image from our Blender render, then write a quick prompt. Activate the ControlNet extension and insert the same image as guidance; use depth as the preprocessor and the matching depth model, and then make a quick test render. It's quite okay, but not really what we want, so we need to play around with all the settings, especially the CFG scale and the denoising strength, but also the prompt and the sampling steps. Just make a few test renders, also with different seeds, until you get what you want. Now that's much, much better. As I said, Uber Realistic Porn Merge is a really good model for realistic renders.

For rendering our animation we will use the Deforum extension, so let's transfer the settings from the img2img tab to Deforum. We do need more steps than in img2img, because Deforum reduces the steps after the second image. Copy the seed and the other settings from img2img and rename the batch to "Clip", which will be our output directory. Enable restore faces. In the Keyframes tab, choose "Video Input" as the animation mode, set the strength to a very low value like 0.2, keep the CFG scale low as well, maybe 5.5, and crank the noise up to 0.02. Then go to the Prompts tab and copy the values from img2img. We only need one animation prompt at frame 0, because we want to keep the video rendering consistent. Copy the negative prompt as well.

Then go to the Init tab and click on "Video Init". Deforum needs a video as input, not an image sequence, so we have to create this video first. I'm on a Mac, so I'm using QuickTime, but on Windows you can just do it in your video editing software, for example DaVinci Resolve. Once your video is created, paste its path into the video init path. Here we go. I also want to set "Extract nth frame" to 2, which means that only every second frame will be rendered; more about that later.

Now go to ControlNet and enable it. Leave the settings as they are, just use depth as the preprocessor and depth as the model, and paste the path to your input video into the ControlNet input video path. The last thing to do is to click the generate button: an image sequence with half of the frames of our video will be created in your Stable Diffusion folder under outputs/img2img-images/Clip. This is going to take a while, so I'll be back when the render is finished.
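Deforum can save and load its settings as a file, so the values above can be captured in one place. The key names vary between Deforum versions, and the prompt below is an invented placeholder, so treat this as an illustrative sketch rather than a drop-in settings file:

```python
# Illustrative subset of Deforum settings for this run; check the
# settings file your own Deforum version writes for the exact key names.
deforum_settings = {
    "batch_name": "Clip",                       # output folder under outputs/img2img-images
    "animation_mode": "Video Input",
    "video_init_path": "/path/to/blender_render.mp4",
    "extract_nth_frame": 2,                     # render only every second frame
    "strength_schedule": "0: (0.2)",            # very low strength
    "cfg_scale_schedule": "0: (5.5)",           # low CFG scale
    "noise_schedule": "0: (0.02)",
    "restore_faces": True,
    "animation_prompts": {
        # one prompt at frame 0 keeps the rendering consistent;
        # this text is a made-up example, use your own prompt
        "0": "hyperrealistic photo of a woman dancing in a small french village",
    },
}
```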
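If you have neither QuickTime nor a video editor at hand, ffmpeg can handle both video steps from the command line. A sketch, assuming ffmpeg is installed and Blender numbered the frames 0001.png, 0002.png, and so on; the second command corresponds to the 50% slow-down with frame interpolation described at the end of the video:

```python
import subprocess

# 1) Turn the Blender image sequence into the video that Deforum
#    needs as its video init / ControlNet input.
subprocess.run([
    "ffmpeg", "-framerate", "24",
    "-i", "/path/to/blender render/%04d.png",  # placeholder path
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "blender_render.mp4",
], check=True)

# 2) Later, for the final clip: slow the Deforum output to 50% speed
#    and let motion-compensated interpolation fill the skipped frames
#    (the same idea as the speed change in Final Cut Pro or Resolve).
subprocess.run([
    "ffmpeg", "-i", "clip.mp4",
    "-vf", "setpts=2.0*PTS,minterpolate=fps=24:mi_mode=mci",
    "clip_slow.mp4",
], check=True)
```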
Okay, it's done. You can see an image sequence in the Clip folder with exactly 260 frames, which means that every second frame has been skipped by Deforum. Now I will use QuickTime again to create a video out of the image sequence, but when you're on Windows you can just import the image sequence into your video editing software. Then let's head over to our video editor, in my case Final Cut Pro, but you can use any software you like. Create a new project and import the output video.

Now the last thing to do is reducing the speed of the video to 50% and letting your video editing software interpolate the missing frames. That's exactly why I rendered only half of the frames: the interpolation makes the video much smoother and more consistent. You can try skipping even more frames in the Stable Diffusion render, depending on the kind of animation you're using and on the capabilities of your video editing software. Now let's export the final video and then take a look.

I hope I could give you a quick overview of the whole process. The most important thing is that you play around with the settings until you're satisfied with what you see; there is no right or wrong, and this tutorial was just meant to be a little guidance for you. I hope you enjoyed it, and see you next time.
Info
Channel: Render Realm
Views: 73,569
Id: y3YhhsGjjek
Length: 16min 25sec (985 seconds)
Published: Wed Mar 22 2023