Bring 2D AI Characters & Scenes to Life with Budget-Friendly Options

Video Statistics and Information

Captions
Hey, welcome to my new AI animation tutorial. This is a shiny new updated approach, let's call it version two, showing a new and potentially lower-cost and slightly simpler way to bring AI images of characters and scenes to life. Oh, and if you're on YouTube, please press subscribe. Here we go.

Hey, thanks for checking out this new tutorial, where I'm going to bring a 2D image of a character and a scene to life using a few different AI tools. I'm going to try and share some lower-cost or free tools you can use in the process, as well as some more premium ones which deliver better results, and I'll also be using the Adobe Creative Suite, particularly Photoshop and Adobe After Effects. I'm also going to feature a new tool that I'm developing myself called Mesh Magic AI. And before I get on with the video: if it wasn't clear, this isn't really me. This is an AI version of me generated using HeyGen, which I was trying out as an alternative for avatar creation. It's pretty cool. And lastly, press subscribe, like, leave a comment, press the little bell icon. On with the video.

Step one: create a close-up image of your character and scene using an AI image generation tool. You could do this for free using Stable Diffusion, or use DALL·E 3, currently integrated into Bing's Image Creator. Head over to bing.com/create, enter your prompt, and press go. You can use DALL·E 3 for free via a Microsoft account, and once your free fast credits have expired it just takes longer to do a generation, or you can pay for extra fast generation credits. The results from DALL·E 3 are currently square only, so you can head over to clipdrop.co, which is made by Stability AI, and use their Uncrop feature: upload your DALL·E 3 image, expand it to a 16:9 frame, and press next, and it will do a generative fill to expand the image, giving you a nice 16:9 frame and a few different options to choose from. You can then download that image. You can also use Clipdrop to extend your scene even further: take the image, scale it back down, and press next again. It's free to use, but there will be a queue at times depending on how busy the platform is, and you can pay to skip it. Then right-click and save your image, or press download.

You can then head over to Adobe Firefly (firefly.adobe.com), make use of their AI for free, and use the Generative Fill feature: press generate, upload your image, then press the background button and it will remove the background, giving you your character by themselves. Alternatively, you could use Adobe Express, which is also available for free: start a new project, upload your image, press create new with the original aspect ratio, click on your image, and press remove background, giving us our character by themselves. If you want to, you can use the crop tool or the erase tool to remove any extra items you don't want. While you're here, you can grab your character and scale them up, change the background colour up here to a bright green, and then download your image.

We also want a version of our scene without our character in it, with the background painted in. So, hopping back into the free version of Adobe Firefly with my image, go down on the left, choose Remove, and paint out the character. It doesn't have to be exact, as long as you cover everywhere the character is, then press remove. It gives you a few options for that removed background; choose your favourite, press keep, and download the image. This is all evolving really quickly, and the results are really polished now that we have access to DALL·E 3 and those free generations.
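The uncrop step above is just aspect-ratio arithmetic: a square DALL·E 3 image needs extra outpainted pixels on each side to reach 16:9. A minimal sketch of that maths (the function name is my own, not part of any Clipdrop API):

```python
def uncrop_padding(width: int, height: int, target_ratio: float = 16 / 9):
    """Return (pad_left, pad_right) pixels of outpainting needed to widen
    an image to the target aspect ratio without scaling it down."""
    target_width = round(height * target_ratio)
    extra = max(target_width - width, 0)
    left = extra // 2
    right = extra - left
    return left, right

# A 1024x1024 square needs 796 extra pixels (398 per side) to become 16:9,
# while a 1920x1080 image already matches and needs none.
```

This is why the generative fill has so much to invent on the left and right edges: nearly 44% of the final 16:9 frame is new pixels.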
I'm also going to show a paid approach using Midjourney, which I think still produces the best results, although there's an argument for DALL·E 3 now being a very serious contender, and I'm going to use Adobe Photoshop as well for the generative fill and character removal. Until they release the new updated Midjourney website, you interact with it via a Discord server. Once there, write a prompt starting with /imagine and then type out your prompt; I'm using the parameter --ar 16:9 to create a 16:9 image. Keep generating until you get an image you like. If you get an image you like but it's not perfect, for example this one has some weird fingers, you can upscale the image, press Vary (Region), highlight the area, and regenerate, then right-click and save the image. Also press Zoom Out 2x and save your favourite wider image as well.

Step two is to use Photoshop, part of the Adobe Creative Suite, which is a monthly subscription (there are student licences available if you're a student). With Photoshop open, and this is the latest version, version 25, with the AI features that have just come out of beta, press New File and create a scene that's 1920 by 1080; there's a handy preset here, press create. Then bring in the close-up image of our character from Midjourney and scale them up to fill our full-HD frame. Use the new Object Selection tool and draw a marquee box around the character, and it will use AI to detect them. Press Cmd+C, then Cmd+Shift+V, to paste them in place on a new layer; if you don't like shortcuts, use Edit > Paste Special > Paste in Place via the menu. Then, down in the Layers panel, press plus to create a brand-new layer, choose green as the colour, fill it with the Bucket tool, and drop it between the character and the original image. Save this out as a high-res JPEG or PNG to use in the next step; I'm using the save-a-copy shortcut (Cmd+Opt+Shift), creating a JPEG, and pressing save.

We can then hide our character and that green background and go back to the original image layer. Use the normal Rectangular Marquee tool to draw a box around the character, and we get the new text prompt where we can use AI to change the image and paint out or replace anything in it. You could leave it blank and just press generate; I'm actually going to give it a bit of guidance and type "back of room, sideboard and books" to hopefully fill in our background, which might take a few iterations, then press generate. In the Properties panel we can cycle through the three different versions it's generated, all pretty cool; I actually prefer the first one. We could also remove that coffee cup that's been added here (is it a coffee cup? whatever it is, let's get rid of it): draw a box around the section, leave the prompt blank, and press generate, and it gives us an option with the item removed completely, so I'm happy with that. I'm then going to save a copy of this Photoshop file in case we need to come back to it, Select All, and press Cmd+Shift+C (Copy Merged) to copy the whole image. Then bring in the wider view of our character, paste those merged layers, which are the close-up view without the character, scale it down, drop the opacity to get it in the right position, then turn the opacity back up. We now have our wider scene with the character painted out. Save a copy as a high-res JPEG or PNG again, and close Photoshop.

Step three is where we create our cool 3D environment from that background image, using a free tool that runs in one of Hugging Face's Spaces, called ZoeDepth. It's currently free; I assume it will be for a while yet, possibly forever, who knows. As a very quick side note, the Mesh Magic AI tool that I'm developing and actively integrating into AIAnimation.com uses some of the code from ZoeDepth, which is available and licensable for developing your own app, and I'm developing it further, allowing you to swap out textures and potentially allowing you to turn video files into a 3D mesh as well; more on that later in the video. For now, let's jump back to Hugging Face and ZoeDepth. You might need a free account set up to use this tool, but you can then follow the link, open the ZoeDepth page, and click the Image to 3D tab. Grab our background image with the character painted out, drop it in, tick "keep occlusion edges", and press submit. It takes just a couple of seconds: it's taken that image, used AI to estimate the depth, and distorted a flat plane with that depth map to create a cool 3D mesh with the original image applied as a texture. Really cool. In the preview the colours look quite desaturated and it's quite dark, but once you download it and open it in a 3D package, the colours will be back to the vibrancy of the original. So go ahead and press the button to download our 3D model in glb format.

Now for step four, where we bring that 3D model into a piece of software to do some basic compositing, bring in our animated character later on, and do some simple camera movement. For the free approach I'm going to use Blender, the free open-source 3D software, which has a bit of a learning curve but keeps costs down and can do everything we need; then I'll also share the approach using Adobe After Effects, which requires a software subscription.
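What ZoeDepth's image-to-3D demo does under the hood is essentially displace a flat plane by the estimated per-pixel depth. A simplified sketch of that idea (my own minimal version: no camera intrinsics, no occlusion-edge handling, and the .glb writing is omitted):

```python
def depth_to_mesh(depth, fov_scale=1.0):
    """Turn an HxW depth map (list of rows of floats) into vertices and
    triangle faces for a displaced plane -- roughly what ZoeDepth's
    image-to-3D demo does before exporting a .glb (heavily simplified)."""
    h, w = len(depth), len(depth[0])
    vertices = []
    for y in range(h):
        for x in range(w):
            # normalise x/y into [-1, 1]; push each vertex back by its depth
            vertices.append((2 * x / (w - 1) - 1,
                             1 - 2 * y / (h - 1),
                             -depth[y][x] * fov_scale))
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            faces.append((i, i + 1, i + w))          # two triangles
            faces.append((i + 1, i + w + 1, i + w))  # per grid cell
    return vertices, faces
```

The original image is then applied to this grid as a texture (or, as we'll see in Blender, stored per-vertex as a colour attribute), which is why the mesh looks correct only from near the original camera position.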
In particular, I'm going to be using the new beta version of After Effects, as it now supports some 3D file formats, including that glb format, and the workflow is really quick and easy, much better than the previous approach I've shared in the past.

Right, so first of all I'm going to jump into Blender. With Blender open, delete everything in the default scene to start off with, then press File > Import and choose glTF 2.0 (.glb/.gltf), navigate to where you've saved the download from ZoeDepth, and double-click your file to open it. It brings in your distorted mesh, and you'll notice it comes in without a texture applied; that's because the colour information is stored as a colour attribute rather than as a texture file. You can see that here: with the geometry selected, open the Object Data Properties tab, and under Attributes you'll see a colour attribute. To view it in your viewport while staying in viewport shading, click the shading dropdown and turn on Attribute, and you get a preview of those colours. Now quickly add a camera to the scene with Add > Camera, then press the camera button to get the camera view. To use the normal navigation tools on the camera, go up to View, tick Sidebar, open the View panel, and turn on Camera to View; we can then zoom out and move the camera around using the usual tools.

At the moment, if you press render you still end up with an uncoloured mesh, because the colour information isn't applied as a material. To fix that, go down to the editor tab, choose the Shader Editor, and drag the window up; then, with the geometry selected, go to the Material Properties panel and press New, and zoom in to see the nodes in the Shader Editor. It looks quite daunting, but don't worry too much: press Add > Input > Color Attribute, and we're going to use that colour information that's been applied to the mesh and connect it to the Base Color of the material that's already applied to our geometry. If you change to material preview shading you'll see the material is now applied, but the output is quite dark. So, back in the Shader Editor, press Add, search for "brightness", and add a Bright/Contrast node, wiring it so the signal goes from the colour attribute, through the Bright/Contrast node, and on to the Base Color of the material. In there, raise the brightness and contrast until you get an output you're happy with. Then, when you press Render > Render Image, you get a full-quality output, which is pretty cool. You can jump into the camera, set some keyframes, and move around the scene, and you can see how, just by moving around, it's coming to life already and opens up a few different options for bringing an environment to life.

Right, I'm now going to hop into Adobe After Effects and take you through a similar process to show how you can use glb files there really quickly and easily and speed up your workflow, with access to all the tools After Effects has available. So here we are in Adobe After Effects, and this is really cool: this is a new beta version, 24.10. The beta versions of Adobe apps are available to everyone with a Creative Cloud subscription: open the Creative Cloud app, press Apps, scroll down to Beta apps, and choose the application you want to install. They've improved the 3D model support they already had in the previous beta, and it's now much more reliable, much better, and it supports the glb format, which is handy. So import that 3D model from ZoeDepth with File > Import > File, and we can see our glb file with its random name; I'm just going to drag it down onto the New Composition button.
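For reference, the Bright/Contrast adjustment used on the colour attribute above can be sketched as simple per-channel maths. This is one common formulation with the same shape as Blender's node (scale around the midpoint, then offset, clamping negatives); I'm presenting it as an approximation, not Blender's exact source:

```python
def bright_contrast(value, brightness=0.0, contrast=0.0):
    """Approximate Bright/Contrast node maths for one colour channel
    (float in 0..1): contrast scales around the 0.5 midpoint,
    brightness offsets, and negative results are clamped to 0."""
    a = 1.0 + contrast
    b = brightness - contrast * 0.5
    return max(a * value + b, 0.0)

# e.g. brightness 0.2 lifts a 0.3 channel to 0.5; contrast 0.5 pushes
# a bright 0.6 channel up to 0.65 while darkening values below 0.5.
```

This also explains why pushing contrast on an already-dark colour attribute can make shadows worse before brightness compensates: values below the midpoint get pulled down by the contrast term.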
In the import dialog you could try flipping the Z axis, but I think that actually mirrors the image as well, so I'm going to leave it unticked and press OK. Then we have our 3D model here, and we can cycle through the camera controls by pressing C on our scene to zoom out and move around. I'm going to click on our layer, press R to bring up the rotation properties, and flip the Y rotation 180°; then, using the zoom tool, zoom into the scene, press C again to move to the pan hand tool, and move across, and you can already imagine how we can move around our scene just using the camera tools. We can right-click on our layer and press New > Camera to create a camera for the scene, and choose a focal length we like, maybe 35mm; double-click on the camera and you can try out different presets, and maybe 50mm is better since it's a nice close-up shot. Then, on the camera, drop down the Transform properties and turn on keyframing for Point of Interest and Position, move along your timeline, hold down Shift and pan so you can pan to the right without any fear of drifting up or down, and set some keyframes for that movement. And then, for fun, you can just zoom out and rotate around to check out the model and see it in all its weird glory. There we go: a nice, easy, quick way to create this depth in your scenes from a 2D image using a 3D model, so much easier than the displacement-map and camera-linking techniques I had in the version one of this sort of video.

Next we're going to look at bringing our character to life with a bit of head movement, some blinking eyes, and some lip sync, and I'm going to show a couple of different tools. The first of these is SadTalker, which you may have seen in other videos on YouTube. You can use this free tool either run locally via the GitHub code, or via a Space on Hugging Face, where you can actually pay for a private Space to skip the queues and produce quick generations. The output quality is okay; it's not up to par with the likes of D-ID or HeyGen, but it is a low-cost approach, and potentially free if you can run it locally or you're willing to wait in a queue. The second approach is using Lalamu Studio, and they've got a demo available for free at the moment where you can upload a bit of video and it will animate a character's lips, so you can create a short video clip of your character, upload it, and it will animate the lips to an audio file. And lastly, I'm going to use D-ID or HeyGen to bring a character to life, as I think the output quality is still far and away the best; yes, it does come with a cost, but depending on what you're doing, and if you're able to earn some money from your generations or you're doing it professionally, I think it's still a valid approach.

Right, so first of all, SadTalker. Here I am in SadTalker in a Hugging Face Space. If you come here and try to generate, there is a queue; you can duplicate the Space to a private one and try to run it for free there, or pay for a higher-end GPU to produce quicker generations and skip the queue, but I'm going to try to use the publicly available Space for this tutorial. Initially we need to upload our character image, so I've added my character from Midjourney on the green screen we added using Photoshop. Then you need to upload an audio file, whether that's something you've recorded yourself for free with a microphone or an AI voice generated using a tool like ElevenLabs; I've just got this short audio file here ("this is a shiny new updated approach, let's call it version two"), nice and short just to show the process, because it takes a while to do a generation. I'm going to leave the reference-video setting here alone, as when I've tried it before it seemed to crash quite a lot, but I think there's potential for being able to drive the movement of your character using a video you've already generated with D-ID, to improve your output from SadTalker.

Up here we have pose style and expression scale; I'm going to leave those alone for now, but you can play around with them and try higher levels to see if you get more movement and something that feels more natural, or more in line with what you're after. I'm going to leave eye blink on, and initially I'm going to keep the face model resolution at 256 to improve the speed of output during this testing phase. I'm also going to use the "crop" preprocess, so it's just going to create a video of our character's face and remove the rest of the image; once we're happy with that output, we can change to "full" and it'll use the full frame. I'm also starting with the PIRender face renderer, as it's generally a bit quicker at generation than facevid2vid, which we can change to later for slightly higher-quality output; again, you can test the results from both. I'm going to leave still mode off, but if you want less movement in your character, say it will work better with what you're compositing later, you might want to enable it. You can also turn on GFPGAN as a face enhancer, which can improve the quality around the mouth. Later on, I'm going to press generate, and impressively it's added me to the queue; I've not had the warning about the Space being too busy, but you'll see the last generation took 3,883 seconds. I think that number is just based on whatever happened last, so perhaps that was a much longer audio file; we'll see. I'm going to leave it running for a little while and see if it works, and if it does take too long, I'll duplicate the Space and pay Hugging Face a few dollars for access to a higher-end GPU and skip the queue.

Okay, and then after a long wait, and after actually attempting to duplicate the Space and have my own private one, which failed to run with various runtime errors (there's something wrong with the code, and looking at the latest community discussion, other people are hitting the same problem), I left this running for ages and it eventually finished its initial generation. Here we go: "this is a shiny new updated approach, let's call it version two." Now, I know SadTalker works better with real human-looking faces than a stylised character like this one, which could explain the slight lack of quality, but for reference I'm going to run another: change it to facevid2vid, make it a full-size scene rather than a cropped one, raise the resolution to 512, and turn on GFPGAN as the face enhancer, to see if it does improve the quality around the mouth as well as give us that full animated clip. Again, it's successfully added me to the queue, which is good, because trying to duplicate the Space with a higher-end GPU doesn't currently seem to be working at the time of recording, though it has worked for me in the past.

Then I'm going to quickly hop over to Discord and show you how you can use SadTalker there as well. Over in Discord there's actually a server called Floor33 which is integrating SadTalker with Discord: you can go ahead and upload your image and your audio file, plus you can use various text arguments to help guide the settings for SadTalker as it generates the animated lip sync. Move over to the talking-video-gen room, click into the text box, and type /say; then upload your image and your audio file, and add any arguments to guide SadTalker's settings. I'm going to try out our character to see if it works, and then maybe do one with a more realistic character as well. So, with my Midjourney character added with that green background applied using Adobe Photoshop, press "plus 2 more" and add the audio file, then "plus 1 more" and add those text arguments, which I'm going to paste in here: --size 512 --preprocess full.
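The text arguments used above follow SadTalker's command-line option names. A small helper that assembles them as a string (the `--size` and `--preprocess` names are the ones shown in the video; the `--still` and `--enhancer gfpgan` flags come from SadTalker's CLI as I understand it, so treat the full option set as an assumption):

```python
def sadtalker_args(size=256, preprocess="crop", still=False, enhancer=False):
    """Assemble SadTalker-style text arguments, e.g. for the Floor33
    Discord /say command. Defaults mirror the quick-test settings used
    in the walkthrough (256 resolution, cropped face)."""
    args = [f"--size {size}", f"--preprocess {preprocess}"]
    if still:
        args.append("--still")          # less head movement
    if enhancer:
        args.append("--enhancer gfpgan")  # GFPGAN face enhancement
    return " ".join(args)

# the full-frame, higher-resolution settings from the video:
# sadtalker_args(size=512, preprocess="full")
```

This makes it easy to keep a couple of preset combinations (quick 256/crop test vs. final 512/full render) consistent between runs.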
Hopefully this will work, so let's just press go. There's one up here I tried earlier when just testing the process to see how it works, so if I press play ("take you on a magical journey into the world of AI animation"), the lip sync is actually really nice and it looks pretty realistic. There are some discrepancies around some of the hair, which isn't moving as she tilts and rotates her head, but as a free alternative to something like D-ID or HeyGen, I think the results are really very good. And here's that clip of the character on the green screen, a full, non-cropped clip ("this is a shiny new updated approach, let's call it version two"). There is again some weird movement in the hair where it's sort of being cropped, and only part of the face is being animated, but the lips are being animated and it's not too bad; bearing in mind this is a stylised character rather than a real-looking human, the results are struggling to be as good as they might be. So I'm going to leave SadTalker there, but hopefully it's introduced a new tool, and particularly the new Discord integration, which might be worth exploring. You could also try enabling the enhancer to see if that provides slightly better output quality, and try different preprocess options; the "extfull" version, for instance, might provide more movement around the character's head. So yes, definitely worth further exploration, but now I want to look at Lalamu Studio before also showing D-ID and HeyGen.

If you go to lalamu.studio, you'll see the site is currently under development, but there is a free demo that loads after five seconds. Press Let's Start, close the intro, and you're presented with the UI. There are various clips you can choose from already if you just want to test out the system, but you can also upload your own video file. We need a video clip rather than an image, so what we can do is create a video of our character, and I'll do that using Adobe After Effects. To make a quick video clip, I've brought in the scene of our character, dragged it down to make a new composition, and highlighted a ten-second area of the timeline, and I'm just going to save that out so we have a ten-second video clip. There's actually no motion going on, but it will allow us to upload to Lalamu Studio. So with that there, go up to Composition > Add to Render Queue, drop down and choose the H.264 option, choose a destination for the file to be saved to, and press render. Now, back in Lalamu Studio, I can press upload video and let it bring in our video file. We can then choose to add new audio: either pick one of their own AI voice-over bots and write a script here for it to generate a voice-over, or upload our own audio. I'm going to press upload audio, and you'll see it's added my short test clip ("this is a shiny new updated approach, let's call it version two"); then press generate lip sync, and I find this is genuinely very, very quick. The resolution that's generated is rather low, and it's quite pixelated around the mouth, but you can see the actual lip sync is pretty good, pretty spot-on; it doesn't add any additional head movement or blinking or anything like that. You can then right-click and save the video to your computer. It can be really useful: if you generate some clips using one of your character images in something like Runway ML or Pika Labs, you can then bring that clip in here and add animated lip sync.
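If you don't have After Effects, the same ten-second still-image clip can be produced with ffmpeg. A sketch that builds the command (these are standard ffmpeg flags; run the list with `subprocess.run`, and note the file names here are placeholders):

```python
def ffmpeg_still_clip(image_path, out_path, seconds=10, fps=25):
    """Build an ffmpeg command turning a still image into a short H.264
    clip -- a no-After-Effects way to make the video Lalamu Studio needs."""
    return [
        "ffmpeg",
        "-loop", "1",            # repeat the single input frame
        "-i", image_path,
        "-t", str(seconds),      # clip duration
        "-r", str(fps),
        "-pix_fmt", "yuv420p",   # widest player compatibility
        "-c:v", "libx264",
        out_path,
    ]
```

For example, `ffmpeg_still_clip("character.png", "clip.mp4")` yields a command for a ten-second, 25 fps H.264 clip ready to upload.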
So you might have a scene of a character walking and moving around, and you can then swap out the lip sync. Pretty cool. All in all, I think Lalamu Studio is a really exciting option for AI animation artists; I think it can complement other AI tools really well by providing us with a way to add lip sync to characters. So yeah, it'll be exciting to see where this goes: higher-resolution outputs, having multiple characters, batch processing, being able to swap out the languages, things like that. There are lots of details in the roadmap if you scroll down on their demo page.

And lastly, before doing a final composite in After Effects with our character, I want to show one of the paid options for bringing an animated character to life. In the past I've used D-ID, which is very good, and I think the pricing is pretty fair there, but this time I'm going to try out HeyGen, which I've been testing and getting some pretty decent results from, and I think it's arguably better than D-ID; I'm not sure, pricing-wise, whether it is slightly more expensive. Once logged in and having signed up for one of their lower-tier subscriptions, you can go ahead and press upload talking photo and upload your character's still image. Once uploaded, the character will appear here, and you can click on them and press create video; I'm going to choose landscape. I can then click on the image and choose a frame for my character; I don't want any frame, so I'm going to turn that off, and it goes back to our original image. It also gives you the option to choose whether you want your character to be "default" or "happy" (I'm going to try the happy one), and you also have the option of talking style, where you can choose stable or expressive. You can then type in a script and select one of their bots to generate an AI voice-over, or upload your own audio; again, I'm going to use the audio I generated using ElevenLabs. Once that's uploaded, you simply press submit; this is going to use 0.5 credits, and you can see the progress here as it quickly works through the process. Following a very similar process in D-ID, I've uploaded the image and the audio file, pressed generate video, which is going to use two credits, and pressed generate.

Okay, so I've generated both the HeyGen and the D-ID versions to compare the two, and I'll play them in a second, so please let me know what you think in the comments below. Personally, I think it's clear that the D-ID one is actually quite a bit better: there's more animation in the mouth, and just the general head movement looks better. It's worth noting that the output from HeyGen was higher resolution and I slightly scaled up the D-ID output, but it seems that D-ID does better with a character that's slightly less human than HeyGen does. I know HeyGen can do really well with a realistic human avatar, particularly if you go through and make a trained avatar model using two minutes of footage of yourself talking, but when we're doing animation and you want characters that are slightly more stylised, D-ID does seem to be the winner. Even then, the character does have to, you know, have some familiar human traits for the AI to actually work. But let me know in the comments what you think. Here we go: "hey, welcome to my new AI animation tutorial, this is a shiny new updated approach, let's call it version two, showing a new and potentially lower-cost and slightly simpler way to bring AI images of characters and scenes to life; oh, and if you're on YouTube, please press subscribe; here we go." Okay, and I think the D-ID output is quite a bit better in this instance.

So I've jumped into Adobe After Effects, where we have our 3D scene with that ZoeDepth model brought into the new beta version. I've brought in our D-ID clip, dragged it down onto the timeline, then clicked on the layer, right-clicked, and chosen Effect > Keying > Keylight; for the Screen Colour I'm going to use the colour picker and pick that luminous green, which removes the green screen for us. It still leaves that D-ID watermark in the corner, so up here I'm going to use the rectangle mask tool and just draw around our character so the watermark is removed. We can then play through the footage, and you'll see the camera is moving but our character isn't, so I'm going to press the 3D toggle to make him a 3D layer, press P to bring up the Position properties and move him back into space by scrubbing the Z position, press S to bring up the Scale properties and bring him down, and just play around with position and scale until he's somewhere I'm happy with; I want him coming partly through the desk. Now when we pan the camera, he's positioned in the scene. Press U to bring up the keyframed camera properties, move along the timeline, and you can see, as we rotate around, how the character is placed in the scene, and you can set new keyframes and zoom in and out. Obviously, if you move too far there will be some distortion on that 3D mesh, so you want to keep the point of view fairly safe. And yeah, play around with some keyframes, and you could take this further: add some optical flares, play around with colour correction, perhaps add an extra shadow, bring in some lighting, things like that. But for the sake of this tutorial, and the fact we've done a deep dive into all those different approaches for bringing the characters to life and touching on the 3D model, I think this will bring us to a close. Cool, so yes, that's it: the last bit of compositing, nice and quick. The actual time in Adobe After Effects was super quick this time; we just brought in a 3D model, spun it around, put a camera in, brought in our character, keyed out the green-screen footage, and positioned him in 3D. And then, when you're happy with it, you can go up to Composition > Add to Render Queue and export your MP4 file.

Okay, and that brings me to the end of the video tutorial. I was going to touch on Mesh Magic AI, the new tool that I'm developing to go into AIAnimation.com, which is sort of an upgraded version of the ZoeDepth tool from Hugging Face, but it's getting on in the day and I don't think I can be bothered. Essentially, it's going to be better, with more things going on: you'll be able to swap out the textures, so you can use one texture to distort the mesh and then apply a different texture before you generate that glb file, and I'm also exploring being able to use video files to create a 3D mesh of a video file, which you could then move around an animated character that's already been animated. That opens up some interesting things, whether you're creating a little clip using Pika Labs, Runway ML, or something else, and then having this 3D mesh that you could add some extra camera control to; we'll touch on that in a future video. If you do like videos like this, and you've enjoyed the random ramblings as I take you through these different tools and processes to try and bring some of your AI imagery to life using AI tools and some more traditional digital animation techniques, please press subscribe, like, leave a comment, head over to AIAnimation.com and register free as a creative, and come and say hello on the Discord, all of that stuff. All right, I've just moved house and I've got a thousand boxes to unpack, so I'm going to go. Cheers, have an amazing day. [Music]
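As a footnote on the green-screen step: a Keylight-style key is, at its simplest, per-pixel maths on how much green dominates the other channels. A very simplified sketch (real keyers like Keylight also soften edges and suppress green spill; the threshold here is an arbitrary choice of mine):

```python
def green_screen_alpha(r, g, b, threshold=0.4):
    """Crude chroma key: alpha approaches 0 as green dominates the other
    channels, and stays 1.0 for non-green pixels. Channels are 0..1 floats."""
    dominance = g - max(r, b)    # how far green exceeds the other channels
    if dominance <= 0:
        return 1.0               # not green-dominant: fully opaque
    return max(0.0, 1.0 - dominance / threshold)

# pure screen green keys out fully, while skin tones stay opaque
```

For instance, a bright green pixel like (0.1, 0.9, 0.1) keys out completely, while a skin tone like (0.8, 0.6, 0.5) is untouched; this is why the luminous background we filled in Photoshop keys so cleanly.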
Info
Channel: AIAnimation
Views: 193,174
Keywords: ai animation, animate midjourney, 3D ai animation, animate ai image, ai character animation, ai animation workflow, generate ai animation, bring ai image to life, ai image animation, turn ai image into video, ai animation tutorial, midjourney animation tutorial, 3D depth animation, artificial intelligence animation, 3D AI, 3D Ai animation, ai animation 3D, deepmotion, 2D to 3D, 3D midjourney, AI 3D, zoe depth, sad talker, Floor33, lalamu, lip sync, mesh magic ai
Id: 7u0FYVPQ5rc
Length: 31min 41sec (1901 seconds)
Published: Tue Oct 10 2023