Create Amazing Videos With AI (Deforum Deep-Dive)

Captions
Hey, we're live! This is my first live stream as an AI YouTuber, so this is going to be real interesting. Hopefully it's not total amateur hour, but thanks everybody for hanging out. We've got about 55 people watching, I think a handful more are going to be jumping in, and I've got some awesome guests joining us today. I've got Revolved from RunDiffusion, and we'll be talking about what RunDiffusion is: basically a cloud version of Stable Diffusion, where you can run Stable Diffusion in the cloud. We've also got Huemin, one of the original creators of Deforum, joining us on this call. So we're going to try to create a full-on masterclass all around Deforum. If you don't know what Deforum is, or you know what it is but don't know how to use it yet, or you have questions lingering after you've spent some time playing around with it, ideally we're going to cover all of that today. It should be really fun, really interesting, really informative. I have no idea how long we're going to end up going; we'll just go until we've answered questions and everybody's got a good idea of what you can do with Deforum.

I'm not going to ramble for too long. I want to get straight into the content, into what everybody came here for, so let me introduce our first and main guest of the show. I'm going to bring on Revolved. He's one of the guys over at RunDiffusion and their expert in Deforum. He and I have had a handful of conversations; he knows what he's talking about, and he's got some stuff lined up to share with you today, so I'm excited to bring him on. Here is Revolved. Thanks so much for joining us. Before we get in, do you want to real quickly let us know a little bit about RunDiffusion and your role there, and then roll into what we're going to talk about today?

Yeah, awesome. I've been with RunDiffusion for a little bit now. RunDiffusion came out, wow, around October, so you know the AI space moves quick. The idea behind RunDiffusion is basically to make it as easy as possible for people to get into Stable Diffusion. Automatic1111 is a fantastic tool, but it's very complicated and there's a lot of depth to it. What we really wanted to do is take all that part about installing and configuring it, make it easy to use, and spin it up instantly, as fast as possible, so you can get right to creating art.

As for my role: I actually just started out in the Discord helping people, and they brought me on since I have a background in software, in particular around revenue and sales. But I'm also a video artist myself, as you can see in the background. I've got a video synth running, and this poster up here is from a video art conference I used to help put on in the States. Video art is in my blood, whether that's analog, pointing a camcorder at a TV and watching that infinite hall-of-mirrors feedback happen, or more complicated stuff with the hardware synths. I just love it. I love VJing, I love art and cinematic stuff too.
So as soon as I saw AI video coming in, I was very, very pumped, especially when I started seeing the Deforum videos. I was blown away by what people were able to make with it, and I knew I had to be doing that. So when I saw RunDiffusion... I don't have a very good GPU; I have an AMD card, it must be four gigs or something like that.

Yeah, it seems like with Stable Diffusion they mostly want Nvidia cards anyway, right? It seems like even the good AMDs are having problems if you don't have Nvidia.

Yeah, there are ways around it, but it's a lot of extra steps to get AMD cards working, and they're just a little slower to the draw on drivers and things like that. Nvidia has been cutting edge even in streaming tech for a long time, so they've got the edge for now, for sure.

For sure. Very exciting. So you and I have had a few discussions about where we wanted to take this, and we're going to get into the details and show some stuff off. One thing I wanted to do real quick, before we get too deep into it, for anybody who doesn't know exactly what Deforum is: I uploaded a couple of videos here to show off some of the stuff I made in Deforum. One I made on camera in one of my tutorial videos; one I made off camera. This was the first video I made, right here on the screen. I put it up on Twitter and it kind of went viral, and this was just me saying, turn a monkey into an astronaut, and an awesome video came out of it. That's the type of stuff Deforum does. This is another one I made: it starts with a wolf, because my last name's Wolfe, then it morphs into a lion, then into a tiger, and it's supposed to morph into a shark that needs a dentist. So for anybody who doesn't know Deforum, it makes those kinds of videos. If you're in the AI space right now and you're on Twitter, you've probably seen that kind of stuff circulating, and a lot of the tools out there making those kinds of videos are using Deforum underneath the hood as well. But what we wanted to get into was the mindset around this: how to approach Deforum, and how to set your expectations about what it can and can't do. So maybe that's a good place to start. Let's jump into the deep end.

Yeah, I think this is a really important piece of AI art in general. The space is so new; there's new stuff coming out every day, new advances, new services, and people contributing to this open-source community. It's incredible to watch this stuff come together in real time, and it's beautiful, but there has to be a certain way of approaching things, because it's also new. An example: you see a video online and think, OK, I want to do that exact thing. It's really good to instead try to dive into the nuts and bolts. When prompting and images came along, someone would post a cool picture and everyone would be in the comments below it asking:
"What's the prompt? What's the prompt? I want to try the prompt." It's kind of funny. I don't think there's much hand-holding in the AI art space, mostly because it's moving fast. For the kind of stuff we're doing, the communities are the resource: Matt, what you're doing with these videos on YouTube, people doing their own documentation, sharing on Discord; those are all amazing resources for figuring things out. But a lot of this is so new that there's just not much out there for it. So the attitude I've taken as an artist is: OK, I need to explore this, be willing to make mistakes, and try as much stuff as possible. That's a really important attitude to have, especially when you're working with Deforum. We'll go into some ways to help you avoid wasting time, but it's so important to have that experimental mindset and that curiosity; the curiosity is really key. So that's the attitude I've tried to take: not to copy things one for one, but to learn as much as I can. Now, that being said, there are some incredible settings people have found out there, and a lot of the time that will be what gets you started. Those are the training wheels, and once you have the training wheels, tweaking things a little is what gets you that last step. There are little breadcrumbs everywhere: Twitter, Discord, Reddit, YouTube. They're all out there, and you piece them together into a picture.

Yeah, absolutely. It's funny, because that's always my first thought whenever I get on Twitter and see one of these videos: I want to know what the prompts are. Not necessarily because I want to make the exact same video, but one of the tricks when it comes to prompting is all the additional keywords you can add to the end of a prompt to get the images to look a certain way. It's never really that I want to recreate the exact image; I want to know how you got that look, that level of contrast, that colorization or style. I feel like that's one of the big reasons everybody's always asking for prompts. So that might be another rabbit hole we can go down once we start digging into Deforum: the additional keywords and little tricks you can add to prompts to get those really good-looking images out of your videos.

Yeah. There are a couple of metaphors I'd use. One is that when you're making art this way, you are exploring, and you can really find things that people have not seen before, which I think is incredible. As an artist, so much art has already been done, but AI art is a totally new field, and there are so many new ways to explore it, and to use it in different contexts and mediums as well, which is a really interesting piece. But going back to the prompt piece, because we're all looking for that gold: I think there's almost an alchemy to it. It's not a science; it's like
the old Middle Ages, pouring sulfur and strange liquids together and not quite knowing why the result you get is the result. It's a fun sort of alchemy to be a part of. On the one hand, you might sometimes feel like you're just banging rocks together hoping to make fire; sometimes we just don't know what we're doing with these AI tools. But by slowly trying to understand the fundamentals, you can really accelerate your progress and make some incredible stuff.

Yeah, absolutely. So again, this whole thing is new, and I don't know how the flow is going to go, but do we want to explain what RunDiffusion is? The videos I put on YouTube started off with me showing how to locally install Automatic1111 and Stable Diffusion, and Deforum is an extension you add into Automatic1111. It starts to get convoluted once you have all these layers, right? You need Python installed, you need Git Bash installed, then you need to get Automatic1111 going, then you've got to add the Deforum plugin on top of it. There are all these layers of complexity before you can finally start generating with Deforum. I feel like that's where RunDiffusion comes in, and this isn't meant to be a webinar to pitch RunDiffusion, but quite honestly, with something like it you can just jump on the cloud. If you have an AMD card that doesn't really like to process Stable Diffusion, or you're on a Mac (Macs don't seem to like Stable Diffusion very well), or you're on an older computer, you can't really do this stuff easily. Even with a decent graphics card, sometimes it just takes forever because it's an older card. That's where running this in the cloud comes in. Depending on time versus money, you can spend 50 cents an hour, or if you want it really quick, you can pay a little more, use a higher-end GPU, and get things done faster. So maybe let's talk quickly about what RunDiffusion is and its benefits. I feel like I gave some of them, but you might have more context, since you're working at RunDiffusion every day.

Yeah. Maybe I can share my screen here and get that screen share started. Here we go. I love this comment I just saw: we are like Adeptus Mechanicus tech-priests from Warhammer 40,000. As a 40K player back in the day, I appreciate that, and I really feel like that's what's going on. If you don't know, they worship the machine; their religion is the machine. With this AI stuff, we put in our incantations, these random prompts, and out comes something incredible. It is kind of the way things are going.

OK, so we've got the screen up. I'm logged in already to RunDiffusion,
and I'll talk things through. We have a few different apps here: Automatic and Invoke. Invoke is an inpainting and outpainting tool; Automatic is what everyone's using, especially for Deforum, since that's the only place you'll get Deforum on here. Then you can choose the model. I'm on our Creator's Club, which gives you private storage, so you can load in custom models, embeddings, all of that, and they'll persist from one session to another, which can be really helpful. And like you were saying, Matt, about having a GPU and wanting to do things faster: you can use these servers alongside your local GPU. People will train on their local machine, then take the models they're working on, pop them in here, and start previewing and messing with them. You can do the same thing with Deforum videos.

So let's get one going; let's launch one. We have a timer that will basically shut it off after a set amount of time so you're not wasting money, and there's an API as well. We'll just launch this. It takes a little bit to get set up, but it's quite fast, especially if you've tried doing this in the cloud before. In regards to where we sit, we're really trying to provide the fastest and easiest-to-use service; that's our goal. And as well, to provide an extra level of support. Although we don't know the answer to everything in Automatic1111, because it's changing every day and it's complicated, we do things like making sure we run the most stable version of Automatic1111, so people aren't running into errors, and we support them with any issues they have. That support, training, and Discord help is included. We want to provide the best and most stable Stable Diffusion you can get. Stable, and Stable Diffusion, yeah.

I see Dave in the comments saying he jumped on the $2.50 one. They do have a 50-cent version, but because we're live, we want to keep the pace of the generations going, so we're going to use the fastest version we can, just for the sake of being live. They definitely have a 50-cents-an-hour tier, and when I was demonstrating it in some of my YouTube videos, that's the version I was using.

Yeah, when I was doing Deforum on my own, before I even joined... there we go, we're already up... wait, what happened here? This is a good one; it's always when you're live, you know. I was just using this and I've never had that issue before. All right, we'll try it again. We don't charge for starting it up or spinning it down, so no big issue there.

And that's one thing I noticed too: when you're first signing up and logging in, it asks how much time you want to use, and you can set it to an hour. That's not meant to mean you're going to pay for a full hour; it's more of a safety net. If you walk away from your computer and forget about it, it'll shut off after an hour so it doesn't keep spending. But if you're only in there for 10 minutes and then log out, it only actually spends 10 minutes' worth
of spend.

Yeah. And in the chat you can see RunDiffusion is responding; that's not me, by the way, that's one of the founders. He said there are a ton of people spinning it up right now. I should have prepped a few and kept them running in the background, warming up. But when I was doing Deforum, before I joined RunDiffusion, I was doing it on the medium setting, because I felt that hit the best balance of speed, which you need for Deforum; you can't use the slowest one. I'll spin up another instance as well, so you can see you can run multiple instances at once. What I use large for is when I've got a set idea in mind: I've played around, I understand what prompts I want, I've fully dialed in the settings, and then when I hit large I can just run it, render, render, render. That way I'm not keeping a large box up, spending the larger amount of money, while I waste time fiddling with settings.

Absolutely. So in terms of the speed difference: you've got small, medium, and large servers. Do you have any benchmarks, or rough speed differences, for what to expect when running a Deforum job on each, like how long it takes to generate a video?

There are too many factors to really answer that; a bunch of settings will affect it. I'd say the main one is resolution: if you're making a higher-resolution video, you're going to want a bigger box. And that's again where you figure things out at a smaller resolution, understand what you want, scale it up a little, and of course you can do upscaling and so on afterwards; you don't need to do the biggest one from the get-go. The other thing is, just like when you're generating an image, how many steps you're using, and in Deforum there are other settings that affect it too. And then, how much stuff are you piling into it? There's hybrid video, there are 3D camera controls, there's all kinds of stuff, and that complexity can really drive the compute you need.

Yeah. When I was demoing it in a YouTube video, I showed that if you want to learn the various motions, like the clockwise rotation or the zoom in and out, just generate a 12-to-15-frame video: it generates quickly and gives you an example of what it's going to look like. But if you're generating something that's 500 frames, that's going to take a hell of a lot longer than 12 or 15 frames. As I asked the question about comparing the three tiers, I realized the answer wasn't going to be so straightforward; there's a ton that goes into determining the speed. But there are also so many awesome tools available. Just recently,
the Deforum team has been testing ControlNet integration, and there's hybrid video, which is awesome: you can load your own videos into things. So there's a lot of interesting stuff. I see a lot of good questions. Oh man, people stole all the large servers, that's what's going on; apparently I need to use a medium. That's funny. We're going to have to reserve a few more of those bad boys. We'll go for medium, and that'll make it easier to see what a medium is really like, too.

Cool. So, I see people are already jumping in and making Deforum videos. When you first jump in, it's very intimidating; there are a ton of settings. If you stepped into Automatic and thought, holy crap, there's a lot going on here, be warned that in Deforum there's a lot going on as well, and it's not as immediate to get the instant result you may want. That's what I mean about experimentation and curiosity. I think we're getting scaled up on servers in the background, so we'll get that rolling, but just know that when I first stepped in, the first few videos I got were garbage. You'd get blank frames, crushed, deep-fried images, blurry images. There's a lot going on. But like a comment in here says, we are light years ahead of where we were six months ago, and the industry in general is years ahead; what people are doing now is incredible. Someone mentioned Disco Diffusion, the precursor to Deforum; people were doing video in that too. It's amazing to see how far things have come, and now with hybrid video, ControlNet, even the tools Runway has brought out, there's a lot more viability to using this even in a professional sense.

OK, I'm up now. Matt, let's... there we go, medium works. This is a medium.

Awesome. One thing while you're getting set there: you guys are giving away some RunDiffusion credits, so we might as well give some of those away.

Yeah, I have a little contest bot, but I can't tell if it's working yet. So in order to test it, why don't you all put the word "deforum" into the chat, and I'll try to randomly pick somebody who types it and see if I can get this bot working. All right, there we go, spamming. Someone's asking if anyone used Deep Dream and GFPGAN; yes, back in the day. Deep Dream was the one where everyone turned things into dogs, remember? Psychedelic acid dogs. All right, lots of people chiming in. Which one do you want to give away, Matt?

Good question. Let's go for one of the big ones; we're giving away two fifty-dollar credits, so let's do one of the 50s. There was a question from Dave about the price of the medium: it's 99 cents an hour. All right, my little random picker isn't working, which is annoying, so I'm going to throw a dart: blindfold on, point at the screen, scroll up and down real quick, and... Dawaz Lafoon, that's the winner. You won fifty dollars!

Nice one. That's a lot of time, that's a lot of hours.

Yeah, so there you go, that's the
first fifty-dollar giveaway. I'm going to cross it off my list, to note that we gave it away and who got it: Dawaz Lafoon. All right, let's start exploring Deforum.

So I'm logged in. This on the side is our file browser, where you can see all your directories, and this is the standard Automatic1111 with this top bar added. What I'll actually do here is get multiple sessions going, so let's launch another session and do the exact same thing: medium, 99 cents an hour, 1.5, select, and go. While that one's spinning up, we can see it in the sessions list here, and that lets you keep an eye on the timer, so you're not wasting anything by accidentally closing a tab and not realizing the server is still up; you'll see it on the side.

OK, so we're in the Deforum tab, and the first thing I want to do (I'm just going to pop up my notes here) is go over an example of using the 2D camera controls, to get something basic started. We can keep a lot of these settings the same. There are a lot of different samplers in here, and for Deforum I particularly like Euler a, just because it's one of the wilder ones. I will say that what settings work for me may not work for you; a lot of it comes down to personal taste, and that's where curiosity and trying things out really help, because you'll find what works for you, not just what works for other folks.

A lot of the settings are in this Keyframes tab. They did some fantastic work recently on organizing the UI. There are a few different modes here. The first thing I want to talk about: I think a lot of people have seen the rapidly morphing, hyper-growth, flashy, flickery type of Deforum video. You can slow that down. When I started, a lot of what I was doing was trying to slow things down and make it smoother, because that's my personal taste. So what I like to do is change the cadence. There's a long explanation in the UI, but the basics: with cadence 1, it does a diffusion on every frame; with 2, it's every other frame, and so on. At 5, we get much less of that hyper-growth, and the image only starts to change every five frames. We'll just run 120 frames here, with the steps as they are.

Strength is, I'd say, the next most important setting, because it's the cohesion between the previous frame and the next frame, and it really determines how wild things get. It starts at 0.65, which is pretty good; I want to keep it a little smoother, so I'm going to put it up to 0.75, but in some cases you might want to lower it, depending on how wild and crazy you want to get.
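As an aside, here is a minimal sketch of what cadence means in practice. This is illustrative pseudocode under a simplified model, not Deforum's actual implementation, and the function name is made up:

```python
# Illustrative sketch only, not Deforum's real code: with cadence N,
# only every Nth frame is actually diffused; the frames in between are
# tweened/warped from the diffused ones, which is what smooths motion.

def diffused_frames(max_frames: int, cadence: int) -> list[int]:
    """Frame indices that would run through the diffusion model."""
    return [f for f in range(max_frames) if f % cadence == 0]

print(diffused_frames(120, 1))  # cadence 1: all 120 frames diffused
print(diffused_frames(120, 5))  # cadence 5: frames 0, 5, 10, ..., 115
```

Higher cadence means fewer diffusions, so renders come out faster and less flickery, at the cost of the image changing less often.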
Going down to Motion, there are a number of settings here, and they're all very important. This first one is the 2D camera rotation, clockwise or counterclockwise; a negative number gives you counterclockwise. We'll leave that as is. Now, this is where you look at Deforum and go, what the heck is going on. This Zoom field: you can really look at it as the mathematical equation of a wave. I am not a mathematician; my algebra teachers would be ashamed that I might not know what this expression does without pulling out a fancy graphing calculator, a good old TI-83. So I'm just going to remove it and replace it with a 1, and we'll show you some tools later that can auto-populate these. I'm sure someone in the chat is already saying "ChatGPT," and that's a good bet: ChatGPT and the other AI chat tools are really good at explaining what these expressions do if you pop them in.

The important thing to realize: you see there's a zero here, and that is the frame this value takes effect on. If you leave it at just "0: (1)", it means that throughout the whole video, the zoom stays at 1. Zoom is one setting where it's very important to note that you cannot put zero: zero won't stop the zoom, it basically collapses everything into nothingness and you'll get an error. Keep it at 1, which is essentially standing still; set it to 1 if you want things flat, without a zoom. Now we can add some keyframes: on frame 20 we'll zoom in a little, on frame 50 we'll pull out a little, on frame 100 we'll sit still. What it will do is smoothly move over the frames and blend, or tween (there's a technical term for it): it will slowly move from 1 to 1.005 as the frames advance from frame 0 to frame 20. That's a really good mental model: it moves smoothly between keyframes. Translation X is the left/right movement, as it says here; we'll go from -1.1 back to 0, so it will move, then stop, then move again, and we'll do something similar with Translation Y, moving down.
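To make the tweening concrete, here is a small sketch of how a Deforum-style schedule string could be parsed and linearly interpolated per frame. This is an approximation for illustration only: real Deforum schedules also accept math expressions inside the parentheses, which this toy parser ignores.

```python
import re

# Hedged sketch: parse a schedule like "0:(1.0), 20:(1.005), ..." and
# linearly interpolate a value for every frame between the keyframes.

def parse_schedule(schedule: str) -> dict[int, float]:
    pairs = re.findall(r"(\d+)\s*:\s*\(([-\d.]+)\)", schedule)
    return {int(frame): float(value) for frame, value in pairs}

def value_at(frame: int, keys: dict[int, float]) -> float:
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    for a, b in zip(frames, frames[1:]):
        if a <= frame <= b:
            t = (frame - a) / (b - a)          # 0..1 between keyframes
            return keys[a] + t * (keys[b] - keys[a])
    return keys[frames[-1]]                    # hold the last value

zoom = parse_schedule("0:(1.0), 20:(1.005), 50:(0.998), 100:(1.0)")
print(value_at(10, zoom))  # halfway from 1.0 to 1.005: about 1.0025
```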
Now, on to noise. Noise is actually incredibly important in Deforum, and here's why. With a single image, noise is not a big deal; you can mess with it, and it's necessary for the diffusion. But when you're making a video, and you're using strength to keep coherence from one frame to another, that noise can feed back on itself and quickly get out of control. You see this a lot: people ask, what happened, my video started tiling and getting squished, the colors got blown out, I lost everything. A lot of the time it's because the noise is too high and the strength is too high, and together they crush everything into noise. Personally, I like to keep the noise lower and not introduce as much, but if you're doing a more chaotic, wilder video, you can crank it up a little. Try it out, find what you like; become a noise artist if you want. I'm going to minimize it as much as possible in this one.

Coherence is the color settings; we'll go over some of them as we go through different modes. There's an anti-blur section as well; I'll leave those all blank. CFG scale you may already know from text-to-image or image-to-image: lower values are more creative, higher values stay closer to the prompt. We'll leave that as is... actually, maybe we'll pop it up to 9.

Then there are a bunch of settings that determine how it uses seeds. I'm going to keep it on iterative, which means the seed goes up by one for each frame of animation. If you want, you can change it to a fixed seed, or you can even schedule the seed by keyframe, which is awesome; I love that they thought of that. My seed is random, so I'm not too worried about it; it will pick a random starting point and iterate from there.

All right, let's put our prompt in. The key thing here is in the notes, and the notes are very important; always read them. These prompts start at keyframe zero, and this field uses JSON. So what I recommend: if you're getting errors, double-check that your JSON validates. Go in here, pop in your prompt (oh, I see our other session is ready to go). This default prompt we know works, and we're going to try putting our own in, like so. Now, if we paste it into a JSON validator and hit validate: bingo, looks good. The mistake I see a lot in the Discord is with multiple prompts. Say we add another entry for frame 100, like "cat": on the last line you don't put a comma, but on the line before it you do. That's what always throws people: the last entry doesn't get a comma, but every line before it needs one. Keep that in mind, and if you're not sure, check it in a JSON validator. You'll also see we have a negative prompt field. There are two ways to do this: either put your positive prompt up here and your negative down there, or put "--neg" in your prompt followed by your negative terms. It says right there: do it one way or the other, but not both.
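For reference, here is an illustrative prompts JSON in the format described above; the prompt text itself is a placeholder, not what was typed on stream. Note the comma after every entry except the last, and the inline `--neg` separator:

```json
{
    "0": "a vivid watercolor painting of a forest, intricate details --neg blurry, low quality",
    "100": "a cat, cinematic lighting --neg blurry, low quality",
    "200": "a tiger in a jungle, golden hour --neg blurry, low quality"
}
```

If generation errors out, pasting this block into a JSON validator is the quickest check; a stray trailing comma on the last line is the usual culprit.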
How are we doing over there? Any questions so far?

So far, so good. I'm seeing people having conversations in the chat, but we'll save some questions to circle back to. One thing I'm curious about, if you're able to: can you zoom in at all, Ctrl-plus or something, and get a little bigger on screen?

Is this a little better?

Yeah, that looks better. Cool, thanks.

OK, cool. So, this init image: we're going to use an image to start things off, which I find is hugely important in my creative process. We go into the Init tab; you can see I've got a bunch of glitch video stills here on the side from the synth in the background. I'm going to click "use init". The strength here determines how close to the init image it stays, so I'll go up a little, to 0.65, and we're going to use this lzx one. For the path, you can use a web URL, but since I've got the Creator's Club private storage, I'm just going to reference the storage on this cloud server, which carries over between instances, and then the directory: init image, lzx1. Hybrid video we're not going to talk about yet. Anything else I need to do here? I think that's mostly it. OK, we're good to go.

So here you can see it's started generating on the medium. It'll take a little while, so we can pop over to our other session. When you start it off, the timer looks like it's going to take a while, and then it really comes down in speed: you saw it start at something like 12 minutes, and I was about to switch tabs, but then it dropped back to 30 seconds.

Yeah, you see that in regular Stable Diffusion too, when you're just generating images: it'll say five minutes, and 18 seconds later it's done.

Exactly. Let's watch this one, because it's almost done. All right, you can see there's a very slight, smooth movement. I'm trying to reduce the amount of blur by keeping the movement slow; you could use the anti-blur as well. You can see that watercolor came out gorgeous, and when it slows down, the details clarify a little. So let's make the run a bit longer, and, since it generated so quickly, let's also just try a different init image. If we go to lzx3 and generate, we'll see the effect of using a different one, because it changes the colors and the shapes. It's really grabbing these lines from the video synth and bringing them in, making this abstract kind of thing. The strength is quite high, so it's not getting too wild; it sticks very close to that init image. But remember, once it's off that first frame, it's all on its own, and it's just the strength that's keeping it together, like glue, as it goes on.

I think that other one should be done. Yep, let's check it out, hit "update video". There it is; we've got like a spaceship or something in there. And a foot; oh, the feet guys are going to be in the comments. But you can see it's keeping pretty close to the init. So if we want it a little wilder, we can lower the strength: go over to Keyframes, bring it down to 0.65, maybe put the cadence at 4, and try again. This is where experimentation comes in handy, because what I find is that depending on where you start and what you're working with, your settings will be totally different. There's no "this is the way to do it every time" for a great result. In text-to-image you definitely see that: there's a prompt that's just amazing, people pop it in, and they all get the same awesome result. With Deforum you can get incredible results by copying other people's settings, but it's really dependent on your source: your init image, your prompt, everything determines the settings.

Here we go. Now you can see it's moving a bit faster, and it's leaving our init image faster as well, really going into an abstract sort of thing. This looks like a fruit; I don't know what's going on there. But you get the idea that it's shifting a lot more. So let's use guided images next. The way I'll use them is a little abstract, because I make abstract art, but you could also bring it really close and force it toward a specific image you've generated, to make a deliberate animation. That's a very complicated thing; I know we all saw that incredible anime those guys made. I'm forgetting their name. Matt, what is it? Oh, the Corridor Crew guys?
Corridor, yeah. That was phenomenal. That stuff is amazing. I try to stick to the more abstract side, but you can force it down those roads; there are a lot of things you can do to make it do something specific, and we'll go into hybrid video after this, because that's going to be very important.

So, there's a very important readme here: this only works with the 2D and 3D modes. And note that "interpolation" means something specific in Deforum; it is not frame interpolation, which you might be familiar with. Frame interpolation basically takes, say, a 120-frame movie and turns it into 360 frames by multiplying the number of frames and tweening between them to link them together, sort of like how Deforum uses strength to make one frame match the next. Deforum's interpolation mode is different: it tweens prompts, moving from one prompt to another. Just keep that distinction in mind. Then there's video input mode, which in the old version I was actually not too happy with. And yes, to the question in chat: you can use any image to start the animation, like I'm doing here with random stuff I grabbed off my synth. You could grab a new one from the synth right now and pop it in, or something you downloaded.

Yeah, somebody was asking if they can use something they drew as the starting image, so I just wanted to clarify.

Any image, it doesn't matter. Or you could take something you drew, pop it into ControlNet, and then bring it in here, right? There are lots of capabilities. Video input mode I do not use, I'll say that straight up; I wasn't impressed with it. What I've used instead is hybrid video, though there are a few specific use cases where you might want video input.

Anyway, see how it says to set the init image strength higher, recommended 0.65, and then you need to set the seed behavior to "schedule". So let's get that going: seed behavior to schedule, and I'll put in the seeds I have here, make the run a little longer, and then we can grab the prompt for this.

What's entered in the seed schedule there? Where did that come from?

That's just something I've practiced with before, to make sure this was working properly, but I'll walk through it. This is seed 5; it can be any number, I just chose 5 arbitrarily. A negative one generates a random seed. At frame 200 we'll use seed 555, at frame 300 it will be random again, and at frame 400, the end, it will be seed 5 again.

Gotcha, and the seeds are fairly arbitrary numbers, right?

Yeah, I haven't gone in and investigated hundreds of seeds or anything. You can put any number in there.
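As an illustration, a seed schedule matching what Revolved describes might look like the line below. The frame for the first random seed wasn't shown on stream, so 100 here is a stand-in; as described above, -1 means "pick a random seed", and this only takes effect once seed behavior is set to "schedule":

```
0:(5), 100:(-1), 200:(555), 300:(-1), 400:(5)
```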
Here on the images, what I've done is: frame zero is image one, then two, then three, and then back to two and one (there's an illustrative sketch of this schedule just below). You'll see it does say you can make a loop. Now, what I will tell you is that it takes a lot of work to make a loop with this, and you'll really have to jog your settings. Think about it: we start with an init image, the strength generates frames off of it with the cohesion and the noise, and then we're injecting another image into it at frame 100, so it's going to try to blend there. And this run is very short; 400 frames is not long, even at 15 frames per second. To make a loop takes a lot of work over a lot of frames. You can do it, just know that this is probably not going to look like a loop even though we're going back to the original image. With some init image work we could definitely get there, so it's not impossible.

Since we've changed this to 400 frames, let's make sure we've got 400 set, and enable guided images mode. And we'll need to change... sorry, not the seed schedule, the motion, so the frames match up. So for Zoom I need to change that; Translation X we'll key to frames 100 and 300, and Y as well; maybe we'll add a keyframe at 400 too.

Just to clarify: on the Zoom and on Translation X and Y, the series of numbers. Starting on frame zero, the 1 basically means no zoom, is that correct? But on Translation X and Y it means something else; there, zero means no movement?

Gotcha. And so what we're doing here is, from frame 0 to frame 99, essentially it's not going to zoom, and then starting on frame 100 it starts zooming in slightly?

No: from frame zero it starts slowly increasing the zoom until it hits the keyframed value, 1.005, at frame 100.

OK, yeah, it's not quite obvious. So from frame 0 to 100 it slowly zooms in until it hits that 1.005. Gotcha.
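For illustration, a guided-images schedule cycling through three images and back might look like the block below, in the same JSON style as the prompts. The file names and frame numbers are stand-ins, not the exact values used on stream:

```json
{
    "0": "lzx1.png",
    "100": "lzx2.png",
    "200": "lzx3.png",
    "300": "lzx2.png",
    "400": "lzx1.png"
}
```

Even with the first and last entries identical, as noted above, the result usually won't read as a clean loop without a lot of additional tuning.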
So let's get that one baking in the oven. Three minutes on a medium; again, these are pretty fast. It's not our best card, but it's still generating all these images at a good rate. All right, do we have any questions? Maybe do another giveaway while we're waiting for this bad boy to cook?

Yeah, let's do a giveaway. I got my bot working, so now I can pick from anybody who's been active in the last 15 minutes; if you've been chatting, you're eligible. I rolled it and... it picked RunDiffusion as the winner, so let me roll it one more time. Actually, we need people to say a word. Let's hear people say "tween". T-w-e-e-n. It's a weird word. RunDiffusion is pretty happy they won there, by the way. We're giving away thirty dollars in credits on this one. Tween and Weezer, good old bands. I'm debating when to press the button, because entries are still rolling in; let's make sure we get everyone. We've got some double dippers, but it only counts them once. OK, pressing the button. For thirty dollars in credits: timbuck2, that's Tim, buck, and the number two. You just won thirty dollars in credits. Nice, and hey, there's Nightbot. Why don't we do another one, a twenty or something. This one also picks from "tween", since everybody's on the tween train now. We got Sergio F. Mohar Masters Nation; that was for a twenty. Nice. All right, let's do a ten. For the ten we've got Deep Love AI. The tech-priests will be pleased. I'm just making notes here so I don't forget who got what; you can't see it, but I've got my little notepad with all the prizes written down, and I'm crossing them off as we give them away so I don't lose track.

Great. We'll make sure all those credits get handed out. If you're not signed up already, you'll have to sign up, go into our Discord, send us a message, and say, hey, I won the prize. We'll verify it and get the credits added to your account. Somebody's asking how to get them: yeah, jump into the RunDiffusion Discord and they'll take care of you.

Sweet. All right, we're good on this one; let's update the video and see what we got. OK, so it starts in the same place, with the init image, but let's see how it develops. You can see it changing very rapidly as it gets to those different guide images (oh, we got a little car in there, that's cool), but the strength and color stay pretty coherent between these images, even though they're very different. And near the end it blooms a little in quality; you can see a bit of blur as it tweens between the different images.

So, if we go to guided images, there are a few settings here, and if you really want to nail an exact image, this is where you'd do it. We've got the tweening frames schedule (that word again), the image strength (how much do we want it to affect our image?), and how we want it blended in. The blend is an equation, listed right there: a mixture of the blend factor, the blend factor scope, and the tweening frames schedule. Again, my math's not great; ChatGPT probably has the answer, and maybe Huemin can explain when he comes on. Just know these are fun sliders to play with, so let's try some things while we're at it. You want some pretty consistent images in here, I'd say, but hey, experiment: get wild, put weird stuff in there, do a longer video, or do tons of images, a different one every frame, or every 10 frames. There are lots of ways to play with this. Color correction controls how close it stays to the colors of the input frame, so let's bring that down and see if we can pull in some of these other colors, because we've got some really wild ones in here. Actually, these are pretty consistent, so maybe instead we'll go with four and five and try that. But we'll have to move on, because I'll just nerd out about this forever; this art is so fun. I do want to show the other two modes, which will take a little bit.
Yeah, and 3D is one that, when I was messing with Deforum, I didn't even move into; I stuck to 2D, because I wasn't ready to make a video with all the options available inside 3D.

I'm going to see if we can get a large box now. RunDiffusion, can you kick someone off for me? No, I'm just kidding. Let's try a 2.1, select, and launch, and we'll see. Meanwhile, let's talk about the 3D version, because the 3D camera is incredible; I'm really blown away by it. And I do want to give some shoutouts here, because the Deforum Discord is incredible. Everyone there is super helpful and so nice, they get bombarded with questions non-stop, and the community has really come together and helped out in a big way. The first shoutout is snackpack; I saw you there in the channel. That's how I learned: by looking at what snackpack does. They post their videos and include the prompts, so you can go through and figure out how things do what they do. I'm a big fan. And we've got a bunch of fresh large boxes loaded up now; we have a scaling back end, so we can technically go to ten thousand, our provider was just a little caught off guard there. The other shoutout, and we'll talk more about their work later, is reallybigname, because they not only develop and add incredible stuff to the Deforum extension, they also post all their videos with details of how they were made. Both of those creators are incredible and worth looking up in the Discord; go through their past messages and see all the crazy stuff they've made.

All right, so we're in this medium one, still spinning up the big box, but let's jump in. We'll make it a little wider, 20 steps, go to Keyframes, and switch to 3D mode. For the border we're going to do "wrap". Now, this is really important, especially when you're doing 3D camera work; you have to visualize how the 3D works. It goes through and makes a depth map of the current frame, and then uses that in a 3D space. Think of it as a virtual camera: you're telling the camera where to move in this 3D space. It's wild stuff. And we did get our large box, thank goodness, so let's do this on the large instead; we've already shown how it works on a medium. Maybe make it widescreen. OK, got to be in the right session here: 3D mode, cadence back at 5, and maybe we'll do 1000 frames.
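Here is a conceptual sketch of what the 3D mode is doing, under stated assumptions: Deforum itself estimates depth with a model such as MiDaS and reprojects with proper camera matrices, while this toy version fakes a depth map and applies depth-scaled parallax, purely to show the "depth map plus virtual camera" idea:

```python
import numpy as np

def fake_depth(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a real depth model (Deforum uses e.g. MiDaS):
    # here we just pretend brighter pixels are closer to the camera.
    return frame.mean(axis=2) / 255.0

def warp_3d(frame: np.ndarray, translate_x: float) -> np.ndarray:
    # Shift each pixel horizontally in proportion to its depth:
    # near pixels move more than far ones (parallax), which is the
    # core of the virtual-camera-in-3D-space illusion.
    h, w = frame.shape[:2]
    depth = fake_depth(frame)
    out = np.zeros_like(frame)
    xs = np.arange(w)
    for y in range(h):
        shift = (translate_x * depth[y]).astype(int)
        new_x = np.clip(xs + shift, 0, w - 1)
        out[y, new_x] = frame[y, xs]
    return out

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
warped = warp_3d(frame, translate_x=4.0)  # nudge the camera right
```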
One note before I jump into the big box and start showing you the speed: if you go into the Deforum output folder here, you can actually see all the individual frames of the video you're making, and that can really help you when you're making a new one. This is also a folder I delete quite often. I'll go in, search "video", hit enter, and now I can see all the videos I've made; there's our video from before. I keep the videos I want, but you can also see all the frames for everything we've done so far, so it's best to delete it when you're done and start fresh. The useful part is that you can scroll down here and see in advance: oh, this video is looking like it's not going to be so hot, more like a hot mess, and then you can stop the render. The other way to cut down the time you spend experimenting is to run a much smaller number of frames, look at how it's going in the first 30 seconds or so, then try again with a longer run, and again with a longer one. That cuts down on server costs, or on time if you're running locally, and it makes sure what's coming out looks great before you commit. It's nice, because in Deforum you know if it's bad pretty fast, whereas when you're training a model, you generally don't know until much later, and by then you've already trained most of it. So let's go into this, and I'll get into some more settings; if you want to do another giveaway or something, go ahead.

Actually, I want to pull Huemin onto the stream, if he's available, to help with some of the concepts. So now we've got Huemin on here. Huemin is one of the original creators of Deforum; he actually gave Deforum its name, he runs the Deforum Twitter, he's inside the Deforum Discord. He is the man over at Deforum, so I thought it'd be cool to bring him on to help answer questions as we go and maybe clarify some of the concepts you've been walking through. Thanks for joining us today, Huemin.

Thanks for having me; it's a pleasure to be here. There are lots of questions in the live stream chat that I've been trying to answer. There's a lot going on under the hood. At Deforum, we're a community, and we want to give people access to as many tools as we possibly can. Unfortunately, that means there's a ton of settings, but we try to make everything available so you have complete control.

Is there anything that's come up so far that you want to clarify, or bring more attention to? I've got my head on a swivel here, chat over on one side, trying to watch the cameras and pay attention to what Revolved is talking about, so I'm looking all over the place. But is there anything so far you want to dig a little deeper on?

Yeah. One thing I think is really important is to understand how these diffusion models work, especially in the context of these Stable Diffusion animations. The key thing to understand is that this is essentially an initial-image animation. The diffusion model takes noise and, with a text conditioning, produces some image. Instead of starting from noise, we can start from an initial image, and we can still use that same conditioning to modify and augment the starting image. Essentially, all these animations are: we take the output of one generation and put it into another generation, so we have this feedback loop, and we have clever ways to augment (translate, rotate, zoom) the output of one generation before putting it into the next. That's how we create this illusion of a camera moving through 3D space or 2D space.
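To pin down the loop Huemin is describing, here is a minimal, self-contained sketch. `diffuse` and `move_camera` are trivial stand-ins, not Deforum's or Automatic1111's real API; the point is the structure: each frame's output, slightly transformed, becomes the init image for the next frame.

```python
from dataclasses import dataclass

@dataclass
class Image:
    label: str  # stand-in for real pixel data

def diffuse(init: Image, prompt: str, strength: float) -> Image:
    # Stand-in for img2img: higher strength = stay closer to the init.
    return Image(f"img2img({init.label!r}, {prompt!r}, s={strength})")

def move_camera(img: Image, zoom: float) -> Image:
    # Stand-in for the zoom/translate/rotate applied between frames.
    return Image(f"zoom({img.label!r}, {zoom})")

def render(init: Image, prompts: dict[int, str], num_frames: int) -> list[Image]:
    frames, image, prompt = [], init, prompts[0]
    for f in range(num_frames):
        prompt = prompts.get(f, prompt)         # keyframed prompt changes
        image = diffuse(image, prompt, strength=0.75)
        frames.append(image)
        image = move_camera(image, zoom=1.005)  # output -> next frame's init
    return frames  # stitch these into a video

clip = render(Image("init.png"), {0: "watercolor forest", 100: "a cat"}, 120)
```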
Very cool. So this is actually a question I had for you — Nathaniel's asking it here as well — where did the name Deforum come from? I know you were the one who came up with it, so I'm curious.

Yeah, so I actually started messing around in the AI space in, I guess, late 2021, and before I found all these amazing communities on Discord I had this idea in my head that I wanted to figure out a way to program creativity. That was a really stupid idea, but I started exploring and experimenting with code, and the first two folders I made when starting my AI journey were "human" and "deforum" — not spelled the way they're spelled now. So I've had these names from the beginning, and Deforum just worked; it fit. It's a combination of "design" and "forum," but it's also a play on words: with this AI stuff we're deforming these images and augmenting them with AI. That's really how the name came together.

Gotcha — very cool. Revolved was mentioning another giveaway, and I do want to say something real quick: I know Judge Works won one of the things earlier when I was messing with Nightbot the first time around and it didn't work, so I'll make sure he gets one of the prizes as well. I just want to say that before I go any further and somebody gets upset with me — so you won one too.

So, with Deforum, let's go back over to some of the settings and maybe start messing with the 3D. I want to dig into the 3D, and it'd be cool to have both revolved and human on voice chat as we go through it to talk about what's going on, because 2D was fairly straightforward to me when I was messing with it — there are only so many motions you can do. When you get into 3D there are quite a few more knobs and buttons, and that's where it started getting a little over my head. So what we're seeing on the screen right now — this is a 3D generation?
Yeah, this is some of my earlier work in 3D. You can really see the depth and how the camera moves around. I wanted to bring this one up because there's some interesting stuff in it: I used an init image here — this is some of my video art — and then it kind of goes up and you see these lines. Actually, human, I was curious — I know this one comes up in the Discord — those lines appear when the camera goes out of bounds, and there are a few ways to tone them down, like adjusting the strength setting. I kind of like the effect, but do you know why it happens?

Well, I think these are just artifacts of the diffusion model, and I personally love them. I try to accentuate them as much as I can, because they're this emergent behavior of the models — not necessarily desired, but certainly cool.

I agree — you can get some really awesome textures with them; it's a ton of fun. This one is more of a sideways shot, but it still uses the depth map a little: you can see the camera turning, and it twists everything around. I really love it because it gives this zero-gravity feeling, like you're floating through space. Some of the artifacts here got interpreted as desert dunes. The little flashes you see come from the frame interpolation I did afterwards to smooth it out — this was before frame interpolation was added to Deforum, so I used a free application called Flowframes (I think they might take donations). It does make things a bit flickery in spots, but otherwise you get this nice, smooth animation, which is really cool. So yeah, we can dive into the settings — I just wanted to show off some other work while we were chatting.

This is awesome. So the initial art you were using — that's all the stuff in your background there? You make it with — I'm not quite sure what you call all the crazy wires and cables going all over the place — but it's built with that?

Yeah. Actually, if you switch to my camera here — this is the video synth I have running right now. Over here I can tweak these settings: this is a synth oscillator, and if I want I can change the blending between layers. This one uses shaders — shaders are amazing. Look at programs like VDMX, and there's Shadertoy — actually, if you're looking for cool stuff to run through Deforum, check out Shadertoy, because there are a ton of cool shaders you can use to make awesome, weird little animations, and there's a whole community around making these things. You can also run videos through Shadertoy to add extra effects.

Wow, that's crazy — that's a whole world I've never even dived into before. Yeah, there's a ton of stuff out there in the video world, and it's neat to see the areas where it intersects with the AI stuff.
Maybe we should switch back — I've got Shadertoy up here. You can see there are different fun things, and the code is right there, so you can actually go in and change some of the vectors and make it do other things. You can load video into it, you can use a camera input — there's all kinds of crazy stuff in here, lots of fun to experiment with.

Real quick question for human: Deep Love AI is asking whether you see any sort of Adobe After Effects extensions for Deforum, or anything like that, in the future.

I was actually thinking about this earlier. I think the next evolution of these animations is actually going to be DreamStudio Pro. I'm not sure if either of you are familiar, but back in the Disco Diffusion days — Disco Diffusion being the precursor to Deforum — they were using different diffusion models but similar ideas, using inits to create these animations. A lot of the Disco Diffusion developers are now working for Stability and are working on DreamStudio Pro. I know there's a lot of development going into making these animations as smooth as possible, but who knows when they'll actually release it — they've been talking about it for months.

I've seen little leaks on Twitter, people trusted with testing out the new goods. There's SDXL — I've seen people testing that, and I saw one guy say it means no one will use 1.5 ever again, which is a big claim. We'll see whether the anime waifu stuff makes it over to SDXL, maybe, maybe not. I've also seen there's another thing beyond that, SD 3.0, which will be the consumer version, which sounds fascinating. Lots of exciting stuff going on. Emad is still taking a ton of flak, though — it's funny, he's taking so much flak for 1.5 because of the artists' data, and he was posting on Twitter saying, "I didn't even make that — Runway made 1.5 and released it, but they get no flak; I'm just a person, so everyone attacks me instead of the company." I'm glad he's still out there building despite the haters. So DreamStudio is going to be coming out with some fresh hotness, and thankfully Emad has that lovely attitude about open source, so the tools will come to the general population as well — which means Automatic1111 will be able to use them too. That's exciting. I don't know if you've got any inside scoops there, human, but I'm interested in your thoughts on the next generation of Stable Diffusion tools.

So I actually have access to Stable Diffusion XL, and it's amazing — it's so good. You get all of the benefits of version 2 plus all of the benefits of version 1. It's really amazing.

Wow, that's big hype. Is there any word — I know we're getting off topic — on when it's expected to release to the general public?

I heard rumors, and of course things are super dynamic, so I have no idea if this will happen, but I'd guess within the next month. Again, I really don't know.
Yeah, and the nice thing is that because he took all that flak for Runway's 1.5 release, he's made an opt-out and opt-in model: if you're an artist and you want your art in the model, you can reach out. There's a third-party website that artists put together, and he started working with them, so you can opt out or opt in of having your art in the next 3.0 release, which I think is really great. It's not like he's just saying "screw the haters, we're going to do whatever we want" — he's actually working with people and trying to build bridges rather than move fast and break things. I think that's a good way to go.

I feel like just in the last four weeks or so I've seen a lot of the AI hate die down. Everything moves so fast, but even on my YouTube channel I used to get comments all the time — "AI is stealing from artists," "what you're doing is evil" — and a lot of that has phased out just in the last several weeks, which I find very interesting.

Hmm, yeah — oh, this one, I'm going to extend this run too — I know there's still a ton of hate out there. Emad deactivating his Twitter was pretty notable; he made a faux pas, I think. He was talking about firewalls and everyone thought he was admitting to stealing, but what he was actually talking about was working with companies like Getty and others so they can have their own models — they're working with companies that have their own copyright concerns. Obviously, if you've done any amount of work in 1.5, you've probably seen the Getty Images watermark show up.

I just want to point out one thing quickly: this is the guided-images run we did, and now you see the color comes out a lot more from the individual images we injected — it's closer to the images I put in. Just by adjusting the blend and lowering the image strength, we were able to get it closer to the source images. And I think we've got a few minutes left on our 3D run — human, was there anything you wanted to chat about, or should I jump into some hybrid stuff and start spinning that up?

Yeah, I think hybrid sounds neat.

So hybrid is something that — and correct me if I'm wrong here — a contributor called reallybigname built on his own, and it was brought in via GitHub, is that right?

Yeah. I was actually just scrolling through the user Discord and I saw reallybigname doing this incredible RANSAC-based hybrid video stuff, and I was like, hey, we've got to add this, this would be amazing. He worked really hard and put together a pull request to update the repo, and boom — now we have hybrid video. His work is actually cited by RunwayML, I think in their Gen-1 paper. It's incredible: you can be in the community, have a cool idea, get help from people, and all of a sudden you've made some groundbreaking AI tech.

Yeah — I saw the craziest story on Reddit, too, where a guy who's not a programmer, but knew something about video,
just used Bing and ChatGPT and coded his own anti-flicker tool — he's got posts of the current build on Reddit right now. This is such a quickly moving space, but what I love is how different it is from some of the other big movements we've had in the past: there's that early-internet feel of collaboration, which is a really incredible thing. Fingers crossed we don't end up with corporate overlords ruling everything, but at this stage, anyway, we've got this fun, creative space where anyone can jump in and contribute to a project. People talking on GitHub about the problems they're having and fixing things — those are the real heroes keeping the open-source movement alive. So if you think you can help, jump in, by all means, because the space needs as many people working on it as possible.

Absolutely. And we've got our 3D video, so let's take a look at it. It took about 12 minutes for 2,000 frames. It'll take a second to load into RAM to play the video... there we go. If I wanted full screen I could just navigate to the folder and pull it up, but we've got a full minute of video and you can see the 3D happening here.

I love the color palette of this one. Yeah, it's fun — I tried to throw in some sci-fi elements, windows with planets behind them. It's very contrasty, too — it's got that Michael Bay lens-flare action going on. I think a lot of that comes from putting something like "astronomical phenomenon" in the prompt, so you get galaxies and clouds and strange things. This is actually based off a reallybigname prompt — he did an incredible one; human, you probably saw it, the never-ending Egyptian obelisk exploration video. If you go check that out, you'll see the prompt — amazingly good work on the prompt there.

All right, so that's the 3D video. Might as well generate another one, and we'll go over to the large instance I spun up here. Now, can you clarify for me — we know the difference between 2D and 3D, but what's the difference with hybrid?

Yeah, so — hybrid. reallybigname wasn't quite satisfied with video input mode. He'd been talking about it, and to him it didn't really do much, and I agreed — it was kind of rough around the edges, and there was no control, there was nothing. He wanted to bring in the video and then, like I showed you on the video mixer I have here, use those elements to blend the input video with the generated prompt, so you can control that parameter and effectively layer the generated piece over the video input. And he built on work pioneered by — I forget his name — Sxela, that's right, who has a Patreon where he does WarpFusion, which is a Colab you have to be on the Patreon to use. He's got some brilliant stuff — a solo dev crushing it; I'd see daily builds all the time. The guy's nuts.

So is hybrid similar to what Runway is trying to do with their Gen-1 — is it that kind of idea?

Yeah, it is — though Gen-1 really took it to the next level, where
they're bringing in things almost like InstructPix2Pix. We're seeing this interesting space open up where hybrid is an early version of this stuff — still really great to use — while Gen-1 has that very crisp, professional, full-replacement mode. InstructPix2Pix and ControlNet are different ways you can break a video down per frame and reimagine it. Before the Deforum tools came around, a lot of people made videos by grabbing large numbers of frames and stitching them together with a video program — taking all the raw images in a row and piecing them into a clip — and they're still doing that with ControlNet right now. Human, we should definitely have you talk about ControlNet, maybe after the hybrid piece, because I'm so curious what's going on there. But right here — I'm in the Output tab under Deforum — you can take frames and turn them into a video straight from a batch: you just put in your path and so on, and it stitches them together. You could do that for ControlNet output, or for InstructPix2Pix.

So, let's jump into hybrid. This "generate input frames" step you only have to do once — it's basically what we were just talking about: it takes all the frames from the video you put in, splits them all up, and puts them in a folder. And yes, I'm seeing a comment that hybrid mode has been really confusing — it is a tough one, I have to say. There's a lot of complexity in even a small number of settings. So we're going to try it out, and I'll show you some examples I've done. Actually, here's a good example I can pull up — whoops — I wanted to take the glitch art that I make and put it in here. So this is the glitch —

I think I might have just lost audio on you. Yeah, I can't hear anything.

Oh, sorry — we're back. It had audio on it — it had music — so maybe it switched focus. Anyway, that was the raw input video, and here I've reimagined it. This is the first example I got working with hybrid: you can see how it took that glitch material and turned it into a kind of art nouveau painting — the colors, the different glitch aspects turned into faces and clouds. Really awesome stuff. Here's another example using the same source, but toned down a bit — you can see a figure coming out of the clouds. For this one I used a custom model that I made. It can make some really incredible stuff.

Yeah — I want to explain something real quick. In the original video input mode, all you're doing is taking your video, separating it into frames, and doing an individual generation on each frame. What hybrid mode essentially does is split the video into frames but also extract more information from the video — information about how things are moving from frame to frame — and it uses that motion information, in addition to the frames themselves, to make these warpy videos.

Yeah — sort of like how strength works when going from one image to another, it's picking up some of that information as well. Would you agree with that, human? Yeah, definitely.
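Here's a rough sketch of the idea human just described, under some assumptions: motion is estimated with OpenCV's Farneback dense optical flow (one of the flow options mentioned below), and `img2img` is again a hypothetical stand-in for the diffusion call — this is not Deforum's actual implementation, just the shape of it.

```python
import cv2
import numpy as np

def hybrid_step(prev_gen, src_prev, src_next, prompt, img2img, strength=0.7):
    """One hybrid-video step: estimate how the source video moved between
    two frames, apply that motion to our previous generation, re-diffuse."""
    g0 = cv2.cvtColor(src_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(src_next, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive source frames (Farneback)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    grid = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x = grid[1] + flow[..., 0]   # shift each pixel by the flow vector
    map_y = grid[0] + flow[..., 1]
    warped = cv2.remap(prev_gen, map_x, map_y, cv2.INTER_LINEAR)
    # The generated image now moves the way the input video moves
    return img2img(warped, prompt=prompt, strength=strength)
```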
Okay, cool. One of the problems with my video is that it's really glitchy — it comes from a TV, low-res, lots of noise, lots of random stuff in there — so when I was trying a lot of other modes it wasn't working very well. But hybrid mode came up with some creative ways of using it, which were pretty awesome. This one turned into a weird subterranean-cave, ancient-ruins sort of thing, which was really fun, and these were similar inputs but aimed more at faces and portraits in 1.5.

One thing I've found in my work is that if I do a lot of interpolation, or I'm working with hybrid mode, I sometimes get this — I wouldn't describe it any other way than jello. It wiggles. That's the interpolation going into overdrive: it isn't using the motion information from the video, just the frames, so the pixels move in a way that's inconsistent with how the video actually moves — and that consistency is really important for hybrid video.

So, back here: we're going to enable hybrid composite, which means the original video and the generated video get composited together, and we'll use the first frame as the init image, because we want it to start from what the video actually looks like. Now, this is where it gets interesting: hybrid motion and flow are where it takes that movement information from the video and transfers it into the generated video. That's where I'd sometimes get jello, and sometimes get movement that was beautiful. This optical flow mode, if you really nail the settings, is incredible — you can get brilliant stuff, and a lot of what I showed you uses optical flow with DIS Medium or Farneback. I don't do much masking at the moment — it's definitely possible, but unless you're actually feeding a mask in, it's been hard to get working properly for me. We're going to start with hybrid motion set to none for this input video, but keep in mind these other methods are really great for capturing motion, so definitely test them out.

Let's make sure the other settings are in place. On frames: keep in mind that hybrid ignores max frames and only uses the input video's length for the frame count. So when you're testing this out, if you don't want to sit around forever waiting for a generation, use a short video — say ten seconds. Take a clip from the start of the video you want to use, pop it in, and you won't waste a lot of time or money. Max frames is fine; cadence you want at 1. The reason is that hybrid video is trying to generate and diffuse off every single frame — if you do less, in my experience you get more jello, because it's guessing and tweening, producing that goop from motion it isn't actually tracking. Having every frame generate — and I don't know if my logic is fully correct; working with hybrid video is a little bit like banging rocks together to make fire sometimes — but cadence 1 I find is best. We're going to use the 2D animation mode.

Now, strength: you want it really high — really high. If you consider strength 1, nothing will show up except what you put in — or, sorry, how would you classify strength 1, human?

So I'm potentially going to flip the directions here, but essentially you have two ends: only random noise as the input to the generation, or just the image as the input to the generation, and there's an entire spectrum in between. At 0.5 you have half random noise and half input image. What I think you're saying is that for these hybrid videos, you want to be closer to the image you're putting in than to the noise.

Put much more gracefully than I could have — it's awesome to have you here, for sure.
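To make that spectrum concrete, here's a toy illustration. Real samplers noise the image in latent space over diffusion timesteps; this linear pixel-space blend is just an assumption-laden simplification to show what "half noise, half image" means.

```python
import numpy as np

def noised_start(init_image: np.ndarray, strength: float,
                 rng=np.random.default_rng(0)) -> np.ndarray:
    """Toy picture of the strength spectrum described above:
    strength 1.0 -> start purely from the init image,
    strength 0.0 -> start purely from random noise,
    strength 0.5 -> half image, half noise.
    (Not how a diffusion sampler literally mixes them.)"""
    noise = rng.normal(0.0, 1.0, init_image.shape)
    return strength * init_image + (1.0 - strength) * noise
```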
I'm going to turn up the CFG scale, because I want the result to stay pretty close to my prompt, which I'll enter now. With these extra bits, we're going to make this input video — which I'll play in a second — comic book style, which should be pretty cool. I'm going to take a weird '80s movie trailer. I showed you Shadertoy; I'll also say, check out archive.org — go into the video section, there's a ton of stuff there. I love archive.org, it's amazing, and you can find a lot of Creative Commons video there that you can use. I grabbed a Creative Commons video, so you can use it for hybrid video or art, no problem — the licensing is all good.

Very cool — that actually brings up a question from the comments. This might be one for human, but any videos we generate in Deforum, we have the rights to use however we want, commercially or otherwise, correct?

Yeah. Deforum is distributed under the MIT license, which means you can do anything you want with it, and the people who make Deforum aren't liable. But essentially, the rights to use any of the outputs for commercial purposes fall under the licensing of the model itself — and that's completely okay with the Stable Diffusion models, the checkpoints that Stability released.

Gotcha. Now, I guess there could be some gray area — say you took a commercial video that somebody else created, that you don't have the rights to, and remixed it with Deforum. You might start getting into gray areas there, right?

Definitely, and I think it's really important to try to understand this stuff as best you can, because you'll save yourself from accidentally doing something you're not supposed to.

Yeah — and the nice thing is there's some protection in the form of fair-use law, so make sure you understand those rules if you're going to be using this stuff. When it comes to art, though — I make music, hip-hop sample-based things, so my understanding of this is a little different — I make it for personal use. I'm not releasing it for tons of money or anything; it's just for fun, so I have no qualms using this. This movie is amazing, by the way — absolutely twisted, strange sci-fi from the '80s: Liquid Sky. It's like half fashion, '80s nightclub, aliens that come and have to have sex with you to live or something — I forget — it's very, very strange.
But I liked it because the face is centered — I thought it'd be a good example with a person, since a lot of people starting out maybe aren't doing weird abstract video stuff like I am; they want a person, or a style. So let's take a look at that and pop it into the video init path.

A few other settings are really important. You'll notice the anti-blur setting — this is something reallybigname is not a fan of, from what I saw in his tutorials — so reduce it to zero to get a better image, because you're not doing the things you'd normally do in Deforum, like tweening, where it might help; here the anti-blur is just going to get in the way. For seed we'll do iterative, and strength is already really high, so I think we're pretty much ready to go — we've got our prompt. One thing: since we're using hybrid, we're not going to click "use init"; it's going to use the initial image from the video as the init image, and we're going to increase the strength so it shows up. (Yes — fan art, that's definitely what we're doing.) And again, generating input frames you only need to do on the first run.

Now, there are a lot of important settings under hybrid schedules. Comp alpha — alpha means transparency. There are no tooltips for these; we're going into the unknown, stepping onto slightly shaky ground, but we'll have some fun with it. If you have a mask blend, it gets used here. Contrast you don't want to touch unless you have a specific case — it acts on the light in the frame, so you could use it to make a transition: brighten things up for a flash, or darken for an end scene, and it will ramp gradually as you go from one frame to the next. Maybe you want your video to slowly get darker at the end, and then you can use another clip to go into another scene. I saw someone in the comments earlier who wanted to make a full-length Netflix-style video in Deforum — you're going to need a lot of separate clip renders for that, so get your transitions figured out now.

As for comp alpha: this is the blending of — sorry, not of two videos — the blending of the input video and the generated piece. You really want this toward a lower number to get roughly half and half. When I was making those earlier videos, this was actually very low — around 0.3 — so you could hardly see any of the original video in it, and the other example was also quite low. Just keep in mind you'll want to try different settings. Oh, and here's the jello — everyone wants to see the jello. This is the jello: I was making some underwater stuff with it, and that's what too much frame interpolation looks like. Okay — so we're going to lower the comp alpha, and I think we're good. Let's give it a go. Fingers crossed.
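As a concrete reference for the two hybrid settings just discussed, here are Deforum-style schedule strings — `frame:(value)` pairs, interpolated between keyframes. The numbers are illustrative, not recommendations:

```python
# Comp alpha: blend between the input video and the generated image.
# Lower values let the generation dominate -- at 0.3 in the examples above,
# the original video was barely visible.
hybrid_comp_alpha_schedule = "0:(0.5), 120:(0.3)"

# Contrast: hold it steady, then ride it down over the last 30 frames
# for the darken-to-end-scene transition described above.
contrast_schedule = "0:(1.0), 270:(1.0), 300:(0.1)"
```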
All right, it seems to be cruising. We did generate another video with the 3D run, so we can check it out — it'll take a second to load. Should we do some giveaways? Yeah — I've got another $50, a $20, and four $10s to give away, so let's do another $50, why not. What keyword should we have them throw in? "Hybrid." All right, type in "hybrid" — and it'd be good to do some Q&A here too, so maybe after the hybrid we'll do the Q&A, because otherwise I'll probably miss the questions. Definitely — and then I'll run through a few things quickly as well. I know we've gone quite a while, but there are some really important tools we can mention at the end, a few cool tools in the ecosystem.

Can I full-screen this? No — that little square next to the three dots is grayed out; it doesn't work in here. But you can see we've got the 3D stuff. I really love it — it's not perfect, the depth map — and I think ControlNet is really going to change the game for 3D, because it's going to use a much better depth map. I've actually seen lots of new depth-map models coming that have more detail in them as well, so I'm guessing SDXL and other things are really going to take this 3D realm into a new space.

All right, we have a winner — Lionel, L-A-I-N-O-L, you just won $50 of credit to RunDiffusion; let me make a quick note of that. Big bucks. All right, Lionel — expecting a full movie now, no pressure. And I'm going to roll it one more time for a $20... we've got Gert K Nielsen — you just won $20. All right, Gert. I've got a handful of $10s left that I'll save for toward the end, but now would be a good time, if you have questions while we're waiting for this stuff to process, to throw them in for human and for revolved, and we'll make sure they get answered.

This one ended up cool — whoa — this is with some of the settings I messed around with a bit; you can see it's getting a bit wilder now. Whew, very fun.

That's cool. So somebody was asking earlier: how do you add music to it? I know there's a feature to add music to the videos as well.

Yep — in the output settings you can specify a soundtrack to add. I personally like doing it after the fact, because I can line it up to the video a little better. Which brings me to a few cool things you can do with music — let's go over those really quick. There are a number of third-party tools for it. Here's an example of a keyframe generator: this will actually parameterize your curve and give you a math equation — and you can see the string up there, or sorry, down here.

Wait, go back real quick — the string you just generated, what are you actually generating that for?

Say I had 100 frames and I wanted to adjust the strength parameter over the course of them. I've got my idea of how I want it to change at frame 30, then 64, then 77 — those are the frame numbers — and if you come down here, you can see those are the keyframes. I can copy that string — you can keep it in the Disco-style format — and pop it into whatever parameter I want: x-axis movement, y-axis movement, strength, a hybrid schedule, whatever it is.
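For reference, that keyframe string is the same `frame:(value)` format used throughout Deforum's schedulable parameters. Using the frame numbers from the example above (the values are placeholders):

```python
# Keyframes at frames 0, 30, 64, and 77; Deforum interpolates in between.
strength_schedule = "0:(0.65), 30:(0.55), 64:(0.75), 77:(0.60)"
```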
So that's really cool. Yeah, it's great for getting wild results really fast — a huge time saver. Now, that's an early one; here's a really wild one. This one is more for beat-generated stuff: if you know the BPM you can pop it in — 120 is really popular for house and music like that, though obviously we don't want it that fast here. You can also change the frame rate; since we're at 15, it could be something like 120 frames. The sync rate will be really familiar to music producers. And then — this is stuff I do on my synths all the time — you can modulate it with another wave, so the waves change and collide with each other and create something new. You can also load a file here: if we wanted, we could load some music, lock in the tempo and frame rate, and have it generate this automatically — and boom, down here we've got all of our frames.

Oh, that is so cool — beat-generated. So the video Deforum spits out — some of the transitions and movements, the changes in angle, whatever you're doing — will change with the beat of the music?

Yep, and it can be any parameter you want. You could drive camera movements; you can have it get to a certain point and then change at a certain frame and put different movements in; you can make it generate faster and then slower depending on the beat — a lot of really, really awesome stuff.

Just for a practical example, can you show us where you'd copy from here and where you'd paste it into Deforum?

Yep — "copy keyframes or formula." You see we get the complex math version here, then I just go back over, and say I want it on the strength parameter down here — boom.

Very cool. And you can set that on the x-axis movement, or the y-axis movement, or the zoom, or any of that?

Exactly — and that math function is exactly what you're seeing above the graph; it's an equation that describes exactly this.

That is cool — that just blew my mind right there. These last two websites you showed me are just, like, holy crap, man. So cool.
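Here's a rough take on what a beat-keyframe generator like that is doing, sketched in Python under simple assumptions: convert BPM and FPS into frames-per-beat, then emit a Deforum schedule string that pulses a parameter on each beat. The pulse values and recovery time are arbitrary choices for illustration.

```python
def beat_schedule(bpm: float, fps: float, num_frames: int,
                  base: float = 0.65, pulse: float = 0.4) -> str:
    """Emit a 'frame:(value)' schedule that dips a parameter on every beat."""
    frames_per_beat = fps * 60.0 / bpm   # e.g. 15 fps at 120 BPM -> 7.5 frames
    keys = [f"0:({base})"]
    beat = frames_per_beat
    while beat < num_frames:
        f = round(beat)
        keys.append(f"{f}:({pulse})")      # dip on the beat...
        keys.append(f"{f + 2}:({base})")   # ...recover two frames later
        beat += frames_per_beat
    return ", ".join(keys)

# Example: 120 BPM at 15 fps over 60 frames
print(beat_schedule(bpm=120, fps=15, num_frames=60))
```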
Now, there is an extra-hard mode for super-advanced folks that I'll just mention: a tool called Parseq that was brought in recently, which is completely nuts — very, very intense. Where is Parseq... I haven't used it much at all. Human, where is Parseq? I'm not sure — they rearrange things all the time. SD Parseq... see the Parseq section in the Keyframes tab... guided images... it just disappeared. Oh, you know what it is — I think we rolled out a fix to Deforum prior to the show, and it might have disabled Parseq. But I'll show you what Parseq is, because if you thought that was crazy, wait. This is mega-complicated, heavy-math stuff, but you can basically put everything in here — load settings, save settings, an audio analyzer — and do a lot of crazy work with frames and time: you can say, okay, at this exact time I want to change this. People ask me a lot about how to work in frames versus time and that kind of thing, and here you can see a graph that shows you the frames and where you're at.

Honestly, I don't do much of this in my own work with video art — I find a lot of the automatic beat stuff doesn't suit the art style I do, so I prefer to turn knobs at the speed I'm generating. And when we get to distilled Stable Diffusion — I heard someone mention 30-frames-per-second Stable Diffusion at some point — you'll be able to do this live, with knobs, and it's going to be nuts.

Wow. So here's our hybrid one — it came out maybe a bit flashy, a bit glitchy, but you can see a lot of her, the original video, came through, and it's a bit crushed. Maybe we try this a little lower and adjust the strength a bit — not sure higher or lower at this point, 0.86 — but we'll give it another go and see what we come up with. Were there other tools I wanted to mention? Oh, yes — there's a fantastic docs guide to mention. This guide is pretty good — there are also docs on the Discord — and it goes over a lot of the settings in detail, so I highly recommend it. We can probably pop these links in the video description, hey Matt? Yeah — any links we mention we'll try to get into the description once we're done recording. All right, cool — should we go to questions? I'll keep working on stuff while we're chatting. Yeah — if you have any questions, pop them into the chat and we'll do some Q&A while we've got human and revolved both on.

So, Styx is asking: how do we find Parseq? When it's actually available, it's at the bottom of the Keyframes tab, down under perspective flip, I believe.

Let's see — David is asking: when you hit generate, is the processing happening on your local machine? No, this is on RunDiffusion — RunDiffusion is cloud GPU, so this is all happening in the cloud. Which is great, because if we were trying to crunch these videos locally, we might be glitching out on the live stream.

Oh, for sure. In fact — Tinkered Thinking is asking: can you explain what a seed is? This is something I get asked quite a bit, especially when I'm talking about things like Midjourney and Stable Diffusion — what the hell is the seed? Maybe one of you can explain it better than I can.

I can explain it. When you're doing a diffusion, you're starting from noise and diffusing the image out of that starting noise — but that starting noise can be any random pattern you want. To keep the model scientific, they use a pseudo-random number generator, which takes a number — a seed — as an input, to control the randomness of the noise-generating process. When they train the model, they train it on random noise; when you're inferencing the model, you can specify a seed number, which essentially determines what that noise pattern will look like. Does that sort of explain it?

Yeah, that's a great description of it. My knowledge of what a seed is mostly comes from playing games with random map generators — like Minecraft.
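The mechanism human describes is easy to demonstrate. Stable Diffusion does this with its own RNG in latent space; NumPy here is just a stand-in to show why the same seed reproduces the same noise:

```python
import numpy as np

# Same seed -> same starting noise -> same generation.
a = np.random.default_rng(seed=42).normal(size=(4,))
b = np.random.default_rng(seed=42).normal(size=(4,))
assert (a == b).all()  # identical "noise" every time for seed 42
```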
Exactly — you put in a seed, and if you put in the same seed, you get the same thing. It's like a point in the randomness that you can choose, so choosing the same seed is a way to get consistent generations. If you use a fixed seed and specify it, then when you're creating a Deforum video and you think, "I kind of like this one, I want to keep it pretty close but change the strength or the frame rate or whatever," you can go back to that same seed — because it's fixed — and carry on from where you were, or redo it.

Yeah — the Corridor Crew video I mentioned earlier, where they explain how they did that whole anime thing, actually has a good four-minute explanation of how the seed works: it starts with noise, and basically they're making something really noisy and then removing the noise down to the picture they're trying to generate, and the random seed generates that initial noise they then take away to get the image. Their explanation was really good, so I highly recommend it. But yeah, the only reason the term "seed" clicked with me is also that I've done a lot of Minecraft: you get a seed for a level, you can give somebody else that seed, and they get the exact same world you played on. That question definitely pops up a lot.

Luma is asking whether this is going to be available in other languages, like Spanish. Human, can you shed some light on that — do you know if it's going to be available in other languages?

I'm not aware of any current efforts by the Deforum developers, but if you can speak multiple languages and you want to help translate, I think that'd be incredibly valuable. Also — I think I saw this question earlier, or a similar one — you could use the Google Translate feature if you're using Chrome.

Yeah, and there are also localizations in Automatic1111, I believe — you can upload a localization file. One thing to keep in mind is that this changes all the time, so it might be best to just use translation in the browser, like you're mentioning, or ChatGPT — or Claude, that's another one. There are a few different ways. But if it's not translated, they'll definitely need people to help translate it into other languages, so that's a place where people can pitch in for sure.

Then Kunal asks: do we have to pay for the server? So: you can install Stable Diffusion and Deforum locally if you want — I have a video that shows you how. The demonstrations we've been doing on this stream have been running in the cloud, mainly because they take a lot of system resources, and if we were trying to run these locally and stream at the same time, it probably wouldn't go over very well. Also, the cloud machines here are way better servers than you most likely have at home — I'm overgeneralizing, but I'm assuming most people don't have the level of GPUs you can get with something like RunDiffusion. So there are free options, or for 50 cents an hour you can do it on something like RunDiffusion.

Yeah — I
just want to point this video out, because this was a question I got asked right before I jumped on. You can see here that it's not very close to our input video: we put a video in, and it starts there — you can see her face — but very quickly it just gets crushed into all these shapes, and then there's a random person. One thing with Automatic1111 — I've seen mentions of this on Reddit — is that there are memory leaks in the application: when you're switching between prompts, models, and extensions, things can leak. It's open-source software contributed to by a lot of people, and with so many people working on it, sometimes things don't get caught right away; they generally fix this stuff. I'm not saying that's what this is — it's most likely a settings thing — but it could also be a memory leak. So keep in mind, if you're getting weird results that look like crushed puzzles, you might want to tame down your settings or restart the session. Human, do you have any advice for weird crushed-puzzle generations?

I love the randomness — but no, I don't have any general advice. Restarting sessions, I think, is a good way to just reset everything.

Yeah — and this can be a sign that the noise is getting compounded: you're losing the signal, and what you're trying to do is find that signal in the noise to get that cohesion. In training you'd call it convergence — the perfect place where the signal comes through but keeps enough noise to stay fresh. That's kind of what you're looking for in hybrid mode. I'll shut that one down for now, but if it gets crushed, know that it could be too much strength, too much CFG, too much of something — or you might just need to reset. That's a common one that comes up. Do we have anything else to give away?

Yeah, I've got a handful of $10s we can give away, and I'd also like to give away one more thing.

I saw the question "how can I use custom models?" — so, on RunDiffusion we do have private storage available: it's 100 gigs of ultra-fast NVMe, which, if you don't know storage, is very, very fast flash that's attached to your server and follows you from server to server. It costs a bit — there's a monthly subscription we offer for that service — so let's give away two one-month Creators' Club subscriptions, and maybe pair each with the $10 to get something started. I'm doing this off the cuff, but I think it'll be really valuable for people to try their own custom models and experiment.

Another thing I'll mention: if your output is coming out really blurry, it could be your model, and I actually have a theory on this. Some custom models just don't work well with Deforum, and I think it might be the merges — I get that feeling because Protogen doesn't work well in Deforum, and it's mostly made up of merges. Human, maybe that's why people get the blurriness from custom models — what do you think?

Yeah, that definitely could be it. I know for a fact that models are incredibly delicate, and when you do things like merging, you might get a better
portrait, but you might also be negatively impacting other parts of the model. I know, for instance, that the V1 Stable Diffusion models, if you don't use any color correction, will just trend toward red and magenta — that's why color correction is necessary with those models — whereas Stable Diffusion 2 doesn't really have that problem. But it's really hard to say what exactly the root cause is.

It is — and it could be a settings thing, too. Anyway, let's get to the good stuff and give these away. Throw out a keyword you want people to type. Let's just say "creative." All right — type in "creative," and I'll do some rolls in a second while we're waiting for people to type that in.

A question that has popped up multiple times now: is there any news around ControlNet? Yeah, that's definitely one I keep seeing over and over in the comments. Human, is that something you can talk about — do you know the status of ControlNet in Deforum?

My understanding right now is that it's just being built out and developed. I don't know of any solid release date or anything like that, but I really do think ControlNet with Stable Diffusion is just insanely powerful.

Oh yeah — and people are doing stuff with ControlNet already: batch image-to-image processing, which is incredible to see, and it looks great. It blows my mind to think about what it's going to look like inside Deforum, because it's going to be mental. And then multi-ControlNet — I can't even.

Yeah, it's going to be crazy. I'm excited about that, especially because with OpenPose and the like you can actually put two figures in there now. It's going to be really interesting to see where that all goes — I'm excited to play with it when it's ready.

Here's a question from Rando — I'm not sure I totally understand what they're asking, but: what is the main problem that stops it from generating consistent frames and images?

The strength parameter is really important. If you're trying to make something consistent, try addressing the strength parameter first, and maybe reducing the diffusion cadence — if you're trying to tame it, that generally helps. That's a good place to start. Just think: lower strength means it's going to go off on its own and do its own thing more frequently, and higher diffusion cadence means it generates less frequently — lower cadence means it's hitting every frame. Higher strength means that when it generates, it stays closer to the previous frames instead of doing its own thing. Everyone does this when they start: they'll mess up and get something that looks like they hit text-to-image 120 times and smushed all the pictures together into a totally jumbled video. Don't worry about that — just keep in mind that strength is the glue that sticks it all together.

Gotcha. So Harry is asking: can you input images made in image-to-image into Deforum? I don't see why not — absolutely.
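As a starting point reflecting that advice, here's a hedged sketch — the keys mirror Deforum's setting names, but the values are illustrative, to be tuned per project:

```python
# Consistency-oriented starting values for a Deforum run (illustrative only):
consistency_settings = {
    "strength_schedule": "0:(0.72)",  # higher strength = glued to the previous frame
    "diffusion_cadence": 1,           # diffuse every frame instead of tweening
}
```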
See, I'm just reading the questions until I start to see the "creatives" pop into the chat. Here's a 3D video — this is the close-up where those little artifacts from the generation process turned into these beams, which I think is awesome. It looks very cool; it starts at the window and ends up going out of it.

I very rarely see something come out of Deforum that I'm not impressed with — it looks cool in an abstract way, or just the colors and the contrast. Yeah, it's hard to make bad stuff with Deforum, honestly. And you can use custom models, too — if we wanted, we could switch to some different models and try things, but we'll leave that for people to try themselves.

So here's a question — from 99 JoJo — how do we use different models inside RunDiffusion?

When you have the Creators' Club, you'll be able to load those in. We have connections like Google Drive, where you can just use a direct link, and there are instructions for downloading models off Civitai or Hugging Face and so on. I don't want to spoil too much, but there are some big releases planned for RunDiffusion in the near future — solving a lot of the ongoing growing pains we've had with the platform, like DreamBooth — and also bringing in some new capabilities around how we handle models. On our shared plan — which is what you have without a subscription — you get around 25 models to choose from on shared storage, and let's just say we're going to be reworking how that all works. So stay tuned to watch what we're working on next, because we've got some fun things planned.

Cool — here's another RunDiffusion question: do you also count the time I'm logged in, or just the running time?

So, you saw what happened at the start — I tried to spin up a large instance and we didn't have enough available at the moment. That's because you're actually reserving the GPU when you do this: you're renting the server, so it's your server, which means you can generate what you want on it — within limits; because we're a hosting provider we have some concerns about what's on there, and you can see the terms on our website — but you get to use it for whatever you want, commercial purposes included. We're just giving you the equipment. So while you're logged in, you're reserving that GPU, which means you are charged: if you just open a session and sit there messing around with settings, you're charged for that time, because you're holding the GPU. What we don't charge you for is startup — while it's booting, that's on us, and we try to keep boots as fast as possible so you can get straight to creating — and the same goes for shutdown. But for the time you have that server and GPU reserved, you're using that time. Just make sure you use the timers up here properly — there are notifications and sounds you can enable — and you can also keep track of everything on the multiple-sessions tab.

Cool. This question might actually be for human: are there recommended specs for those who want to run it locally? I think we talked about this at the beginning, but having an Nvidia GPU
will save you a lot of headaches. Windows is fine; you can use Linux if you're used to that system; on macOS you're not using an Nvidia GPU, so you can hit pain points there as well. With Automatic1111 specifically, you can go as low as — I believe — six gigabytes of VRAM, so you want a graphics card with as much VRAM as you can reasonably get on your budget, but I think six is the minimum.

Gotcha. I run an Nvidia RTX 3070 and it runs fine on my computer, but honestly I still go use RunDiffusion, because it runs faster and I can do other things on my computer at the same time. I've heard an RTX 3060 and up is kind of the ideal range. Yeah — the 30-series cards are a lot more affordable than the 40-series, and they should be good enough to run anything locally.

Cool. Let's see — Patrick is saying (I still haven't even gotten to the people typing "creative" yet; there are a lot of questions popping in): "I have a video of dancers on a black background, and I want to change their forms to dried flowers and plants. What would be the best way to do that?" — again, starting with a video.

Yeah, that's actually a really good point. I didn't touch on this much — I just sort of said, "hey, you put your prompt in here" — but I didn't really cover the fact that you can go in here, put in frame 100, and add a new prompt: "beautiful watercolor painting of flowers on a pond," something like that. You might want to still carry the negative prompt over to keep it consistent. That would be a good way to do it: you bring in that extra prompt and it goes from one to the other. I've got this generation going, so we won't see it for a minute, but we can spin another one up in a second — that's the best way to do it in Deforum. This was the guided-image prompts you saw — they're going a little weird right now, for the fun of it, but I like where it ended up, and this part's really great: the weird camel and the skull and the cactus cars — just awesome. So yeah, you can really take it to strange places just by using this to jump to different prompts over time, and you can use seed schedules too, if you really want to nail down what the seed is at the same time as the prompt — that can be really helpful for getting a consistent image.
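For the dancers-to-flowers question, here's what that prompt schedule might look like, written as Deforum-style animation prompts: the key is the frame number where each prompt takes over. Assuming Deforum's `--neg` convention for the negative prompt; the prompts themselves are illustrative.

```python
animation_prompts = {
    "0": "dancers on a black background --neg blurry, low quality",
    "100": "dried flowers and plants swaying on a black background "
           "--neg blurry, low quality",  # same negative carried over
}
```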
Cool, should we do a draw? Oh, you're muted. You're right, I am muted, sorry about that. I'll go ahead and do a draw, and we still have more questions flooding in, so I'll try to sift through them. So this is for the word "creative"? Yeah, creative. All right, just plug this in here and let me roll it; this is for ten dollars plus a Creator's Club subscription. Let's see, we've got Nathaniel Whitlock. Hey, he's right there; I saw him chiming in with some questions earlier too. Just making my notes here. I'm going to do one more draw and then we'll go to some more questions. Can you set it over a longer period of time? Because we did have it up for a long time. Is that possible, to have it cover, say, the last 10 minutes? Yeah, I have it set for the last 15 minutes here.

Oh perfect, perfect. Okay, so anyone who said it so far has a good chance. That's great. Yep, one more time here: we have Bill Stevens. All right, Bill. And I've got two more ten-dollar giveaways I can give away here. Then let's see, here are a couple more questions.

So someone's asking: sometimes only the first prompt will generate; any idea why that happens? Maybe your max frames setting isn't long enough. Also keep in mind that your strength can really play into that: if your strength is running high when a new prompt comes in, the animation won't immediately change to it. So it's really important to adjust the strength settings, and you can use strength scheduling for that. One thing I've seen that's pretty interesting is that people will keep their strength consistent, and then right on the frame where the new prompt lands, and just a couple of frames around it, they'll spike the strength low. It generates something with the new prompt, then they bring the strength back up and continue. That gives you a sudden lurch into the new prompt, and then it carries on with it.

Cool. Another question popped up; I actually lost it, so I'm not going to put it on the screen, but somebody was asking if you can explain what a model is. We were talking about the various models, you know, 1.5, 2.0, and we were talking about Protogen. Can you give a quick explanation of what those are for anybody who may not understand? I think that might be one for human.

Sure, yeah. So essentially the model is the numbers, or weights, associated with this neural network. What you can think of is a bunch of really simple math equations plugged together in parallel, where each equation takes an input, multiplies it by a number, and gives an output. What's really cool is that when you scale these models up and have a huge number of these really simple equations, you can approximate just about anything. The weights are essentially the coefficients, the numbers used in this complicated math equation made up of really simple neurons, and they're trained in a way that produces a particular output. So you train on a particular style of images, and all of a sudden the model can produce images in that style. The differences between the models are literally just these numbers, but they have a huge impact on the visual identity of each model's output.

Awesome. Yeah, and with the fine-tuning of these custom models, you can train in specific concepts as well. The fine-tuning drives in a point from the broader base model, like 1.5, 2.0, 2.1, or now XL. The base model is this vast collection of weights and trainings, but you drive it in with the fine-tuning to get a custom model with the anime style or the RPG style or whatever other style; we won't talk about those other styles. The custom trainings can do some really wild stuff, and I love training models. It's so fun; you can do some wild things with it.
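To make human's description of weights concrete, here is a toy sketch (my illustration, not code from the stream) of the "input times a number" idea he describes. A Stable Diffusion checkpoint is, at heart, roughly a billion such learned coefficients saved to disk:

```python
# One "neuron": multiply each input by a learned coefficient (weight),
# sum the results, add a bias, and pass through a simple nonlinearity.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU: zero unless the weighted sum is positive

# Same architecture, different weights, different output. Swapping the
# numbers is exactly what distinguishes a base model from a fine-tune.
print(neuron([0.5, 0.2], [1.0, -2.0], 0.1))  # "base" weights      -> ~0.2
print(neuron([0.5, 0.2], [0.3, 0.8], 0.1))   # "fine-tuned" weights -> ~0.41
```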
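And picking the strength-spike trick back up from a few answers ago: in Deforum this is written as a keyframed schedule string, with values interpolated between the listed frames. A sketch with illustrative numbers, assuming the new prompt arrives at frame 100:

```python
# Hold strength high for frame-to-frame coherence, briefly drop it where
# the new prompt lands so the new subject can take over, then restore it.
strength_schedule = "0: (0.65), 95: (0.65), 100: (0.15), 105: (0.65)"
```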
Cool, so somebody's asking here, and this is a RunDiffusion question: what are the various specs on the machines you guys have? I did see an answer pop in there. It's basically the amount of VRAM and RAM available: small servers have eight gigs of VRAM, medium have 16 gigs, and large have 24 gigs. The industry is changing all the time, so one thing I'll say is that if you jump on our Discord, we're happy to help with any questions, and we're happy to take suggestions. We want to make sure this is a tool for everybody, so we're enabling artists to come along and create things easily.

Awesome. Cool, so I think I've gotten through most of the questions here. It got a little crazy with all of the "creatives" in there, but if you have any questions we haven't gone into yet, feel free to type them in; even if you already asked and I missed it, go ahead and type it again and we'll try to get to it. We've got about a minute left on this next generation.

One thing I do want to talk about, and revolved and I were discussing this on a call yesterday, is where we see all of this going. We obviously talked about ControlNet, and somebody was asking in the questions what ControlNet is. Basically, it's the ability within Stable Diffusion to get a lot more precise about your image generations. You can get the exact pose you're looking for; you can upload images and have it trace the outline of the image, then make new images that follow that same outline; you can upload images and have it look at the depth, create a depth map of the image, and create a new image that follows that same depth. ControlNet is really a set of additional add-ons to Stable Diffusion that let you tailor and control the exact output you get. So when we were all getting excited about ControlNet coming to Deforum, we were excited about the ability to do all of that, but within Deforum.

But let's talk a little about where we see all of this AI stuff and AI video generation going, and what the future looks like, because I know we could nerd out and do a whole other hour of live stream on that. We'll try to cut our nerdiness down because we're almost at the two-and-a-half-hour mark, but what are you excited about? Where do you see things going? I'll start with revolved.

Yeah, it's a good question. I think the exciting thing about this space is the speed; it's so, so fast. I've seen some really interesting tools pop up that hint at the future. One of them is a guy working on a project called Discord: he's essentially got a VR headset and the two controllers, and it's making Stable Diffusion images live as he moves the controllers around. human, I don't know if you've seen this, but you've got to check it out because it's wild. It's a live generation thing; it's like Deforum VR, live. Are you talking about Scotty Fox's work? No, it's a Ryan somebody, and it's like a media server sort of thing he's got running. Oh, I saw that; I think I watched his tutorial video. Oh, I haven't seen that. There's not much out there, he doesn't even have a Discord server yet; I just saw it the other day on Twitter.
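As a concrete reference point for the ControlNet description above, here is a minimal sketch of depth-guided generation using Hugging Face's diffusers library. The checkpoint names are commonly used public ones and the file paths are placeholders; none of this is taken from the stream itself:

```python
# Depth-guided ControlNet: the depth map fixes the layout and pose,
# while the prompt supplies the content, e.g. dancers re-imagined as
# dried flowers that hold the same silhouette.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = Image.open("frame_0001_depth.png")  # precomputed depth for one frame
image = pipe(
    "dried flowers and plants in the shape of dancers, black background",
    image=depth_map,
    num_inference_steps=25,
).images[0]
image.save("frame_0001_out.png")
```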
That kind of demo shows where this is headed: we're going to get more and more tools. Alex's WarpFusion has video inpainting going; Runway has video inpainting going too, which is really fascinating. Being able to take a person walking around in the background and remove them from the video is just really useful from a professional standpoint.

But I think the space is also going in a different direction, not just the corporate stuff, which is going to be very useful; Hollywood will get a lot of use out of it. There's been this big promise of the metaverse that never happened. We've got this lame Facebook thing, and everyone's like, yeah, I'm not using that, it looks like it's for five-year-olds. Nobody wants to use it. But what if we could create our own avatars using Stable Diffusion? What if we could walk into a space and, based on our conversations being picked up by an AI bot, the room changes to match our mood or what we're discussing? As we're talking, there are generations of our conversation appearing, like spacecraft flying by in this 3D space because we're talking about sci-fi books, all happening dynamically, on the fly. Or 3D shape generation: right now we're looking at this 2D depth-map thing, but what if it's out there texturing 3D spaces? We had this silly promise of algorithmic worlds in No Man's Sky, but what if we're actually creating our own worlds? That's really the power of the metaverse, or whatever you want to call it: enabling creators to do cool stuff. That's what's going to take over, rather than this consumption-driven, Hollywood sort of thing. How many redux movies do we need, now that we're going to be able to make our own movies based on ChatGPT scripts? Just pop in there and say, okay, this scene is this, that scene is that, and boom: written, done, video, send it. It's going to be incredible.

What I'm most excited about is the democratization of all of this, the fact that anybody has access. That Corridor Crew video we've referenced a few times shows that now anybody with a computer can make their own anime. You don't have to be an insanely great artist, you don't have to know how to draw, you don't even have to know how to script; you can have tools write the scripts for you now, and they're pretty damn good. I just think that is so cool. I was having a chat with Robert Scoble the other day, and he was talking about how this is all working toward the Holodeck, the Star Trek Holodeck. Maybe we do it with VR goggles, maybe it's an AR thing, but we'll be able to say, you know what, right now I want to be on the moon with my buddy, and the next thing you know you're in the surroundings of the moon and your friend is standing across from you. That's probably more of a reality than the metaverse Facebook wants: we'll be able to put ourselves anywhere, AI will generate the area we're in, and we can be in different destinations whenever we want because the AI just generated it around us. And he seems to believe, from all the people
he's been talking to and all the advancements he's been seeing, that that's what we're working toward: that Holodeck kind of concept.

Yeah, I've joked around with the RunDiffusion guys that we're going to be hosting metaverses and AI personalities and things like that in the future. Our goal is to curate that experience, and as specs increase or new hardware becomes available, we'll pop that stuff in there. All right, human, I'm so curious to hear your thoughts on this.

Well, there are a lot of what-ifs, right? There are a lot of really cool ideas floating around, and it's really just a matter of someone committing to implementing an idea, which I think is amazing. But I'll speak more to the short term and what I think is more certain, and what I think is certain is that we're going to have all the control we want, especially with these 512-by-512-sized images. It's going to be a lot easier to use, so you won't need to know how to program a wave equation to keyframe the animation; it's going to be much simpler, you'll have all the control you want, and it's going to be really, really fast. I think that's where this is going. I love it. So cool.
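For readers who haven't seen it, the "wave equation" remark refers to how Deforum motion keyframing works today: schedule strings can embed math expressions of the frame counter t. A sketch with illustrative values (these are not settings from the stream):

```python
# Today's hand-written keyframing in Deforum: schedules accept math
# expressions of the frame counter t, so a pulsing zoom is literally a
# wave equation, while plain keyframes interpolate linearly.
zoom = "0: (1.0025 + 0.002 * sin(1.25 * 3.14 * t / 30))"
translation_x = "0: (0), 60: (2.5), 120: (0)"
```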
Oh, someone mentioned the MRI stuff in the chat. I'm actually not familiar with what's going on with MRIs and Stable Diffusion; that's new to me, and I'm curious to hear more about it. It's straight out of sci-fi. human, you saw this stuff, right? I saw the title; I wasn't able to read it. Oh my God. So they took MRI scans, and through the advancements in diffusion models, they would tell the subject getting the scan to hold an image in their mind, and then they were able to take a picture of what the subject was perceiving, essentially reading the person's mind to create the picture with a diffusion model based off the MRI. Without telling the diffusion model what it was, no prompt at all, it was able to determine what the person was visualizing. Oh, I did see that; okay, I know what you're talking about now. That was absolutely insane. Yeah, they're basically reading people's minds with Stable Diffusion now. I think they show someone a picture, they have that image, and then they take readings of the brain lighting up in different places. You know how some people have the ability to visualize images in their mind's eye? I would guess the same parts of the brain are lighting up, and now they're able to diffuse that.

You know what's funny, and this is something that really changed how I view the world: not everyone has a mind's eye; not everyone can create a picture in their brain. Also, when you're reading to yourself, some people have that inner dialogue all the time and some people don't have it at all. We all walk around thinking we're the same, but everyone is so different in how they perceive the world. When it comes to tools like Stable Diffusion and Deforum, it's such a powerful thing, because people who don't have a mind's eye can now go and see the things they're thinking about. Suddenly it's like a superpower for these people, to understand it and perceive things in a different way. It just blew my mind to think about how different everyone's brain is, because in our society we're kind of told, oh yeah, everyone's just a person; you're a dude, a lady, whatever, basically the same. But we're all so, so different. Yeah, like what Deep Love is saying there: incredible; imagine something, and then Stable Diffusion or Deforum will generate it for you. Yeah, it's crazy to think about.

All right, well, what do you think, should we wrap? We've gone a long time and answered a ton of questions. Totally cool. Well, I do have two more ten dollars to give away here, so let's do that real quick. human picks the word this time. Oh no, too much pressure, too much pressure. Just do "deforum." Yeah, let's do "deforum," because that's the topic. So type in "deforum" and we'll pick a couple of winners to get ten dollars in credits to RunDiffusion, and then we'll wrap things up.

While you're doing that, let me see this video here. It looks like not only did we get ancient pyramids, but some Mars maybe snuck in there, some Stargate stuff. It's very cool how the 3D came through, and you could just keep doing this video forever; do an hour of this. So cool. I could watch Deforum videos all day long; whenever I'm scrolling Twitter and I see a Deforum video, it's always a scroll-stopper for me. While making this stuff, I have to remind myself: oh yeah, I need water; be present; you're not a computer. It's just generate, generate, generate, and man, you lose track of time. Oh yeah, I remember the very first time I got into Stable Diffusion; I installed Automatic1111 on my computer, and I'm pretty sure I didn't sleep till 4 a.m. that day. The first time I ever got into Stable Diffusion was just like that, and then when I used DreamBooth for the first time and trained myself into it, I was like, oh, I can make myself an astronaut, and a Viking, and I could make myself into Superman, any of it. Oh my God. And then I was down the rabbit hole from there.

All right, so I'm going to draw one real quick, then give it another minute or so, and then we'll do the other one. We've got Django Marine; Django Marine just won ten dollars. And then one last one, and I think that's all we've got, and then we'll wrap her up for the day. We've still got people throwing the word "deforum" in. All right, last time: Super Shoutman X, you are our final winner of the day. Thanks for hanging out with us.

Awesome. So, anything we should say before we wrap it up? I know we did a deep dive, and I'm sure we've blown a lot of minds; a lot of people's heads are spinning about what they could go and do with this stuff. We definitely covered a lot of ground in these two and a half hours, but is there anything else, human, anything about Deforum that maybe we should say that we haven't said yet, anything we should make sure to slip in before we wrap this up?

Yeah, I'd just like to reiterate that this is an open community of like-minded people who
like to learn and like to play with these tools, and if you're interested in messing with AI, you can be a valuable member of the community.

I love it. I just love how the whole generative AI space right now doesn't feel competitive; it feels very collaborative, and I hope it stays that way. I think it's due to the open-source nature of Stable Diffusion and what you guys are doing with Deforum and projects like it: everybody can contribute, add on, and bring new ideas to the table. New models are getting developed, people can put their models out there, people can show their work, the seeds and the prompts they used, and everybody can keep iterating off what everybody else did. Because of that, I think that's why we're seeing the pace of all of this accelerate; over the last three or four months the pace has just gone insane, and I think it's because of that collaborative nature: hey, I built this, so what can you do off the back of what I just built? A hundred percent, yeah, I couldn't say it better.

And then, revolved, anything else you want to say before we wrap this one up? I just want to say a huge thank you, Matt, for having me on and letting me show this stuff off. I do this art as a hobby and for fun, so having a bunch of people watching is really, really fun. And human, thank you so much for Deforum and all that you guys do over there, because you've really made a fantastic tool, and I'm just so happy that I have access to all this stuff. It's incredible. Thank you.

No, this has been amazing, and I appreciate both of you joining me today and taking the time, because this is something I've only scratched the surface of myself. I did make a video about it, but even my video was very surface-level, and I'm realizing throughout this stream that I probably even said a couple of things wrong in that video that I was corrected on during this live stream. This has been absolutely mind-blowing to me, and I'm excited to go back through and watch it a second time. Ideally I'll pull some clips from it and help people with a TL;DR version who can't sit through a full two and a half hours of this stuff. I've learned about a ton of new resources, so I can't thank you guys enough for spending over two and a half hours with me on a Sunday. This has been amazing. It truly has, thank you.

Awesome. Well, thanks, everybody, for hanging out with us today. Really appreciate all of you spending your Sunday mornings with us and nerding out on Deforum with us. Check out RunDiffusion, make sure you jump inside the Deforum Discord if you're not already in it, and jump inside the RunDiffusion Discord if you're not in there; if you did win one of the prizes, they'll help you out inside the RunDiffusion Discord. Thanks, everybody, for tuning in today; really appreciate you spending your Sunday with us, and see you guys later. Thanks again. Bye. Thank you. Bye.
Info
Channel: Matt Wolfe
Views: 97,821
Keywords: Deforum, AI Animation, AI Video, Stable Diffusion, Stable Diffusion Video, Deforum Video, AI, AI Art, ai, ai video generator, ai video, ai video editing, ai art, ai video maker, ai generated youtube videos, ai content generator, deep learning, content creation, ai video creator, ai video editor, best ai video creator, stability ai, anime, machine learning, animation, ai video editing software, artificial intelligence
Id: 1uFK36QsqkM
Length: 158min 48sec (9528 seconds)
Published: Sun Mar 05 2023