CivitAI, Understanding How to Use Models, Krita and Stable Diffusion

Video Statistics and Information

Captions
Hello and welcome to Streamtabulous. Today we want to take a look at models. We might be using Automatic1111, we might be using Easy Diffusion, or we might be using Krita with the AI Diffusion plugin — and we'll look at Krita a little today, since it's the one I'm using more than anything at the moment. But first we want to build some understanding of how the AI works: what the models actually are, what that means, and how it affects what you're doing. The first thing to understand is that there are no images inside the AI. A model is called a model, but it's a neural net of information. It is trained by showing the AI an image and then layering static over it, over and over, until that image no longer exists; the AI's learning from those destroyed layers is what creates the model, and the model itself is just ones and zeros. To generate, the AI recreates static, layering it again and again, and tries to use its language model to interpret what you're saying. Even if you describe the original photo with great accuracy, the result is never going to look the same, because the AI is shorthanded — it doesn't have the world's best understanding of translating your description into artwork. It's like asking a million different people to describe the same thing: everyone may describe it differently, as we see in crime cases where witnesses describe the same vehicle differently. Unless you can describe something with great accuracy, you're never going to get an accurate interpretation of it. And the way AI art generators are used, you're usually describing something quite unique, and the AI is taking its trained learning and trying to use that information to create what you want.
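The "layering static until the image no longer exists" idea can be sketched numerically. This is a toy NumPy illustration of the forward diffusion process, not code from any of the tools in the video; the noise schedule (`alphas_cumprod`) is made up purely for illustration.

```python
import numpy as np

def add_noise(image, t, alphas_cumprod, rng):
    """Forward diffusion: blend the image with Gaussian static.

    At step t the image is scaled by sqrt(alpha_bar_t) and mixed with
    noise scaled by sqrt(1 - alpha_bar_t); by the last step almost
    nothing of the original survives.
    """
    noise = rng.standard_normal(image.shape)
    a = alphas_cumprod[t]
    return np.sqrt(a) * image + np.sqrt(1.0 - a) * noise

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))  # stand-in for a real image

# Illustrative schedule: alpha_bar shrinks from ~1 toward ~0 over 1000 steps.
alphas_cumprod = np.linspace(0.9999, 0.0001, 1000)

early = add_noise(image, 10, alphas_cumprod, rng)   # mostly image
late = add_noise(image, 990, alphas_cumprod, rng)   # mostly static

# Correlation with the original drops as more static is layered on.
corr_early = np.corrcoef(image.ravel(), early.ravel())[0, 1]
corr_late = np.corrcoef(image.ravel(), late.ravel())[0, 1]
```

Training then teaches the network to reverse these steps — which is why the model holds statistics about images, never the images themselves.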
This is where the understanding of having more models comes into play, so we'll go over and take a quick look at a Stable Diffusion page, right after the intro. Here we are on a Stable Diffusion beginner's guide, and I'm going to leave the link below because it's very handy — it tells you everything you need to know about AI. When you're starting out you might just be using a basic model, but you're going to need to know how to navigate CivitAI and what you're looking for. You can see here three example models: Realistic Vision, which is based on photography; Anything 3D, which basically applies a cartoon style; and DreamShaper, which is a mix of the two. Every model is different depending on what you're trying to achieve — you need to know what model you're using for your vision, how to control your prompting, and how to apply the right models and steer them in the direction you want. But there's a lot more to it than just models. We also have hypernetworks, which are other neural networks — other models — carrying descriptive information that changes the way the main model works. One might describe more detail of what a dog or a superhero looks like, or perhaps what a cartoon looks like: not realistic, usually a more limited colour palette and styles like that. They can be injected into the model to help control it. It's the same sort of thing with embeddings: again, more wording that goes in. And then there's the big one, the LoRA model, which you've seen me use to put my face onto pictures. LoRA models are extremely powerful. You have your base model, which is a massive amount of
training information, so it understands what a dog looks like, what a cat looks like, what a car looks like, what a plane looks like. Sometimes a model is missing something — it might not know what a train looks like — so you may have to try a different model until you get what you're actually after. Take a Christmas tree, for example: you can have a LoRA trained just on Christmas trees, so when you type that prompt, the AI looks at the main model to know what a tree is, and then the LoRA says "and this is my understanding of a Christmas tree," and it creates that vision. It works exceptionally well with faces, as you've seen. It works well with clothing too, if you train it without the head in the shots — if you train with the head in, that face will usually override any face you're trying to control. And of course styles: I've trained a LoRA on my own art style, but with the way it works — a base system plus the changes your training puts into the layers of that neural network — mine didn't copy my style one-to-one. It has all those other layers, and it goes "well, this is the basic brushwork I'm learning here, and this is what this looks like," and it combines them all, creating something new and unique — and it can be really beautiful. I'm bad at faces and I cheat a little, always have, and the way the model creates faces is really beautiful: not quite an anime face, not as basic as mine, with far more detail — in a lot of my paintings I just do no face at all. So the way all the models work together is exceptional. And then you have the base model, the AI, the core of it, and it's, say, like an onion: it has lots of layers.
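A rough sketch of why a LoRA is so small compared with a base model: it stores only two thin matrices whose product is added on top of a frozen base weight at run time. The shapes, rank, and scaling below are illustrative, not taken from any real checkpoint.

```python
import numpy as np

rng = np.random.default_rng(42)

# Base model weight: one big, general-purpose matrix (think of a
# single attention layer inside the checkpoint).
d = 64
W = rng.standard_normal((d, d))

# A LoRA stores only two small matrices of rank r, trained on the
# niche concept (Christmas trees, a face, an art style) — not a
# whole new model.
r = 4
A = rng.standard_normal((r, d)) * 0.01
B = rng.standard_normal((d, r)) * 0.01
alpha = 1.0  # user-facing LoRA strength slider

# At run time the small update rides on top of the frozen base weight.
W_adapted = W + alpha * (B @ A)

# Storage saving: 2*d*r numbers instead of d*d.
lora_params = A.size + B.size
base_params = W.size
```

This is also why the LoRA always needs a base model underneath it: the tiny update only nudges weights that already encode "tree", "cat", and so on.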
There are lots of neural nets — models within models. A model contains static information: what it understands of an image, which isn't an image at all, just ones and zeros, combining static together to try to create a picture. It's kind of like looking up at a cloud and saying "I can see a puppy in that cloud" — it looks at the static, tries to see something in it, and creates it based on what you're telling it. Then it has the language model, and it may not have the best one. Stable Diffusion 1.5 models don't have the best language model, so prompts don't come out as well. If you've tried Bing, for example — which I think pairs DALL-E for its art generator with ChatGPT for its language model — it uses the two together, has a better understanding of language and of how to use it to create an image, so you get really precise creations a lot of the time. When you're looking at an art generator that uses Stable Diffusion as the core, the model changes the way the language works, the way the images work — everything changes. For some trained models, as you come down the layers, we have something called clip skip. You can think of the last layer as a clean-up, a refiner, making things look more detailed and realistic. But if you want, say, an anime look, you want to turn off that last layer of the AI and not use it — so you'd set clip skip to 2. A value of 1 uses everything; 2 goes through all the layers but stops before the last one. That's something you need to look at with your models — you have to play with it and find the balance for what you're doing. So we'll come across to CivitAI, which is what we want to look at today.
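The clip-skip behaviour described above, as a toy sketch: clip skip 1 takes the text encoder's last layer, clip skip 2 stops one layer early. The "encoder" here is a stand-in using plain numbers, not the real CLIP model — only the indexing logic is the point.

```python
def encode_text(tokens, num_layers=12, clip_skip=1):
    """Toy text encoder: each 'layer' refines the previous output.

    clip_skip=1 uses the final layer; clip_skip=2 stops one layer
    early, which many anime-style models were trained against.
    """
    hidden_states = []
    h = float(len(tokens))   # stand-in for a real token embedding
    for _ in range(num_layers):
        h = h + 1.0          # stand-in for one transformer layer
        hidden_states.append(h)
    # clip_skip = N means: take the Nth-from-last hidden state.
    return hidden_states[-clip_skip]

full = encode_text(["cartoon", "cat"], clip_skip=1)     # all 12 layers
skipped = encode_text(["cartoon", "cat"], clip_skip=2)  # stop 1 early
```

The two settings feed the image model different conditioning, which is why the same prompt renders differently with clip skip 1 versus 2.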
So, how to use it and navigate it: you'll see here we have a checkpoint — this one is Game Icon, specifically made to create icons — this one here is a LoRA for unicorns, and coming down we can see ControlNets. Checkpoints are the core models, the big ones, and LoRAs are the smaller trained add-ons. Now, some people have said the page looks very different when they visit — it may be where you're clicking: you land on Home, but if you come across to Models it opens up the models. I'm not signed in, so some of these will be blurred out, for obvious reasons we've discussed before — those are normally trained with more skin realism. Juggernaut XL is a fantastic model, and it's gotten a lot faster. If we open it up, the first thing we see across the top is all the versions going back to the first release. Down here, this is an SDXL 1.0 model — we haven't gone to an SDXL 2 yet, we're still on SDXL 1 — but we have Turbo and RunDiffusion variants, which are a lot faster. These are all little changes that have been added on, depending on what you want to do; I find if you've got the storage space, more is better. You can click on this information icon and it tells us the prompt, and we can copy that prompt and try to recreate the image. Since we have the exact prompt, and this model is trained to know this character, running that prompt with this model should give you something almost identical — especially if you also put in the seed. The seed, as I've said before, is like opening a book to a page, and that page is always the same in the book: put all these together and it goes to that page, like a specific memory you hold in vivid detail, and it recreates it every single time. So you can test this: take the prompts, put them in, and test it.
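The "same page of the book" behaviour of a seed comes down to the starting static being seeded random noise. A minimal sketch — the latent shape is illustrative, not the real pipeline's:

```python
import numpy as np

def initial_latent(seed, shape=(4, 64, 64)):
    """The 'static' a generation starts from is just seeded noise."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latent(1234)
b = initial_latent(1234)   # same seed: identical starting static
c = initial_latent(9999)   # different seed: different static

same = np.array_equal(a, b)
different = not np.array_equal(a, c)
```

Same model, same prompt, same settings, and the same starting static give the same image — change any one of them and the result diverges.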
With the different versions you can see how that image changes with each model you use. So we'll come back here and go back to the models. If you open the filters you'll see you have checkpoints, embeddings, and so on. The Stable Diffusion guide, when you come down to embeddings, actually tells you how to write an embedding into your prompt and use it — you can see the wording there would be the name of the downloaded embedding, so the AI knows specifically what to create for it. A LoRA is an added model: if we come over — sorry, into Krita — to this test document, I've added a model here, and you can add your LoRA here. This one is specifically a cat, and I've done "cartoon"; then you describe it, and depending on what model and what LoRA you're using, the entire effect of what's created changes. Coming back across, you can filter to just LoRAs, and then say you only want the SDXL ones; it reloads and we see just LoRAs to add on to models for specific things. If you wanted a Star Wars character, you'd add this LoRA, then underneath you'd add it in here, and you'd have Grogu as a cartoon character rather than a more realistic one. When we click these we should see our little Grogu, the Star Wars character, come up; clicking the information and coming down, we can see the model that's been paired with this LoRA to create this style of image. So you can mix and match them and create your artistic vision differently — something to keep in mind. There are all different ones: you can go to just checkpoints, turn LoRAs off, show all, or just the trained ones. If we go to trained SDXLs, when this loads up, they're not merges. A merge is where someone has taken, say,
Realistic Vision or Juggernaut, for example, taken an anime model, and put the two models together. That creates a new model which may look like this — you get the gist: it puts this sort of look and that look together, so you end up with a 3D, anime-looking character. That's what merges are. These here are just the realism ones, all main models, so you may want to come through, take a look, and see what each art style is. We can see there's a bit of steampunk coming through here — you've got that steampunk running through there as well, with the clockwork antique background — and we have a sort of mystical look there, obviously some style of Alice in Wonderland, with these wonderful vibrant colours coming through in high contrast. Each model you use is not the same as another, so when you're creating your artwork you need to know what you want and mix things together to get that vision across. Once that's done, you simply come through, select your model, play with it, adjust it, and render what you're trying to create. It's a matter of mixing those models — those neural networks — together and understanding how it's done. On this one we could change it again: run it once, then come up to the top, come down, and change more. We can set clip skip to 2 and run it again, and that will give a different result. Another part of the neural net sits underneath the quality presets: the samplers. Every sampler you use is different — Euler a, which is one of my favourites, will have a different resulting look to DPM — because the language model, the styles, everything in it is handled slightly differently.
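Samplers differ in their schedules and correction terms, but most share the same basic step shape. Here is a schematic Euler step with a fake "denoiser" standing in for the real model — this is not the actual Euler a implementation, just the skeleton such samplers walk through:

```python
import numpy as np

def euler_step(x, sigma, sigma_next, denoised):
    """One Euler step: move x along the direction implied by the
    model's current estimate of the clean image."""
    d = (x - denoised) / sigma          # estimated noise direction
    return x + (sigma_next - sigma) * d

rng = np.random.default_rng(0)
target = rng.standard_normal((8, 8))    # pretend 'clean image'

def fake_denoiser(x, sigma):
    # A real U-Net predicts the clean image; here we just return it.
    return target

sigmas = np.linspace(10.0, 0.0, 21)     # illustrative noise schedule
x = target + sigmas[0] * rng.standard_normal(target.shape)
for s, s_next in zip(sigmas[:-1], sigmas[1:]):
    x = euler_step(x, s, s_next, fake_denoiser(x, s))

err = np.abs(x - target).max()          # starting static is walked away
```

Swapping the step rule, the schedule, or the added noise (as Euler a, DPM, and friends do) changes the path taken through the static — which is why each sampler's output looks different.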
Again, when we play with those and change them, it heavily affects what the model creates, because you're changing the neural network — the way it links, the way it's thinking. One might think more 3D, one more 2D, one more realism, one more cartoon. As you change things in the samplers, it changes the entire way the model kicks out its image. This one being a cartoon, the bottom layer is usually best turned off, so I'd imagine with clip skip 2 we get cleaner images without the distortions we get in this one. We can see it's very minute — not a major change, but it is toned back — and if we come under here again and change this to Euler a, we'll see it very different again. And of course then we have the LCMs, which can now be applied to most models. I don't think they can be applied to an SD 1.5 — sorry, I believe they can only be applied to an SDXL — but we'll take a look and see what happens. First we'll do the Euler a and see how it affects the model; we know the style of this one. See, they're basically the same — that one has clip skip on it, so this one's got more rounded, 3D balls on it, and this one starts to lose that three-dimensional detail, which is what you'd expect. And you can see with the Euler a it's just completely different again — not even the same — so it's changing the way the model handles the information. With an LCM, the guidance strength needs to be dropped down — I find around 2 — and usually you'd turn down the sampling steps too. The more steps you have, the more work it puts into creating the image. The guidance is toward the prompt line, but the higher it is, the more free liberty the AI has — it thinks more — whereas with an LCM you're taking away its ability to think more
independently: it just takes your keywords and finds what they mean in the neural net, so they generally do work a bit faster. As I said, this is a 1.5 model and was never really designed to work with that, so I believe this will be a hot mess — and it's taking a lot longer than it would on an SDXL model, which tells me the AI is getting very confused. But surprisingly enough, we have a fantastic image — although we don't have our tree there. As I said, the LCM cuts down its free thinking, so it seems to be missing the tree and concentrating on "cartoon cat". That's not to say we can't run it again and see what happens. We can also Ctrl+arrow-up here to put weight on things: put a weight of 3 on that Christmas tree, put a weight of 2 on "cartoon" and 2 on "cat" to balance against the tree, then run it again and see how it handles the render. Keep in mind the GTX 1070 here is rather old, and I'm encoding OBS through the same graphics card, so these times are a lot longer than they'd usually be — and the more I run, the more I fill up the RAM. We can see how that changes things; it's a very big difference every single time. On the court cases regarding AI "stealing" people's artwork: in some countries it has already been decided that you can train the AI on copyrighted images, because it's now understood — as I'm showing here — that it's pretty much impossible to recreate the original artwork. The models don't contain the information to create a one-to-one copy of the original; they're designed to create something unique, and we can see how that works in the way the neural net's links change the result entirely every single time. There you go — and we can see with the weights, the way the cartoon is done, the image changes again. So that's CivitAI.
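The weighting applied above with Ctrl+arrow-up is commonly written as `(text:weight)` in prompts. This toy parser shows the idea; the exact syntax and how the weights are applied vary between tools, and the function below is a hypothetical illustration, not taken from any of them:

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) pairs.

    '(christmas tree:3)' -> ('christmas tree', 3.0); bare text gets
    the default weight 1.0. UIs that support this syntax then scale
    each chunk's text embedding by its weight before it reaches the
    model, pulling the generation toward the heavier chunks.
    """
    pattern = re.compile(r"\(([^:()]+):([\d.]+)\)")
    parts, pos = [], 0
    for m in pattern.finditer(prompt):
        plain = prompt[pos:m.start()].strip(" ,")
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        parts.append((tail, 1.0))
    return parts

parsed = parse_weighted_prompt("(cartoon:2) (cat:2) under a (christmas tree:3)")
```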
I'll leave the link down below so you can read up on embeddings and how to use them — they go with the little outward-arrow brackets where you put the name of the embedding you've downloaded, and of course you put the file in the embeddings folder in your models directory. That location is different for Easy Diffusion, Automatic1111, and the Krita AI Diffusion directory, which we've discussed in prior videos. This gives you an idea of how models play together and impact each other, and it's very handy to read through, get a better understanding, and then know how to apply them. If you wanted something to look like Dragon Ball Z, for example, the best way would be to find a model that has that look, then find a LoRA with those character designs and styles, pair the two together, and then find a sampler that works well with them to create the vision you're after. That's also one way to keep a constant style and a constant vision. One of the things I like about Krita AI Diffusion is that you can create a named style — stored as a JSON — and link that model to that style every single time, with the settings, the prompts, and your samplers, so every time you open it up it's the same. You don't have to try to remember it, because if you forget, you'll end up with completely different artwork. I think that's something really cool about Krita AI Diffusion. So hopefully this helps show you how to navigate AI artwork and renders a bit better. It's also very important, if you're doing something like photo restoration — which is a hobby of mine that I do for free — that you use models which are very realistic, and you might apply a LoRA to that which is for skin
imperfections, so you get a more realistic result when you're fixing a photo — because AI is absolutely beautiful at restoring a photo: it saves a lot of time, and what would take you weeks you can literally do in under an hour. So don't forget, of course, to like, subscribe, and turn the bell on for notifications, and share this video to your other networks and friends. Go and look at the prior videos if you haven't checked them out — check out the libraries, check out the playlists; there's a lot in there that will help you out. I'm doing a lot with Krita AI and I'll keep working on those videos, because I think the tool is absolutely fantastic and I love it a bit, so I'll show you more tips and tricks on how to use it in upcoming videos — and I will see you in the next Streamtabulous video. Thank you for watching my video and sticking around to the end. If you like my videos, it'd really help me out if you could like and subscribe — it helps the YouTube algorithm push my videos out to more viewers, which in turn helps me and helps everyone. So thank you for watching, and I'll see you in the next video.
Info
Channel: Streamtabulous
Views: 1,279
Keywords: Android, photos, restore, art, Android photos, old photos, photo Restoration, fun, entertainment, Android apps, Ai, Ai art generators, stable diffusion, computers, krita, adobe, paintshop pro
Id: QBQVPFFtiNE
Length: 27min 4sec (1624 seconds)
Published: Tue Jan 02 2024