SDXL LoRA Training [2024 Colab]

Captions
Welcome back, let's get this thing moving. What I've got for you today: back in July we were using the old Colab (last updated in August), and it's still been working — I train things with it daily. The point is, I've put a new version out for everybody, dated February 15th 2024. It's a fork of Linaqruf's Kohya LoRA trainer, so I'm going to quickly run through it. This one is preset for an A100, so if you have Colab Pro you can just do the minimum. I'll show you the minimum you need to do first, and then we'll go back and quickly explain what's going on.

First, prepare your environment: check that you're actually getting the A100 GPU, then install the Kohya trainer. Just click the play button for each section, in sequence. Once that's installed, put in your Hugging Face token — you can click the link to go and get it; it's just the read token. That will download the models. So put the token in, then click play.

Directory config: this is set up for My Drive, assuming you've already got all of your folders on Google Drive with your captions set up. For an existing dataset, you just put the path in. If you don't know the path, you can open the file browser while it's running, navigate to the folder, right-click, copy the path, and paste it there.

For bucketing and latents, I have "recursive" on because I use subfolders. You can have multiple datasets in one LoRA; you just need to put all your datasets under a single path on Google Drive, point it here, and use recursive. It's set up for 1024 because that's SDXL. It should automatically rescale all your images — down, not up, for the best results, so bigger images are fine — and it will automatically put them into aspect-ratio buckets for you, set up all the metadata, and compile all of the tags.

The tags are the text files that are paired with the images: when we talk about the dataset, it's an image plus a text file with the same file name, where the text file describes what's in the image. You just have many pairs, and they just need to match. We could do a whole video on captioning and dataset curation, so I'm not going to get into that right now.

I use LoRA_C3Lier with 16/8/8/1, which is literally the default that's listed there — no secrets, no special sauce. I've scaled the learning rate because I'm using a batch size of 30, so it learns from 30 images on every step. We've increased the learning rate, and that means we only need to do two epochs, so it's a very quick training method; if you don't have a lot of images this is very, very fast. The scheduler is constant with warmup — nothing here needs to change. Just confirm the settings, click play, and move on every time.

This is my multires noise; I think it's basically the default, multires 6 and 0.3. I use an SNR gamma of 3 — I found that was good. Don't forget to put your project name in, otherwise you'll get a LoRA called "enter project name here". Everything else here can stay the same.

Now, if you are going to change the batch size — say you want a batch size of 3, maybe with a repeat of 3 — that's ten times less, so I would put the epochs up from 2 to 20 (or 10), and then maybe lower the learning rate, though not necessarily exactly halve it. You have to scale it; I've left a note in the article. Basically you have to scale the learning rate with the batch size, and that determines how many epochs you need to finish, to make sure you've actually trained on the images. Again, we could do a whole video on that, so if you don't know what that is, leave it alone; if you do know, put your own formula in, because everyone has their own approach, and you can change the settings to suit how you want to use it.

I don't use the sample prompt, because I find training these can take less than two minutes, so at that point I'll just grab the LoRA and test it. It will produce two LoRAs: you click play, and as we come down it confirms all of the settings, then it makes one halfway through, at epoch 1, and another at epoch 2. You'll know which is which because the epoch-1 file ends in "000001" and the epoch-2 file just has the name of the project. I find the second one is generally the better one, but I've found it useful to keep the partially trained one, because sometimes they're a bit more elastic — not quite as good, but you can have some fun with them. It depends what you're training: for a person it's useless, but for a style you can blend things in different ways.

Anyway, final step: start training, and that's it — it will go away and do all the work. You would have had an error further up if something had gone wrong, or it will give you an error here if you've done something wrong. But like I said, you only need to click the start buttons in sequence: put your Hugging Face key in, put your dataset path in, and name the project. If you're using an A100, that's all you've got to do — put the values in first, then play every cell in order, and that'll be fine. It's a nice way to train your LoRAs.

Once you're done, because you signed into Google Drive, you'll find the LoRAs under Content > My Drive > kohya-trainer, and there you can see what I was talking about: the half-trained epoch 1, and then the final epoch 2. Usually it's finished by epoch 2. It really depends — my settings work for what I'm training and they're quite versatile, but they do sometimes need some tweaks, and it's always learning rate and epochs. You've got to do some experimentation if you don't really know what you're doing, but if you just want to mess with this, you can train things without changing anything; it's only when you start changing the settings that you run into trouble. Usually the reason people have to change the settings is that they're not using a big GPU but a little one, and then you have to back it off and let it take longer to train the same information.

That's pretty much all I had to tell you. Like I said, big thanks to Linaqruf, Kohya, and the Stability team for letting us play with all this stuff, and you'll find the new trainer linked here — tested working as of today, so that's as good as it gets. See you next time.
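The batch-size advice above (batch 30 at 2 epochs versus batch 3 at ~20 epochs, with the learning rate scaled down) can be sketched as a small helper. This is one common linear-scaling heuristic, not the video author's exact formula — he explicitly says everyone has their own — and the base learning rate here is an illustrative number, not the colab's actual default:

```python
# Hedged sketch of linear learning-rate/epoch scaling when the batch size
# changes. This is a common rule of thumb, not the exact formula from the
# colab; base_lr=3e-4 below is purely illustrative.

def rescale(base_lr: float, base_batch: int, base_epochs: int, new_batch: int):
    """Scale LR proportionally with batch size, and scale epochs inversely
    so the total number of optimizer updates stays roughly the same."""
    scale = new_batch / base_batch
    new_lr = base_lr * scale
    new_epochs = max(1, round(base_epochs / scale))
    return new_lr, new_epochs

# Dropping from batch 30 to batch 3: 10x smaller batch ->
# roughly 10x lower LR and 10x more epochs.
lr, epochs = rescale(base_lr=3e-4, base_batch=30, base_epochs=2, new_batch=3)
print(lr, epochs)
```

Whether you scale the learning rate linearly or by the square root of the batch ratio is exactly the kind of per-person formula the video alludes to; the point is only that batch size, learning rate, and epoch count move together.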
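The dataset format described above — each image paired with a same-name `.txt` caption, possibly spread across subfolders picked up by the "recursive" option — is easy to sanity-check before training. A minimal sketch (the dataset path is a placeholder, not a path from the video):

```python
# Hedged sketch: verify every image in a dataset tree has a matching
# same-stem .txt caption, recursing into subfolders the way the colab's
# "recursive" option does. The path passed in is a placeholder.
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def find_unpaired(dataset_root: str) -> list:
    """Return image files that have no same-stem .txt caption beside them."""
    missing = []
    for img in Path(dataset_root).rglob("*"):
        if img.suffix.lower() in IMAGE_EXTS:
            if not img.with_suffix(".txt").exists():
                missing.append(img)
    return missing

# Usage: point it at your Google Drive dataset path, e.g. the one copied
# from the Colab file browser.
# print(find_unpaired("/content/drive/MyDrive/my_dataset"))
```

An empty result means every image has a caption and the pairs match, which is the only structural requirement the video states for the dataset.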
Info
Channel: FiveBelowFiveUK
Views: 2,602
Keywords: ai, diffusion, lora, low, rank, adaptation, training, models, colab
Id: coWPuQ7963U
Length: 7min 53sec (473 seconds)
Published: Wed Feb 14 2024