Are you making any of these mistakes
right now and will you be able to create better images after this
video? Probably. Let's find out! I've been dabbling with AI and art for some time
now, and I get a lot of questions on how it works and how to get better images. So I figured
I'd make a little list of what I believe are the 7 most common mistakes that people make in Stable Diffusion. Some of these are easy fixes, but one will totally change your results for the better! Oh, and today I have 3 dad jokes for you. Let me know in the comments which one you thought was the best one!

Prompting
Generally, people come to Stable Diffusion having tried tools like Midjourney, where a good image can be had with a very simple prompt. Stable Diffusion is a little different in that regard. It's harder to get a good image, but you also have more control. So most of
the time, you need a more complete prompt. And think like a computer, not like a human! Skip
those filler words. Instead of "painting of a cat in a hat" you're going to need to expand that to
something like "cat in a hat, painting, picasso, rembrandt, darek zabrocki, concept art, cinematic
blue lighting". Add as much detail to your prompts as you can and the AI will understand you much
better! Remember, the AI is creating an image, it doesn't know what a beautiful image is if you
don't tell it. And throw some artists in there while you're at it. By the way, do you know why artists stay cool all the time? They have many fans.

Missing denoising strength in img2img
Many of the values and settings in Stable Diffusion can be quite challenging to grasp at first. If you follow a good tutorial, like my ultimate guide on Stable Diffusion, linked in that corner, you'll get a good sense of what's going on. Still,
img2img is one of the most important tools and it can be challenging to get right if you don't
understand denoising strength. A value of 0 makes no changes to your image and 1 completely replaces it. Find a middle ground: start high and work your way down as you keep iterating your image until you have your final result. A rule of thumb: start with 0.7 to make larger changes, then move down to 0.4 when you're close to what you want. After that you should be very close to your perfect result!

Not giving the AI enough time
I understand, you're busy, you want a perfect image straight away. But to get
that in Stable diffusion, you're going to need to give the AI some time to work with you. What
the AI can create is very much up to the seed, almost like the pieces of a puzzle. You might have the best prompt in the world, but you still might need to run it for 4, 8 or 16 images to be
close to what you're looking for. And after that, several more in img2img. Start with many images to
choose from. Move on to img2img and work in steps. Baby steps forward and you will
get what you're looking for. Hey, just a quick intermission here. If you
like this content and want the algorithm to show you more, make sure to like and subscribe,
that is really going to help me out a lot!

Copying settings
Now don't get me wrong, you should copy settings. Try everything, borrow and learn
from the best. Find good looking images and copy those to adapt to your own style. But don't expect
that what worked for someone else in another style is going to work for your style. Copy settings,
test what changing things does to your image and take it from there. Learn the tool so you can copy
and adapt to what you want! Speaking of adapting, my poker playing friend has this new prosthetic
arm. He's having a hard time dealing with it.

Not copying settings!
Are you stuck in the same old rut, making anime waifus that keep filling up your hard drive and basically all look the same? Take some inspiration from other creators,
check out what they're doing and what they use in their prompts. I guarantee that you
will find new ways to incorporate others' ideas into your own workflow. Perhaps you
can input photography lenses into that anime waifu prompt. Or lighting modes that are
generally used for film. Try everything!

Messing with the resolution
So this is a tricky one. Stable Diffusion works best at 512x512, but except for Instagram, it's not really a very usable format. I get it,
you want your image in a different resolution, you want it horizontal, vertical, maybe full
HD 1080. Yeah, that might be an issue. Now I said 512x512 works best, but you can still create horizontal and vertical images. But stick to the low resolution stuff. You want horizontal? Go for 640x384, then upscale it 4x in Extras. Then you're going to have your high resolution image.
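The arithmetic behind those numbers is worth spelling out: Stable Diffusion 1.x likes dimensions that are multiples of 64, near the 512x512 training size, and the Extras upscale simply multiplies both sides. Here's a minimal sketch of that math; the helper names are my own, just for illustration:

```python
import math

def sd_dimensions(aspect_w, aspect_h, base=512, multiple=64):
    """Pick a width and height close to base x base pixels that match
    an aspect ratio, snapped to multiples of 64 (what SD 1.x handles best)."""
    scale = math.sqrt((base * base) / (aspect_w * aspect_h))
    snap = lambda v: max(multiple, round(v * scale / multiple) * multiple)
    return snap(aspect_w), snap(aspect_h)

def upscaled(size, factor=4):
    """Final resolution after a post-generation upscale (e.g. 4x in Extras)."""
    w, h = size
    return (w * factor, h * factor)

# A 5:3 horizontal image generates at low resolution...
print(sd_dimensions(5, 3))      # → (640, 384)
# ...and only becomes high resolution through the upscaler.
print(upscaled((640, 384), 4))  # → (2560, 1536)
```

So a 640x384 render upscaled 4x lands at 2560x1536, comfortably above full HD, without ever asking the model itself to paint at high resolution.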
But whenever you move away from the square format, expect to run more batches of images.
Because the images will be weirdly cropped, especially if you're creating images of people.
Multiple hands anyone? No? Only me? Alright.

Forgetting restore faces
I see a lot of AI images. A lot of them. Some are fantastic, but the majority of them lack
one thing: good-looking eyes. Now it is an issue with Stable Diffusion, and at high resolutions it's still not perfect. But most people out there can improve the faces and eyes in their images tenfold. Go into the settings, activate CodeFormer as your face restoration model, and whenever you render people, check that 'restore faces' box.

Bonus time!
All right, yeah, so there's
one more mistake I want to talk about. Whenever people find a good prompt or a good image in the results, they tend to forget the settings. There are actually two ways to retrieve those. The first one is a setting where you save the text information about the generation parameters inside the PNG file, sort of like saving the metadata. Then you can take the image, pull it back into PNG Info inside your automatic1111, and all the settings are going to be there. You're going to see your prompt, the steps, the sampler, the scale, the seed, the size. Even if you have upscaled it, you're going to see how many times you upscaled it and with what upscaler. The second option is to save a text file next to each image with all the settings. This saves the exact same settings, just in a different way. Both options can be found in the settings tab in automatic1111, on the left side, so just go check those, and whenever you're trying to retrieve an old prompt or an old setting for an image, you've got it. Boom! Yeah, so that was the bonus mistake.

So those are my top 7 mistakes people make in Stable Diffusion. What mistakes did you first make when starting out?
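A small footnote on that bonus tip: the web UI writes those generation parameters into a PNG text chunk named "parameters" (true of current automatic1111 versions, though the exact text format can vary), so you can also read them outside the UI with Pillow. A hedged sketch, with the round-trip imitating what the web UI saves:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def read_parameters(path):
    """Return the generation settings embedded in a PNG's 'parameters'
    text chunk, or None if the image has none."""
    with Image.open(path) as img:
        return img.info.get("parameters")

# Demo: embed settings the way automatic1111 does, then read them back.
meta = PngInfo()
meta.add_text("parameters",
              "cat in a hat, painting\nSteps: 20, Sampler: Euler a, "
              "CFG scale: 7, Seed: 1234, Size: 512x512")
Image.new("RGB", (64, 64)).save("demo.png", pnginfo=meta)

print(read_parameters("demo.png"))
```

This is the same data the PNG Info tab shows, so dragging an image into that tab and reading the chunk yourself are interchangeable.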
Leave it in the comments below. Oh, I almost forgot, we only did 2 dad jokes so far and I promised three. So, yeah, how about this one: the other day my wife saw a spider and told me to, well, take it out instead of killing it. So we went out, had a few drinks. Nice guy. He's a web designer. Anyway, I hope you learned something and had fun. Have a good one, see ya! Oh, here's a reminder: don't
forget to join our community in Discord. Link is in the channel
description. Get help and support, talk AI art and just chat with the lovely people.
We got Dreambooth expert Maui, grumpy Finn Hili,