Discover Easy Outpainting Techniques in Stable Diffusion, Inspired by Photoshop's Generative Fill

Video Statistics and Information

Captions
Let's be real: Adobe's new Generative Fill feature is pretty easy to use, but it's technology that most of us have been tinkering with in Stable Diffusion for a while now. Outpainting was first introduced by DALL·E from OpenAI months before Adobe adapted it into their programs. There are some advantages to Generative Fill, mainly that it's backed by Adobe and it's super user-friendly and accessible compared to running Stable Diffusion, so I get why it's so popular. It takes what we've been doing and packages it up into something anyone can use without needing to tinker with Stable Diffusion. Kind of smart on Adobe's part, actually; they know their reputation will give Generative Fill a leg up, even though we've basically been outpainting within Stable Diffusion for a while now, even if it took a bit more tinkering on our part.

The downside to Adobe's Generative Fill is that you need to pay for Photoshop to use it, and Photoshop will run you 20 bucks per month, whereas Stable Diffusion is completely free, provided you have the hardware to run it. As someone who uses both Photoshop and Stable Diffusion, I had mixed feelings when Adobe first announced Generative Fill. On one hand, I was pumped to see outpainting capabilities brought into Photoshop in such a polished way, but part of me also wished it had stayed a Stable Diffusion exclusive feature. Don't get me wrong: now that I've tried Generative Fill with my Photoshop subscription, Adobe really did simplify outpainting down to an easy click-and-drag tool, and it's integrated smoothly into Photoshop.

But what if I told you there is a way to get that same ease of use with Stable Diffusion's outpainting? Yes, that's right: it's possible with the Photopea Stable Diffusion web UI extension, an Automatic1111 extension that integrates with Photopea. If you're not familiar with Photopea, it's basically an older, online version of Photoshop; you can use it straight from the web at photopea.com. And what this Automatic1111
extension does is bring Photopea into its own tab in the Automatic1111 web UI, and in this video I am going to show you how it works. You will need Stable Diffusion Automatic1111, ControlNet, and of course the Photopea Stable Diffusion web UI extension. If you don't have the first two, I suggest you install Automatic1111 and ControlNet, then come back to this video. But if you have Stable Diffusion Automatic1111 and ControlNet installed, then let's continue.

First we need to go to the sd-webui-photopea-embed extension page; the link is in the description. Once here, click on the green Code button, then click on the copy symbol. Now open Stable Diffusion Automatic1111, click on the Extensions tab, then click on the Install from URL tab. Under the "URL for extension's git repository" section, right-click and paste the git repository URL that we copied in the previous step, then click Install. Once it's installed, click on the Installed tab; you will see the new extension listed there, called sd-webui-photopea-embed. Next, click Check for updates, then click Apply and restart UI. After you've relaunched Stable Diffusion Automatic1111, you will see the Photopea tab. Click on it; you now have Photopea installed within the Automatic1111 web UI. By default you will see an image.psd tab; click on the X button to close it.

Now, for this tutorial I am going to keep it simple and generate a default square image of 512 by 512. In this case we want to double the size in Photopea and use outpainting to fill in the gaps, so in Photopea we want a resolution of 1024 by 1024, which is twice the size of the 512 by 512 image that we will be generating. Click File, New, set your width and height to 1024 by 1024, then click the Create button. Now let's head over to the txt2img tab and generate a 512 by 512 image. I'll keep my settings at their defaults. I'll use the Epic Negative embedding for my negative prompt, since I'll be using the Epic Realism Pure Evolution V3
model for this demonstration; however, you can use any model you want along with your own prompts. For my positive prompt I am going to keep it simple and use something like "portrait of a cyborg in a neon city".

Next, let's click on the Photopea tab. Now that we are in the Photopea tab, we need to make our selection. To the left we have our toolbar menu, just like Photoshop. Click on the rectangle selection tool, or press M on your keyboard. Make a selection around the picture, but leave a border around the image; it doesn't need to be perfect, just leave some image data around the selection, like this. Next, press Shift+Ctrl+I to invert the selection; you could also hover over the Select drop-down menu and choose the Inverse option. Now grab the iframe height slider and drag it to the left to expose the menu; you could also scroll down in your browser window. Keep in mind that if you use your mouse scroll wheel, it might conflict with the Photopea frame window, causing you to scroll inside the Photopea frame instead, so I recommend collapsing the iframe height slider.

Now click the Inpaint selection button; it will send the image to your img2img tab, under the Inpaint upload subsection. Let's go over the settings: select Resize and fill, set masked content to latent nothing, set the inpaint area to only masked, set your dimensions to 1024 by 1024, and set the denoising strength to 1.
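If you'd rather script these Inpaint upload settings than click through the UI, Automatic1111 also exposes them over its HTTP API (available when the web UI is launched with the `--api` flag). The sketch below is an illustration rather than the tutorial's own method: the endpoint and field names come from the Automatic1111 API, the image strings are placeholders, and the numeric codes for the resize and masked-content options are assumptions worth checking against your installed version.

```python
import base64
import json

# A minimal sketch of the "Inpaint upload" settings from above, sent through
# the Automatic1111 HTTP API (web UI launched with --api). The image and mask
# below are placeholders; in practice you would base64-encode the real
# 1024x1024 canvas and its mask exported from Photopea.
init_image_b64 = base64.b64encode(b"<png bytes>").decode()
mask_b64 = base64.b64encode(b"<png bytes>").decode()

payload = {
    "init_images": [init_image_b64],
    "mask": mask_b64,
    "prompt": "",                       # left blank, as in the video
    "negative_prompt": "epiCNegative",  # the Epic Negative embedding
    "resize_mode": 2,                   # assumed code: 2 = "Resize and fill"
    "inpainting_fill": 3,               # assumed code: 3 = "latent nothing"
    "inpaint_full_res": True,           # "Only masked"
    "width": 1024,                      # double the 512x512 source image,
    "height": 1024,                     # leaving a 256 px border to fill
    "denoising_strength": 1.0,
}

# To actually run the request:
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
print(len(json.dumps(payload)) > 0)
```

The width and height mirror the canvas math from earlier: centering a 512 by 512 image on a 1024 by 1024 canvas leaves (1024 - 512) / 2 = 256 pixels of empty border on each side for the outpaint to fill.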
Expand the ControlNet window, click Enable, click Inpaint, select "ControlNet is more important", and lastly select Resize and Fill. Scroll back to the top. If you want, you can add a positive or negative prompt; I'll leave the positive prompt blank and use the Epic Negative embedding for my negative prompt. Click Generate, and there we go. Now, this isn't exactly like Photoshop's Generative Fill, but it gets the job done, and personally, for me, it's the best way to outpaint within Stable Diffusion's Automatic1111 web UI.

If you're not satisfied with the results, you can shape them more towards your liking by guiding the generation with a positive prompt; for example, let's try "photo of a cyborg standing outside a dark neon lit city". You can also increase the batch count; for example, each Generative Fill run in Photoshop gives you three outputs, but I'll just use two. Of course, with better prompting and guidance, and by adjusting settings such as the CFG scale, sampling methods, etc., you will generate way better outputs than I did with my default settings. But I've got to admit, even with default settings, using just basic prompting, the Epic Realism model, and the Epic Negative embedding, the results were pretty amazing.

Now, for the hell of it, I am going to take this same image, plug it into Photoshop beta, and use Generative Fill on it; I'm just curious to see the results. I was going to end the video here and do the testing, but if you're still here, you might as well stick with me and see what we get when using Adobe's Photoshop Generative Fill on the same image we generated using Stable Diffusion. So the first thing I'm going to do, since the original image is 512 by 512 and we expanded and outpainted it to 1024 by 1024 with Stable Diffusion, is do the same and create a canvas of 1024 by 1024 in Photoshop and import the image. Since there were a total of six images that I generated with Stable Diffusion, and Generative Fill produces three outputs per generation, I'll run Generative Fill on this image twice to get six different outputs. I'll
leave the prompt blank. I accidentally clicked Generate to create the second batch of images to get the six images; however, upon doing that, I remembered that I had entered a prompt in the last generations in Stable Diffusion, so I'll do that and enter the same prompt once this generation is complete. I still say Stable Diffusion's output was way better with the blank prompts; however, let's enter the same prompt that I used in Stable Diffusion, which was "photo of a cyborg standing outside a dark neon lit city", and see what we end up with. Well, there we go. This wasn't a battle video; however, based on these results, Stable Diffusion wins.

Well, I hope you got valuable information from this video, and if you did, please don't forget to like and subscribe. Keep in mind, if this is the first video that you are watching from me, I am not a strictly Stable Diffusion channel; I cover AI in general, with a huge focus on generative AI such as text-to-image, text-to-video, text-to-music, etc. I provide tutorials just like this one, and I also cover new AI tools. So hopefully I'll see you in the next video, and before I end this video, let's take a look at the comparison of the images. Thank you.
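As a footnote to the walkthrough above: the ControlNet settings used for the outpaint (Enable, Inpaint, "ControlNet is more important", Resize and Fill) can also be passed through the same Automatic1111 API via the ControlNet extension's `alwayson_scripts` block. This is a sketch that assumes the sd-webui-controlnet extension is installed; the module and option strings mirror the UI labels the extension's API accepts, and should be verified against your installed version.

```python
# Sketch of the ControlNet unit from the tutorial, expressed as the
# "alwayson_scripts" block the sd-webui-controlnet extension reads when a
# request is posted to /sdapi/v1/img2img. The string values mirror the UI
# labels; treat them as assumptions to verify against your version.
controlnet_unit = {
    "enabled": True,
    "module": "inpaint_only",                       # the Inpaint preprocessor
    "control_mode": "ControlNet is more important",
    "resize_mode": "Resize and Fill",
}

alwayson_scripts = {"controlnet": {"args": [controlnet_unit]}}

# Merged into an img2img payload as:
#   payload["alwayson_scripts"] = alwayson_scripts
print(alwayson_scripts["controlnet"]["args"][0]["module"])  # → inpaint_only
```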
Info
Channel: AI Controversy
Views: 5,275
Keywords: Stable Diffusion outpainting, Stable Diffusion tutorial, ai art, controlnet stable diffusion, controlnet tutorial, easy outpainting, out painting, outpaint controlnet, outpainting, outpainting 2023, outpainting stable diffusion, stable diffusion, stable diffusion ai, stable diffusion ai art, stable diffusion art, stable diffusion browser, stable diffusion controlnet, stable diffusion gui, stable diffusion installation, stable diffusion online free, stable diffusion secrets
Id: rwuo-ppo0_A
Length: 9min 42sec (582 seconds)
Published: Fri Jul 28 2023