Automatic1111 Running on MacBook Pro M3 Max, 128 GB RAM

Video Statistics and Information

Captions
Hello everybody! This is something that I wanted to showcase as part of the New Year content. As you can see, I'm using Stable Diffusion 1.5, and I'm testing it on my MacBook Pro M3 Max, which has 128 GB of RAM. It's a top-notch 14-inch MacBook Pro that I got for myself, and I wanted to try things out and run Stable Diffusion on the new machine; this is the output.

As part of this video, you'll see how well this whole clip was created. I also wanted to showcase the power of Stable Diffusion and AI, and how a video can be transformed into an artificial-intelligence character, an anime, or whatever your prompt describes, with some stunning output. This is a 9-second video which I've slowed down, and this is the output you'll see. You'll notice the background is different all the time; I could have controlled the background with some additional prompts, but I just wanted to experiment and see how things turned out. You can definitely see the power of AI here, and how the MacBook Pro performs flawlessly to produce this beautiful output.

This is the original footage that I'm using, and I'm using After Effects to have a lot of flexibility in post-production. This gives you a comparison of both outputs I've produced. In this video I'll also go through the performance that a MacBook Pro M3 Max can provide when generating these beautiful AI characters. I'll also be exporting a side-by-side video for viewers, which will make things easier to understand, and I'll be more than happy to provide every resource I've used. I'm also using ControlNet, so let's dig in deeper: let's look at the Stable Diffusion interface and the parameters I'm using to produce this entire output.

Of course, I'm using Stable Diffusion 1.5, and this is the standard page you get. I'm using a model called Analog Madness, version 7.0; as you can see, it's the Analog Madness realistic model. It gives really good results, and you can download this particular model yourself. I'm working in the text-to-image (txt2img) tab, so let me take you through the main steps I'm following here.

As you can see, I'm using ControlNet, and the ControlNet script I'm using is movie-to-movie. The input is a movie file that I got from the internet: a girl dancing in front of a gray background, wearing a red jacket. Her expressions and body gestures are what we want to replicate while producing the best possible output.

The other important thing is the prompt, which plays a big role here. The prompt is "young girl, open hair, hair highlights, pink sweater" — so instead of the red sweater, I want a pink sweater to appear. The negative prompts I'm using are "deformed, disfigured, ugly"; this is just to ensure I don't get any bad output from Stable Diffusion.
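For a sense of what the movie-to-movie flow amounts to under the hood, here is a minimal sketch: read the source clip frame by frame and pass each frame to txt2img as the ControlNet input image. It assumes Automatic1111 was launched with the --api flag; the endpoint and field names follow the public Automatic1111 and sd-webui-controlnet APIs as of early 2024, and dancer.mp4 is a hypothetical filename, so verify the details against your own install.

```python
# Sketch of a per-frame movie-to-movie loop against a local Automatic1111
# instance started with --api. Field names per the public txt2img and
# sd-webui-controlnet APIs; confirm them on your install.
import base64
import os

import cv2       # pip install opencv-python
import requests  # pip install requests

API = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # default local A1111 URL

os.makedirs("out", exist_ok=True)
cap = cv2.VideoCapture("dancer.mp4")            # hypothetical source clip
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Encode the raw frame as a base64 PNG for the ControlNet unit.
    png = cv2.imencode(".png", frame)[1].tobytes()
    payload = {
        "prompt": "young girl, open hair, hair highlights, pink sweater",
        "negative_prompt": "deformed, disfigured, ugly",
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "pixel_perfect": True,
                    "module": "lineart_realistic",
                    # Exact model filename may differ on your install.
                    "model": "control_v11p_sd15_lineart",
                    "input_image": base64.b64encode(png).decode(),
                }]
            }
        },
    }
    r = requests.post(API, json=payload, timeout=600)
    r.raise_for_status()
    # The API returns generated images as base64 strings.
    with open(f"out/frame_{frame_idx:05d}.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
    frame_idx += 1
cap.release()
```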
The sampling method I'm using is DPM++ 2M Karras, and I'm using just 20 sampling steps. I could have increased that to 25 or 30 as well, but I wanted to see how well it works at 20. Although the source video is a rectangular 16:9 frame, I'm limiting my output to a width and height of 512 by 512. I'm leaving the CFG scale at its default of 7, and the seed at -1, which is the default; you could change it to something else.

The most important thing I'm using is ControlNet version 1.1, which is the latest one. The key thing to remember while producing output through ControlNet is that you have to tick Enable and Pixel Perfect. The preprocessor I'm using is Lineart Realistic, and the model is the ControlNet lineart model for SD 1.5, which you need to download; it's available on GitHub. Let me know if you don't have the link, and I'll be more than happy to provide it. The rest are default parameters that you can leave as they are, and then you click Generate.

While generation runs, if you go to your terminal window, it provides lots of information about everything that's happening. Every second of the video contains a number of frames, so at 25 frames per second, a 9-second clip produces around 9 × 25 = 225 frames, and those individual frames are rendered based on the inputs provided in text-to-image. You can see that almost every image is a replica of what we get as the final output. You can also see the utilization of the GPU and CPU: about 74% of the GPU is being used by Python, and the CPU is around 46%. That gives you a good sense of how much load the system is taking: it's not maxing out the CPU, and it's not maxing out the GPU either.

Well, I hope you enjoyed this video on Stable Diffusion running on a MacBook Pro M3 Max with 128 GB of RAM, and that you had a good chance to learn how ControlNet can be used and how it helps us produce some of the best outputs by making sure the image retains the subject's body position while still giving good results. Another thing definitely worth trying is different models: currently I'm just using Analog Madness, but if you have different models, feel free to use them. Likewise, I'm using a pink sweater, but you could use a shirt, or black hair; the possibilities are endless. There's a lot more that can be done, but this is the first Stable Diffusion video of the year 2024. Again, I wish everybody a Happy New Year, and I pray that everybody's year is prosperous and everybody enjoys progress in their life. Thank you so much, and bye for now from Sydney.
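To recap the parameters from the walkthrough above in concrete form, these are the remaining fields the same txt2img payload would carry, along with the frame arithmetic mentioned in the video. Again a sketch: field names follow the public Automatic1111 API, and the values are the ones described above.

```python
# Generation settings from the walkthrough, as the extra txt2img payload
# fields (field names per the public Automatic1111 API).
settings = {
    "sampler_name": "DPM++ 2M Karras",  # sampling method
    "steps": 20,                        # 25-30 would also be reasonable
    "width": 512,                       # output size; the source clip is 16:9
    "height": 512,
    "cfg_scale": 7,                     # default CFG scale
    "seed": -1,                         # -1 = a random seed per run
}

# Frame budget: every second of source video means fps separate renders.
duration_s, fps = 9, 25
print(duration_s * fps)  # 225 frames, each a full txt2img pass
```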
Info
Channel: Digital Abstracts
Views: 4,496
Id: 6FCbcHm7vEc
Length: 8min 46sec (526 seconds)
Published: Thu Jan 04 2024