Hello and welcome to another episode of Stable
Diffusion Experimental. Today we'll be looking at IC Light, something that just came out a few days ago, and it's oh so close to being actually production ready, and hopefully it will be soon. But let's not get ahead of ourselves. What is IC Light? Well, long story short, IC Light is a new way to relight
an image. In this example, you can see that we are starting from the image on the left and getting
the image on the right. It's been completely relit by the mask over here. Now, this mask is being
created through these nodes, but you can also import a mask of your choosing that you created,
for example, by using Photoshop. Right now, we have two different implementations of IC Light
inside of ComfyUI and some implementations of it in Automatic1111 as well, and all of them are able
to relight an image by using a mask as a light source, by using both a background and a light mask, or by using just the background. So today we're gonna take a look at two different workflows, one for the background
and one for just a single light. But first, what is IC Light? Well, IC Light is brought to
you by none other than the dev that brought you ControlNet. And in its native implementation,
it can work in Gradio or in Automatic1111, and it's been ported to ComfyUI as well. And as
you can see, it's a powerful tool to relight an image. Here on the GitHub, you can see a lot
of these examples. And if you want to take a look at the GitHub yourselves, you will be able
to find the link in the description below. Now, another really cool thing about it is that
by relighting a single image multiple times, you can combine all of these images and create a
normal map. I'm sure this is invaluable to those of you who like normal map ControlNets, but also
to 3D artists out there who are more used to using normal maps when compared to us generative
AI people. And to me, this is really insane because one thing that I've been missing a lot in
generative AI is the ability to work with normal maps. I think that they're much better than depth
maps, at least where lighting is concerned. But how do we install and use all these goodies? Well, as
I said before, right now for ComfyUI there are two different projects being developed, and you'll be
able to find the links in the description below for both of them, but the one that I'm gonna use
today is the one by Kijai, or at least I hope that I pronounced that correctly. Now this node suite
is being updated constantly, every few hours, so the one you will end up downloading might be slightly different from the one that I'm using today, but the overall inner workings should be the
same. The first thing you want to do is go over to the green Code button here and copy the GitHub link, then go over to your ComfyUI folder, into ComfyUI/custom_nodes, right-click, open in terminal, type "git clone", paste the link to the GitHub repo, and press Enter. Now, in my case, obviously, I already have it installed, so I've got an error, but in your case it's going to download the repository.
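By the way, if you'd rather script that clone step instead of typing it in a terminal, here's a minimal Python sketch of the same thing; the repo URL and the ComfyUI path are assumptions on my part, so swap in your own.

```python
# Minimal sketch: the same "git clone" step, scripted in Python.
# The repo URL and the ComfyUI path are assumptions; adjust them to your setup.
import subprocess
from pathlib import Path

custom_nodes = Path.home() / "ComfyUI" / "custom_nodes"   # wherever your ComfyUI lives
repo_url = "https://github.com/kijai/ComfyUI-IC-Light"    # Kijai's node suite (see the link below)

subprocess.run(["git", "clone", repo_url], cwd=custom_nodes, check=True)
```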
If you don't want to go with the git clone method, you can always import this workflow by dragging and dropping this image here inside of your ComfyUI instance, and then install missing custom nodes by using the ComfyUI Manager. The next thing we're going to need is some models
from HuggingFace. So we're going to click on this link here and we are going to have three models
here. Now, in the ComfyUI version, only the first two are supported: the FBC and FC versions. FBC is the one for backgrounds, and FC is the one for the light as a mask. So we're going to download both of them and place them in the models/unet folder. Now, it's really important that they go in the models/unet folder.
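If you prefer to grab the models from a script rather than clicking through HuggingFace, here's a minimal sketch using the huggingface_hub library; the repo ID and the file names are my assumptions, so double-check them against the link in the description.

```python
# Minimal sketch: download the FC and FBC models into ComfyUI/models/unet.
# The repo ID and file names are assumptions; verify them on the HuggingFace page linked below.
from pathlib import Path
from huggingface_hub import hf_hub_download

unet_dir = Path.home() / "ComfyUI" / "models" / "unet"    # adjust to your ComfyUI install
unet_dir.mkdir(parents=True, exist_ok=True)

for filename in ("iclight_sd15_fc.safetensors", "iclight_sd15_fbc.safetensors"):
    hf_hub_download(
        repo_id="lllyasviel/ic-light",   # assumed repo ID
        filename=filename,
        local_dir=unet_dir,
    )
```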
Once you have downloaded all the models needed, you can import one of the two workflows from the GitHub. This one,
for example, is the one for relighting via a mask. Now I'm going to separate all of what's happening
here so that you can get a better understanding of what's going on. Okay, that's better. That's
a little bit more understandable. So what's happening is: we are loading a checkpoint, and, for now, IC Light only works with SD 1.5, so we are loading either Photon, in this case, or some other models that work as well, like EpicRealism NaturalSin. Then we have the
usual CLIP text encode nodes, one for positive prompt on top and one for negative prompt on the
bottom. And then, a little bit closer to the top, we have a Load and Apply IC Light node. This
node sources the model from your models/unet folder, and depending on the workflow that you want to use, either the one for the light via a mask or the one with the background influencing the overall lighting, you want to select either FC in the first case or FBC in the background case. So in
this case, we're going to select FC. The model that goes through the Load and Apply IC Light node ends up in the KSampler. Then, a bit further down, we have an IC Light Conditioning node.
This node needs a few different inputs. We've got positive and negative inputs coming in from the positive and negative prompts. We have a VAE coming in from the model, and we have a foreground and an optional background input. In this case, since we're using the model for relighting with just the mask that's acting as a light source, we don't have the optional background; we only have the
foreground. Let's take a look at what's happening in the foreground. For the foreground, which
basically in this case is the whole picture since we don't have a new background, we are loading
an image with a load image node. Then we are resizing it and we are encoding it. Since the IC
Light conditioning node only accepts latents as inputs, we have to encode it. Now, I actually asked why, because I was getting an error (it only accepts latents, not images), and the dev told me they switched it from images to latents because it would run out of memory on large batches. So this way it doesn't run out of memory.
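To give you an idea of what that VAE encode step is doing, and why latents are so much lighter on memory than images, here's a rough sketch with diffusers; this is just an illustration of the encoding, not the node's actual code, and the VAE checkpoint is only an example of an SD 1.5 compatible one.

```python
# Rough sketch of what VAE-encoding the foreground does. An illustration only,
# not the node's actual implementation.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # an SD 1.5 compatible VAE

img = Image.open("foreground.png").convert("RGB").resize((512, 640))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0   # [0, 255] -> [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                          # (1, 3, 640, 512)

with torch.no_grad():
    latent = vae.encode(x).latent_dist.sample()              # (1, 4, 80, 64)

# A 512x640 RGB image is 3 * 640 * 512 = 983,040 values; its latent is 4 * 80 * 64 = 20,480.
# That factor of roughly 48 is why batching latents is far less likely to run out of memory.
print(x.shape, latent.shape)
```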
So now that we have everything we need inside the IC Light conditioning node, the only thing that's missing is a multiplier.
I will show you what the multiplier does, but for now, let's keep it at the default value
of 0.182. Now, as you can see, we have three outputs from the IC Light conditioning node. We have positive and negative, which are the prompts, and we have a latent. But we are not using that latent in the KSampler. Instead, we are using another latent, and the latent that we are using in the KSampler
comes from an encoded mask. The mask is created from the dimensions of the image that we
are loading and resizing. So in our case, that's 512 by 640. And then it's shaped by
selecting a shape (circle, square, or triangle; in our case, a circle), a location for positioning the light source, and a shape width and shape height. So in our case, the location for X is 264, which is basically the center of the frame, and you can see that the light here is at the center of the X axis. Then the location for Y is 338, and you can see that it's slightly below the center, since the whole height is 640. And then the light is very narrow and very tall, and that's reflected in the shape width, which is very small, 196, and a shape height of
454. Then we have some settings for Grow Mask With Blur so that it's not a solid circle. And then we have a Remap Mask Range, and what this basically does is just remap the range from black to white: in our case, it's starting from zero, which is pure black, but it's ending at 0.65, which is not pure white; pure white would be one. So in this case the light's not too harsh, but if we go all the way up to one and hit Queue Prompt again, we can see how the light's a lot harsher. So the Remap Mask Range basically controls the intensity of the light.
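If it helps to see that whole mask pipeline outside of ComfyUI, here's a rough Python equivalent of the Create Shape Mask, Grow Mask With Blur and Remap Mask Range steps, using the values from this workflow; it's a sketch of the idea, not the nodes' actual code, and the blur radius is just a guess.

```python
# Rough equivalent of Create Shape Mask -> Grow Mask With Blur -> Remap Mask Range,
# using the values from this workflow. A sketch of the idea, not the nodes' code.
import numpy as np
from PIL import Image, ImageDraw, ImageFilter

width, height = 512, 640      # dimensions of the resized source image
cx, cy = 264, 338             # location of the light source
shape_w, shape_h = 196, 454   # narrow and tall, like in the video

mask = Image.new("L", (width, height), 0)
draw = ImageDraw.Draw(mask)
draw.ellipse(
    [cx - shape_w // 2, cy - shape_h // 2, cx + shape_w // 2, cy + shape_h // 2],
    fill=255,
)

# "Grow mask with blur": soften the edges so it's not a solid shape.
mask = mask.filter(ImageFilter.GaussianBlur(radius=40))  # radius is a guess

# "Remap mask range": 0.0 stays pure black, 0.65 keeps the brightest areas below pure white,
# which is what stops the light from being too harsh.
m = np.asarray(mask, dtype=np.float32) / 255.0
m = 0.0 + m * (0.65 - 0.0)

Image.fromarray((m * 255).astype(np.uint8)).save("light_mask.png")
```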
And then let's analyze the KSampler. As I was saying, the latent is sourced from the mask that's been encoded as an image. And the way all this works,
I think, and I think so because an earlier version of this node suite was using InstructPix2Pix instead of a proper IC Light conditioning node, is that this IC Light conditioning node is basically a front end of sorts and, behind it, there's an InstructPix2Pix-style workflow. So instead of projecting the light source on top of the original image, it's doing the opposite: it's starting off from the encoded image of the light, and then it's projecting the original image on
top of that. Now, for the KSampler values, we want to use a good number of steps, in this case 25. I have tested this with a CFG range from 2 to 3, and it works fine in all of my testing. For the sampler name, I've tested it with Euler and DPM++, and it all works. And for the scheduler, I've tried it with simple, DDIM, and Karras, and it all works. So you can just default back to Karras or normal and it all works fine.
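For reference, here are those KSampler values written out as a plain Python dict, with the names following ComfyUI's conventions; treat them as a starting point rather than the one true setting.

```python
# The KSampler values that held up in my testing (names follow ComfyUI's conventions).
ksampler_settings = {
    "steps": 25,
    "cfg": 2.5,                  # anywhere between 2 and 3 worked fine
    "sampler_name": "dpmpp_2m",  # Euler and DPM++ both work
    "scheduler": "karras",       # simple, ddim_uniform and normal work too
}
```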
The last thing that we need to look at is the multiplier. Let's see how the multiplier works by going down all the way to 0.05, and you can see that the image is a lot undercooked. Now let's go to 0.1, and the image starts to look right. Let's go all the way up to 0.25, and the image is a little bit overcooked. Now this might not be as apparent with this kind of light source, but you will see with the background workflow that the multiplier is really important.
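My guess, and it really is just a guess, is that the multiplier simply scales the encoded foreground latent before it gets combined with the sampling latent; the default of 0.182 is suspiciously close to the SD 1.5 latent scaling factor of 0.18215, which would line up with why values far below or above it come out undercooked or overcooked. A toy sketch of that assumption:

```python
# Assumption only: the multiplier treated as a plain scale on the encoded foreground latent.
# 0.182 is essentially the SD 1.5 latent scaling factor (0.18215).
import torch

def apply_multiplier(latent: torch.Tensor, multiplier: float = 0.182) -> torch.Tensor:
    return latent * multiplier

foreground_latent = torch.randn(1, 4, 80, 64)   # stand-in for the VAE-encoded foreground
scaled = apply_multiplier(foreground_latent, 0.182)
```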
Now, another way that you can influence results here is by working on the positive prompt field. Right now we have "spotlight on face"; if I remove "on face", for example, and just leave "spotlight", you can see that the light's not good at all. So directing the light helps a lot. If I put "beauty dish from above", for example, we can get some direction from the light as well, maintaining the original mask but giving it a direction of sorts. Now, you would need to be kind of familiar with lighting terms. Being a photographer, it kind of comes easy to me, but really there aren't many terms out there. I think you would be happy with beauty dish, softbox, spotlight, and maybe a few others like snoot.
But since trying to draw up a mask with a node is kind of difficult, at least to visualize in your
mind's eye, let's find another way to do that. What you could do, instead of using a Create Shape Mask node, is just create a mask inside of Photoshop, for example, and then load it into the workflow. Or you could just right-click on the source image, select Open in MaskEditor, and then paint on the image where you want the light to be. So let's say we want to have a softbox-from-above kind of effect. We're going to select everything up to the cheekbones and then some of the shoulders, we're going to save to node, and then we're going to change the positive prompt to "softbox from above". And then we want to connect the mask from the Load Image node into the Grow Mask With Blur node. So we are bypassing the Create Shape Mask node now.
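And if you went the Photoshop route instead of the mask editor, bringing that hand-painted mask in is nothing special; roughly, this is all it takes to give it the same blur-and-remap treatment, with the file name being just an example.

```python
# Rough sketch: load a hand-painted mask (for example, exported from Photoshop) instead of
# generating one with Create Shape Mask, then reuse the same blur and remap treatment.
import numpy as np
from PIL import Image, ImageFilter

mask = Image.open("softbox_from_above.png").convert("L").resize((512, 640))
mask = mask.filter(ImageFilter.GaussianBlur(radius=40))   # grow/soften the painted area

m = np.asarray(mask, dtype=np.float32) / 255.0
m = m * 0.85   # remap the max value to 0.85, as we're about to do in the workflow

Image.fromarray((m * 255).astype(np.uint8)).save("light_mask.png")
```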
Then we maybe want to up the max value from the Remap Mask Range in order to get a bit of a more powerful light. So let's go with
0.85 for example, and let's hit Queue prompt. And there you can see that we got the light exactly
where we wanted it to be. Now that may be a bit too powerful. So let's go down to 0.7 and see what
happens. And that's way better. So, all in all, as you can see, this workflow for relighting a single subject, or multiple subjects really, from a single light source works really, really well. It can work with people, it can work with objects, it can work with pretty much anything, so I'm quite happy with it. But let's now switch to the background-influenced relighting, and that's the reason why I
think that IC Light is still quite experimental and it's not quite production ready yet. Let's
switch over to this workflow now. You can find it in the description below. This is the polished
version of the workflow that we built together in my latest livestream, and you can use it either in full or just by switching off this group on the left and then plugging in this Load Image node here, so that it acts as a standalone. But if you're interested in knowing how all of this works in detail, you'll find a link in the description below to my latest livestream. But what
it basically does is it just starts from a few renders in Blender for poses and then generates
a picture for a fashion editorial based on the style of image that we want to create, the item of clothing that we want to generate on top of the model, the attention mask for where that item of clothing should be generated on top of the render, and the face of the model that we want to have throughout all of the fashion editorial. And the next step, at least in my livestream video, was to relight the whole picture with a red background, so overall a very reddish light. But we can also use it just to relight and re-background this picture. Now, the
way that IC Light works with backgrounds is that it can either work with the background only, or with the background and a mask for light. So you can take all the same principles that you just applied in the previous workflow and apply them here. But you have to keep two things in mind. One is that we need the FBC model here in the Load and Apply IC Light node; before, we were using the FC model, and now we have to use the FBC model. And the other thing that you have to be wary of is that the
background is color casting a lot on the subject. Let me show you what I mean. In this picture, the
overall lighting is kind of well preserved. We can see that the model has understood that the light
is coming from over here. So if you take a look at the original light here in the top left corner, you can see that it was completely different, and now it comes from where the light source should be. But the color cast on both the body and the face is way too much. The way it's acting, it looks like the greens are being used as a light source as well. And while that happens in reality, because
light bounces off from grass and trees as well, it's not as pronounced. Something like this
basically never happens. Now, if I switch the image to a grid of neon lights, something like this result here on the right might seem natural to you, and it would be kind of natural, because with neon lights everything is affected a lot, so you might be tempted to think it was just a matter of that one picture. But if we bring up a photo of some clouds in the sky, you can see what
I mean here. There is just no way that the blue sky is casting so much color on top of anyone's
face. So, in my experience, it works best with toned-down backgrounds, or really specific backgrounds such as neon lights, as we've seen, or kind of regular backgrounds like gray backgrounds and white backgrounds. But if you want to generate something with a background that has a lot of color in it, then be prepared to have a lot of color cast on top of the subject, which I personally don't like. Now, it could very well be a matter of settings; over here, the multiplier might need to be a lot lower, for example 0.12, but as you can see, the color cast issue is getting better while the overall picture is a lot undercooked. In my testing, I've found that the
multiplier works best between 0.15 and 0.2, so it can't really be that. So, all in all,
while I believe that the background relight is a bit scuffed right now, and it's not production
ready at all, what I think is production ready is relighting for products. Now, in this example, we are generating the product, but what you could do is just plug in a real product here. What this is doing is basically starting from an already existing image of a product, then applying a mask of our choosing (in our case, the mask has been generated inside of Photoshop, effortlessly really; it's just a brush stroke), and then relighting the product image
completely from scratch. So in my opinion, if you wanted to go with the background replacing
route, what you could do is just create a different background with another workflow,
for example, the one that we used a few weeks ago for replacing backgrounds. And then relight
it completely. Because one of the weaknesses of any workflow that replaces backgrounds is that the lighting, most of the time, is not taken into consideration. With IC Light, you can just
relight the whole picture completely from scratch, and that way you could have consistent lighting
throughout all of the pictures. So today we didn't build any workflow from scratch because these
two workflows are basically plug and play; you don't have to do a lot with them, and the overall structure is pretty fixed. There's no reason why you would change anything in these workflows. The overall structure of the IC Light node suite is fixed, so what you could do is just append other stuff at the beginning of the workflow, like in my case, where I was generating models and then relighting them, or generating products and then relighting them, or starting from an already existing product and then relighting it. But yeah, overall, I think this is an incredibly powerful tool, and I'm
really, really happy that we finally have something like this, which I have been wanting for quite a while now. I think that this is one of the steps in the right direction that we are taking in order to tame the chaos that is inherent to generating with AI, both in terms of what we can
produce with it, and in terms of what we can tell clients that we can achieve. So that's going
to be all for today. I hope you had some fun with it and you learned something. My name is
Andrea Baioni. You can find me on Instagram at @risunobushi or on the web at andreabaioni.com.
If you liked this video, remember to like and subscribe as it helps a lot with the
channel, and I'll be seeing you next week.