If you put a colorful image into Photoshop or Instagram and blur it, you'll see a weird, dark boundary between adjacent bright colors. Yuck! In the real world, out-of-focus colors blend smoothly, going from red to yellow to green, not red to brown to green! This color blending problem isn't limited to digital photo blurring, either: pretty much any time a computer blurs an image or tries to use transparent edges, you'll see the same hideous sludge. There's a very simple explanation for this ugliness, and a simple way to fix it. It all starts with how we perceive brightness. Human vision, like our hearing, works on a relative, roughly logarithmic scale: this means that flipping from one light to two changes the perceived brightness a TON more than going from a hundred and one to a hundred and two, despite adding the same physical amount of light. Our eyes and brains are simply good at detecting small differences in the absolute brightness of dark scenes, and bad at detecting the same differences in bright scenes.
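A rough numerical way to see that "roughly logarithmic" idea (just an illustration, not a model of the eye): on a log scale, what matters is the ratio between brightnesses, not the absolute difference.

    import math

    # going from 1 light to 2 is a huge relative jump...
    print(math.log(2 / 1))       # ~0.693
    # ...while going from 101 to 102 is a tiny one, even though both
    # steps add the same physical amount of light
    print(math.log(102 / 101))   # ~0.0099, roughly 70x smaller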
Computers and digital image sensors, on the other hand, detect brightness purely based on the number of photons hitting a photodetector, so additional photons register the same increase in brightness regardless of the surrounding scene. When a digital image is stored, the computer records a brightness value for each color (red, green, and blue) at each point of the image. Typically, zero represents zero brightness and one represents 100 percent brightness. So 0.5 is half as bright as 1, right? NOPE. This color might LOOK like it's halfway between black and white, but that's because of our logarithmic vision: in terms of absolute physical brightness, it's only about one fifth as many photons as white. Even crazier, an image value of 0.25 has just one twentieth the photons of white!
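Those one-fifth and one-twentieth figures line up with a power-law encoding curve using the common exponent of about 2.2; here's a quick check (the 2.2 is an assumption for illustration, and the simpler square-root story below uses an exponent of 2 instead):

    # how much light a stored pixel value really corresponds to,
    # assuming a power-law (gamma) encoding with exponent 2.2
    GAMMA = 2.2
    for stored in (1.0, 0.5, 0.25):
        linear = stored ** GAMMA   # fraction of full physical brightness
        print(stored, "->", round(linear, 3))
    # 1.0  -> 1.0
    # 0.5  -> 0.218   (about one fifth)
    # 0.25 -> 0.047   (about one twentieth)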
Digital imaging has a good reason for being designed in this darker-than-the-numbers-suggest way: remember, human vision is better at detecting small differences in the brightness of dark scenes, which software engineers took advantage of as a way of saving disk space in the early days of digital imaging. The trick is simple: when a digital camera captures an image, instead of storing the brightness values it measures, store their square roots. This samples the gradations of dark colors with more data points and bright colors with fewer data points, roughly imitating the characteristics of human vision. When you need to display the image on a monitor, just square the brightness back to present the colors properly.
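In code, that round trip looks something like this (a minimal sketch; real formats such as sRGB use a slightly different curve than a pure square root, but the idea is the same):

    import numpy as np

    def encode(linear):
        # camera side: store the square root of the measured brightness
        return np.sqrt(linear)

    def decode(stored):
        # display side: square the stored value to recover physical brightness
        return stored ** 2

    linear = np.array([0.0, 0.04, 0.25, 1.0])   # physical light levels
    stored = encode(linear)                     # what the file holds: [0. 0.2 0.5 1.]
    print(decode(stored))                       # round-trips back to the originals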
This is all well and good, until you decide to modify the image file. Blurring, for example, is achieved by replacing each pixel with an average of the colors of nearby pixels. Simple enough. But you get different results depending on whether you take the average before or after the square-rooting! And unfortunately, the vast majority of computer software does this incorrectly. Like, if you want to blur a red and green boundary, you'd expect the middle to be half red and half green. And most computers attempt that by lazily averaging the brightness values of the image FILE, forgetting that the actual brightness values were square-rooted by the camera for better data storage. So the average ends up being too dark, precisely because an average of two square roots is always less than the square root of their average (unless the two values are equal).
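A two-pixel illustration, assuming the square-root encoding from earlier (pure red next to pure green):

    import numpy as np

    red   = np.array([1.0, 0.0, 0.0])   # stored (square-rooted) pixel values
    green = np.array([0.0, 1.0, 0.0])

    # the lazy approach: average the file values directly
    naive = (red + green) / 2                         # [0.5, 0.5, 0.0] in the file
    print("light actually shown:", naive ** 2)        # [0.25, 0.25, 0.0] - too dark

    # the light in the scene really averages to twice that
    print("physically correct:", (red ** 2 + green ** 2) / 2)   # [0.5, 0.5, 0.0]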
To correctly blend the red and green and avoid the ugly dark sludge, the computer SHOULD have first squared each of the brightnesses to undo the camera's square-rooting, then averaged them, and then square-rooted the result back. Look how much nicer it is!
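Put together as a tiny blur routine (a sketch under the same square-root assumption; edges simply wrap around to keep it short):

    import numpy as np

    def box_blur_linear(stored, radius=1):
        # undo the encoding, average in linear light, then re-encode
        linear = stored.astype(float) ** 2
        n = 2 * radius + 1
        total = np.zeros_like(linear)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                total += np.roll(linear, (dy, dx), axis=(0, 1))
        return np.sqrt(total / (n * n))

    # a red/green boundary blurs to a bright edge instead of a dark band
    img = np.zeros((4, 4, 3))
    img[:, :2, 0] = 1.0   # left half: pure red
    img[:, 2:, 1] = 1.0   # right half: pure green
    print(box_blur_linear(img)[0, 1])   # roughly [0.82, 0.58, 0.0]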
Unfortunately, the vast majority of software, ranging from iOS to Instagram to the standard settings in Adobe Photoshop, takes the lazy, ugly, and wrong approach to image brightness. And while there are advanced settings in Photoshop and other professional graphics software that let you use the mathematically and physically correct blending, shouldn't beauty just be the default?
Anyone know where the setting is for Gimp?
This video does a great job of explaining the importance of gamma correction. But there are more reasons why computer color is broken; one is that HSV / HSL are often used as models for human perception, but in that regard they are very crude approximations...
Comparison of color algorithms on my blog
This was really interesting, thanks!
I'm guessing this is why RAW is better in image editing situations.
What's interesting is that this is pretty much a solved problem in video games. All modern 3D graphics APIs (DirectX, OpenGL, etc.) let you import textures that are "wrong" (i.e. sRGB/log encoded), correctly interpret the colors when applying lighting and mixing, and then convert the final colors back into whatever the target framebuffer space is. It's a mostly automatic process, but you'd have to know to use it properly. It's one of those seldom talked about secret tools of the trade that separates pros from amateurs.
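Roughly what that automatic handling amounts to, sketched in Python (the piecewise formulas below are the standard sRGB transfer functions; on the GPU they are applied in hardware when sampling sRGB textures and writing to sRGB framebuffers):

    import numpy as np

    def srgb_to_linear(c):
        # what sampling an sRGB texture does: decode to linear light
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(l):
        # what writing to an sRGB framebuffer does: re-encode for display
        l = np.asarray(l, dtype=float)
        return np.where(l <= 0.0031308, 12.92 * l, 1.055 * l ** (1 / 2.4) - 0.055)

    # blend white and black the way the GPU pipeline does: decode,
    # mix in linear light, re-encode
    print(linear_to_srgb((srgb_to_linear(1.0) + srgb_to_linear(0.0)) / 2))  # ~0.735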
So what exactly can I do in Photoshop to avoid this?
Very thorough, great job. It's something I've always noticed but never questioned.
Just checked irfanview. Same problem. The solution seems simple enough. I wonder how much effort would be required to actually fix it. Maybe if we asked Irfan really nicely, or even chipped in a few $ each to help pay for the coding time we could get it fixed.
The reason for this is that finding the average by using the more accurate square root method is a lot slower. I made a program to test how much slower it was and I found it was 31x slower, a performance hit that is unacceptable if you are blurring every frame.