How NASA imaged Webb's First Deep Field with Joe DePasquale

Captions
Hello, my friend, and welcome back to our second in a series of masterclasses from the very people who process the images from the James Webb Space Telescope. In our last video, astronomer Alyssa Pagan showed us how she downloaded the raw images from the Webb data archive and turned them into a dramatic image of the Carina Nebula. In today's video, her colleague Joe DePasquale is going to show us how he processed this image of SMACS 0723, better known as Webb's First Deep Field. As you can see, this is a pretty long video, so I invite you to make use of the chapters to jump around, and I also invite you to download Joe and Alyssa's Photoshop files, so that you can open them up in Photoshop, see exactly what they did, and follow along in real time with the video. And so, without any further ado, I bring you Joe DePasquale and Webb's First Deep Field.

Joe, thank you so much for showing us how you did Webb's First Deep Field, the first of many deep fields, I guess. I understand, though, that you use a very different kind of workflow, and I was hoping you could show us that today.

Yes, sure. Alyssa and I do have different approaches to the way we process images, and I encourage that; I think it's a really good thing to have different approaches, because we come at it from different perspectives. I've been working for a long time in PixInsight. It's something I'm really comfortable working in, and it lends itself to certain types of images really well; I really like its ability to do accurate color calibration. So for the deep field image I did use a combination of PixInsight and Photoshop. That's my typical workflow: start with PixInsight, get to a nice color image, and then bring it into Photoshop for the post-processing and getting it ready for publication. So I'm going to jump in from the start here and kind of give you an
overview of how I processed the deep field image, starting with mast.stsci.edu. If you're interested in trying your own hand at processing the deep field image, you would start here. I've got a few tabs open so I don't have to redo the searches I've already done; normally I would click Advanced Search. I'll also point out that I'm not logged in: I work for Space Telescope, so I have privileged access to the archive, but I'm not using that here. I want to show you that, as an anonymous user, you have full access to the deep field data from Webb. The next tab shows the Advanced Search with the relevant information already entered: SMACS 0723, our deep field cluster observed for the Early Release Observations, with the mission set to JWST. You can also see that once I've entered an object's name, MAST automatically resolves the name and finds its coordinates on the sky, so the RA and Dec are already populated, and we're doing a cone search of just a very small portion of the sky. I've narrowed it down to just JWST data and the NIRCam instrument, and after going through the entire archive with those filters it has found six records, which makes sense to me, because I know we observed SMACS 0723 with six filters on the NIRCam instrument. My next tab shows the results of performing that search through MAST. You get your six filters, and if I scroll over you can actually see the filter names. These filter names are linked to the central wavelength of each filter, so 0.9 would be 0.9 micron. The names follow an older naming convention from Hubble filters, but everything here is listed in nanometers, so technically this would be 900 nanometers and this would be 1,500 nanometers, but to
keep the filter names shorter, they're basically divided by 10. It's not uniform across all the different filters, and Hubble doesn't do it that way, so it can be confusing, but if you just remember that this is roughly nanometers divided by 10, that's what you're getting here. If you click on Album View, you get a quick preview of the image you'll be downloading, and you might be wondering why there are two boxes for each of these images. It's because of the way the NIRCam instrument is set up: it's two different modules, each with four detectors in it, and because of the way we observed the deep field, we knew we would have an extra bit of sky, which we call a parallel field. We didn't use it for the release, but it's there in the archive and available to process if you're interested, and it's what you see as the second box in the album view. OK, I've already downloaded all these files, and one thing I want to point out is that if you've downloaded them from MAST, they're going to be large. I did go through and extract the images from those files, and I'll explain what that means in a second, but notice that three of these files are nearly 200 megabytes in size, while the other three are 47 megabytes. That right there is an indication of a quirk of the NIRCam instrument: the short-wavelength filters are higher resolution, by about a factor of two, than the long-wavelength filters. So one of the first things you'll have to do when you download the data is register everything to a common reference frame and get all of your images to the same resolution. I would typically upscale the long-wavelength filters to match the short-wavelength filters, because the short-wavelength images look amazing, and you can upscale the long-wavelength filters without losing too much information and have them match the short wavelengths. OK.
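The naming convention Joe describes — the digits in a NIRCam-style filter name are roughly the central wavelength in nanometers divided by 10 — can be sketched as a tiny helper. This is an illustration only, not part of Joe's workflow; the function name is made up, and the rule is approximate, since, as he notes, it isn't perfectly uniform across all filters.

```python
def filter_wavelength_nm(name: str) -> float:
    """Convert a NIRCam-style filter name (e.g. 'F444W') to its
    approximate central wavelength in nanometers: the digits in
    the name are roughly the wavelength in nanometers divided by 10."""
    digits = "".join(ch for ch in name if ch.isdigit())
    return int(digits) * 10.0

# F090W -> 900 nm (0.9 micron); F444W -> 4440 nm (4.44 micron)
print(filter_wavelength_nm("F090W"), filter_wavelength_nm("F444W"))
```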
So, I'm going to bring up PixInsight now. This is the next step in the process: PixInsight will let us bring the files in, take a look at them, and start working on them. OK, we've got PixInsight up, and I've got the files I downloaded from MAST here. I'm going to bring one in, because I want to demonstrate one quirk of downloading files from MAST. A FITS file is the file format used in astronomy and astronomical imaging; it stands for Flexible Image Transport System, and a FITS file is not just an image file. It can contain many different files; it's sort of a package that contains a bunch of stuff, so when you download images from MAST you're not only getting the image, you're also getting a bunch of calibration files that come along with it: the masks used for putting together mosaics, noise, flat fielding, all of that is included. You may want to use it, but for what we're doing, these images have already been calibrated; they've been fully run through the pipeline, so we don't need to worry about that stuff. We just bring the image in and close out the other windows. You can do that one by one by clicking, or, since PixInsight can run commands, I could do close *var*, which closes all windows with "var" in the name; I think that was only one, so we'll close the rest, and then we're left with just the image file. You can see the _sci suffix, for "science image"; that's the relevant image we want out of the FITS file. I've already done this for all of the images and saved them out as separate files, so I can always come back to them if I need to, and I'll bring them into PixInsight with a drag and drop. OK, here we go. Now we're opening all six image files, and I can see that the _sci suffix is there.
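For readers who want to script the extraction step Joe describes, here is a minimal sketch using astropy (an assumption on my part; Joe does this inside PixInsight, not Python). The mock HDU layout below is simplified and hypothetical — real calibrated MAST products carry more extensions than just SCI and ERR.

```python
import numpy as np
from astropy.io import fits

# Build a mock multi-extension FITS in memory, loosely shaped like a
# calibrated MAST product: a primary header plus named SCI (science)
# and ERR extensions. Real files also carry masks, weights, etc.
mock = fits.HDUList([
    fits.PrimaryHDU(),
    fits.ImageHDU(data=np.ones((8, 8), dtype=np.float32), name="SCI"),
    fits.ImageHDU(data=np.zeros((8, 8), dtype=np.float32), name="ERR"),
])

def extract_science(hdul: fits.HDUList) -> np.ndarray:
    """Pull out just the SCI extension's pixel array."""
    return hdul["SCI"].data

image = extract_science(mock)
```

Saving `image` back out as its own file mirrors what Joe did before dragging the frames into PixInsight.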
For every single one of these, the _sci suffix is there, so I know I'm looking at science image files across the board. And now this is where the aspect of the imaging I mentioned, the NIRCam instrument having different resolutions for different data, comes into play. You can see the second number in each window's tab, 1:4, 1:6, 1:13 and so on; that's just the current zoom level of the image, and the fact that they're all basically the same size in the window but each has a different zoom level tells me we're actually looking at different resolutions across the board. That makes sense, because I know the short-wavelength channels of NIRCam are about twice the resolution of the long-wavelength channels. So the first step is to register all the images together, and of course I also want to see what's in there, so I want to run some kind of tool that lets me see what the image data contains. PixInsight has a great tool for this called the screen transfer function. It raises the pixel values in an intelligent way: it evaluates all the pixel values across the image and stretches them in a way that lets the user see what's in the data without actually changing anything in the image. I'm going to run the screen transfer function using the shortcut, Command-A, which applies it to the current active window, and what I see is some clipping. That's because the default bit depth when an image is rendered to screen is not quite deep enough for me to see all the details, so I've changed that with this little window up here, setting it to 24-bit, which applies the screen transfer function using a 24-bit lookup table. Now I can see more details in the image, and I can also see that this is the galaxy cluster SMACS
0723, and this is our parallel field over here. Now, I know I want to register all the images to each other, so the next step is a tool under the Process menu: ImageRegistration > StarAlignment. This tool lets me set a reference image, and for that I'm going to use my shortest-wavelength channel, the F090W. I'll make that my reference image and then scale everything else up to it. I click Add Views, which gives me a window showing all the image windows that are open right now; I select all, deselect F090W because that's already the reference, and hit OK, and then I need to run the tool. PixInsight has this little setup at the bottom of each tool where you either click the square or the circle: the square applies the current tool to the currently active window, and the circle applies it to everything that's open. Since I'm registering all the open images, I click the circle. So now StarAlignment runs through, evaluating the reference image and finding all the stars it can in that image, then checking each target image and looking for matches between them, and once it finds a good set of matches it applies a transform to the target image it's currently working on and scales it to match the reference image exactly. This works really well, especially for a field like this that's just full of stars and galaxies; there are a lot of point sources for this tool to find and use as references. So now I know that all of the images it just generated are matched up exactly to the reference image, the F090W. I'll apply my screen transfer function again just so I can see what I'm working with, change that to 24-bit, OK, that looks great. I'm going to rename all of these windows, because they're going to be really hard to reference later if I keep their current names.
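The resampling half of the registration step — bringing a long-wavelength frame up to the short-wavelength pixel grid — can be illustrated with a toy factor-of-two upscale. To be clear, StarAlignment solves a proper star-matched transform with smooth interpolation; this numpy sketch only shows the resolution-doubling idea, and the function name is made up.

```python
import numpy as np

def upscale2x(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upscale: double every pixel along both
    axes, so a long-wavelength NIRCam frame lands on the same pixel
    grid as the (twice-as-fine) short-wavelength frames."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

small = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
big = upscale2x(small)  # shape (4, 4); each input pixel becomes a 2x2 block
```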
All that extra information in the default names just gets in the way, so I double-click on each tab and name the windows according to the filter: I change this one to F444W, then do the same for F356W, F277W, F200W, and F150W. Now I'm up to the original images I opened earlier, the ones that then got scaled up to match the F090W, so I'm going to close those, because I don't need them anymore, and the last one I do need is, of course, the F090W itself. Now that I have all my windows here, all matched exactly, I can start applying color, and the way I do that in PixInsight is with a tool called PixelMath. This is a very powerful tool that lets you slice and dice open images however you wish, and I'm going to use it to make a color combination of all the open images. As I've mentioned before, we apply color to these images in chromatic order: I take my longest wavelengths and assign those to red, my shortest wavelengths to blue, and everything in between is green. With six filters it works out really nicely, because you can assign two filters per color channel. So I've got R, G, and B color channels, and I need to define how I'm going to combine the different filters to create them. The first thing I need to do is uncheck the box marked "Use a single RGB/K expression"; I don't want that, because I want to define each color channel individually. I also need to open the Destination section and change a couple of things so that when I process the data it does what I want: it needs to create a new image instead of replacing the target image, and I want that new image to be in an RGB color space as opposed to grayscale. Now I can define what those color channels are going to be, and this is where it's really helpful to have changed the working names of those files, because I can just
reference them directly in PixelMath that way. I want my red channel to be a combination of the two longest wavelengths, so I write f444w + f356w, and it's really that simple: I'm telling it to take those two open files and add them together. But I need to do a little more, because I don't want them to simply add up and create a double-strength red channel; I need to give them equal weight, so I put parentheses around the expression and divide by two: (f444w + f356w)/2. That gives equal weight to both F444W and F356W in the red channel. For green I use the next-longest filters, (f277w + f200w)/2, and for blue the next set, which are my shortest wavelengths. Now I'm going to apply that, and you'll notice I'm hitting the square here. For some reason PixelMath doesn't like being applied globally; it doesn't really understand that, because you're already defining things in a global sense by referencing open images through the color channel expressions. But since I have it set to create a new image, clicking the square acts as if it's running globally. I know this is a little confusing, but it works, so let's just run it, and the resulting color image shows up here as Image12.
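The PixelMath expressions above translate directly into array math. Here is a numpy sketch with tiny mock frames standing in for the six aligned NIRCam images (the constant values are arbitrary placeholders, not real data):

```python
import numpy as np

# Mock registered, same-sized grayscale frames for the six filters
# (in reality these are the star-aligned NIRCam science images).
shape = (4, 4)
f444w, f356w, f277w, f200w, f150w, f090w = (
    np.full(shape, v, dtype=np.float32)
    for v in (0.6, 0.4, 0.5, 0.3, 0.2, 0.4)
)

# Equal-weight chromatic ordering, exactly as in the PixelMath expressions:
red   = (f444w + f356w) / 2   # two longest wavelengths
green = (f277w + f200w) / 2   # middle pair
blue  = (f150w + f090w) / 2   # two shortest wavelengths

rgb = np.stack([red, green, blue], axis=-1)  # (H, W, 3) color image
```

Dividing each pair by two is what keeps one filter from dominating its channel, as Joe explains.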
And again, we haven't stretched the data yet, so I have to apply the screen transfer function, and what we'll see is that the colors look a little strange; I can explain that in a second. I need to change the bit depth again so we can see this better. OK, so that excess purple we're seeing is a result of the image not being color balanced. I'm applying the screen transfer function in such a way, with this button here, that the R, G, and B channels are linked together, so they can only move as a group, and if you stretch an image that hasn't been color balanced yet, you may get excess color coming from some filters and not others. In this case it's a combination of red and blue: the short- and long-wavelength channels have an excess background in them. So I'm going to unlink the channels and apply the screen transfer again, and this looks a little more natural to me. You can see we have a really nice-looking color image across both fields; they do look a little different from each other because, again, the image hasn't been color balanced yet. I'm going to crop the image to focus on the galaxy cluster. The parallel field is a beautiful image in itself, but I want to keep my focus on the galaxy cluster for now, so I'll use a tool under the Geometry menu called DynamicCrop. I click on the image to initiate it, pull this box over the cluster to define the crop region, crop up from the bottom a little as well to get rid of some of that black space, and then hit the check mark, and the image is cropped. I'll close that out and reapply my screen transfer function just to see if anything changes. It does, a little, because it evaluates the whole image, and if there's less black space, that changes slightly how the screen transfer function works. OK, so now I can see I've got a really nice-looking color image, but it's
not ideal in terms of the color balance. I see some excess green across the entire field, and I'm also seeing some of the calibration issues in the short-wavelength channels showing up in the form of these blue boxes. This was a mosaic, a two-by-two across this field, and in some regions the background levels in the short-wavelength channels are a little higher, so you basically have a pedestal of excess blue light in those regions. It's not exactly a pedestal, actually; it varies across the detector and across exposures, so it's a little tricky to get rid of. It's not as simple as just subtracting a box from the region, it's more nuanced than that, and it's a step I'll take later on when I'm working in Photoshop. For now, I want to concentrate on getting a nice color balance for this image, and the first step towards that is to define a neutral background region. I'm going to create a preview region within the image, using this little New Preview box here, and then zoom in and move around a little, looking for an essentially blank area of sky, something like this, or down here. The reason I'm honing in on this part of the image is that the background variations from the short-wavelength channels are minimal in this region, so I'm going to treat it as my sky background reference. I'll define a box in here, zooming in too far, OK, maybe something like that, and we'll use it as the reference for our background. When I create a preview I can actually see it here, and you can see it's a pretty clean region of sky. Now I go into Process > ColorCalibration > BackgroundNeutralization and bring up that dialog, and I'm going to tell it to use as its reference image the preview I just defined within our image, and the way I do that is to click
this box and then select Image12 > Preview01, and hit OK. What this is going to do is take all the pixels in that preview region, evaluate their color values, and try to make everything a neutral gray. So I'll apply it to the image, and it changes everything, because the pixel values have actually changed now, and our screen transfer function is still what it was before; it hasn't caught up with the change yet, so we reapply the screen transfer function. OK, we've done that. It doesn't look much different yet, because we haven't actually done a full color calibration, but if I link the channels we might see a change. Yes, with the channels linked, we do see that the background region is a neutral gray, as neutral as it can get, and the colors are starting to look a little more natural. I still see that green cast across the entire image, and of course the excess blue in these regions, so the next step is the full color calibration. For an image like this, what I really like to do is search out the face-on spiral galaxies that show up in the image and use them as my white reference, because spiral galaxies contain all populations of stars, from young to old, and that represents all the possible colors of stars. If you use an entire spiral galaxy as your white reference, everything else falls into place in terms of the color balance. So I'm going to go into the image and look for nice face-on spirals, like this one I'm zooming in on here. I will have to account for the fact that there's excess blue here, so I'll find some other spiral galaxies outside this region; I'm just moving around the image now, looking for more galaxies to use as a reference. This one here looks really nice, so I'll define another preview window around it. And up here, this one; I don't want to get this diffraction spike in there.
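Background neutralization as just described — forcing a "blank sky" preview region to come out a neutral gray — can be approximated with simple per-channel offsets. This is a rough stand-in for PixInsight's BackgroundNeutralization, not its actual algorithm, and the function and variable names are made up for illustration:

```python
import numpy as np

def neutralize_background(rgb: np.ndarray, box) -> np.ndarray:
    """Offset each channel so that the mean inside a 'blank sky'
    preview box (y0, y1, x0, x1) is the same gray value in R, G, B.
    A simplified sketch of background neutralization."""
    y0, y1, x0, x1 = box
    means = rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # per-channel sky level
    target = means.mean()                                  # neutral gray target
    return rgb + (target - means)                          # broadcast offsets

rgb = np.zeros((10, 10, 3))
rgb[..., 2] += 0.05                       # simulate a blue background pedestal
balanced = neutralize_background(rgb, (0, 4, 0, 4))
```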
Just taking about half of this galaxy and using it as the white reference is fine. OK, so at this point I've defined a few previews.

Sorry to interrupt with a question, Joe: how is it that you're choosing to use the galaxies rather than the stars themselves? Because to me the stars look pretty white; is that really the case, though?

So in this case I'm looking for galaxies that are roughly the same color as the cluster appears across the image. I could use stars, but stars have an inherent color to them, and I don't necessarily know right away what that is, whereas I know that if I use a spiral galaxy, I collect all the colors I can possibly get, if that makes sense. Using that as a white reference, everything else falls into place from there.

Oh, OK, thanks. That makes a lot of sense.

Absolutely. All right, so as I was saying, I've got the three different preview windows open for the different galaxies, and I can actually look at them here like this. There's no way to reference multiple previews when you're running the color calibration tool, so the first thing I need to do is run another tool, a script that comes under the Script menu: Script > Utilities > PreviewAggregator. All it does is look at the currently open image and see how many preview windows are in it. I don't want the first one, because I know that's my background, so I just take the three I defined around galaxies and hit OK, and it creates a new image window containing just the galaxies: this single image now represents the three spiral galaxies we've chosen as our white reference. So if I go to Process > ColorCalibration > ColorCalibration, this is where I define that white reference. I want to reference the aggregate image, so I will select that aggregated
image as the white reference. I'm going to turn off structure detection, because one of the things ColorCalibration can do is evaluate the entire image, look for stars, and use the stars as your white reference; we're not doing that here, we're using whole galaxies as the white reference, so I turn that off. I also need to go back and reference the original background region, the Preview01 we used when doing background neutralization; that's the other reference ColorCalibration needs. Now that everything is defined, I'll run it on Image12: I make sure that's the active window and click the square. OK, so again, the pixel values have changed and the screen transfer hasn't caught up yet, so I re-run the screen transfer, and now this is looking a lot more like what I would expect. The background is really nice and clean, especially on the right side of the image, which tells me that choosing that region as my background reference was a good idea. For the most part, the image is nicely balanced against the background. I'll still have to deal with the excess blue when I get to Photoshop, but at this point I want to apply the screen transfer function permanently; I want to actually change the pixel values now. So I bring the screen transfer function back up, and I also bring up a tool called HistogramTransformation. This is the tool that lets me apply the screen transfer function to the image and really change the pixel values. The screen transfer function has run and produced a really nice image on screen, but again, it hasn't actually changed anything, so I want to take what it's doing to the pixels and bake it in. To do that, I grab this little triangle icon here and drag it onto the bottom bar of HistogramTransformation, and that tells HistogramTransformation exactly what the screen transfer function is doing.
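The white-reference idea behind the ColorCalibration run above — scale R, G, and B so that a face-on spiral galaxy averages out to neutral — can be sketched numerically. This is a toy version under my own assumptions: the real tool also uses the background reference and offers structure detection, which Joe turns off, and the names below are invented.

```python
import numpy as np

def calibrate_white(rgb: np.ndarray, white_box) -> np.ndarray:
    """Scale each channel so the mean color inside the white-reference
    region (y0, y1, x0, x1), e.g. an aggregated set of face-on spiral
    galaxies, comes out neutral. A simplified white-reference calibration."""
    y0, y1, x0, x1 = white_box
    means = rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # per-channel gain toward neutral
    return rgb * gains

img = np.ones((6, 6, 3)) * np.array([0.8, 1.0, 1.2])  # frame with a color cast
out = calibrate_white(img, (0, 6, 0, 6))              # cast removed
```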
I want it to define that as a histogram transformation and apply it to the image. I know that sounds a little convoluted, but it's basically saying: I've got this screen transfer function, and I want to apply it to the image. It's now applied, but of course the image is still being shown through the screen transfer, so I have to turn the screen transfer function off; I click this little X, and now we're seeing the native state of the image, color-combined and color-calibrated. I can see that the distant galaxies are showing up a little more red, which is what I'd expect, since those are the ones appearing mostly in the long-wavelength filters; the colors look really balanced, and the cluster looks nice. At this point I'm going to take this out of PixInsight and start working on it in Photoshop. Before I do that, I delete all my previews, and because I have that issue in the short-wavelength channels that I know I want to deal with in Photoshop, instead of saving this out as a single color image, I'm actually going to break it down into its three color channels, R, G, and B; that's this button up here, Split RGB Channels. When I do that, I get three grayscale images corresponding to the individual color channels. Essentially, the B image is the equally weighted combination of the two short-wavelength channels, just as we defined with PixelMath, except that everything has now been color calibrated, so it's not exactly that anymore. When I take these into Photoshop, combine them, and reapply color, I'll get back to the original color image we have here, but I'll be able to focus on the calibration issues that are inherent in that one channel and clean them up. So at this point I'm going to close out PixInsight, and we'll bring up Photoshop and get started in there.
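Before moving on: the stretch that HistogramTransformation just baked in is built around a midtones transfer function (MTF), the standard curve behind PixInsight-style stretches. A sketch of that function — it maps 0 to 0, 1 to 1, and the midtones-balance value m to 0.5, lifting the faint end strongly when m is small — under the assumption that this is the curve in play (the on-screen tool has more controls than this one formula):

```python
import numpy as np

def mtf(m: float, x):
    """Midtones transfer function: a rational curve fixing 0 -> 0 and
    1 -> 1 while mapping the midtones-balance value m to 0.5. Small m
    aggressively brightens faint pixels, which is what makes the deep
    field's dim galaxies visible after the stretch."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

x = np.linspace(0.0, 1.0, 5)
stretched = mtf(0.05, x)  # a strong stretch: m = 0.05 lands at 0.5
```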
So, we've got everything from PixInsight loaded into Photoshop now. This is actually the file I used for creating the image that was released, the official Photoshop document from that release. This first layer here is the three color channels we just wrote out from PixInsight; if I were to combine these in color, I would get back to the original color image. One of the first things I did when I worked on the data was a subtle but important step: sharpening the image slightly, and I'm going to show you that effect here. Let me turn off a bunch of layers. OK, you may not even see it initially, but I'm going to zoom in, and I need to change this to Normal so that we can compare apples to apples. I'm using a Photoshop plugin called APF-R, Absolute Point of Focus, developed by Christoph Kaltseis. It's a really powerful but very subtle image-sharpening tool that operates on multiple scales: you basically define the pixel scale at which you want to sharpen, and it runs a sharpening routine that respects that scale, like a tonal sharpening at different pixel scales. The reason that's important is that the long-wavelength channels are lower resolution than the short-wavelength channels, so you don't want to apply the same kind of sharpening across the whole image, because you'd end up emphasizing things you don't want to emphasize, like noise rather than the real data. When I say this is subtle, I think you'll see that as I zoom in here and look at one of these regions in the core of the galaxy cluster where we see these little dots. The dots around this galaxy are likely some of the first globular clusters to form around it, and this is one of the first times we've been able to see those in a deep field image like this. I'm just clicking back and forth to demonstrate the effect
of the sharpening, and it's really, really subtle, but I wanted to be sure those globular clusters showed through cleanly and nicely in the final color composite, so I went through, found good APF-R settings for each of the filter combinations, and applied them individually. OK, and from here, I knew I wanted to deal with the variations in background level, those big squares that show up as excess blue in the color composite, and the way I did that was with curves adjustment layers. A curves adjustment layer basically just changes the pixel values, and I'm using masks to confine it to a certain region of the image; in this case, if I turn this one on and off, you can see how it affects the image. I'll just do one by hand right now: I define a box, then go to Layer > New Adjustment Layer > Curves, with "Use previous layer to create clipping mask" checked, which means it applies only to our blue channel. OK, it's created a new curves layer here, and this white box inside the black is telling us there's a mask applied, such that when you change the pixel values with the curves adjustment, it only affects the white area; the black area stays the same. Up in my Properties window is the histogram of all the pixel values in the image, and if I drag from here, I'm changing the black point, so if I pull this over, you can see our square gets very black. That also demonstrates the effect of using a mask on this image, because only the region that's white in the mask is affected. But the way I want to treat this is as a subtle change, and it has to be iterative, because you can't get this all in one shot. What I usually do is use this little hand icon up here: I click on that and mouse over the image.
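The masked black-point move Joe performs with a curves layer can be approximated numerically. A simplified sketch under my own assumptions — a real curves layer is an arbitrary editable curve, not just a black-point rescale, and Photoshop's blending is more involved; the names here are made up:

```python
import numpy as np

def apply_black_point(channel: np.ndarray, mask: np.ndarray,
                      black: float) -> np.ndarray:
    """Rescale pixel values so 'black' maps to 0.0 (and 1.0 stays 1.0),
    but only where the mask is white (1.0); black mask areas (0.0) are
    left untouched. A toy masked curves adjustment for knocking down a
    localized background pedestal, like the blue boxes in the mosaic."""
    adjusted = np.clip((channel - black) / (1.0 - black), 0.0, 1.0)
    return mask * adjusted + (1.0 - mask) * channel

blue = np.full((4, 4), 0.2)               # uniform pedestal in the blue channel
mask = np.zeros((4, 4)); mask[:2] = 1.0   # only the top half gets the curve
out = apply_black_point(blue, mask, 0.1)  # top half darkened, bottom unchanged
```

Repeated gentle passes of something like this, rather than one big subtraction, mirrors the iterative approach Joe describes.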
There's a little circle in the Properties window showing you roughly the pixel values where the mouse is currently sitting, so if I want to reduce the background value slightly, I hover over a region of the background, click, and then drag down to bring that part of the curve down a little. You can see it's subtle, but it's affecting the pixel values; it doesn't have to be subtle, if I pull it too far, and you can see how it adjusts the curve on the right. So this really is a by-eye kind of thing; you just have to go through it, and, like I said, it's an iterative process. You can see that my curves layers have not only a box, but also some regions I went back over with a brush and took out a little bit; this is actually looking at the mask itself, and if I step through each of my curves, you can see what the effect on the image was. The reason I used those brush adjustments is that there's a very faint glow of starlight extending beyond the core of the galaxy cluster, and I need to be really careful about not removing that. In terms of pixel value it sits just above the background level, so it's really delicate: if you're reducing the background level, you don't want to take it so far down that you've wiped out that star glow. So it was, again, an iterative process; I could see it there, I wanted to preserve it, and I had to be really careful when applying these curves layers that I didn't change it. OK, so I've gotten this to a point where I'm happy with it; it's not an exactly uniform background, but it's a lot closer. So now I'll apply color, similar to the way Alyssa applied color, using a hue/saturation layer, and this is the blue channel again, the combination of the F090W and F150W filters; that's our blue. I did a similar process with the green channel, because we still had some of those artifacts in the green as
So there are some curves layers here dealing with that, and then of course the hue/saturation layer to get the color; there's our green channel. The red channel was actually really clean, so I didn't have to create curves layers and adjust background levels; I just applied color directly. So there is the full color combination after dealing with some of those background-level problems.

Now, this is like the starting point for me. I consider this the equivalent of a camera-raw file, so I'm starting to think about it in terms of photography: getting the contrast right, getting the tonality right, adjusting color saturation. For example, these background galaxies that right now appear kind of orangish: I know these really distant galaxies are going to be red, because they're showing up in the red channel, so they need to be a little redder, with a little more saturation. Overall the image is a little muted in color, so we'll go in and adjust that. Like I said, this becomes more of a subjective experience at this point, and it's up to the individual image processor where they take it from here. Two people can process the same data and come up with very different images. Alyssa and I have done this before: we'll both process an image, compare our results, and sometimes we're shocked at the differences we see. But generally we both have a similar eye for this kind of work, so we tend to produce images that are complementary to each other, and we work together very well that way.

This folder here, "color balance and contrast," is where I do all of these steps. I'm going to activate it; that's basically the final image right there, and we can step through and see what's actually being done to the image. I'm going to drag this up again so we can see these layers. I'll turn them off all the way down the line, and we'll start with "flat combo clean sharpen eq": that just takes the image I created, makes it its own flat layer, and starts working from there.

The next step was another background equalization, and for this one I'm using a mask that's an inverse of the image; if I click on it, you can see the mask being applied. What I'm trying to do is get the background as flat as possible in terms of the color variations across it. I believe what I did here was a hue/saturation layer where I reduced the saturation slightly, so that the background pixel values are closer to gray than to color. Obviously I don't want to do that to the entire image, because then I'd lose color in the galaxies and in the stars, so I'm using this inverse mask: the regions that are black won't have the effect applied, and the regions that are white will. Because the mask is an inverse of the image, the white regions are essentially just the background, so I get a really nice, clean, uniform background. It's a subtle step, but it makes a big impact later.

I'll flatten the image again, and at this point I've made it a smart layer and applied the Camera Raw filter. The reason I use a smart layer is that I can always go back and readjust the Camera Raw settings if I need to. In Camera Raw, it looks like I adjusted the white balance slightly, tinted toward green, and reduced the highlights a little; if I drag this slider back and forth you can see the effect. I've taken the shadows down a little (I want the black level close to black, but not totally black), and I've increased the whites a little, so that the shining cores of stars and galaxies show up in high contrast compared to the background.
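The inverse-image-mask trick above, desaturating only where the inverted image is bright, i.e. the faint background, can be approximated in a few lines of NumPy. This is a hypothetical sketch assuming a simple channel-mean luminance; Photoshop's hue/saturation math differs in detail, and the function name is mine:

```python
import numpy as np

def desaturate_background(rgb, amount=0.5):
    """Reduce saturation mostly in faint (background) pixels.

    rgb: H x W x 3 float array in [0, 1].
    The blend mask is the inverted luminance of the image itself, so
    dark sky pixels (mask near 1) are pulled toward gray, while bright
    galaxies and stars (mask near 0) keep their color.
    """
    luma = rgb.mean(axis=2, keepdims=True)   # rough per-pixel luminance
    mask = 1.0 - luma                        # inverse of the image
    gray = np.repeat(luma, 3, axis=2)        # fully desaturated copy
    blend = amount * mask                    # per-pixel blend weight
    return (1.0 - blend) * rgb + blend * gray

# Toy frame: a dim bluish "sky" pixel and a bright red "galaxy" pixel.
img = np.array([[[0.05, 0.05, 0.15],
                 [0.90, 0.20, 0.20]]])
out = desaturate_background(img, amount=0.5)
```

Because the blend only moves each pixel toward its own luminance, brightness is preserved; only the color cast of the faint background is suppressed, which is the "closer to gray instead of color" effect described above.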
The texture and clarity sliders are very powerful, but it's easy to overdo them, so I have to be careful not to apply too much of either. Texture is essentially a pixel-scale contrast enhancement: it's looking at very small-scale changes in the image, and if you push it all the way up you get what we call a "crunchy" image; you've essentially accentuated the noise. I try to keep it below 20 (and honestly 20 is really high); I think I had it at 15 for this one, and we'll leave it at 13 here. Clarity is similar to texture but works on a larger scale: it looks for variations between brighter and darker regions and emphasizes those differences. If I drag it far across, you can see the effect; you can also go the opposite way and reduce clarity, which is sometimes an interesting effect, but we're not doing that here. Again, it's subtle; a little goes a long way, and you just want to pepper it in if you use it at all. Dehaze is another one I like: if there's haze in a photograph, it will remove it, essentially another kind of contrast enhancement, but it too can easily be overused, so I try to be very gentle with these tools.

One I did not use here, though I did use it at another level of processing, is color noise reduction. It's a small-scale effect; let me zoom in. Color noise reduction looks at the variations in color across the background pixels and, similar to what I was doing to get a uniform, grayish background, it goes a long way toward providing that as well. It's again very subtle, but you can see the changes, and if I overdo it I lose color in this little portion of the galaxy here: there's a little red and a little blue in there, and if I crank it up, that goes away. So it's something you've got to be careful with, but combined, I find all of these tools very powerful, and they're a great first step toward the final version of the image.

For these next few adjustments: I noticed a little excess blue in the core here, so I used a mask and a curves layer to reduce it. Then I did another contrast adjustment with what we call an S-curve, lowering the background level a bit and pumping up the brights slightly. Basically, I'm making the sky a little darker, but I don't want it to be black. That's partly personal preference: when I look at images of space, a totally black background just doesn't look natural to me. But in an image like this one in particular, I worry about cutting off faint, very distant galaxies in the background; they could be clobbered by a move like that. So I always want a little bit of signal in the background level. If I watch the Info window as I mouse around the image, the RGB values show up, and in the final version I want the blank-sky background to sit at pixel values of roughly 10 to 15; that's what I'll be looking for when I get there.

The next step is a saturation adjustment to help emphasize those background galaxies that we know are showing up in the red channel; I'm looking to get that color to pop a little more. Again, it's a subtle effect.
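The S-curve contrast step above, including the "never let the sky go fully black" constraint of roughly 10 to 15 on a 0-255 scale, can be written down as a small sketch. The smoothstep blend here is my own stand-in for whatever curve shape was drawn by hand in Photoshop:

```python
import numpy as np

def s_curve(x, strength=0.4, floor=12 / 255):
    """Gentle S-curve contrast: darken shadows, lift highlights.

    x: float array in [0, 1]. strength: 0 = identity, 1 = full smoothstep.
    floor: minimum output level, so faint background galaxies are not
           clipped to pure black (about 10-15 on a 0-255 scale).
    """
    smooth = x * x * (3.0 - 2.0 * x)            # smoothstep S-shape
    out = (1.0 - strength) * x + strength * smooth
    return np.maximum(out, floor)               # keep a little sky signal

# Pure black, a shadow value, and a highlight value.
x = np.array([0.0, 0.2, 0.8])
y = s_curve(x)
```

The smoothstep term dips below the identity line in the shadows and rises above it in the highlights, which is exactly the shape of a hand-drawn S-curve; the `floor` clamp encodes the preference for a near-black, but not black, sky.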
But it is an effect, and the adjustment layer I'm using here is called Selective Color, which lets you hone in on specific colors within the image. In this case I'm adjusting the reds, bumping up the saturation of just the reds, and that helps bring out those galaxies a little more.

We'll back out again. This slightly sharpened luminance layer is a very subtle thing I sometimes do; you can see it's applied at only 20% opacity. I'm essentially taking a copy of the image as it is at this point, placing it on top of itself, and setting it as a luminosity layer. It's something I've picked up over the years; sometimes I do it, sometimes I don't, and it's a personal-preference thing. At this level you're doing things that help pull the image together into what you see in your mind as the final version, and sometimes I just try things out to see what works and what doesn't. This was one of those things I kind of liked, so I kept it at the 20% level. Honestly, it probably wouldn't matter if it weren't there, but I did like the final version of the image after doing it.

This is another really subtle change: I was seeing some issues in the core of the galaxy cluster, an excess blue that was really bugging me as I finished up the image, so I'm using a masked hue/saturation adjustment to get rid of it.

"Final cosmetics" is where I use the clone tool to fix a few little cosmetic problems in the image. I'll hone in on this one down here: this effect is called persistence, where this bright star repeats itself at different spots across the detector. These ones are outside my crop, so I don't care about those, but I did want to get rid of this one, so I made a very slight adjustment with the clone tool. I try not to use the clone tool on these images, because we don't want to introduce things that aren't there, and we don't want to take away things that are; it's all about respecting the data. The clone tool is used very sparingly, only to remove artifacts that we know are not inherent in the data but are artifacts of the telescope's optical system. In this case we have the persistence effect around bright stars, so I can use the clone tool to clean that up, and you can see it's really subtle. There's also a hot pixel here that I'm removing (another known artifact), and another instance of the persistence, so all three are removed in this step.

The final step is one last curves adjustment: a really, really subtle change in the overall color. I just want to make sure my background is as neutral as it can be, so I use a curves adjustment layer with this little dropper tool, which sets the gray point of the image. You can define the sample size as a point sample, a 3x3, a 5x5, all the way up to 101x101.
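The gray-point dropper can be approximated numerically: average an N x N patch of blank sky, then scale each channel so that patch comes out neutral. Here is a toy NumPy version assuming a simple per-channel gain (the actual Curves eyedropper works differently under the hood, and the function name is mine):

```python
import numpy as np

def neutralize_gray_point(rgb, row, col, box=11):
    """Neutralize a color cast, sampled from a small patch of blank sky.

    Averages a box x box patch centered on (row, col), then scales each
    channel so the patch average becomes neutral gray, roughly what the
    curves gray-point dropper does with an 11x11 point sample.
    """
    h = box // 2
    patch = rgb[row - h:row + h + 1, col - h:col + h + 1, :]
    means = patch.reshape(-1, 3).mean(axis=0)    # per-channel sky level
    target = means.mean()                        # neutral gray level
    return np.clip(rgb * (target / means), 0.0, 1.0)

# Synthetic sky with a slight blue cast (blue channel runs high).
sky = np.zeros((32, 32, 3)) + np.array([0.040, 0.050, 0.060])
fixed = neutralize_gray_point(sky, 16, 16, box=11)
```

Because the whole image is rescaled by the gains derived from one sky sample, choosing a genuinely blank, representative patch matters; a sample that catches a faint galaxy would skew the gains for the entire frame.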
I would probably do 11x11 and then just click in a region of blank sky, and the tool automatically adjusts the whole image so that the background in that sample box becomes neutral. That was the tool applied at the end there.

Finally, I've defined a crop region; that's what these lines are. If I turn them off, you can see that the filters didn't quite line up completely across the whole image. That's totally normal, we see it all the time, but you obviously don't want to include all that extra ratty stuff around the edges, so I define a region to crop it out. Before I save out the final version, I'll just make a box around that area and fill it with black so I don't see it; that gives me an idea of what the final image will look like. When I do save out the final image, I'll use this to crop it to this box, with Image > Crop. And there is our final image.

Wow, holy moly, that was amazing, Joe, thank you. Your attention to detail, your persistence and patience, looking for even the most subtle tweaks that can be made: I think that's something you really demonstrated for us. So many of these changes are extremely subtle, extremely gentle; so subtle that nobody would notice, except that if you didn't make them, people wouldn't necessarily notice something was wrong, but they might not buy into the image as fully. The result is so natural, so beautiful, and so incredibly detailed, and this was an amazing demonstration of, I know, not even all of the work. This is not something you do over the course of an hour; how long does it take you to work up an image like this from start to finish?

For this version of the deep field image, it took about four hours from start to finish. But I will say that was the fourth time I had run through the data. As we alluded to earlier, we were working with the scientists here at Space Telescope to reduce the data and get a clean version of the images as a starting point, and that in itself was an iterative process: the calibration files were improving as we moved through them, so we had different versions of the original data to work with. For the deep field image we actually processed it four different times, so by the time we got to the fourth version, I'd gotten pretty good at it. The first few times took maybe a whole day, like eight hours, to get through the whole image, but by the fourth time I knew what I wanted it to look like, where I was going, and how to run through all the steps, so I was able to do it in four hours. That's kind of a record, I think.

Well, the result is definitely worth every second of those hours. Thank you so much, Joe, for demonstrating that for us.

Oh, thanks, Christian, thanks for having me.
Info
Channel: Launch Pad Astronomy
Views: 20,028
Keywords: webb imaging, tutorial, imaging, james webb space telescope, images, processing, masterclass, jwst, how to process webb data, how to download webb data, pixinsight, fits liberator, astronomical image processing, fits liberator photoshop, pixinsight processing, launch pad astronomy, Christian ready, joe depasquale, webb first deep field, smacs 0723, james webb, deep field, galaxy cluster, james webb space telescope images
Id: lLVqERtcdmw
Length: 46min 34sec (2794 seconds)
Published: Wed Sep 07 2022