Deep Sky Astrophotography With CMOS Cameras by Dr Robin Glover

Captions
How's it going, everybody. On the 9th of March 2019 the town of Kettering hosted the Practical Astronomy Show. There were several speakers at the show, and I was fortunate enough to be allowed to record two of their talks for you. The following is presented by Dr Robin Glover, formerly a physicist and climate-change researcher, now turned software developer and the author of the popular program SharpCap. In his talk he covers deep-sky CMOS imaging and the differences between CCD and CMOS, and he also gives you hints and tricks about how to properly expose, and how long to expose, for any particular camera. It's a very technical talk with some great information in it, and I hope you enjoy it. So without further ado, I'll let Dr Glover take it from here. Enjoy.

"So this talk is deep-sky CMOS imaging, by Robin." Thank you very much. I think this talk is mainly for those who image, but I hope that those of you who only observe still get some value out of it. What I want to talk about today is the science behind the way that we image deep-sky objects, which will hopefully lead us to understand a little better how we can get our deep-sky imaging to behave as well as possible and make the best use of our very limited observing time.

The basics of deep-sky imaging are fairly well known. We have to image for a long time because we're collecting light from very, very faint objects, so we have to let that light collect for quite a long period. But taking one extremely long image is incredibly risky: if cloud comes across, or the mount glitches, or an aeroplane flies past, we could lose everything in one go. So we take sub-exposures to reduce the risk of that. We probably cool our cameras to keep the noise down, and we might use filters to reduce light pollution, perhaps just a light pollution filter or even narrowband filters. We sometimes use dark and flat correction to fix the problems in our images, and we stack all the frames that we get, hoping for a final image that is better than any of the individual frames, and we build some fantastic deep-sky images this way.

These techniques work really, really well. However, if you go online to certain astronomy forums you might find people advising you to take extremely long sub-exposures, and what we want to work out today is the scientific answer to how long the sub-exposures should be, rather than the answer based on forum anecdote, and we'll find out whether we really need to take those long sub-exposures. You can certainly find posts where people just say: you want to see more detail, take a longer sub-exposure. I think we're going to find out today that's not generally the case, and hopefully we'll come out at the end able to choose our sub-exposure lengths and our cooling levels a bit more accurately, and get these images without the same level of difficulty.

So the goals of the talk are to understand the reasons behind our DSO imaging techniques; we're going to find out which ones are essential and which aren't, see how to balance the costs and benefits of the various techniques, and optimise the sort of things we've talked about. We'll be aiming to work out how to produce good images, and by a good image I mean we're going to try to minimise the noise in the image, because noise makes faint detail disappear; we'll try to make sure there's good dynamic range, so we can see both the faint
things and the bright things; and we're going to work to a fixed observing time. What I mean by that is that you have a limited amount of time: maybe before the object you're looking at sets behind the neighbour's trees, maybe before the cloud comes in, maybe you've got three hours before you have to get to bed because you've got to get up and go to work in the morning. So we're going to work out, assuming you've got this fixed amount of time to observe, how best to make use of it.

First of all we're going to look at some imaging devices and how they work, as a very quick reminder. Then we're going to imagine a perfect camera and work out how we might image with that, and that will help us discover that long sub-exposures are definitely not necessary in all cases. We'll look at how digital cameras are imperfect, we'll understand camera cooling and how it relates to light pollution in a very interesting manner, and we'll understand random noise in cameras, how that also relates to light pollution and lets us choose our sub-exposure lengths. Then near the end we'll look at how to choose a gain value, which is something I know many CMOS imagers have trouble with, although those of you who use CCDs won't have worried about that, because you tend not to have a choice of gain values. And then finally the conclusions.

First of all we're going to look at an imaging device that is often overlooked, which is the human eye. The human eye has a focal length of about 22 millimetres, and the aperture gets up to about seven millimetres, maybe eight, when your eye is fully dilated and dark-adapted, so that gives us roughly an f/3 focal ratio. This is an interesting figure, though: the best sensitivity of the human eye is to a spot of light of about a thousand photons per second. Now this is quite odd, because the pigments at the back of the eye, on the retina, can actually respond to a single photon, but if a single photon arrives at the retina the brain doesn't notice it. The brain waits for about ten signals to arrive in a short period of time, within about a tenth of a second, before it spots the signal. So about a hundred photons have to arrive in a smallish area at the back of the eye before you see any light. What's worse is that the eye is actually not as good as you might think at transmitting light from outside to the retina: only about ten percent of the light that arrives at the front of the cornea ends up at the retina. So you need about a thousand photons per second, or a hundred photons within a tenth of a second, for the eye to see something like a faint star. What this also means is that the eye effectively has an exposure time of about a tenth of a second; it's collecting light over about that length of time, and that limits what the eye can see.

If you want to see fainter things with the eye, your choices are fairly limited: you go to a darker location where there's less light pollution to get in the way; you wait for your eyes to fully dark-adapt, because they have a vast range of sensitivity, up to a million million to one, once all the different aspects of sensitivity enhancement have kicked in, including chemical changes at the back of the eye; or you go out and do what many visual astronomers do and buy a very, very large telescope to send more photons into the back of your eye. Okay, that was obviously our first observing instrument; then we moved on to photographic film. Photographic
film is remarkably simple: we place the film at the focal plane of the camera, that's photosensitive, and after we've exposed it we use some chemical processes to make the image permanent. This kicked off in about 1850 to 1860. This gentleman here, Henry Draper, had an enormous refracting telescope and a very large camera that he put on the back of it to take some of the very early photographs, and he took this photo of the Orion Nebula in about 1870 or 1880; that's the first known photograph of the Orion Nebula, and it took 50 minutes of observing time to put together. That tells us two things: one, that he was a very patient guy, and two, that the film wasn't particularly sensitive. We'll see some other pictures of the Orion Nebula later on and we'll see how modern technology has changed things.

This is what film looks like under a microscope. These are crystals of silver halide, and basically the photons hit these and make a chemical change happen, which we can then make permanent with the fixing process. The two big advantages of film are: one, the permanent record, you can look at it later; and two, you can take long exposures that the eye cannot. But there are some disadvantages too. It's not very sensitive: only about one to two percent of the photons that hit photographic film have any useful effect. It's an expensive process: buying and processing one of these costs a tenner, or it did the last time I went near photographic film; buying and processing a serious-sized photographic plate, well, I don't know how much it costs now, but I found a reference saying that in the 1980s they cost seventy pounds each, and I don't think there are any references from later than the 80s because they really went out of fashion in professional astronomy. You didn't get immediate results, so it was very hard to improve your film photography, because you couldn't see straight away what you'd done wrong. And film had a problem called reciprocity failure, which is a hard word to say after you've had a couple of beers; fortunately I haven't. Reciprocity failure meant that film wasn't as sensitive to dim light over a long period as it was to bright light over a short period, so although you could take these long exposures, it didn't help you as much as it should. Believe it or not, some people did stack film: they used to stack the negatives on top of each other, but apparently it was incredibly hard to do, so stacking wasn't really a practical option in the film days.

So then we moved on to CCD devices, where we replace the film with a CCD sensor, an electronic sensor that detects photons electrically and converts them to a digital image. This is what early professional-grade CCD sensors looked like. They started being used for astronomy in the 70s; by the 1980s they were very much displacing film; and they came into amateur use in the 1990s. What we need to understand now is how the CCD actually works. Basically you have silicon in the pixels of the chip, and photons come in and knock an electron loose, occasionally, hopefully for most of the photons that come in. Those electrons build up in the pixels and create a voltage, and then we have electronics that reads out the voltage stored in each pixel, one at a time, sends it through a processing system and an analogue-to-digital conversion, and what
comes out at the end is a number that says how bright that pixel is, and we measure that in ADU, or analogue-to-digital units. The advantages of CCDs are that they can be incredibly efficient: the quantum efficiency, which is the percentage of photons that actually get counted, can hit ninety-five percent in some scientific sensors and even some amateur ones; long exposures are possible; and they don't have that problem with reciprocity failure. The disadvantage, in computing terms, is that they're slow to read out, because one pixel is read out at a time, so it takes quite a while to read out the whole CCD. That either limits the frame size or the frame rate: you can read out a big frame, or you can read out frames fairly quickly, but you can't do both with a CCD, so you can't have a high-resolution video camera with a CCD. They also have a problem called blooming: if one of these pixels, like that one there, fills up with electrons, it can spill over into neighbouring pixels and cause problems in your image.

As a result of some of these deficiencies of CCDs, and the expense of producing them, there was another generation of imaging devices, which are CMOS. They work in basically the same way, in that electrons are created by photons knocking them loose, but instead of having one set of readout logic like a CCD does, there's a readout circuit in every single pixel. This allows the camera to be read out much more quickly, and it allows it to be produced by cheaper manufacturing methods. One of the early problems with this was that the readout circuitry uses up some of the space on the pixel, so the camera was less sensitive, but then they introduced micro-lenses, which cover each pixel, and the lens focuses the incoming light onto the part of the pixel that's there to receive it and not onto the readout electronics. That fixed a lot of the problems with the early CMOS cameras, which were rightly known for being noisy and not very sensitive. So the advantages are that they're cheaper to produce, there are vast numbers made for mobile phones and digital cameras and automotive cameras and all sorts of other things, and they can do high-speed, high-resolution video. Some of the disadvantages are that, because the electronics is spread over the sensor, it's harder to get rid of all the amp glow, as there's more going on on the sensor than there would be in a CCD, and because every single pixel has its own readout circuit, the response may be a little less even.

Okay, so now we're going to try to address this question, which is perhaps posed to us by the guys on the forums who say take long subs: does DSO imaging really need super-long subs? It kind of seems logical, because when we take images in bright light, like this one of the Sydney Opera House, we take an incredibly short exposure, we get a very crisp image, and we don't get any camera shake. When the light is particularly poor, we take a longer exposure and we get things like camera shake that ruin our images. So we think to ourselves: well, DSOs are even fainter than the Sydney Opera House underneath a rainstorm, so we should take even longer exposures for those, doesn't that make sense? But we're being misled here, because when we take these images of real-world objects we take one frame: you take one frame here and one frame there. When we're DSO imaging we can take a
stack: we can take multiple images and add them together. Pretty much only a very few of the most modern phone cameras do stacking; if anybody's got a Pixel phone from Google, they actually stack on the phone, so if you take an image in low light, like in here, it will take maybe ten images and stack them to try to produce one good image, but in general that doesn't happen for normal cameras. So let's try to work out whether this really leads to long exposures.

We're going to imagine that we had a perfect digital camera. That would be one that can count every single photon that comes in and never makes a mistake: it has 100% sensitivity, it's completely accurate, it has no noise, and it can read out those values very quickly, so if we need to take lots of exposures there isn't much downtime between one exposure and the next. If we had a camera like this, it doesn't matter what sub-exposure length we take. We could take an hour's exposure, collect a thousand photons, and get a count of a thousand coming out at the end. Or we could divide that up into four 15-minute sub-exposures, and we'd get four different counts that all add up to a thousand. Or we could go down to shorter exposures and get twelve of them in, or even shorter exposures, down to a few minutes, and I could go on, but I just got bored of drawing the boxes. If we're counting photons, we'll always get the same count at the end of the procedure, because we're 100% accurate.

Now this sounds like a pretty weird thing to exist, and it sounds like it might be impossible, but there's actually a scientific device called an EMCCD, an electron-multiplying CCD, that has properties really quite similar to this, apart from not quite hitting a hundred percent sensitivity: it counts the photons for you. Unfortunately they cost tens of thousands of pounds to buy and they need to be cooled to at least minus 70 degrees C to work, so they're not entirely practical yet for amateur astrophotography, but very similar devices do exist.

So if we had this perfect camera, there's really no advantage in taking long exposures, because we're just counting photons; we get the same number whether we take lots of short exposures or one big long one. There are disadvantages to long exposures, though, and we've already talked about some of them: you need your guiding to be accurate over several minutes if you're going to take a five- or ten-minute sub, and accurate over even longer if you take even longer exposures; you get satellite and aeroplane trails coming through that can ruin a whole long exposure; and sky glow and thermal noise build up, which means you need to cool your camera more deeply to keep those under control, or perhaps put light pollution filters in. There are also disadvantages to taking incredibly short exposures: you wouldn't want to take tenth-of-a-second exposures, because think of how much data you would accumulate and how long your stacking program would take to run through it. So with this perfect camera we might take subs in the one-to-thirty-second range as a sensible middle ground between encountering these problems at one end and those problems at the other. Hopefully I've demonstrated that there's nothing fundamental about long exposures, in that if we had a perfect camera we wouldn't take them.
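As a hedged illustration of that photon-counting argument (my own sketch, not something shown in the talk), here is a minimal Python example: it simulates the arrival times of roughly a thousand photons over one hour and then bins the same arrivals into different numbers of sub-exposures. However the hour is chopped up, a perfect counting camera reports the same total.

```python
import numpy as np

rng = np.random.default_rng(42)

# One hour of observing a faint source that delivers about a thousand photons
# in total (matching the talk's example numbers).
total_time = 3600.0                       # seconds
rate = 1000.0 / total_time                # mean photons per second

# Simulate the arrival time of every photon in the hour (a Poisson process).
n_photons = rng.poisson(rate * total_time)
arrivals = rng.uniform(0.0, total_time, size=n_photons)

# A perfect camera just counts photons, so however we chop the hour into
# sub-exposures, the sub-frame counts always add back up to the same total.
for n_subs in (1, 4, 12, 60, 3600):
    counts, _ = np.histogram(arrivals, bins=np.linspace(0.0, total_time, n_subs + 1))
    print(f"{n_subs:5d} subs of {total_time / n_subs:7.1f} s -> total count {counts.sum()}")
```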
So why on earth do we take long sub-exposures, when our perfect camera doesn't need them? The reason has to be something to do with the imperfections in real digital cameras.

Some of the imperfections we encounter in real digital cameras: not every photon does a useful job, lots of them get ignored; some earlier cameras had quite a low quantum efficiency, so only a small percentage of photons did any work, while more modern ones can be up to eighty percent or more. Sometimes the pixels become full of electrons; we've already talked about blooming when they become full in CCDs, but once the pixels are full you can't count any more electrons. We have pattern noise, things like amp glow, in our images, and that causes problems. But the two most important imperfections, the ones that make us take long sub-exposures and determine how we image, are thermal noise, which comes from heat, and random noise. Random noise is the particular one for sub-exposures, and it's going to take most of our time today, but before that we're going to talk a little about thermal noise.

Remember the picture we showed of photons coming in and knocking an electron loose within each pixel? Well, unfortunately heat can do the same thing: just occasionally the warmth in the pixel knocks an electron loose, and the problem is that those electrons knocked loose by heat are completely indistinguishable from the ones we want to count, the ones knocked loose by photons, so they end up in our value for that pixel. As you can imagine, because heat is doing the knocking, the hotter the camera gets, the more rapidly they're going to be knocked loose. I've got some graphs here that show approximately the amount of thermal noise you get with different sensors and how it changes with temperature. One of the key things is that it tends to double for about every six and a half degrees C that you get hotter, and halve for every six and a half degrees C that you get colder; that's not quite perfectly true, but it's roughly accurate across a wide range of temperatures and cameras. Here are some popular sensors that have been in moderately well-used astronomy cameras over the last decade or so. The older KAF-8300 CCD sensor, which is in a lot of cameras, you can see is quite a bad boy as far as thermal noise is concerned; it's at least twice as bad as the next strongest thermal noise. Some of the more modern sensors, like this 183 CMOS sensor, are much lower, nearly five or six times lower, and the Sony ICX694 CCD also has very low thermal noise. The other thing you can see from this is that as you cool your sensor, it's the first ten degrees that give you the biggest improvement, and as you cool deeper you get less and less improvement.
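As a rough sketch of that doubling rule, here is a small Python example. The reference rates at 25 °C are illustrative assumptions rather than figures read precisely off the talk's graph, although the 183-like value matches the roughly 0.2 electrons per pixel per second quoted later in the talk.

```python
def dark_current(temp_c, ref_rate, ref_temp_c=25.0, doubling_c=6.5):
    """Thermal (dark) signal in electrons/pixel/second at a given sensor
    temperature, using the talk's rule of thumb: the rate roughly doubles
    for every ~6.5 C warmer and halves for every ~6.5 C cooler."""
    return ref_rate * 2.0 ** ((temp_c - ref_temp_c) / doubling_c)

# Illustrative reference rates at 25 C (assumed values for the sketch).
sensors = {"older CCD (KAF-8300-like)": 1.0, "modern CMOS (183-like)": 0.2}

for name, ref in sensors.items():
    for t in (25, 15, 5, -5, -15):
        print(f"{name:26s} {t:+4d} C -> {dark_current(t, ref):6.3f} e-/px/s")
```

Running it shows the pattern the talk describes: the first ten degrees of cooling remove the largest absolute amount of thermal signal, and each further ten degrees removes less and less.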
So we've seen a little about thermal noise, where it comes from and how it changes with temperature. Now I want to do what almost seems like a switch of direction and talk about light pollution, and we'll see why the two come together in just a moment. This is a plot from the excellent Stellarium program, which is a free download for Windows and other platforms, and one of the things it can do is simulate light pollution. If you're aware, there's a scale of light pollution called the Bortle scale, which goes from 1, a truly fabulous dark-sky site, through to 9, a dreadfully light-polluted inner-city site, and this image shows the simulation, for the same sort of date and time, of what you would see in five different locations from inner city through suburban to rural to excellent dark sky. This is a simulation for the naked eye, but you can see how much the light pollution affects the view, and how it would also really affect our imaging prospects.

Now, what you can actually do is calculate, from the light pollution figures, how many electrons get pushed into each pixel of your camera by light pollution every second, and build up a table like this. Down the side we've got the f-ratio of your telescope, whether it's a fast imaging scope like an f/4 Newtonian or an f/5 refractor, or something more like an f/10 SCT down here, and we've got the different Bortle levels along the top; the values are the number of electrons per second per pixel being pushed in by light pollution. This calculation is done for a mono sensor with 50% efficiency and 3.75 micron pixels; there are adjustments you need to make, so if it's a colour sensor you get less light into each pixel and you divide by three, and if it's narrowband you're cutting down the light pollution a lot and you divide by a bigger number. If you've got different sensors or telescopes, I've put an online tool at tools.sharpcap.co.uk that will do this calculation for any combination of sensor, scope and light pollution level. What we're going to remember from this is, first of all, that there's a massive range, from less than an electron per pixel per second here to nearly 200 here; but the other thing to remember is this number in the middle, 2.6. This is a suburban sky, which is going to be pretty typical for a lot of people who do astrophotography, and it's f/6, which is pretty typical for an imaging refractor, so we're going to remember that number and use it again and again throughout the talk.

So how is light pollution linked to thermal noise? Well, they both put extra electrons into every pixel of our sensors. Light pollution does it according to this table: if we're our default imager, we get 2.6 electrons per second added to every pixel of our image. Thermal noise does it according to our graph: if we're using, say, the 183 chip and we haven't cooled it, so it's at 25 degrees, that puts in about 0.2 electrons per pixel per second. They both have the same effect of bumping up the background brightness of the image, so there's no point in trying to drive the thermal noise right down if we're not doing anything about the light pollution. What I would say to people who are trying to decide whether to cool their camera, and how far to cool it, is: find out the thermal noise rates for your camera, which will most likely be on your manufacturer's website, look at your light pollution figures, and aim to set your thermal noise rate to maybe 10% of the light pollution figure. Here's an example for our default imager with 2.6: we divide that by 10 and get down to about a quarter of an electron per pixel per second. For the KAF sensor we'd want to cool down to about 10 degrees to hit that target level; for the Sony 694 sensor we'd only need to cool it to 20; and the other two sensors wouldn't need cooling at all to hit that sort of target. If we were at a darker rural site instead of suburban, we'd have to cool things a little bit further.
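To make that 10% rule concrete, here is a small sketch combining it with the doubling rule from the previous snippet. The helper name is mine, and the 25 °C reference rates are again assumed illustrative values, not exact figures from the talk.

```python
import math

def cooling_target_c(lp_rate, ref_rate, ref_temp_c=25.0, doubling_c=6.5, fraction=0.10):
    """Sensor temperature at which dark current falls to `fraction` of the
    light-pollution electron rate, using the ~6.5 C doubling rule.
    Returns the reference temperature if no cooling is needed at all."""
    target = fraction * lp_rate
    if ref_rate <= target:
        return ref_temp_c                      # already below the target rate
    return ref_temp_c + doubling_c * math.log2(target / ref_rate)

lp = 2.6                                       # e-/px/s, the talk's "standard observer"
# Reference dark currents at 25 C (assumed, illustrative values):
for name, ref in [("KAF-8300-like CCD", 1.0), ("183-like CMOS", 0.2)]:
    print(f"{name:18s}: cool to about {cooling_target_c(lp, ref):5.1f} C")
```

With these assumed numbers the older CCD needs to come down to roughly 10-12 °C to meet the target, while the low-dark-current CMOS sensor needs no cooling at all, which is the same pattern the talk describes.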
But there's really no point in knocking the cooling all the way down to minus 15, down here, and getting a hundredth of an electron per pixel per second from thermal noise if you're still getting 0.5 from the light pollution, because what you're doing is reducing a total of maybe 0.5 + 0.1 = 0.6 down to 0.5 + 0.01 = 0.51; you haven't really got very much lower, and you're getting diminishing returns for your cooling in a big way. Holding back on the cooling can have positive effects, because it reduces your power draw, which anybody imaging from a battery will appreciate, and it reduces the likelihood of dew problems, so don't just turn the cooling up to the maximum value.

Okay, so now we're going to get on to the meat of the thing, which is random noise. You can see the random noise in the background of this image here; this is taken with a very short exposure, I think it's one second, of the Orion Nebula. Compare that to our 50-minute exposure taken by Henry Draper 140 years ago. The random noise, which you can particularly see in the fainter parts of the nebulosity, is destroying our faint detail, and we care about that because the faint detail is the thing we really want to see in astro images; it's what gives them their charm and their beauty, and the random noise is wiping it out for us.

So let's understand what random noise does. Here we've got a series of five grey circles of different sizes, and we're going to walk through a series of images where the noise level goes up and up. You can see that with a little bit of noise we can still see we've got circles there; add a bit more and the left-hand one is no longer very clearly a circle, and the second one is starting to lose its shape; now we're struggling to see the left-hand two circles, though we can still see the bigger circles fairly clearly; and as the noise level increases further and further, pretty much all of the detail starts to disappear. The key takeaway is that random noise isn't just destroying detail in our image: it destroys the small detail first and it destroys the faint detail first. So we want to keep the random noise down to a minimum.

What we have to look at now are the two types of random noise that we get in digital images from CCD or CMOS cameras. The first one is the read noise: inaccuracies in the circuitry of the camera mean that sometimes exactly the same amount of input light creates different values at the output. The second is the shot noise, and this is a harder one to explain, but essentially it's down to the fact that photons arrive one at a time and in a random way. It's just like if I gave everybody in the room ten coins (first of all I'd have had to bring a lot of coins in here, and I'd be a lot poorer) and you all tossed your ten coins: you wouldn't all get exactly five heads. Some of you would get five, there'd be plenty of sixes and sevens, maybe eights and fours as well, there might be some twos and nines and ones, somebody might have a ten, somebody might have a zero; it's all spread out in a random way when you do that sort of experiment. So what we're going to do is look at these two types of noise, and we're going to find out that how they balance against each other eventually tells us what sub-exposure
length we should use.

So, to look at read noise, imagine that we've got a pixel and it collects two thousand photons; a thousand of those make electrons, which might give us a signal of half a volt stored in that pixel, and when we read it out we get a pixel value of 4,000. However, the next-door pixel also collects 2,000 photons and a thousand electrons, but this one gives maybe 0.5 volts or a slightly different number, because the electronic circuitry has slight errors in it, and it reads out to a different value; the one next to that reads a slightly different value again, and so on. Read noise is just about the two stages here, turning the number of electrons into a voltage and then turning that into a number, having very slight errors in them.

There are different values for different sensors; this is an important point, it depends on the sensor you've got, and it's typically between one and ten electrons. Some CCD values: the KAF that we've already talked about has seven to nine electrons typically, and some slightly newer Sony sensors have five to seven electrons. Some CMOS values: the 183 sensor, which is in a very popular camera these days, has 1.5 to 3 electrons; the one that's called the 1600 camera by a lot of manufacturers, although that's not really the sensor name, has about 1.5 to 3.5; and this one has about 1.5 to 7. What's interesting for the CMOS sensors is that the size of the read noise depends on the gain you've picked: if you pick a low gain you get the read noise at the high end of the range, and if you pick a high gain you get the read noise at the low end of the range. For CCDs it depends more on the electronics that the manufacturer has built around the sensor, and the quality of that electronics, but because everything's built into a CMOS sensor, that's rather less of an issue for CMOS.

Measuring read noise: if you wanted to measure the read noise of your camera, you'd have to make quite a large number of measurements of the brightness of different frames with different exposures, and of the background with the camera in the dark, and there's quite a lot of calculation to do. So the best way, or perhaps the second-best way, is to go to your manufacturer's website, where you may find a graph like this has already been measured and published for you; this one came from a well-known manufacturer of CMOS cameras, and you can see that at minimum gain it's about three electrons, coming down to just over one and a half at the maximum gain for that camera. The very best way to get this is to use SharpCap, because SharpCap has all the calculations built in: you tell it to do what's called a sensor analysis, it asks you to point your camera at a bare piece of wall that's evenly illuminated, it tells you to cover the camera to make it dark at some point during the measurements and then uncover it again, and it does all the measurements and calculations and produces the graph for you. So if your manufacturer hasn't published a graph, or you just fancy doing it yourself, I can recommend using SharpCap.

So what we now know is that read noise depends on the camera, it comes in every frame that we take, and its value might depend on gain, as it does here; this is obviously a different sensor again, because its read noise goes from about 1.6 down to about one electron, so my guess is that's a 178 sensor, I can't remember, but I think it is.
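The talk doesn't spell out the measurement procedure, so as a hedged aside, here is one standard shortcut, which is not the SharpCap sensor-analysis method: the difference of two bias (or very short dark) frames cancels any fixed pattern, and its standard deviation divided by √2 estimates the per-frame read noise in ADU; converting to electrons needs the gain in e⁻/ADU, which a full analysis also measures. The frames and gain value below are synthetic assumptions purely for the demo.

```python
import numpy as np

def read_noise_adu(bias1, bias2):
    """Estimate read noise in ADU from two bias frames: subtracting them
    cancels the fixed pattern, leaving random read noise from both frames,
    so the std of the difference is sqrt(2) times the single-frame value."""
    diff = bias1.astype(np.float64) - bias2.astype(np.float64)
    return diff.std() / np.sqrt(2.0)

# Demo with synthetic frames: a fixed offset pattern plus 2.5 e- of Gaussian
# read noise, sampled at an assumed gain of 0.5 e-/ADU.
rng = np.random.default_rng(1)
gain_e_per_adu = 0.5
pattern = rng.normal(500.0, 5.0, size=(1000, 1000))       # fixed pattern, ADU

def fake_bias():
    return pattern + rng.normal(0.0, 2.5 / gain_e_per_adu, size=pattern.shape)

rn_adu = read_noise_adu(fake_bias(), fake_bias())
print(f"read noise = {rn_adu:.2f} ADU = {rn_adu * gain_e_per_adu:.2f} e-")
```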
Okay, so now let's talk about shot noise. This is the picture quite a lot of us probably have in our heads when we think about photons arriving at the pixels of an imaging sensor: if there's even illumination, we think about them arriving evenly, not quite in rows, but at a constant rate, and we think about the pixels all basically filling up to the same level. Actually what happens is more like raindrops coming down in a rain shower: they're all over the place, they arrive at random, and just as four different buckets put out in the rain collect different numbers of raindrops in five minutes, the different pixels in the sensor collect different numbers of photons. This is where shot noise comes from.

I've actually got a little animation here of photons arriving in a sensor where we've just got ten pixels. We're going to have five hundred photons, and we're going to see them arrive; they're just arriving at random, like they would in real life, and we can see that we don't actually get the same number, we don't get fifty arriving in every single pixel. There's quite a range at the end, between maybe just over forty down here and maybe fifty-seven up there, so that's roughly fifteen percent below to fifteen percent above the average value of fifty, due to the shot noise. Now if we look at the same sort of thing with fewer photons arriving, just a hundred photons instead of 500, so basically a shorter exposure by a factor of five, we see the same pattern again, a big variation between the pixels that get the most and the ones that get the least, but here the variation is even bigger: five is the lowest, so we're 50% below the average level of ten, and thirteen is the highest, so 30% above. This is indicative of what happens with shot noise: it's a bigger proportion of the signal for smaller signals, when fewer photons are collected.

To see this in real images, we can look at a sequence of M42 images; I had fun imaging M42 a couple of weeks ago to get the photos for this talk. This one is a 1.75-second exposure, which means that in these areas of fainter nebulosity we're collecting about 17 electrons, and in the brighter core here about 75. We're now going to go up through different exposures. With a seven-and-a-half-second exposure we're collecting four times as many electrons, the noise is much less noticeable, and you can see the fine detail starting to appear in the wings of the nebula. A 30-second exposure, four times as much again, we're now up to between 300 and 1,200 electrons being collected, and the noise drops away. At 60 seconds it's starting to get very hard to see noise in that image, particularly on the projector. Then 120 seconds and 240 seconds; in those last two switches to longer exposures it was very difficult to see the amount of noise change, but in the earlier ones we could definitely see the noise drop away.
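That ten-pixel animation is easy to reproduce numerically. Here is a minimal sketch (my own illustration, not the talk's code) using Poisson-distributed counts; it also shows the 1/√N behaviour that the next part of the talk describes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ten pixels under perfectly even illumination, as in the talk's animation.
# Photons still arrive at random (Poisson), so an equal *average* signal
# does not mean equal counts in each pixel.
for mean_photons in (50, 10):            # ~500 and ~100 photons over 10 pixels
    counts = rng.poisson(mean_photons, size=10)
    spread = 100.0 * counts.std() / counts.mean()
    print(f"mean {mean_photons:3d}: counts {counts}  relative scatter = {spread:.0f}%")
    # Theory: shot noise is sqrt(mean), so fractional noise is 1/sqrt(mean).
    print(f"          expected fractional shot noise = {100.0 / np.sqrt(mean_photons):.0f}%")
```

With a mean of 50 the expected scatter is about 14%, and with a mean of 10 it is about 32%, matching the rough spreads quoted for the two animations.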
What I want to do now is show a graph of how the shot noise varies with brightness, and it turns out that it rises relatively quickly to begin with and then, as the brightness increases, it tails off; it's actually a square-root graph, for anybody who wants to look into the mathematics of it. But this is kind of misleading, because it tends to indicate that the noise goes up as we get to brighter levels, when we've just seen that noise looks less when we have brighter images. The reason is that here we're collecting 5,000 photons, so the actual level of the image brightness is somewhere way off the top of the screen, whereas down here, if we collect a hundred photons, the image brightness is here. So the noise as a fraction of image brightness actually looks more like this graph: when we collect a small number of electrons, or a small number of photons, the percentage noise is twenty, thirty, forty percent, but as we go up to collecting maybe a thousand electrons you can see it's dropped to less than five percent, and then it drops away more and more slowly.

We've learned a couple of things from this. One is that the noise looks worst in the very darkest regions of the image, which is a shame for us astrophotographers, because, as we've already said, the very darkest regions of the image are where the faint detail is that we really want to pull out, so that's a downside; and in fact the noise is much less prevalent in the brighter regions of the image. The other thing to remember is what controls how dark the very darkest regions of the image are, and the answer is light pollution. Remember we came out with that figure of 2.6 electrons per second for our standard observer: that's the darkest our pixels can be, they can't be any darker than that, because the light pollution is there. If we plug that in, we get a graph of how strong the background noise is in those darkest regions of the image against our sub-exposure length. If we took a hundred-second sub-exposure we'd get about five or six percent noise; with a six-hundred-second sub-exposure we're down to 2.3 percent; if we go over to, what's this, about twenty minutes, we're down to below two percent; and if all our imaging were done in one whole one-hour exposure we'd be down to one percent. This is the graph that people are imagining in their heads when they say on online forums that you need to take longer sub-exposures, because they're thinking, well, you get less noise the further out you go on here. And if we went past this point it's still going down; it might be going down very slowly, we're certainly getting into the law of diminishing returns, but it's definitely going down. However, there's more to it than that, and the thing that's more to it is stacking.

But before we can move on to stacking, we need to understand how noise from different sources adds together. Imagine that you've got a camera with two sources of noise; we might have a read noise of three electrons and a shot noise of four electrons. If we add them together, are we going to get seven electrons of noise? It might seem like the right answer, but it's actually not. The reason is that sometimes the read noise will be in the positive direction and the shot noise will be in the positive direction, and they'll add up, and sometimes they'll both be in the negative direction, but equally often one will be in one direction and one in the other, so they at least partly cancel out. The actual answer, if you add three electrons of noise to four electrons of noise, is that you get five.
How do you make three plus four equal five, I hear you ask? Well, it turns out that to do that you walk along a diagonal. You take your three electrons of noise and put it along one side of a rectangle, you take your four electrons of noise and put it along the other side, and the answer is the length of the diagonal of the rectangle. If you remember back to your mathematics at school, you'll remember this comes down to Pythagoras: you square the length of one side, add the square of the length of the other side, that gives you the square of the length of the diagonal, and you take the square root to get the total amount of noise. I'm afraid there are going to be a couple more bits of mathematics in this talk; I'm really sorry.

Now, this is actually more interesting when one noise level is bigger than the other by a big margin. If it turns out you have 12 and 5 electrons of noise to add together, the total is only 13, which really isn't very much bigger than the 12; or if you've got 24 and 7, the total is 25, which is only a tiny bit bigger than the 24. So what we learn is that when we're adding noise this way, by following the diagonal, if one noise component is much bigger than the other, we can effectively ignore the smaller component entirely, and that will come in really handy later.

So, we had our graph that showed how the shot noise varies with exposure length. I've used that calculation for adding noise to add on the read noise for two different cameras, one a sort of CMOS-y camera with two and a half electrons of read noise, and one a sort of CCD camera with seven electrons of read noise, and I've plotted the three curves here, and unfortunately it's a bit of a mess: you can see that down this end they're almost entirely the same, and something looks like it's going on over there, but it's almost impossible to see. So we have to change how we scale this axis so we can see more of the left-hand side. When we do that, so now we've got one second here, then ten-second exposures, a hundred and a thousand, we see that down this end the curves are banging on top of each other: it doesn't matter whether there's no read noise, 2.5 electrons or seven electrons, you get the same total noise in the image. Down this end they separate out quite significantly, because the effects of the read noise start to be noticeable. This is because of the way that we added the noise by diagonals: if we have a very high shot noise, as we do down here because we're collecting lots of electrons, then the read noise, whether it's 2.5 or 7, is tiny by comparison and we get to ignore it, whereas down at this end the shot noise is very small, so the read noise is the thing that really affects the total noise.
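The "diagonal" rule is simply addition in quadrature; a tiny sketch (mine, not the talk's) reproduces the numbers above and shows why a dominant noise source lets you ignore the smaller one.

```python
import math

def add_noise(*sigmas):
    """Independent noise sources add in quadrature (the 'diagonal' rule):
    total = sqrt(a^2 + b^2 + ...)."""
    return math.sqrt(sum(s * s for s in sigmas))

print(add_noise(3, 4))     # 5.0  -> the talk's "3 + 4 = 5"
print(add_noise(12, 5))    # 13.0 -> barely more than 12
print(add_noise(24, 7))    # 25.0 -> barely more than 24

# Once shot noise dominates, the read noise barely matters:
print(add_noise(2.5, 50), add_noise(7, 50))   # both come out close to 50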
Okay, so there's one more thing we need to look at before we start getting to some results, which is how we stack frames and what happens to the noise then. So we've got two frames, and I have to say here that they have to be two different frames: you occasionally see people asking online what happens if they stack the same frame over and over again, and whether that will make the image better. It won't; you can try if you want, but it still won't work. So you've got two different frames, and maybe the signal strength, the brightness, in each is a hundred on average and the noise is 10. Now, if we add those two frames together we get a brightness of 200, that's easy, the signal adds normally. If we add the noises, we get 10 in this direction and 10 in that direction, and the diagonal is 14. So what's happened is we've doubled the signal strength but only increased the noise to 14, we haven't quite doubled it, so we get an improvement in the signal-to-noise ratio of the stack: the stack gets less noisy, it looks better. If we add a third frame, we get a total noise of 17.3 (I've just drawn a right angle round like that); if we add a fourth, we get a total noise of 20, and so on. It just makes a spiral; actually, if you keep drawing, it'd be quite pretty, but we haven't got time for that. So instead I've drawn up a chart of the number of frames that you've stacked, the increase in the noise due to the stacking, and the increase in the signal, and you can see the signal increases a lot more: with a hundred frames the noise is 10 times bigger but the signal is a hundred times bigger, so we've gained a factor of 10 on the signal-to-noise, and if we brighten all the images up to the same level, the stacked image will look 10 times better in terms of the noise level in it.

Okay, so now we get to the big question. We've got a particular situation: you're outside in your garden, you have a fixed set of equipment, you can't go changing that tonight, you've got your scope, your camera, your filters, and you're in your garden so you can't change the observing location quite so easily; and we've got that fixed amount of time before we have to go to bed, or before the cloud comes in, or before whatever it is sets below next door's tree that you can't persuade them to cut down. So we've got, say, an hour to image, and we're going to stack all the images we capture in that hour. What camera settings should we use to get the best possible final stack, while avoiding any problems that we might cause by excessively long sub-exposures?

I went out and took some real images of four-minute stacks to try to show how this might work in real life, before we get on to the theoretical results. This is a single four-minute image of the Orion Nebula, and I want you to watch the noise levels and see where you can spot the noise starting to kick in as we go to shorter subs. This is 60-second subs, and I've got four of them, and I'm really not seeing any noise there. This is 15-second subs, and I've got 16 of them, still adding up to a total of four minutes, and I'm still struggling to see any noise; it might be a bit easier to see on the laptop screen than on the projector, perhaps. Seven-and-a-half-second subs: I'm still not seeing an enormous amount of noise in that. Five-second sub-exposures: okay, we've got the noise at last (excuse me one second, folks, I've also nearly got no voice). Let's just move on to complete the sequence: two-second subs, we're getting very long now, we've got a hundred and twenty of those, and we're getting a great deal of noise; and one-second subs, very similar. So let's look back on that, contrasting the single frames and the stacked subs: for single frames we saw the noise at sixty-second exposures and shorter; for stacked subs we had to get down to about seven and a half seconds before we started to see the noise. Compare a 15-second single frame on the left-hand side here, where we can see the noise in the nebulosity, with a stack of fifteen-second frames totalling four minutes, where we can't see anything at all. The stacking is clearly doing its job here, adding multiple frames up and getting a much clearer picture.
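A quick sketch of the arithmetic behind that chart (again my own illustration): the signal grows with the number of frames n, while the stacked noise only grows as √n, so the signal-to-noise ratio improves by a factor of √n.

```python
import math

signal_per_frame = 100.0    # average brightness in the talk's example
noise_per_frame = 10.0

for n in (1, 2, 3, 4, 100):
    signal = n * signal_per_frame              # signal adds normally
    noise = math.sqrt(n) * noise_per_frame     # noise adds along the diagonal
    gain = (signal / noise) / (signal_per_frame / noise_per_frame)
    print(f"{n:3d} frames: signal {signal:7.0f}, noise {noise:5.1f}, SNR improvement x{gain:.2f}")
```

Two frames give a stacked noise of about 14, three give 17.3, four give 20, and a hundred give 100 against a signal of 10,000, exactly the factor-of-ten improvement quoted in the talk.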
Okay, so I'm really sorry, but we really need some equations now. The read noise of the camera: we've learnt about that, and we know it's a number of electrons that might be between 1 and 10. The total imaging time: maybe we've got an hour, maybe two hours, maybe four. The sub-exposure time: that's related very simply to the number of subs we take, so we might take 10 subs of 6 minutes each in an hour, or 60 subs of 1 minute each. And finally we've got the light pollution figure, which for our standard observer is 2.6. I'm going to try to ignore a lot of these equations; if you're keen on mathematics you can look into how they work and take photos of them. The real key equation here is this one, the stack noise. The stack noise comes in two parts. The first part is the read noise times the number of sub-exposures we take: we pay the price of that read noise for every single sub in the stack, so if we take ten in the stack we square the read noise and multiply by ten, and if we take a hundred we square it and multiply by a hundred. The second component is the total time multiplied by the light pollution rate, which is the shot noise for the whole stack. Now remember, we're adding these inside a square root, which is like taking the diagonal: if one turns out to be small compared to the other, we can ignore it. Just for comparison, if we had our perfect camera we'd still have shot noise, that's a physical fact of the universe, so we'd keep that term and the read noise term would completely disappear; and if we took one long sub then n would be one, so we'd have one lot of read noise plus the shot noise.

Okay, I'm going to take the equations away for now and plot a graph instead, because I think that's probably more accessible. This is how the total noise in the stack looks for our two cameras, again for our standard observer: the 2.5-electron read noise CMOS-y camera and the 7-electron read noise CCD-like camera. What we notice here, first of all (whoops, we've gone forward to the wrong frame), is that these two curves come to a flat level on the right-hand side: they are not drifting downwards like the previous graph we looked at for single exposures. There is a maximum quality that we can get in our limited time. This is a simulation of one hour's observing, and the maximum quality we can get is with a single one-hour sub; of course you might lose that to an aeroplane or something like that, but that is the very best we can do. What we also notice, to be honest, is that you only need to get to about this point with the CMOS camera before you're so close to that best possible quality that there's really no point in going further: why would you move on from, say, the 30-second sub you've got here all the way out to this point, when you're not getting any better image quality in your final stack? With the higher read noise of the CCD we probably need to get to maybe there, I would say, so that's about 240 seconds, because the higher read noise makes the curve flatten out more slowly.
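The stack-noise relation just described can be written as stack noise = √(n·R² + T·P), where n subs each contribute read noise R, T is the total imaging time and P is the sky (light pollution) electron rate. Here is a small sketch of it in Python (my own, using the talk's example numbers) that reproduces the behaviour of those curves:

```python
import math

def stack_noise(total_time_s, sub_length_s, read_noise_e, lp_rate_e_per_s):
    """Noise (electrons) in the darkest parts of a stack: one dose of read
    noise per sub plus the sky-limited shot noise for the whole stack,
    combined in quadrature."""
    n_subs = total_time_s / sub_length_s
    return math.sqrt(n_subs * read_noise_e ** 2 + total_time_s * lp_rate_e_per_s)

T, P = 3600.0, 2.6                      # one hour, the talk's standard observer
best = math.sqrt(T * P)                 # perfect camera: shot noise only
for R, label in [(2.5, "CMOS"), (7.0, "CCD ")]:
    for sub in (10, 30, 60, 240, 3600):
        extra = 100.0 * (stack_noise(T, sub, R, P) / best - 1.0)
        print(f"{label} R={R}: {sub:5.0f} s subs -> {extra:5.1f}% above the best possible")
```

With these numbers the 2.5 e⁻ camera is within about 4% of the best possible stack at 30-second subs, and the 7 e⁻ camera reaches a similar point at around 240 seconds, which is exactly the pattern the curves show.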
Well, we've looked at our standard light pollution observer of 2.6; what happens if we try some different observers? If you have an excellent dark-sky site, if you're lucky enough for that, you can see the curve flattens out much more slowly, so you need to take longer exposures where the light pollution is really low; if you've got a pretty grotty sky, you can get away with shorter exposures. Actually, this is kind of misleading, because what we're showing here is how much noise there is compared with the best you can do; if you display the absolute amount of noise, you can see that our dark-sky observer gets less noise, because there's less sky background, so he'll get better astrophotos, but on that plot it's harder to see how quickly the lines flatten out. What effect do filters have? Because obviously we don't all take monochrome images. Filters cut down the amount of light getting to the sensor, so that's very much like reducing the light pollution: a monochrome sensor needs the shortest exposures, colour cuts the light down by a factor of three or so, and narrowband cuts it down by far more. This is why narrowband imagers know right from the start that they need to take longer exposures than colour or monochrome imagers; it's all in this graph.

So, the last set of equations I'm going to show, and the reason is that there's this particularly important one: you can work out, from the previous equations, the optimum sub-exposure length. The inputs are the read noise, which we know from the camera, the light pollution rate, and E, which is how much extra noise we're willing to tolerate above the very best possible level. Now, I've put this C here, and the reason is that the equation for C in terms of E is horrible, so it's much easier to look it up in a chart or a list: if we're prepared to accept 1% extra noise, our C is 50; if we're prepared to take 5% extra noise, our C is 10; and if we're prepared to take 10% extra noise, C comes down to 5. The important thing is that the length of sub you should take is proportional to the square of the read noise divided by the light pollution rate, multiplied by this C, and C depends on how much extra noise you're prepared to accept in exchange for shorter exposures. I'd recommend 5% as perfectly reasonable, which gives you a C of 10, so you'd just work out your sub-exposure length as C times R squared over P, i.e. 10 times R squared upon P.

What that formula lets us do is take the earlier chart for light pollution and convert it into this chart of recommended exposure times. We've got two sets of exposure times here: the black ones are for our CMOS-y camera with 2.5 electrons of read noise, and the red ones are for our CCD-ish camera with 7 electrons of read noise. Our standard observer, in the box in the middle, would want to use at least 24-second exposures with the CMOS camera and 180-second exposures with the higher-noise CCD camera. You can see it makes a really big difference to the length of sub-exposure you have to take, and the reason is that the recommended exposure is proportional to the read noise squared: going from 2.5 to 7 nearly triples the read noise, so square that and you're nearly multiplying the exposure by 9. If you're using filters, you have to multiply these numbers up: if you're using RGB filters or a colour camera, triple them; if you're using narrowband, multiply them by between 25 and 100, which means narrowband imagers, particularly under dark skies, really do need to take long subs.
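As a worked example of that t = C·R²/P rule (the helper name is mine; the numbers are the talk's):

```python
def recommended_sub_s(read_noise_e, lp_rate_e_per_s, c=10):
    """Optimum sub length from the talk's formula t = C * R^2 / P.
    C = 50, 10 or 5 corresponds to accepting 1%, 5% or 10% extra noise over
    the best possible stack; the talk recommends C = 10 (5% extra)."""
    return c * read_noise_e ** 2 / lp_rate_e_per_s

P = 2.6                                    # standard suburban observer, mono at f/6
print(f"CMOS, R = 2.5 e-: {recommended_sub_s(2.5, P):6.1f} s")   # ~24 s
print(f"CCD,  R = 7.0 e-: {recommended_sub_s(7.0, P):6.1f} s")   # ~188 s (the talk rounds to 180)
# Filters scale P down, so subs scale up: ~3x for colour/RGB, 25-100x for narrowband.
print(f"CMOS + narrowband (P/25): {recommended_sub_s(2.5, P / 25):6.1f} s")
```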
But if you're in a suburban setting and you're using a one-shot colour camera, you can get away with subs in the one-minute to one-and-a-half-minute region. The question people might ask is: what happens if I want to go beyond that point, surely longer subs do better? Well, if this is the noise level at the recommended sub-exposure, then this is how it changes if you double it, if you multiply it by five, if you multiply the exposure length by ten: you can see you hardly gain anything at all, so there really is no point. If you go the other way, though, it does count against you.

I can see I'm now running out of time, so I'm going to skip past the gain material, I'm sorry, and just say that if you want to do it the easy way, all of the calculations, including the gain one you haven't seen, are built into SharpCap Pro: in the histogram you can see these bars at the top, and as long as your histogram peak is in the green, you're good; if you want more detail, you hit the brain button and this panel comes up, and it's all in the documentation.

So, to conclude: we don't need to take our DSO imaging to extremes. We can balance the cooling against the sky brightness, we can balance the sub length against the sky brightness, and we can adjust the gain to keep the sub length within our hardware's capabilities. And before you try to use longer subs, or before you upgrade your guiding or your mount, check whether longer subs will actually help you, because it may be that you're being sold a myth. There are some resources here that might be useful. Thank you very much for listening. [Applause]
Info
Channel: AstroFarsography
Views: 130,324
Keywords: How Long Should My Subs Be?, How Long Should My Exposures Be?, How Cold Should I Cool?, Camera Cooling, Deep Sky CMOS Astrophotography, Astrophotography, CMOS Astrophotography, Dr Robin Glover, Sharpcap, Sharpcap Pro, Astronomy, Astrofarsography, Deep Sky Astrophotography With CMOS Cameras
Id: 3RH93UvP358
Length: 53min 20sec (3200 seconds)
Published: Tue Mar 12 2019