AutoStakkert with Emil Kraaikamp - REPLAY

Captions
Good morning ladies and gentlemen, welcome to another Woodland Hills Camera & Telescope — telescopes.net — live stream. This one is going to be a very, very long day, because we have to wait for Christopher Go to wake up, all the way over in the Philippines. But for now let's only go about halfway across the planet from me, to Belgium I believe, and I have Emil here, the guy that wrote AutoStakkert. I know a lot of you out there know what AutoStakkert is and use this program religiously, but there are some buttons and things that some of us don't know the purpose of. So I thought, instead of it coming from me, why don't I get the guy that actually wrote the software to tell you all about it, give you more information, and cover all the other cool features that you probably didn't even know about. So Emil, tell everybody at home about yourself and all about AutoStakkert.

Okay. My name is Emil Kraaikamp, and I live in Belgium. For the last eight years or so I've been developing AutoStakkert — it's more like ten years ago that I started it, basically to help make my life easier, because I was taking a ton of images — I'll try not to swear too much — I was taking a lot of images and I just didn't have the time to process everything. So that's when I started AutoStakkert, back in 2009 I think. I started imaging myself in 2006, so that's some time ago. The last couple of years I've been imaging less and less, unfortunately, partly due to being busy at work, and also the planets not being in a very nice position at the moment for us northerners. I think that's pretty much the introduction I wanted to give.

So let's get into all of this. Just as ground rules for you guys watching at home: hold your questions towards the end. We will have an extended Q&A — we've allotted almost two hours for all of this, just in case we go over time; we usually go over time anyway, but that's not the point. So remember, keep your questions towards the end. If there is something very pressing, feel free to just fire it out; I am monitoring the chat constantly. So Emil, take it away.

Let's start the screen share. Let me see if I can find the right button there. Okay, I think that should be okay — do you see my screen?

All right, we can see your screen, so let's make it a little bit bigger for those guys at home.

Yeah. I was a bit scared when you said two hours, but we'll see how far we get.

I don't think we're going to go two hours. I really hope not, because I do have to have a break at some point.

Exactly — I'm exhausted after half an hour of speaking, so then I will have a short break, probably. We'll see how it goes.

Okay, so first of all, I didn't prepare anything in the sense of a presentation. I will go through the program as well as I can, and I wrote down a few things for myself on a separate monitor here so I know where I am, what I have covered, and what I should still cover. I also wrote down a bunch of questions that I saw online that people wanted me to explain, so I will do that.

Basically, there are a few versions of AutoStakkert out there. There's only one version that I really maintain at the moment, and that's the 64-bit version, which is AutoStakkert!3. AutoStakkert!2 is no longer being developed — I know that some people still use it, and it shares a lot of the same features,
but bug reports I don't really handle anymore for AutoStakkert!2. This year I've spent quite a bit of time further developing AutoStakkert, so I'm calling it AutoStakkert!4 — no big surprise there — and I will mostly use that version during this presentation to show a little bit of what is new. There's not that much new, though; it looks very similar to AutoStakkert!3, so I hope you don't get lost. I'm sure that will be fine. So I will open up AutoStakkert!4. Like I said, it looks very similar to AutoStakkert!3: you have a main window, which lets you do all the different processing steps, and you have the window that can show the frames that are in the video file you are going to process.

A tiny introduction to what AutoStakkert actually is. I'm sure most of you know, but for those that don't: AutoStakkert is lucky imaging software. That is to say, it analyzes video files — or a lot of images recorded closely in time — and picks out those frames that are better than other frames, or those parts of frames that are better than other parts of frames, and puts them together in one image that is often of higher quality than any single frame in the video file. The idea is to make pretty images, basically — of planets, because that's what AutoStakkert is focused on, but also of lunar recordings or solar recordings, and even a tiny bit of deep sky, which I will try to show later on.

Okay, so first of all: how do you use the software, and how can you open files in it? Very simple — there's a big button called Open, and you can click on that and browse to the files you want to open. That's not the only way: you can also drag and drop files from an Explorer window onto the software, and it will open them. At the bottom right of the screen you can see how many files are currently open. I dragged one file on there; now I'll drag three files on there, and you can see "1 of 3". It will only show the first file, but once you start working in the software and press the Stack button, it will process all three files — so you can do batch processing.

Okay, let's go back to just one recording that I will use throughout the first part of the talk, picked pretty much at random: a very pretty recording of Jupiter. I cheated a bit here — this was not imaged with my own telescope. It was imaged with the Pic du Midi telescope in the southern part of France, which is a one-meter telescope. In 2017 I was lucky enough to visit that place and record the planets for a couple of nights in excellent seeing conditions, and with a one-meter telescope that's just a lot of fun. I will not only use these recordings, but it's just a lot of fun to work with high-quality data. So I'll start with this.

The first thing you have to do — again, this is basic for people that don't know it, but I'll try to go over everything — is select what type of recording it is that you gave to AutoStakkert, and there are two options: either a surface recording or a planetary recording. This has to do with how the image is stabilized in AutoStakkert. Planetary recordings are pretty much always a planet surrounded by black space, and there's a center-of-gravity measurement — the COG that you see here — that is used to center the planet throughout the recording.
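None of AutoStakkert's source code is shown in the stream, but the center-of-gravity idea is easy to sketch. Here is a minimal numpy version, assuming grayscale frames and a fixed brightness threshold for "not black space" — both assumptions are mine, not Emil's implementation:

```python
import numpy as np

def center_planet(frame, threshold=30):
    """Center a planetary frame using a center-of-gravity (COG) measurement.

    frame: 2-D numpy array (grayscale). Pixels brighter than `threshold`
    are assumed to belong to the planet; everything darker is treated as
    the black space around it.
    """
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return frame  # no planet found; leave the frame alone
    # Intensity-weighted centroid of the above-threshold pixels
    w = frame[ys, xs].astype(np.float64)
    cy, cx = np.average(ys, weights=w), np.average(xs, weights=w)
    # Integer shift that moves the centroid to the middle of the frame.
    # np.roll wraps pixels around the border, which is harmless as long
    # as the background really is black space.
    dy = int(round(frame.shape[0] / 2 - cy))
    dx = int(round(frame.shape[1] / 2 - cx))
    return np.roll(frame, (dy, dx), axis=(0, 1))
```

The dynamic background extraction Emil describes next effectively replaces the fixed `threshold` here with one derived from the peak of the frame's histogram.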
So if the recording was not aligned to begin with — these days most planetary recordings are, because FireCapture and other capture software can already do that — but if that's not the case, it can do the alignment for you, and it works very reliably on planets. Surface recordings work for solar recordings and lunar recordings, and also for deep-sky recordings; I will show you that later.

A simple option is the dynamic background extraction checkbox that you find here — if you hover over most options in AutoStakkert, you get a little description of what they do. This has to do with how AutoStakkert discriminates the planet in the recording from the background. If it's automatic — if it's checked — then it will look at the peak histogram value, assume that that is the black space surrounding the planet, and any value above that will be used for the center-of-gravity measurement. If you have very tricky recordings, you can play with a manual threshold setting to see what works best; for example, if the sky background is very bright during daylight recordings, you might want to switch this off and use a manual threshold. For now, though, I will leave dynamic background extraction on, because it will work 99.9 percent of the time — for nighttime recordings it will work always.

Then immediately we get to what is probably the most important setting in AutoStakkert, and one that is quite often overlooked. Let me quickly show a previous version of AutoStakkert next to AutoStakkert!4, because I made a few changes here. AutoStakkert!3 had the Laplacian filter as the default quality estimator: a way to determine which frame, or which part of each frame, is sharp and which is not so sharp. The Laplacian operator is really good for this — it's used in different fields, also scientifically, and it's proven to be quite a reliable quality estimator. It's not the only one out there, and it's not always the best, but it's super fast and very close to being the best, so that's why I'm using it. AutoStakkert!4 still uses it; you just cannot select it anymore — I made it the default, and the other quality estimator is gone, because it's simply not as good as the Laplacian, so I removed it. This is something I try to do more often, by the way: when I'm working on the software — and in the last couple of months I have had some time to work on it — I prefer to take options away instead of adding more. I don't take an option away if I know there are recordings that really need it, but in this case I was pretty certain that you don't need the non-Laplacian quality estimator that was used when you unchecked that checkbox.

Something like that is quite important, that you've taken it out, because it removes a layer of complication. I think the quality estimator is the part that trips everybody up the most, and I'm glad you've addressed that and removed the front end, for lack of a better description, so the end user is not as overwhelmed by something they don't necessarily understand, even though it's still doing it in the background.

Yeah, exactly. I replaced it with an automatic setting, which takes care of setting the noise robust value
automatically for you, as well as some other settings behind the scenes. I will close AutoStakkert!3 now because it's a bit in the way. It's difficult to automate lots of stuff — there are always border cases that don't work quite well — so you have to be very careful about when you remove settings and when you need to keep them. Don't worry, there are still plenty of settings to play with, and to try to break the program with.

For sure, for sure. You are good at that, I just found out.

So right now there's the new checkbox there that says Automatic, which tries to set the noise robust value for you. Let me explain what that noise robust value actually does. You can tune it from two to eight — a little bit arbitrary — and depending on the type of recordings you feed the software, a higher or lower value gives you better results. A lower value is good for recordings that are very high quality, where the image data going in has very low noise and the seeing was at least reasonably good; it will then focus on small features in each frame to determine the quality. What it actually does is this: a noise robust value of two will bin the image two by two before it calculates the Laplacian; a value of three will bin it three by three; a value of four, four by four. Basically, you're selecting the scale of the features to look at. Some data is always worse than other data: blue channel data is pretty much always worse than green and red channel data, and infrared is quite often the best, because it's less affected by seeing. That means blue channel data can often use a higher noise robust value than green and red, just because there aren't that many very fine features in there. It's not always the case — sometimes the seeing is really good, and then in principle you have better resolution possible in the blue channel than in green and red — but very often it is.

The automatic setting does a little trick. It will look at the frame, determine how much noise is in it, and based on that make an estimate of which noise robust value will most likely work best for your kind of data. I've already seen a few examples on my own data where it doesn't give me the very best results, and I'm sure that the more data you feed it, the more mistakes will show up. This is something I'm still working on — still trying to find something that is robust and works most of the time. But what it does is set the noise robust value, and it also sets a value under the advanced experimental features: it plays with the pre-processing. It applies a small Gaussian blur to the image, basically to get rid of the very fine noise, which allows the Laplacian operator to work better. It does not always use it — that depends on how much noise it finds in the image — but in this case it thinks that by pre-processing I will get better results than without. So I set it to Automatic, and I keep it on Automatic.
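Putting the pieces together — the binning, the optional Gaussian pre-blur, and the Laplacian — a frame-quality estimate might look like the sketch below. This is a rough illustration using scipy, not Emil's implementation; the blur strength and the sum-of-squares scoring are my assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def frame_quality(frame, noise_robust=4, pre_blur=0.0):
    """Laplacian-based sharpness estimate, roughly as described in the talk.

    noise_robust: bin the image n x n before taking the Laplacian
                  (2 = fine features / clean data, 8 = coarse / noisy data).
    pre_blur:     optional small Gaussian blur (sigma, in pixels) that the
                  automatic mode may apply to suppress per-pixel noise.
    """
    img = frame.astype(np.float64)
    if pre_blur > 0:
        img = gaussian_filter(img, sigma=pre_blur)
    n = noise_robust
    h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
    binned = img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    # Sum of squared Laplacian responses: large for sharp, edge-rich frames
    return np.sum(laplace(binned) ** 2)
```

With `noise_robust=2` the score is driven by fine detail, which suits clean red or infrared data; `noise_robust=8` looks only at coarse structure, which suits noisy blue-channel recordings.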
There are two more options under the quality estimator, and this is probably a case where I might get rid of one, but I will not, because I know there are a few examples where global is preferable to local. Local means that it will calculate the quality for each alignment point that we place in the image — I haven't covered alignment points yet, but here are a few alignment points that I put on Jupiter, placed randomly; do not copy this. The local quality estimator means that for this alignment point here it will calculate the quality throughout all the frames, and it will do the same for this one, and this one, and this one — every alignment point gets its own set of frames that are best for that particular alignment point. So it can happen that, with just a couple of alignment points and a setting to stack the best 50% of the frames, in the end data from all the frames in the recording gets used, just because it so happens that for this alignment point frame number one was good, while for that alignment point frame number two was good — which wasn't good for this one, et cetera. If you set it to global, it will take the quality number assigned to the entire frame for every alignment point, so every alignment point will use the exact same set of frames to stack. Generally you don't want that, because there's a lot of information to gain by letting every alignment point decide for itself which frames are better. But when transparency is very poor, for example, you might get lots of artifacts if you leave it on local, because some frames are darker than others for a particular alignment point, and then you can get weird seeing artifacts between the images. So: local is best 99% of the time. There are a few cases where you want the global quality estimator, but pretty much stick to local.

The next step is to analyze the data. Analyzing will let AutoStakkert go through all the frames and sort them, and show us which frame is the best one — this one — and which is the worst one, at the end of the list. So it sorted the frames for us. You can clearly see that the best frame has the sharpest features and shows the most detail, and the worst one is really quite blurry. The quality graph shows this. Let's go all the way to the left: this is the very best frame and it has a quality of 100%, but that is scaling that I perform — the raw quality value behind it is forty thousand or whatever; that raw value doesn't matter. The 100% stands for the best frame, and 0% for the worst. The quality graph is always scaled between the best frame and the worst frame, meaning that if there is a weird outlier frame with some sharp artifact in it, the quality graph will be skewed a lot, with the line going something like this — if you follow my cursor — and most of the frames sitting below the average line. What I'm trying to say is that the quality graph gives you an indication, but you always have to be wary of what the frames actually look like. So I suggest checking that the first frame is indeed the best frame and that the last frame is indeed the worst frame; if that's the case, then you can use the quality graph to decide how many frames you want to stack.
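Two things from this part can be pinned down in a short sketch: the 0–100% scaling of the quality graph, and the difference between local and global frame selection described above. The array layout and function names here are mine, not AutoStakkert's:

```python
import numpy as np

def normalize_quality(q):
    """Scale raw quality values so the best frame reads 100% and the worst
    0% -- exactly how the quality graph is scaled, which is why one sharp
    outlier frame skews the whole curve."""
    q = np.asarray(q, dtype=np.float64)
    return 100.0 * (q - q.min()) / (q.max() - q.min())

def select_frames(quality, frac=0.5, mode="local"):
    """Pick the frames to stack.

    quality: shape (n_frames,) for mode="global", or
             shape (n_frames, n_points), one column per alignment point,
             for mode="local".
    Local lets every alignment point choose its own best frames, so a
    50% stack may end up touching data from nearly every frame.
    """
    quality = np.asarray(quality)
    k = max(1, int(frac * quality.shape[0]))
    if mode == "global":
        return np.argsort(quality)[::-1][:k]        # one set for all points
    # mode == "local": a (k, n_points) index array, one column per point
    return np.argsort(quality, axis=0)[::-1][:k, :]
```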
There are a few different ways to do that, and it's a bit tricky — I don't know the best way. Some people prefer to stack fewer frames, and others prefer to stack at least half of the frames of a planetary recording. A quite rough indication is the point where the quality of the frames drops below the 50% line. At this point — this is the cutoff point — the best 39% of the frames have a quality of 50% or higher, and the worst 61% have a quality of 50% or lower. I usually stick roughly to that 50% line. In general, though, for planets — Jupiter, Saturn — I know that I at least want 50 percent of the frames stacked. For surface recordings, and especially high-quality surface recordings like lunar or solar images with a high signal-to-noise ratio, you will see a very sharp peak at the beginning, and the cutoff point is much closer to the start — there you're more likely to stack something like eleven percent of the frames.

One quick question here: how does the software determine the quality of a frame? What exactly is it looking at?

Okay, so that's again that same quality estimator. It's binning the images four by four, in this case doing the pre-processing — that slight Gaussian blurring — and then calculating the Laplacian, and the Laplacian is searching for edges in the image, more or less.

So is it going to be contrast detection or phase detection?

It's pretty much contrast detection, yes.

Okay. One thing I hadn't shown — and it's also an undocumented feature — is that if you hold Ctrl and click in the quality graph, it will set that frame percentage to stack for you automatically (see the sketch below).

You're giving away all the secrets!

That was the plan, right? Okay, so for now I will go with my own instinct and say I want to stack 50% of the frames. The difference between stacking the best fifty percent and the best thirty-seven percent isn't huge — it's a small difference. Your post-processing will have a much bigger effect on the outcome than stacking a few frames more or less. It's only when you're doubling the number of frames that you stack that you really start to see an increase or decrease in noise and big changes in the result; 50 versus 40 is not a big difference.

Okay, so we analyzed the recording and we set it to stack the best 50% of the frames by holding Ctrl and clicking in this graph. I will come back to that reference frame afterwards; first I will tell you how you can make multiple stacks in one go. You see a bunch of text boxes here. The top row is there if you want to stack a fixed number of frames — say the best 100 frames, the best 500, or the best 1,000 — you fill in those numbers and it will generate image stacks for the best 100, 500 and 1,000 in the same processing run. It won't take much more time, and it lets you quickly see what works better for your recordings: what happens if I only stack the best 10, or only the best 20, or whatever. So the top row is for absolute numbers of frames, and the bottom row is for percentages. I will empty the top one and, for fun, put in some random numbers at the bottom: I will stack the best 50% of the frames, 20%, 10% and 5%.
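The Ctrl+click trick and the multiple-stack text boxes both reduce to a few lines. A sketch, assuming the normalized qualities and best-first frame ordering from above — the plain mean here stands in for the aligned, per-alignment-point stacking AutoStakkert actually does:

```python
import numpy as np

def ctrl_click_percentage(norm_quality, level=50.0):
    """What a Ctrl+click in the quality graph would set: the fraction of
    frames whose normalized quality sits at or above the 50% line
    (0.39 for the Jupiter recording discussed above)."""
    q = np.asarray(norm_quality, dtype=np.float64)
    return np.count_nonzero(q >= level) / q.size

def stacks_for_percentages(frames, order, percentages=(50, 20, 10, 5)):
    """One processing run, several outputs: a simple mean stack of the
    best p% of frames for each requested percentage.

    order: frame indices sorted best-first by quality.
    """
    out = {}
    for p in percentages:
        k = max(1, int(len(order) * p / 100))
        out[p] = np.mean([frames[i] for i in order[:k]], axis=0)
    return out
```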
Back to the option below Analyse: the reference frame. What the software does is try to compare features on the planet to where they are supposed to be, and put them back in the place where they are supposed to be. If you go through this video a little — I'm just moving the slider — you can see that it wobbles a bit; again, this is very high quality data, so it doesn't wobble much, but it wobbles a little. If you don't put everything back where it's supposed to be, your image will be blurry. So you need a reference frame onto which you align everything — and just having one single frame is not a good reference frame, which is why the reference frame is actually a reference stack. It's probably a good idea to rename "reference frame" to "reference stack"; I will do that for the next version. The reference stack is made up of the best frames — typically the best 50% of the frames, or the frames above the cutoff line I just discussed, so here it will use the best 37 percent of the frames to create the reference frame. Even though it's a bit blurry, it will have a very good estimate of where everything is, and that's what matters most for a reference frame: it doesn't matter that it's slightly blurry, as long as this feature here can still be detected, and this feature here can still be detected. I don't care how sharp it is; you just need its position to be correct.

You can let AutoStakkert choose how many frames to use for the reference frame, or you can set a manual size. Auto size is good 99% of the time, but there are a few cases where you want to say: I want to use as many frames as possible for the reference frame. For example, the seeing is very bad and I recorded a really short video, meaning that if I only use the best 10% of the frames, the reference frame is still going to be distorted — whereas if you use more frames, the reference frame is averaged more, so it is less distorted and you know better where the features are. This becomes especially important if you're going to make animations, where you will otherwise see that feature positions are not exactly where they are supposed to be. If you compare one recording to the next and you can see a slight wobble of features on Jupiter in the vertical direction, then you know something is wrong, and the reference frame probably wasn't good. This is mostly the case for surface recordings — for solar recordings especially. There I sometimes use a manual size slightly higher than what the automatic size would suggest, depending on how long my recordings were. For now, though, I will set it to auto size, because that works most of the time.
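The reference stack — and the double stack reference option Emil explains next — can be sketched as a two-pass loop. `stack_against` is a hypothetical stand-in for a full align-and-stack pass; nothing here is AutoStakkert's actual code:

```python
import numpy as np

def build_reference(frames, quality, frac=0.37):
    """Build a reference *stack*: average the best `frac` of the frames.

    The result is blurrier than the single best frame, but feature
    positions are averaged over many frames -- which is what matters for
    a reference: it tells the aligner where things are supposed to be.
    """
    order = np.argsort(quality)[::-1]
    k = max(1, int(frac * len(frames)))
    return np.mean([frames[i] for i in order[:k]], axis=0)

def double_stack(frames, quality, stack_against, frac=0.37):
    """'Double stack reference': stack once against a quick reference,
    then reuse that first stack as the reference for a second, final run.

    stack_against(frames, reference) is a placeholder for the full
    align-and-stack step.
    """
    ref1 = build_reference(frames, quality, frac)
    first = stack_against(frames, ref1)   # pass 1: rough reference
    return stack_against(frames, first)   # pass 2: sharper reference
```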
The Double Stack Reference feature means that it will basically process the recording twice: it will process the recording, analyze the frames, create a reference frame from the best 37% of the frames, create the image stack, and then use that image stack as the new reference frame for a second processing run. This helps especially when you are stacking only the best five or ten percent of the frames to begin with. I just said that for the reference frame it doesn't matter that it's blurry, but for those recordings it will matter a little, and it might lose track of very small features if you do not use this option. So for pretty much all solar recordings I would recommend Double Stack Reference, and even for very high quality planetary recordings I would use it. It doubles your processing time, but if you can afford that — if the recording is not too big — I would use it. So I will turn it on.

Now, let me look at my notes to see where I was. I will go to a menu option and show you the memory usage menu. There are two options in AutoStakkert!4: No Buffering and Adaptive Buffering. If your computer does not have a lot of memory and you notice the software crashing quite a bit, try running the processing with No Buffering. That means AutoStakkert will try its best to use as little memory as possible — it can still use quite a lot, depending on your settings, but it will not buffer all the images in memory — so it can help you process very big files. That option is basically there because of Adaptive Buffering, which — I think it's new in AutoStakkert!4, let me quickly check — no, it was there in 3.1.4 as well, but in versions before that it wasn't called adaptive buffering, just buffering. I tried to make the buffering smart: as soon as it sees it will run out of memory, it frees a certain number of frames again and continues processing. It's not perfect yet — it's very difficult to do this optimally, and it's easy to use too much memory. So typically Adaptive Buffering is the best option to have on; if you run into memory issues, go for No Buffering and see if that processes your recording.

Okay, the color menu. AutoStakkert can work with different types of recordings. Personally, for planetary imaging I only use monochrome cameras, because on average that gives the best results — it's a bit more work, but you get more resolution in the end. There is an option to auto-detect which type of video you are giving AutoStakkert, which works most of the time, but not always: for some difficult Bayer patterns it might not be obvious for AutoStakkert to detect the correct one, in which case you should play with these options to see which one gives the right colors in your image — which should not be too difficult.

If you guys don't know which Bayer matrix you have, check with the manufacturer to figure it out. If you're using a ZWO, QHY or Starlight Xpress camera, they all have their Bayer matrix profiles set. If you're using DSLRs or mirrorless cameras, on the other hand, you will have to double, triple, quadruple check — a Fujifilm-based camera, for example, uses a different arrangement again. Every camera has a different Bayer matrix, so if you get weird color problems, double-check that. Always check with the manufacturer; they do give you this information.

I've found that for most cameras — the ZWO color cameras that I've used — RGGB is, I think, the most common one.

Well, pretty much any Sony chip is RGGB.
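Why the right Bayer pattern matters is easiest to see in code. For RGGB, the repeating 2x2 cell is R G over G B; the sketch below splits a raw mosaic into color planes by slicing. A real debayer interpolates to full resolution instead of downsampling, so this is only an illustration:

```python
import numpy as np

def split_rggb(raw):
    """Split a raw RGGB Bayer mosaic into quarter-resolution R, G, B planes.

    RGGB means the repeating 2x2 cell is:  R G
                                           G B
    Picking the wrong pattern (e.g. GRBG) swaps these offsets, which is
    why a wrong setting produces weirdly colored stacks.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = (g1.astype(np.float64) + g2) / 2.0  # average the two green samples
    return r, g, b
```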
Okay. I will ignore this for now and go focus on putting alignment points in there —

Hold on, before you go over that: image calibration. Are you going to refer to that at any point?

Of course, of course — I have my huge list of things that I will discuss. Let's see... okay, we'll discuss that, and also the advanced options I will mention. I think I will pretty much cover everything.

Okay, we might need more than two hours now.

I can do this, I can do this. So let's go back to the window on the right here. I used this to show the frames — the best frame and the worst frame — but I also use this window to place the alignment points, and there are two ways to do that. One is manually: left-click to add and right-click to delete alignment points. That becomes quite tedious if your planet is big, though, and for a lunar recording it's impossible. The other option is to place alignment points in a grid, and there are a few settings you can tweak to make that grid. The most important one is the minimal brightness, which simply says: exclude anything where the brightness in the image is below this threshold. So if I set it to 20, it will not put points on the black space but will focus on the planet. I will increase it a little, because I want the alignment points a bit further from the edge. Why do I want that?

Alignment points you want to place on features in the image, and you should be able to track those features throughout the entire recording. Most features that you can see by eye are good features. You can see this oval right here, which is very clear in the raw frames, which means an alignment point right on top of it would track that location very well. But the closer you get to the edges — and for a planet like Jupiter here (we're not talking about the rings of Saturn) the brightness is highest in the center of the disk and dimmest towards the edges — the more difficult features become to track. Also, if an alignment point is placed too far over the edge, the predominant feature becomes the edge itself: it can trace the edge perfectly in this direction — no, sorry, in this direction — it can see where the planet begins here and where the dark area is on this side, but it will not make much distinction between the alignment box here and an alignment box in a slightly different position along the edge. From the point of view of that alignment point, those look very similar, which means such alignment points are not in a good location. You want them slightly more on the planet, where the features are, instead of on the edge. One way to do that is to tweak that minimal brightness: I don't want anything right here; I do want something here. So I want the minimal brightness to be 50 in this case — something like that, where the alignment points are placed nicely on the planet, a little bit over the edges, but not too much.
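The grid with a minimal-brightness cut can be sketched like this. Whether AutoStakkert tests the peak, mean, or some other brightness of each box isn't stated in the stream, so the `max()` test and the 50% grid overlap below are assumptions:

```python
import numpy as np

def place_ap_grid(reference, ap_size=48, min_brightness=50):
    """Lay a grid of square alignment points over a reference frame,
    keeping only boxes whose brightest pixel clears `min_brightness`.

    Raising min_brightness pulls the grid off the dark limb and onto the
    disk, where there are real features to track; lowering it lets the
    grid reach faint targets such as solar prominences.
    """
    points = []
    h, w = reference.shape
    step = ap_size // 2                      # 50% overlap between boxes
    for y in range(0, h - ap_size, step):
        for x in range(0, w - ap_size, step):
            box = reference[y:y + ap_size, x:x + ap_size]
            if box.max() >= min_brightness:
                points.append((x, y, ap_size))
    return points
```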
Another option here is multi-scale. If I don't have it selected, it will just produce a grid of alignment points of the same size, but AutoStakkert can try to be clever and use differently sized alignment points: it will roughly double the alignment point size, and then double it again, and put multiple of those sizes on the image. The bigger the alignment point, the easier it is to track, because there are more features inside a big alignment point than a small one. So whenever a small alignment point doesn't work well, AutoStakkert will use the information from a bigger alignment point covering that same area. There's not really a case I can think of where you should not use this. It makes processing a bit longer, but on average the results will be better, because there's always one small alignment point on an edge where there just aren't many features to track, and that will cause a blurry area in your image — whereas with a larger alignment point on top of it, the software can use that information to repair the alignment point that isn't working properly.

The size of the alignment points is something that is super important, and unfortunately it's really difficult to automate — similarly difficult as that noise robust value. The noise level in the image determines how small your smallest alignment point can be, but seeing plays a big role as well, and this is really something you should play with. For the recordings that we shot during this night — we made like 20 different recordings of Jupiter in each filter, so there's a lot of data to work with — before you process everything, sit down with one recording that is among the best, where the seeing was pretty good, and just try different alignment point sizes and see what the results are. If you use alignment points that are too small, it will produce a blurry result and you will get artifacts — even the multi-scale option will not fix everything. So never choose alignment points that are too small, because you will not get optimal results. You only know the right size after trying, but it will not change much from one recording to the next: if for one night you found that the 72 size works best, you can process all the recordings of that night with that setting.

I knew this was going to start something, because the prime example of all these questions has to do with these alignment points and their sizes. I've got to admit, I don't generally use multi-scale — I've used it in the past, but because I do a lot of solar images, what I found is that the smaller sizes create the best stabilization for me. Interestingly, it's the opposite for everybody else.

Yeah, but it depends a lot on the type of image that you feed it. My solar recordings — and that's some time ago already — are such high signal-to-noise ratio images, with so many very fine details, that the very smallest alignment point is probably the best; 24 is maybe pushing it, but 32 or something in that ballpark is probably best. For planetary images the noise level is much higher, so it hardly ever makes sense to use those small alignment points in those conditions.
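The multi-scale repair Emil describes — a small alignment point that loses track borrowing the motion of the bigger point covering the same area — could look roughly like the sketch below. The dictionaries are hypothetical bookkeeping for illustration, not AutoStakkert's data structures:

```python
def multiscale_shift(point, shifts, parents):
    """Fallback logic in the spirit of the multi-scale option.

    shifts:  dict mapping an alignment point to its measured (dx, dy)
             shift for a frame, or None when tracking failed.
    parents: dict mapping each small point to the larger point that
             covers the same area (and that one to a larger one, etc.).

    A small point that lost track inherits the motion of the bigger,
    easier-to-track point above it instead of smearing the stack.
    """
    while shifts.get(point) is None and point in parents:
        point = parents[point]            # climb to the next larger box
    return shifts.get(point, (0.0, 0.0))  # last resort: no shift at all
```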
Right. So for you guys watching at home: if you're after finer detail, the smaller sizes are generally the better way to go. Obviously, if you have a planet that is, say, a tenth of this size in terms of capture size, then one massive alignment point makes no logical sense. He's got an image here that covers a large area — 640 by 624 — while most people's images are more like 320 by 320, so you have to use a little common sense: alignment points of 72-plus may be way too big, because one of them encompasses the entire planet, so you need to scale down accordingly. That's the way to look at it. And for those of you doing solar photography, wondering why your surface features are sharp but your prominences are not: that's what he was referring to with the minimum brightness level. If your alignment points do not cover the prominences, the software will not align that data, which is why you usually get a blurry mess there. So lower the minimum brightness to a number where the auto-placed grid does cover that part.

And another tip for that: you can manually add some of your own alignment points. If the prominence is here, you can add some extra alignment points there, and even use a bigger size for those — yes, it's much darker there and difficult to track, so use some bigger, manually placed alignment points.

I think the most important thing for you guys at home is: experiment with alignment points. We can't give you a set rule, and there is no magical button — I mean, Emil's done a really good job of making a magical button as it is with the Place Alignment Grid feature — but you still have to use common sense, and you still have to do some manual work in placing alignment points if you have problematic areas.

Yeah, for sure. What might be fun is to try just very small alignment points on this run and see if you can spot the difference between the small ones and the big ones. We're not going to do that now, it's going to take too long —

Oh, you're super fast now. Okay, go for it!

Okay, let's quickly go for it. I will switch off that multi-scale feature, take just one stack and make it kind of small, and I will not do a double run. I will make the stack small so that it's easier to spot the differences, and I will only use the very sharpest frames — oh, I did not show this yet; that's for later. And I will use some larger ones as well.

You've made a massive improvement to the engine, because it's moving really fast.

Yeah — we discussed that briefly before we were live. For AutoStakkert!4, the biggest improvement is processing speed, especially for very large recordings. I made sure it works fast for huge surface recordings — lunar recordings that were 40 gigabytes in size — and that should be much faster now. I will quickly process these, not really paying attention to what it looks like.

Okay — oh, I see what's going on. Left-hand side, just below the halfway line, I can see a bunch of line artifacts disappearing and appearing.
Yeah, okay — this is the one where we used a lot of alignment points, the really small ones; not these, but something like this — and you can see some weird wiggles here; the edge of Jupiter is not correct in this one. This is the one where we used the larger alignment points, and I see no mistakes throughout the entire field of view, as far as I can tell.

Yeah, it's sharper pretty much all over the place.

Not everywhere, though — there are a few areas where the very small alignment points work better. In the center of Jupiter I can see some areas where the contrast is higher thanks to the smaller alignment points, but near the edges they are definitely not working well. This is only something you spot if you process your recording both ways, go over it in detail, really stare at it and flip between the two images.

Which is what everybody is doing right now.

People should do this more if they want to trust their data. It's super easy to end up with a weird line near the edge of Jupiter between alignment points that are not tracking anything, and then it may look like a real feature while in fact it's just a stacking artifact.

Yeah, I can see some of the bands there are really struggling.

But it's subtle — it's not super clear. Once you know it, you know where to look for it. To get the very best results you need to do this. And this data is very high quality to begin with, so it can get away with rather small alignment points; the difference would be bigger with another recording — it would have lost track of the features much more.

So let's have a look at multi-scale this time. Okay — I forgot how many frames I used, or the size of my alignment points, but this is a nice bridge into a new feature in version 3: for every stack it creates, AutoStakkert writes an .as3 file — a session file — which contains (sorry about that) the information on how this particular image was created. By opening this session file, it will open the original video file, show you where the alignment points were, and restore all the settings that created that stack.

So now I can actually go back and troubleshoot those areas? I've never done that before.

It's also a nice feature because, if you run into a problem and want to show it to me, you can send me the video and that .as3 file, and I know exactly what you did in that processing run. So apparently I used a size of 32; I will now switch on multi-scale and keep everything else the same. This free text field is something that gets appended to the image stack name — whenever I'm debugging or trying things out, I type something here that reminds me what I was doing. So now there's an image here named "multiscale"; I open it and place it on top of the other ones. Okay, this is the multi-scale one, this is 32, and this was 72 or something like that, I think. The multi-scale one — yeah, it seems to have fixed the recording pretty well, actually.

Yeah, it even fixed the issue down in the bottom left-hand corner that I was looking at.

I don't see which issue you were referring to —

There's a black line... well, without me showing you where it is...

I know where it is, for sure. And for sure at the top, in that area, you could see something weird going on with the edge, but it's fixed with the multi-size alignment points. I'm pretty happy with this, actually — I didn't know it would work that well. So really, depending on the quality of the data, you can apparently go quite low with the alignment point size, as long as you have multi-scale to fix whatever goes wrong. Having said that, I would not go crazy low: test it out, and look in detail at what you did,
to really confirm that it is making sense. It's probably best to test it without the multi-scale feature on — or maybe, I don't know, you can test it both ways; I guess just leave it on, but look closely at your data. That's the only way you can get the best results, I think.

We've started something now — everyone's going to go back and start reprocessing their old data, going "oh, that's what I've been doing wrong all this time".

Oh, that's one of the things they have been doing wrong! One more thing: instead of just left-clicking and right-clicking to add and remove alignment points, you can use the mouse scroll wheel to change the size of an alignment point — scroll up to make it bigger, scroll down to make it smaller — just to make it easier to set and remove large alignment points manually. And there's an option, if you want an alignment point of a particular size or shape for whatever reason, to do that too: it doesn't need to be a square alignment point; it can be any rectangle. It's not often useful, but it's there.

I'll give you guys at home an example of how that's useful. If you don't know, I do a lot of imaging of the ISS, and that is where it's going to be super useful. Depending on the angle and the tilt of the ISS — if you're stacking the ISS and you find that it tears apart, a really good way of fixing that is to draw your own rectangle over the ISS. It works really nicely.

Okay. In that case, also keep the recordings that you let AutoStakkert process quite short — if the ISS is changing its angle during what AutoStakkert is trying to do, you will not get optimal results.

Yeah, well, we're only doing five or six frames, if that.

Oh — are you manually tracking?

Yeah, we're using the Dobsonian to try and chase the ISS. It's not easy. There's a guy in London — his name is Szabi, he does Space Station Guys — and the way he does it is just unbelievable. He does it all by hand, and he uses AutoStakkert, and one of the tricks he showed me is to manually draw the alignment point. It's not the movement of the ISS — it's that when you get five frames in a row and you have seeing, because the ISS is so small, if you draw your own manual alignment point it will look in that one area and so on as it works through the whole thing, so when it recombines the image it doesn't twist, if that makes sense.

Okay, cool.

And it works. Ever since he showed me that, I was like, well damn, I should have known that earlier.

It's been a long time since I imaged the ISS, but I've done it — tried to do it manually as well, with a 25-centimeter Newtonian, and then you miss 99.9 percent of the time; it's too small. It's lots of fun, and the processing afterwards was interesting — I had some hidden features in AutoStakkert!2 to let me manually extract the frames where something was visible, and so on.

Okay — a few cool things I want to show that are still on the main window of AutoStakkert. I opened a file from this folder, and there's just one file open. There are a few little buttons underneath the Open button, and these let you open the previous file
in the current working directory, open the next file in the current working directory, or open all files in the current working directory. And this is new — new in 3.1.4 for sure; I don't know if it was in 3.0.

It wasn't. I've never seen that.

Okay, so it's kind of new, yes. What this does is let you quickly go through multiple recordings, or all the recordings you made on a particular day. But something super fun is the checkbox next to it: it shows the filter of the currently open video — it extracts that information from the FireCapture log file — and you can limit it to open all files in that folder with, say, the Infrared 685 filter. It has now opened all 15 files in this folder that use that same filter, meaning you can easily process all of them in one go with the same settings.

Would this option also be suitable for red, green and blue filters?

Yeah, for sure. Let's see if I have something here — look, these are all the red files in there. Now I uncheck that box, it goes to the green one, and it shows me that this is a blue one; if I want all the blue ones, it will open all the blue files in that folder. So this is a nice way to browse through your data a bit, and to process all the files with the same filter in the same way — which is probably what you want, because like I said, the blue files generally need a different setting than the green and red ones.

There are two more fun features there: Limit and Expand. Let me open a very big file to play with — the big Moon. This is a recording of the Moon that I made, and I set it to surface recording, because it is a surface recording. I zoom out a little because it's quite big — this one is, how big, like 40 gigabytes or something. Sometimes you just want to play with a specific part of the data in a very big recording, and the Limit option lets you say: I only want to look at the first 1,000 frames — and then it opens the file as if it contained only those 1,000 frames. You can choose your own window of frames — say from 1,000 to 3,000 — and it will open the 2,000 frames in that interval. This is useful when your recording is super big and you want to pay close attention to only a small part of it. The other option, which is kind of fun, is Expand: it pretends the currently open recording is not just one recording but actually multiple recordings, from the minimum to the maximum frame — I keep those as they are, because I want to include all the frames — and I want to pretend it's a set of recordings that are each 1,000 frames long, with the windows starting 500 frames apart. This will break the recording up into, in this case, 36 recordings that are each 1,000 frames in length, in a running window spaced 500 frames apart.
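The Expand arithmetic is simple enough to write down. With 1,000-frame windows starting every 500 frames, 36 windows implies the Moon recording is roughly 18,500 frames long — a sketch:

```python
def expand_windows(n_frames, length=1000, step=500):
    """Split one long recording into overlapping sub-recordings, like the
    Expand option: windows of `length` frames starting every `step` frames.
    Returns (start, end) frame ranges, end exclusive."""
    return [(start, start + length)
            for start in range(0, n_frames - length + 1, step)]

# len(expand_windows(18500)) == 36 -- one stack per window makes a smooth
# animation, because adjacent stacks share half their frames.
```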
Just so I understand this: with Expand, I can record a really, really long file — say a million frames, for argument's sake — create a bunch of small ones out of it, then export each one and create an animation from them?

Exactly. This is super useful for making animations — and for making animations that are very smooth, because the fact that you can have a little overlap between the different windows will make the animation super smooth. So yes, this is definitely for animations. Also for the International Space Station: when the viewing angle changes, you can use this — I mean, you need a tracking telescope then, so that you have continuous video of the ISS — but this will ensure that you only stack frames that have more or less the same orientation of the ISS within the frame, and then you can make a nice little animation out of that.

Yes, totally. I'm going to use it to death.

Good luck — don't break it too much.

Just as long as it works.

Okay, so that was that one. Let me see if I missed something on the main screen of the software. Yeah — there are a few little logos here. One shows how AutoStakkert is processing this data: as monochrome, indicated by this grayscale Bayer-pattern-like icon that you see there; if I had it on RGGB, it would show an RGGB Bayer pattern. It's just a visual reminder of which color setting you are using right now. There is also a little logo that shows that FFmpeg was found in the current working directory of AutoStakkert, which means that whenever it opens a video file it does not recognize by itself — pretty much all compressed video files that come from a DSLR camera or a mobile phone — it will use FFmpeg to convert the files; well, not quite on the fly, but before opening them in AutoStakkert. You need a statically built ffmpeg.exe, and you place that — let me see if I can find the directory — here, I have an FFmpeg executable next to AutoStakkert, and then it will use FFmpeg whenever it cannot open a file by itself.
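That conversion step can also be reproduced by hand with a standalone FFmpeg binary. The flags below are standard FFmpeg options; exactly how AutoStakkert invokes FFmpeg internally isn't specified in the stream, so treat this as one workable recipe rather than what the program does:

```python
import subprocess

def convert_for_stacking(src, dst="converted.avi"):
    """Re-encode a compressed phone/DSLR clip into an uncompressed AVI
    that stacking software can read, using an ffmpeg binary on the PATH."""
    subprocess.run(
        ["ffmpeg",
         "-i", src,              # input, e.g. an H.264 .mp4 or .mov
         "-c:v", "rawvideo",     # store uncompressed frames
         "-pix_fmt", "rgb24",    # plain RGB; use "gray" for mono data
         dst],
        check=True)              # raise if ffmpeg reports an error
```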
One more thing: it can use the DeTeCt software. Do you know what DeTeCt is?

I do. I don't know if everybody else does.

Okay — can you give a nice short summary while I drink this glass of water?

Hold on, it might be better if I do this. Okay, so DeTeCt — it's written by Marc Delcroix, or maybe not started by him, but I think it was — is software to detect impacts on Jupiter and Saturn. I'm trying to find his website — how does he spell his software again?

Capital D, capital T, capital C — DeTeCt.

I'm just going to put something over the screen real fast. This is basically what the program is — I'm not 100% sure this is the official site, because I don't recognize it. It basically looks for bright impact flashes appearing and disappearing; I don't know how else to describe it. I have his website up now, but I don't know if that's his latest one.

I think this is the latest one — the best website.

Oh okay, we were looking at the same thing then.

So it's software to detect impacts on Jupiter, and potentially Saturn, and there have been a handful of impacts detected — maybe slightly more than a handful now. There's a study going on by scientists to see how often this actually happens, and amateurs are very much encouraged to help, because amateurs are watching the planets pretty much continuously, way more often than the professionals do, and every now and then an impact is detected — and you can be the one that detects it, with your data. So it's fun, and it's scientifically super useful as well. I highly recommend having a look there if you image Jupiter and Saturn regularly, so that they can build a big database of how often something like this happens. Even if you detect nothing, that's still super useful — it's relevant data: say you have 20 hours of video, then you know that at least during those 20 hours nothing happened, which is as useful as knowing that you detected something.

AutoStakkert — this new version — will work together with DeTeCt: you point it to the DeTeCt executable from the website we just showed, and after AutoStakkert has processed the videos, it will send them to a queue that DeTeCt monitors, and DeTeCt will scan the video files for impacts.

What the software does, just so you guys understand, is essentially run a high-contrast filter on the planet and look for bright spots appearing and then going away — that's the only way I can describe it. I've tried it, but I have no footage that shows any impact, so I have no idea whether it ever works or not.

Like I said, even that is very useful information. There's still a small manual step involved, I think, where it creates an image on which you need to verify whether what the software saw is really an impact, because there are some false detections, of course — hot pixels, or a moon that is affected quite a lot by seeing, for example. But it's a really fun project and it's really useful to participate in, so AutoStakkert can help with this and send all the videos automatically to DeTeCt after it has processed them.

Is there any reason why you made the icon look like the Death Star?

There's a mini impact right there, you see?

Oh, okay. It looks more like the Death Star to me.

I might have made it based on an existing icon, I don't know. The Death Star is fun — good enough.

Oh yeah — there's a little up/down control where you can select how many threads you want the software to use. By default it is set to use all the threads your processor has: this laptop has eight cores, but it does hyper-threading, so it reports 16 threads that can work at the same time, and processing speeds up significantly if you let it use everything there is. But there might be cases where you want to use the computer for something else at the same time, so you can limit the number of threads so that your computer doesn't become too unresponsive.

Typically I just have it set to maximum, and I can still work on whatever I want in the background.

There are some output options here that I haven't discussed yet. You can click on this little column here and say: I want to automatically create TIFF files — in this case, because I have TIFF selected — made up of red, green and blue channels. Even if it's a grayscale recording, you can force it to always produce RGB image output, because some software needs that; or you leave it on automatic and it will create monochrome files for monochrome videos and RGB files for RGB videos. You can save as PNG files or TIFF files; these are all
Oh yeah — there's a little up/down control where you can select how many threads you want the software to use. By default it is set such that it will use all the threads your processor has; this laptop here has eight cores, but it does hyper-threading, so it reports 16 threads that can work at the same time, and letting it use everything there is will significantly speed up processing. But there might be cases where you want to use the computer for something else at the same time, so you can limit the number of threads it uses so that your computer doesn't become too unresponsive. Typically I just have it set to maximum and I can still work on whatever I want in the background. There are some output options here that I haven't discussed yet. The little column here — you can click on this and say: I want to automatically create TIFF files, in this case because I have the TIFF option selected, that are made up of red, green and blue channels. Even if it's a grayscale recording, you can force it to always produce RGB image output — some software needs that — or you leave it on automatic and it will create monochrome files for monochrome videos and RGB files for RGB videos. You can save as PNG files or TIFF files; these are all lossless and/or uncompressed. You can right-click on TIFF and get a few options for what kind of TIFF files to save: the default is uncompressed, and you can set it to lossless compressed. Uncompressed is just a little bit faster — if you're processing a lot of files it's easier for the software to write uncompressed ones — but it's a small difference, and there's no difference in image quality. You can also create 32-bit-per-channel output: by default TIFF and PNG are 16-bit and FITS is 32-bit, but you can force the TIFF files to 32 bits per channel as well. There's hardly ever a use case for that, but you never know, it might be useful. There's no such hidden option for PNG, and none for FITS. Then there's an option to have a sharpened output. This applies an unsharp mask to the image stack and blends the raw image stack back in as well, at fifty percent; if you don't want the raw image stack blended in, you set that to zero percent and you get the most sharpened file. I don't use this much, but it can be nice to quickly see which videos are good and which are not, to get an idea of the data that you captured, or to find out which videos from a session you really want to focus on because they are sharper than the others.
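That sharpened-output behaviour — unsharp mask the stack, then blend the untouched stack back in at a chosen percentage — is simple enough to sketch. The 50%/0% blend values follow the description above, while the mask strength and radius are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpened_output(stack: np.ndarray, amount: float = 1.5,
                     radius: float = 2.0, blend_raw: float = 0.5) -> np.ndarray:
    """Apply a classic unsharp mask to a stacked image (values in 0..1),
    then blend the raw stack back in. blend_raw=0.5 mimics the 50 percent
    default described above; blend_raw=0.0 gives the most sharpened file."""
    blurred = gaussian_filter(stack, sigma=radius)
    sharpened = stack + amount * (stack - blurred)   # boost fine detail
    out = blend_raw * stack + (1.0 - blend_raw) * sharpened
    return np.clip(out, 0.0, 1.0)
```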
By default it will save outputs in folders, so let's go to where it puts some. This is a directory where I have the original recordings, plus a text file that FireCapture created, which says something about the recording. Within that folder you then have more folders — you can see I've been messing around with this data a lot — whose names say what percentage of frames you stacked: AS_P75 means you stacked the best 75% of the frames, and this one is the best 50%. So depending on the settings you use here, it will create a folder for that. Then I will quickly go over these output options. There's a button here, 'output options', where you can find the DeTeCt software again. You can send the stack to RegiStax: if RegiStax is open at the same time, the stack will immediately go to its wavelet processing page. For every stack that AutoStakkert! generates you could also send it to an open Photoshop instance, or open it in Windows Photo Viewer, so for every stack you create you quickly see what it looks like in one of these packages. And you can determine exactly how you want to name your files, and the names of the folders in which they are saved. Not super interesting, but for people that like playing with this kind of thing, the option exists. Now for something more interesting: the advanced settings here. By default it processes the data as is, at the same image scale, frame by frame, and it will not try to increase the resolution in any way. But there are a few techniques implemented that let you try to increase the resolution. The most effective one is the drizzle technique, which in certain cases really helps to increase resolution; resampling gives you a slightly larger image scale and sometimes slightly better stacking results as well. I would only use these settings for very high quality data, where you actually might see the difference between using them and not using them. But I will try to show now, with one video where I use drizzle, that the resolution actually increases and that I can get better results with it. I will do that with a solar image that I took back in 2011, and this is the video I will try to process. You can see the seeing is quite good, in my eyes, in that there are certain areas in the frames where you can see the granulation — let me see if I can improve it a little or make it a bit bigger — you can see the granulation quite clearly in raw frames, and in other areas you can't, but you can see a checkered sort of pattern on the image, where the focus kind of moves throughout the image everywhere. This data is quite undersampled, which means I didn't use a very high magnification, which in turn means it's a good candidate for the drizzle technique, to try and get some more resolution out of it. I will quickly process this. It's a surface recording. I set it to — oh, this is a setting I haven't touched yet — 'crop', which means I want the final output file to contain only data that was visible throughout the entire recording. If I set it to 'expand', it will try to make the output file as big as possible, but the edges will be made up of fewer frames. So, back to crop. The 'improved tracking' option is not really needed that much anymore in this new version, because I've much improved how well it tracks. But I realize I haven't discussed one important thing yet: the alignment anchor, the image stabilization anchor. For surface recordings you need to point it at a feature that it can use to track the movement throughout the entire recording. There's a new option I'm working on that tries to pinpoint the best location to place it — here it is automatically selecting this area of the data as the best area for your alignment anchor. It works quite well, but it's not perfect yet, so I'm still improving it. You can also Ctrl-click to pick whatever location you want, to decide which feature to track. The 'improved tracking' option searches a larger area for that feature from frame to frame: if there's not much movement from one frame to the next you don't need it, but if the recording was made during very windy conditions, for example, and the sun is moving around in the image a lot, then you might want to try it so that it succeeds in tracking all the features. Back to the drizzle option. I will place quite a lot of very small alignment points, I will not sharpen the images, and I will name this... oh, this is not drizzle, this is 'no drizzle' — I'm very good at naming stuff; usually I type something randomly in Dutch. So I will create one stack without drizzle and one stack with drizzle, and you can see that the stacking phase goes quite a bit slower when you're drizzling. That's because it's upscaling the image, in this case to 300% in both dimensions, and that takes a bit longer, but it will try to improve the final resolution. So I now have two images. I will close this — this is the drizzled one — and for this demonstration I will resize the results so that they are the same size, so we can compare the difference between drizzling and not drizzling as well as possible. This one I need to reduce in size: the other one I increased by 150 percent, and this one I will decrease to 50 percent, so that they are both the same size.
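Drizzle proper (the Fruchter–Hook algorithm) also shrinks each input pixel's "drop" before placing it, but the essence of what the 3x drizzle above is doing with undersampled data can be sketched as depositing sub-pixel-shifted frames onto a finer grid and normalizing by coverage — a simplified illustration, not AutoStakkert!'s implementation:

```python
import numpy as np

def naive_drizzle(frames, shifts, scale=3):
    """frames: list of (h, w) arrays; shifts: per-frame (dy, dx) sub-pixel
    alignment offsets measured during registration. Each input pixel is
    deposited onto a scale-times-finer output grid; random image motion
    gradually fills the fine bins, recovering resolution from
    undersampled data. Real drizzle also shrinks the pixel footprint."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cov = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        oy = np.clip(np.rint((ys + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.rint((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (oy, ox), frame)   # deposit flux
        np.add.at(cov, (oy, ox), 1.0)     # count coverage
    return acc / np.maximum(cov, 1)       # empty bins stay zero
```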
This is drizzle, this is no drizzle, and I will quickly sharpen them in a very ugly way, but an identical way. That is too much... once more. I often use Photoshop to sharpen recordings, and I often use its Smart Sharpen feature because it's kind of like deconvolution, but super fast. Okay, let's add a little layer on top, something like that, and just compare the two images. This is the one with drizzling and this is the one without. I will zoom in a little: without drizzling, with drizzling. It is again super subtle, and drizzled data is by nature a little grainier than non-drizzled, but you get the impression that the smallest features are smaller in the drizzled stack than in the non-drizzled stack. The differences are super subtle, but I know this one is sharper — there's more detail in it because of the drizzle technique. For most planetary images you can see it a little: actually at the 150% scale this one looks a bit fatter than that one, which has more finely defined features. Again, super subtle, and if you want to know whether it works for your data you need to try it out — it's genuinely difficult to compare drizzled with non-drizzled data and say which is better; you need to look super closely at the data and see which works best for whatever you are doing. Let's see if there's something else to say here. Resampling — I don't use that often, but it's a way to create a larger image without any of the possible artifacts that you can get with drizzling. Drizzled data is a bit tricky to work with; resampled data is sometimes smoother, with fewer artifacts, but you can't get results of the same quality as with drizzle. All right, we've got about 10 more minutes left and then we need to open up Q&A — and I really have to go to the bathroom, and my cats have decided to come alive and step on everything. So you guys at home, start putting your questions in now; you've literally got 10 minutes left to go through the advanced features and the image calibration. Awesome, that's plenty of time. Okay, I'll do my best. Let's go back to a recording. First, image calibration — image calibration is easy. I want to open a recording first... this one. I'll make it a little smaller and a little brighter so you can see what's going on. This is some data that I shot with an ASI174 camera — deep-sky data — and you can see it's kind of rough, with some horizontal lines, which is quite typical for CMOS sensors. You can also see quite clearly the hot pixels that stay at exactly the same location throughout all the frames, and you want to use a dark frame to get rid of those. So AutoStakkert! can use a dark frame, you can use a flat frame, and you can also use a dead pixel map. The dead pixel map is to really get rid of the pixels that are stuck, that are just dead: you provide an image file that is zero at every location where it should not try to replace the pixel, and greater than zero at the locations where you want the pixel replaced by the median of the surrounding pixels.
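That dead-pixel-map rule — zero means leave the pixel alone, anything greater than zero means replace it with the median of its neighbours — translates almost directly into code. A minimal sketch (the 3x3 neighbourhood size is an assumption; the stream doesn't specify it):

```python
import numpy as np
from scipy.ndimage import median_filter

def apply_dead_pixel_map(frame: np.ndarray, dead_map: np.ndarray) -> np.ndarray:
    """Wherever dead_map > 0, replace the pixel with the median of its
    surrounding neighbourhood; everywhere else the frame is untouched."""
    medians = median_filter(frame, size=3)         # 3x3 neighbourhood median
    return np.where(dead_map > 0, medians, frame)
```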
But in principle I think most people use a dark and a flat. For planetary recordings I use neither — it's hardly ever needed — and for deep-sky recordings you do need them. For solar images you probably use them all the time, I guess? Sorry, what was the question? I was just reading another question. The last question: for solar images, how often do you use image calibration? You have to use flats — it's a must. If you don't shoot flats... well, I will be coming out with a tutorial on doing flats; we're just waiting for a product to be released, and once that thing's released the tutorial can go out. Flats are so important when it comes to solar, and even lunar, but when it comes to planets, nah, don't bother. Yeah, exactly. Okay, so I opened the dark map, and I will clear it now so you can see the difference — you see all the hot pixels. Now I will load it again: pay close attention to the hot pixels, which have disappeared. So a dark is to get rid of pixels that are brighter than other pixels, basically. Another very nice option here is the row noise correction, which you can switch on with Ctrl-H. I do that, and it tries to get rid of the horizontal pattern noise that is typical of CMOS cameras. This works only if it can sample the image well — if there's plenty of dark space in the image to determine what it has to get rid of. It won't work on solar images and it typically won't work on lunar images; you can try, but it will probably not work. Similarly, some CMOS cameras have column noise instead of row noise, so you can use that correction as well. You can create your own master dark frame: if you have a dark recording like this one, I don't use any of the image stabilization options, I just go to 'create master dark frame' and it creates one for me — that's just stacking without anything fancy. I'm just going to show everybody at home real quick — you won't see this in his data, obviously — the flat frame calibration. Here's something I shot this morning; just move the green box out of the way. You can see a bright spot here, where the actual sunspot is. So I'm going to load in a flat — I've already created the flat frame — and as soon as it loads in you'll suddenly see it: it smooths everything out, to give you what is essentially a flat field. This is more critical for solar than anything else, and obviously also if you're doing lunar surface work, but if you're doing planets, again, don't bother.
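For reference, here is the dark-and-flat arithmetic being demonstrated, plus a rough take on the row-noise idea (estimating each row's offset from its darkest pixels). This is standard calibration math, not AutoStakkert!'s internal code, and the dark_fraction parameter is an assumption:

```python
import numpy as np

def calibrate(frame: np.ndarray, dark=None, flat=None) -> np.ndarray:
    """Standard calibration: subtract the master dark (removes hot pixels
    and fixed-pattern offsets), then divide by the flat normalized to its
    mean (evens out vignetting, dust shadows, uneven illumination)."""
    out = frame.astype(np.float64)
    if dark is not None:
        out -= dark
    if flat is not None:
        out /= flat / flat.mean()
    return out

def remove_row_noise(frame: np.ndarray, dark_fraction: float = 0.2) -> np.ndarray:
    """CMOS horizontal banding: estimate each row's offset from its darkest
    pixels and subtract it. Needs plenty of dark sky in the frame -- which
    is exactly why it won't work on full-disk solar or lunar data."""
    k = max(1, int(frame.shape[1] * dark_fraction))
    row_offset = np.sort(frame, axis=1)[:, :k].mean(axis=1, keepdims=True)
    return frame - (row_offset - row_offset.mean())
```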
Okay, so the one remaining thing is the advanced features under the 'advanced' menu. The first two options control whether you want that session file I showed earlier, where next to every stack it creates a session file that explains how the stack was made, which you can load back into AutoStakkert! to rework all the data, or send to other people so they can work on your data. There are some options to detect abrupt horizontal or vertical artifacts in the image: if your video data is messed up somehow — say there was a problem transmitting data from camera to computer — you sometimes have a few broken frames, and these options try to get rid of them. If that doesn't work, and it's just one frame, use the spacebar to disable a frame: I manually press the spacebar to turn the frame red, and then it will not be used for any processing in AutoStakkert!. Is that a new feature? I didn't know about that one. This is one of the oldest features. Oh my god, I never knew about it. It's so useful — I could have done with that earlier. Oh boy. You're welcome. Next, the brute force alignment option: don't really bother with it. AutoStakkert! tries to be smart when aligning, and that works almost always; on brute force it takes a lot longer to process your files and might be slightly better, but nah, don't bother. If the seeing is super poor and you have imaged at a bit too large an image scale — if you have oversampled your data — you might get some artifacts from alignment point to alignment point; use the ultra-smooth map recombination to try and fix that, otherwise don't bother with it either. Discard worst global frames: with this feature on, it discards the worst five percent of the frames by default, and they will not be used even if a local alignment point wants to use them. These frames are shown at the end of the quality graph, with a little red line underneath indicating that they are the worst global frames that have been switched off. Then there are the experimental features and parameter tuning. There's a pre-processing option which I'm experimenting with to try and tune the quality estimator: it reprocesses the video during analysing and aligning — applying a blur there — but when stacking it does not apply the blur, so the pre-processing is really only for quality analysis and alignment. It can be a bit slow, but it can give better results — not that often; it's something fun for experts to play with, I'd say. Sigma clipping, that's a fun one. If you have a hot pixel in your video — I think I had one video here somewhere, let me open it a bit bigger — there's one pixel right there that is quite bright, and if I play the video you can see it will draw a line across the final image stack if you don't get rid of it. I can't use a dead pixel map or a dark frame here, because the image was already being aligned as it was recorded, so this is a pixel I cannot correct properly. But with sigma clipping you can say: please discard any pixel values that are out of the ordinary. By default it's switched off, but if you have something like this, use sigma clipping to stop that bright pixel affecting your image data.
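Sigma clipping as just described — throw away values that are out of the ordinary before averaging, so a stuck bright pixel can't drag a streak across the stack — in a minimal per-pixel form (the 3-sigma default here is my assumption):

```python
import numpy as np

def sigma_clipped_stack(frames: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """frames: (n, h, w) aligned frames. At each pixel position, values more
    than `sigma` standard deviations from that position's mean are masked
    out (e.g. a hot pixel wandering through the aligned stack), and the
    remaining values are averaged."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0) + 1e-9
    keep = np.abs(frames - mean) <= sigma * std
    kept_sum = np.where(keep, frames, 0.0).sum(axis=0)
    kept_count = np.maximum(keep.sum(axis=0), 1)
    return kept_sum / kept_count
```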
And there's one more that I don't use a lot myself, but I think more and more people do, because they have a Dobsonian telescope rather than an equatorial mount, and that's field rotation. If the orientation of the planet changes throughout the recording — because you're on an alt-az mount instead of an equatorial one — you can try to correct for that by filling in these details here. That actually works really well; I've played with it quite a few times, especially when we're doing longer captures for animations, so we don't get that weird arcing effect — the animation just rotates and moves smoothly, as if you were watching it go across the ecliptic. Yeah, it works. It's a bit slow — it's not super optimized — and it's difficult... I don't care, I like using it, it works. Okay, I will not get rid of it then. Please don't, because it does work. Oh, and there's a bunch of features here that I'm currently working on, but that part should not be there right now. I think I've covered most of it. Yeah, I'm sure we can come back and revisit anything else another day if we want to do more advanced things, because again, this program is so vast that even after an hour and a half we still haven't covered everything. There are probably more questions than answers at this point, but this is a great start. I do want to launch into some of these questions, because obviously people are watching and waiting. Kristen asks: 'I've seen claims that AutoStakkert is the best way to debayer color planetary data. Can you explain why this is better than debayering in PIPP or other applications?' Oh, I can almost tell the answer there. Oh well, then please do. By the way, Damian Peach is here — hello Damian! I sent you an email, please respond. We love you, we really do. And just to prove that I am an English guy: ta-da, accent. He's saying hi to you, Emil — say hi back. Hi! Ah, okay. I hope to see him again soon, preferably somewhere like Pic du Midi, but I'm sure he knows. Oh, there you go. So, Kristen, let me answer that real quickly — let me fire up AutoStakkert real fast. The reason behind it is the way it chooses how to do the debayering. When you use auto-detect, it looks at the actual Bayer matrix: it looks at the first pixels, pulls out the four colors — the sub-pixels — and determines which one is which; that's how the automatic debayering works. You can see I'm doing the wiggling — this is not his screen, this is my screen. The way other programs work is that they default to one particular Bayer type, and here's another problem: you end up getting these squares showing up in a grid pattern, and that's because the wrong type of debayering is being used. AutoStakkert, on the other hand, not only tries to auto-detect, it knows it's a color image and compensates for the sub-pixels — at least that's what I've noticed about how the algorithm behaves — which is why it has a better chance of getting it right than other software. It's not always foolproof: every now and then I've spotted AutoStakkert incorrectly identifying something, and I have to go back in and set it manually. I'd like to add to that — it's a bit more technical. Sure. I was lost for words for a moment — it's a bit more technical, and it's about the debayering technique that it uses. Typically, when other software debayers image data like this — this is raw image data by Clyde Foster, who was imaging Saturn going behind the Moon with a color camera — what you see here is that I debayered this particular frame, but very coarsely and only as a preview, to show that it is actually a color image. That's what AutoStakkert does by default. But when stacking, it treats the frame as if it were not debayered: as just 25% red pixels, 25% blue pixels and 50% green pixels, and the empty parts of the image it doesn't use. It fills them up as the video moves about a little, which means you don't end up with any gaps, and you don't end up with any interpolation within a frame — you're just filling in the empty bins over time. The red, green and blue channels stay completely separated from each other, unlike with any debayering technique that works per frame, so the resolution is higher. That's the end result, basically.
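That "one frame is just 25% red, 25% blue, 50% green samples" view can be made concrete: split an RGGB mosaic into sparse channel planes with gaps where a channel was never sampled, instead of interpolating. A sketch assuming an RGGB layout (the actual pattern depends on the camera):

```python
import numpy as np

def split_rggb(raw: np.ndarray):
    """Split a raw RGGB frame into sparse R/G/B planes, NaN where a channel
    has no sample -- no interpolation at all. As the planet drifts across
    the sensor from frame to frame, stacking fills these empty bins over
    time, which is the behaviour described above."""
    h, w = raw.shape
    r = np.full((h, w), np.nan)
    g = np.full((h, w), np.nan)
    b = np.full((h, w), np.nan)
    r[0::2, 0::2] = raw[0::2, 0::2]   # R: top-left of each 2x2 cell
    g[0::2, 1::2] = raw[0::2, 1::2]   # G: top-right
    g[1::2, 0::2] = raw[1::2, 0::2]   # G: bottom-left
    b[1::2, 1::2] = raw[1::2, 1::2]   # B: bottom-right
    return r, g, b
```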
Okay, so Rafał — yeah, I think it's pronounced Rafał — asks: is it possible to extract and process the separate RGB video channels from a single video generated by a one-shot color camera? Ooh, that's an interesting one. In other words, can AutoStakkert separate each individual channel out? Right now, no, but... okay. I'm not sure if PIPP — the Planetary Imaging PreProcessor — can do that, whether it can break a video file up into RGB files. But in principle you could go back from a color file to a raw file, because the 25% red, 25% blue and 50% green pixels in every frame are not interpolated; you could get back to the raw data again. Probably not what the user wants to know, but I think it might be possible. Sorry, I'm a bit caught up in the idea of getting back to a raw video from a debayered video file — I think that would actually be possible.
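Emil's aside — that you could reconstruct raw Bayer data from a color video whose frames were never cross-channel interpolated — amounts to re-sampling each channel only at its own Bayer position. A hypothetical sketch, again assuming an RGGB layout; it only truly recovers the raw data under the no-interpolation condition he states:

```python
import numpy as np

def rgb_to_rggb(rgb: np.ndarray) -> np.ndarray:
    """rgb: (h, w, 3) color frame. Rebuild an RGGB mosaic by keeping, at
    each pixel, only the channel the sensor actually measured there."""
    h, w, _ = rgb.shape
    cfa = np.empty((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
    return cfa
```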
Okay — this is going to be the obvious question: when is version 4 coming out? Oh, the obvious answer is: soon. And for those who do not know where to find it, it's pretty obvious — autostakkert.com is basically where you find it, so keep your eyes glued to that and it will pop up. Yeah, I had a separate website where I kept track of beta versions, but I will put anything new on autostakkert.com instead. Yeah, that will be perfect. But seriously, I always say soon and I always want to release it soon — it really depends on how much time I can find to work on it. Right now it works, and it's much faster than version 3, especially for very big files, so I might say, okay, I can release it as it is — but I don't want to be too troubled with fixing things if it doesn't work for everyone, you know? So there's a trade-off on when I think it's good enough. I hope this year — by the end of this year. Excellent. There is another question here: George Hall asks, could you comment on using AutoStakkert for deep-sky imaging? Oh, I missed that one. It is tricky to use, but it's okay if you have data with no field rotation visible — I mean, it could still deal with that, but I will quickly show one recording that I opened earlier. It's a bit too bright now... I will open the dark file, this one, so you can use that already. You use it in surface mode, and you place your alignment anchor around a bright star, and you say: go and analyse, and see if it can track that star — track the movement of the field throughout the recording. So it's done that now: it has gone through all the frames and found the worst one and the best one, and I can say I want to stack pretty much all of it. But don't use the place-APs-in-a-grid option, there's no point; instead you might place an alignment point around every star that is not saturated — that will probably work. In general you want to stack almost all the frames. Oh, I still have it on the threshold mode, that will take forever... and disable drizzling. Oh, and these two buttons: you have a cancel and a pause option. If you did something wrong, press cancel and you can back out of it; pause just holds it. So right now it has stacked this file — this was four gigabytes of data and it stacked it in six seconds or so — and I can open that frame, and this is how I use AutoStakkert for deep-sky imaging. So it does work, obviously. It does work, yes, but for some recordings it will work better than for others. And like you said, it's important to have the dark files here, for example — you need to get rid of all the hot pixels, because otherwise they will give you ugly little streaks in the image. A flat field I don't bother with, because there's light pollution anyway, and I just find it much easier to make an artificial flat field afterwards in Photoshop — I like messing about with the data in that way. But for making the stacks I always use AutoStakkert like this. I then take multiple recordings and combine them manually in Photoshop to improve the signal-to-noise ratio further, because my equatorial platform has 16 minutes of running time and then I need to reset it, for example, so the next recording will be rotated a bit — so I combine those images manually after having stacked the raw files in AutoStakkert.
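That "artificial flat field afterwards in Photoshop" trick can be approximated in code as well: model the large-scale background (vignetting plus light-pollution gradient) with a very heavy blur and divide it out. A rough sketch of the idea — Emil's actual Photoshop workflow isn't shown, and the blur scale is an assumption; it only behaves well when real structure is much smaller than the blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def artificial_flat(image: np.ndarray, sigma: float = 100.0) -> np.ndarray:
    """Estimate the smooth background of a stacked deep-sky image with a
    large Gaussian blur, then divide it out as a stand-in flat field.
    Bright stars should ideally be masked first; omitted here for brevity."""
    background = gaussian_filter(image.astype(np.float64), sigma=sigma)
    background /= background.mean()                 # normalize like a flat
    return image / np.maximum(background, 1e-6)     # avoid divide-by-zero
```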
All right, this is going to be the absolute last question, because I really have to go to the bathroom: 'Do horizontal/vertical artifacts at the alignment point boundaries always imply oversampling? Is multi-scaling the first line of defense in that case?' I'm completely lost. Okay — you know how you get the alignment points; let's say they're all the same size, 64, for argument's sake. If I do all the auto-placement but I still get vertical and horizontal lines, does that imply that my image is oversampled, or is it something else? Think of the alignment points as tiles overlapping each other. When I process, I've noticed this effect too: sometimes I get weird vertical lines showing up — vertical artifacts — if I'm not using multi-scale. Okay, so that probably means the alignment points you were using were too small and they lost track, so the alignment is — well, not quite, but close to — random for each alignment point. Meaning that from one alignment point to the next — where one happened to track something, like here on a star and here in black space — you get an artifact on the edge between the two alignment points, because they no longer line up perfectly. It's because your alignment points were too small, I would say. Gotcha. So Joe, in your particular case — because you're probably not using multi-scale; not a lot of people do — if you have no data in one alignment point and data in the one next to it, you can quite easily drag data into an alignment point where it shouldn't be, and that creates the artifacts in the first place. So yes, multi-scaling in this case is your first line of defense, along with probably using slightly — oh yeah — different alignment points. But this is something you can easily check, whether the artifacts come from some weird alignment point placement or not: just stack once with one big alignment point and compare that stack to the one with lots of small ones, and see if anything weird is going on. All right, we are pretty much out of time, because I know Adam is going to be joining us very soon. Emil, this has been one heck of an hour and 45 minutes, and I think everybody's mind is blown. I've learned stuff I never knew about, like going, what the hell, the spacebar? Yeah, the spacebar — I never knew about that one. I do wish I'd known about it earlier. A prime example would be if you happen to have something fly across the frame: a bug, a bird, whatever. Exactly, that's exactly what it's for. I've been scratching my head for years going, how the hell do I deal with this problem? And I haven't even told you about the 99 other hidden features that are in there, just like this one. We could do that another time, otherwise we'll be here for a month trying to figure it all out. Emil, again, thank you very much for showing us all this. A little bit of a shameless plug, more than anything else: you have to understand, guys, Emil does this in his own time — his time, his money, his everything. He doesn't do this for a living, he doesn't do it for anybody specifically. So when you go to the download page, he does have a donate button, and I really do suggest you support him; any amount will help. Remember, he does this pretty much for free. To show support, we already donated some money earlier on, before we even started, and I really do, you know, pressure you people out there into doing the same. I'm not saying you have to pay for anything, but remember, this guy does it for free, and I really want him to keep doing it, so the best thing is to support him — and if you can't donate anything, I'm sure some kind words will always be much appreciated. Emil, thank you very much again. We're going to have you back here at some point, I'm pretty sure, when we get a stable version of version 4 out, and cover some of these hidden features — I think we should do a whole hidden-features session, because there are things in here where I'm like, damn it, I need to know this. But sadly we're running out of time, so without any more delay: I've got to run away for a moment for obvious reasons, we will have Adam Block on next, and we'll see you in just a moment.
Info
Channel: Woodland Hills Camera & Telescopes
Views: 15,967
Id: JIjXmRh1DE0
Length: 112min 22sec (6742 seconds)
Published: Tue Aug 11 2020