Splunk Knowledge Object: Detailed Discussion on the Summary Index

Captions
Okay, today we'll discuss summary indexing in Splunk. Summary indexes are really useful when we are dealing with a very large data set and want to report efficiently on a much smaller portion of its time range. For example, say our system indexes millions of events every day, and we want to report on just a single day's worth of data. Normally you would write a search against the base index, and it would scan through all those millions of events just to find the one day of events you need for the report. That is an inefficient way of handling the data, and this is the type of scenario where the summary index comes into the picture.

With a summary index, we create a scheduled search that periodically pulls from the main index and writes the data, summarized in a form useful for our reports, into the summary index. Because the scheduled search pulls data for a definite, small window each time it runs, say the past hour, fifteen minutes, or one day, it only touches a small amount of data, and that summarized data is pushed into the summary index. The final report you build then runs against the summary index, which means it runs over a much smaller chunk of data. That is why summary indexes perform so well, which is their main advantage and the reason we use them.

I have set up a similar kind of data in my tmdb app, so let me show you. I'll run a query over All time: I have indexed 617 events, each carrying an id, genre_ids, original_language, title, and release date. To see how these events are distributed over time, I run stats count by _time: on 1/29, the day before yesterday, I have 258 events; on 1/30, yesterday, 74 events; and on 1/31, 285 events.

My goal is to get the top genre IDs. Against the main index the query would simply be top genre_id, and that gives the final result we want. But picture the scenario where your main index has millions of events for each and every day: this query is going to take forever to complete, and that is with only a single user. With ten concurrent users running the same report, the whole system could collapse. In that case we implement the summary-index way of reporting. For verification purposes, I'll take a screenshot of this result, since the summary-index approach should give me the same output.
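For reference, a minimal sketch of the two ad-hoc searches from this step, assuming the demo events live in an index named tmdb (the video does not spell the index name out):

    index=tmdb | stats count by _time
    index=tmdb | top genre_id

The first shows the day-wise event distribution; the second is the naive report, which has to scan every raw event in the searched time range.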
Now let's go back to our diagram. There are several ways you can summarize data: report acceleration, data model acceleration, or summary indexing, which is what we are discussing here. In a previous video I already talked about data models and data model acceleration, please have a look at my playlist, and I will make a separate video on report acceleration. Since summary indexing needs a scheduled report, we will briefly cover report scheduling as well.

Now let's talk about point number two: how a summary index boosts performance. As I said, the scheduled report picks up small chunks of data periodically and puts them into the summary index, which means you are spreading the whole cost of the computation across time. The frequently running search extracts exactly the information we want, and each time it runs, its results are saved into the summary index we designate. We can then run our main searches and reports against this significantly smaller, summarized data set.

The main concern about populating a summary index is: what about the statistics? Here we are using top as the example, and summary indexes are also statistically accurate; we will see how that works.

Another significant property of summary indexing: say today is Thursday and I am generating a day-wise summary, so I create one for Thursday, and similarly for Friday, Saturday, Sunday, and Monday. Because I generate a summary for each day, the same day-wise summary can serve more than one report. If on Friday I want to report on yesterday, Thursday's summary can be used; if on Monday I want a report for the last five days, Thursday's summary is used there as well. The same summaries can be reused for overlapping time periods, which is another advantage of summary-based reporting.

Now let's talk about how to create a summary index. Creating a summary index is the same as creating any index. Go to Settings > Indexes, then New Index; I'll delete my old one first so I can recreate it on screen. Since I will be using it for my tmdb app, I'll name it tmdb_summary and keep everything else at its default value. This is the summary index, and notice there is no difference between it and a normal index: until I tell my saved search to treat it as a summary index, nothing here distinguishes it. Splunk also provides a default summary index, simply called summary, which you can use if you want.
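Creating the index through the UI ultimately writes an indexes.conf stanza. A minimal hand-written equivalent would look roughly like this (a sketch, assuming the default storage paths; note that nothing in the stanza marks it as a "summary" index):

    [tmdb_summary]
    homePath   = $SPLUNK_DB/tmdb_summary/db
    coldPath   = $SPLUNK_DB/tmdb_summary/colddb
    thawedPath = $SPLUNK_DB/tmdb_summary/thaweddb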
Back in Splunk Enterprise: we have seen how to create a summary index, so the question now is how to populate it. Going back to our mind map, there are two ways to push data into a summary index. One way is to create a scheduled report that uses the si commands, sistats, sitop, sichart, or sitimechart, to generate the summary data. Taking sistats as the example, its functionality is the same as stats, except that it also computes extra fields that maintain the statistical information about the summarized data. The other way is to use the addinfo and collect commands to push data into the summary index directly from a search query. To use the si commands you must put them in a scheduled report and enable summary indexing on that report; only then will the results be pushed into the summary index. With the collect command you can push straight from the search bar itself; we will see that as well.

So let's go back to Splunk Enterprise and click on the tmdb app. This is our initial data set over All time, and as stats count by _time showed, we have data for each day. We will create a search that runs once per day, takes that day's data, and puts it into the summary index. Go to Settings > Searches, reports, and alerts, select the tmdb app, and create a new report named "populate tmdb_summary". As you saw, my intention is to get the top genre IDs, so instead of top genre_id I will use sitop genre_id to get the summary-index version of the command.

Look at the output: it produces the genre_id field, but where plain top would create count and percent fields, sitop creates a separate field with a psrsvd_ prefix instead. This is the difference between sitop and top: sitop creates these internal fields, and Splunk recommends you do not use them in your final reporting search. How to read the data back out of the summary index we will see shortly, but this is the shape of the data that gets pushed in. Since this search covers a single day at a time, I will copy the query into the new report and save it.
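Side by side, the two variants sketched (same assumed index name as before):

    index=tmdb | top genre_id
    index=tmdb | sitop genre_id

The first returns genre_id, count, and percent; the second returns genre_id plus internal psrsvd_* bookkeeping fields intended for the summary index, not for direct use in reports.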
Now, from Edit, we will go through the report's settings one by one. First, edit the permissions so that the saved search configuration is created inside our tmdb app; as this is a demo I will just grant write to everyone and click Save. If I now go to $SPLUNK_HOME/etc/apps/tmdb/local, a savedsearches.conf has been created containing a stanza with this search name and the search string. That is why I edited the permissions first.

Next, Edit Schedule, the most important part. To schedule a report you tick the "Schedule report" checkbox, which exposes several options: schedule, time range, schedule priority, schedule window, and trigger actions. The first two, schedule and time range, matter most, so let me draw a simple picture; bear with me. Say the current time is 12 a.m. and I schedule this report for every one hour: this tick mark is the one o'clock run, this one the two o'clock run. The schedule is about when the report runs next: scheduled hourly, it runs at 12:00, then 1:00, then 2:00.

What, then, is the time range? The time range is what window of data the report picks up for its computation when it runs. If I give a time range of 30 minutes, the run at one o'clock picks up the data from 12:30 to 1:00, half an hour's worth, and pushes it into the summary index. But notice the report then never picks up the 12:00-to-12:30 data at all, because of the time range you gave it. With this kind of time range there is a possibility that your summary index will have gaps.

Suppose instead I give All time as the time range. When the report runs at one o'clock it takes all the data; if I have three days' worth of data, three days' worth goes into the summary index. When it runs at two o'clock, again All time, another three days' worth gets pushed in. Now your summary index has a lot of data overlap. These are the classic problems of summary indexing, which is why the best practice is: the report's time range should match its schedule. If you schedule hourly, it should pick up exactly one hour of data; if you schedule daily, exactly one day of data.

That leads to another problem we will have to solve later: historical data. Suppose I schedule the report hourly, picking up an hour of data per run; what about the data already sitting in my system for the last five days? The summary-index-populating search will never pick it up, so we still have a problem with historical data. We will solve that shortly, but we will follow this schedule-matches-range convention.

So, to our scheduling: as I told you, I have data for the 29th, 30th, and 31st. I will choose "Run every day", starting at 0:00, and for the time range, the last 24 hours: earliest "1 day ago" at the beginning of the day. In the preview, the data starts from the 31st at 12:00 a.m., which is important: it pulls all the data belonging to the 31st, today's date.
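In SPL time-modifier terms, aligning a daily schedule with a one-day window looks roughly like this (a sketch; the UI's "1 day ago / beginning of day" setting maps to these modifiers):

    earliest = -1d@d
    latest   = @d

A run fired at 00:00 then summarizes exactly the previous calendar day, so consecutive runs leave no gap and no overlap.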
What is schedule priority? When you have multiple scheduled reports, you can prioritize them: if you give this report the highest priority, it will be favored over all the others. Schedule window covers a different case: suppose you have ten scheduled searches with the same priority that would all try to start at the same time. If you give a five-minute schedule window, Splunk automatically spreads them out within that window, so the scheduled reports do not all start concurrently; they start one after another, but all within those five minutes. That is the concept behind the schedule window. Click Save.

We will not edit the Acceleration settings; that is report acceleration, which we will discuss in a separate video. Instead we go straight to Edit Summary Indexing. This is where you tell the report to put its results into the summary index: I tick the "Enable summary indexing" checkbox and select the summary index I created, tmdb_summary.

You will notice there are also "Add fields" options here. These are not used for data or field extraction from the summary index; they are used for annotating the summary-index events with extra data. Annotating means the same thing Splunk does automatically when it tags all your events with the host, source, and sourcetype fields; here you can create your own annotations. The best example: in a normal scenario you have more than one scheduled report, all pushing into the same summary index. When we later read from the summary index on behalf of a particular report, it is important to fetch only the data belonging to that report. So the usual practice is to annotate all the events generated by a report with the report's name. I will add a field search_name, so all the events in the summary index created by this "populate tmdb_summary" report will carry search_name = populate tmdb_summary. Click Save.

We have now created our saved search with everything: schedule, summary indexing, annotation. But the report is due to start on the hour, which is another half an hour away. Rather than wait, I will quickly change the schedule so it runs right away, and afterwards change it back to daily. Before that, note how we will access the summary index: exactly like a normal index, index=tmdb_summary, and currently it has no data. Since I am doing a demo, I will switch to a cron schedule; if you do not know cron syntax, there is a website called crontab.guru. Its example is every five minutes, so I will copy that and adjust it to every two minutes. I will keep the time range at the last one day's worth of data; since we do not have any data in our summary index yet, we are safe here.
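At this point the report's stanza in the app's local savedsearches.conf would contain roughly the following (a sketch: the key names are standard savedsearches.conf settings, while the search string and index name are this demo's assumptions):

    [populate tmdb_summary]
    search = index=tmdb | sitop genre_id
    enableSched = 1
    cron_schedule = */2 * * * *
    dispatch.earliest_time = -1d@d
    dispatch.latest_time = now
    action.summary_index = 1
    action.summary_index._name = tmdb_summary
    action.summary_index.search_name = populate tmdb_summary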
I click on Save; the next run time shows 23:26, another minute away, so let's wait a minute and then see how the report pulls data. Once the minute is up, click "View recent" to see what it did. It ran at :26 but did not create any events; why? To investigate, click Run on the report: it is not pulling any data for the last one day. Looking closer, the latest time was set to this morning at 12 a.m., so let's set latest to "now" and apply. Then, in Edit Schedule, the same fix: earliest 1 day ago, latest equals now, apply. The problem was with my time range, which is why nothing was picked up. The next run is a couple of minutes away, so let's wait and then see how it works.

Okay, the wait is over: this time the run processed the 285 events. Has our summary index been populated? We access a summary index the same way as any index, index=tmdb_summary, and it now holds 19 records; only 19, because we are doing a top. To confirm it matches, run the report itself and you will see it pulls out the same 19 records. But this is only the 31st's data pushed into the summary index; per stats count by _time we still have the 29th and the 30th to push. This is exactly the historical-data problem I discussed earlier, so now we have to tackle the historical data.

Before that, let me check the search's schedule so it does not fire again mid-demo; has it already pushed more data? No, it has not. Better still, I will simply disable the search for now; it will not run again before we finish this part of the demo.

Now let's see how we can backfill this historical data into our summary index. For that we will use the collect command. As I said, you can use collect to push to the summary index from the search bar itself. Previously we took the help of the scheduled report and its summary-indexing action to do the pushing; now we will use collect directly. I will take the same query we used in the report, sitop genre_id, and push its output straight into the summary index. First I set the time range: let's push the 30th's data first, from January 30 at 00:00, midnight, up to January 31 at 00:00, i.e. through 11:59 p.m. on the 30th.
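In SPL terms, that picker setting corresponds to something like this (a sketch: the date literals follow the demo's January 2019 data, and the index name is assumed as before):

    index=tmdb earliest="01/30/2019:00:00:00" latest="01/31/2019:00:00:00"
    | sitop genre_id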
If I apply this, I get the 30th's worth of data and its sitop output, and now I have to push it to the summary index. The first thing to do is use the command called addinfo. As you saw before, when we took the help of the scheduled report we used the si commands, and in that case Splunk automatically handles all the statistical information about the data. If you go to the Splunk documentation, all the si commands create that special kind of field we saw earlier with sitop: instead of giving a plain count, they produce fields with names of the form psrsvd_<type>_<fieldname>, with a separate type per statistic, such as ct for count, gc for group count, nc for numeric count, and so on. We cannot use those fields in our final search; the summary index simply stores its data in its own internal way.

Since we are pushing the data from the search bar using collect, we become responsible for gathering that statistical information ourselves, and that is what addinfo is for. The addinfo command generates four extra columns: info_min_time, info_max_time, info_search_time, and info_sid. info_max_time is the maximum time of the range you are searching, info_min_time the minimum, info_search_time is when the search ran, and info_sid is the search ID.

Now the collect command itself. The syntax is the command name followed by an index argument naming the summary index: collect index=tmdb_summary, our summary index, into which we will push this data. There is also an argument called testmode. If I set testmode=true, you can still see how the query runs, but nothing is written to the summary index, just like a dry run for testing: if I query the summary index it still has 19 records, nothing new pushed. Setting testmode=false will actually push to the summary index.

One more thing before we push: when we implemented the scheduled search, we also implemented the annotation, the search_name, remember. The same thing can be done with collect through its marker argument, which takes key-value pairs; I will give the same pair, search_name = populate tmdb_summary. Now if I run this, it pushes only the 30th's top data to our summary index. Running it takes a moment, and afterwards we have 36 events. And look closely: when data is pushed to a summary index, quite a few things change about the stored events.
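Putting the pieces together, the full backfill push sketched end to end (run with testmode=true first as a dry run, then false to write; the escaped quotes inside marker follow the documented syntax for values containing spaces):

    index=tmdb earliest="01/30/2019:00:00:00" latest="01/31/2019:00:00:00"
    | sitop genre_id
    | addinfo
    | collect index=tmdb_summary testmode=false marker="search_name=\"populate tmdb_summary\""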
First, look at the source. When I pushed from the scheduled report, the source was the scheduled report's name; when I push from the search bar, the source becomes a file path, something like C:\Program Files\Splunk\var\spool\splunk\ followed by a hash and _stash_new. That directory is where the search output is temporarily written, and Splunk automatically indexes those files into the summary index you named. This is how summary indexing works in the background.

Now the sourcetype: it has become stash, and there is a reason behind it. We are, to an extent, replicating data from our main index into a summary index, so ideally that data would go through licensing again, but Splunk does not use any license for summary indexing. How does it know not to charge? By this sourcetype being stash. That is one of the reasons the sourcetype is changed to stash: you will not incur any license cost for your summary-indexing implementation.

You can also see the annotation we added: the search_name shows up on the events, along with our genre_id and the internal count field. So we populated the 30th using this collect search, and the 31st using the "populate tmdb_summary" saved search; let's do the 29th the same way. I change the time range to the 29th, click Apply, and push it over. It pushed as well, and in the same way: because we ran from the search bar, it created a new stash file in the spool directory.

We have now pushed three days' worth of data into the summary index: for each day we have the genre IDs and their corresponding top values. To get the overall top values we need to query it the right way, and given how the data is stored, how we get it back is the more important part. Let me also check we are not missing any point here: we discussed filling the gap with a search query, the collect command; there is another way to fill the gap, which we will see shortly, but first let's get our final top output back from the summary index.

After index=tmdb_summary we generally filter by our search name, because when more than one search feeds a summary index it is always good to be able to separate them; that is the very reason we implemented the search_name annotation. With the filter in place we have our 53 events. When you access data from a summary index, you access it the same way you put it in, with one difference: where you pushed with the si version of a command, you read with the non-si version. I pushed using sitop, so I read using top. I only care about my genre_id; I do not need to bother with how Splunk internally stored the data in the summary index. For me it is an abstraction: I know how I pushed the data, and that tells me how to pull it back.
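The retrieval query from this step, sketched (the search_name value is the annotation we set earlier):

    index=tmdb_summary search_name="populate tmdb_summary"
    | top genre_id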
If I run this over All time, it should produce a report identical to the one we saved earlier, and if I match them: genre 18, count 153, 24.797407 percent here, and genre 18, count 153, 24.797407 percent in the screenshot. Statistically, this report is spot-on accurate. And look at what achieved that accuracy: I took the whole top genre_id computation and divided it across three days; each day I took that day's data and put it into the summary index, and at the end I accessed it the same way I pushed it. This is the beauty of the summary index: you spread the whole computation across the days, each in its own schedule or interval.

So we have seen how to backfill the summary index with the collect command, using a search. There is another way to backfill: Splunk provides a script for it, and it is an interesting one. These scripts live in the Splunk bin folder; if I scroll down there, you will find one called fill_summary_index.py. We can use this as well to fill a summary-index gap. To demonstrate, I will first go to my summary index and delete the two days I pushed with collect, removing both in the same way, so that my summary index is back to only the 31st's data, the 19 events. Then I will do the historical data push in a different way.
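As an aside, the video deletes those events interactively; a search-language equivalent would be the delete command, which requires the can_delete capability. The filter below is my assumption, relying on the fact noted above that report-pushed events carry the report name as their source while collect-pushed events carry the spool-file path:

    index=tmdb_summary source!="populate tmdb_summary" | delete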
Back to the script, then. If you look at the Splunk documentation, fill_summary_index.py accepts a lot of inputs: the earliest time, the latest time, the app context, the names of the saved searches (comma separated), the user context as the owner, the summary index name, authentication, the number of seconds to sleep between each run, a -j option to run searches concurrently, and an input called dedup.

Before going through them, let's talk about how the script works. You pass it a saved search name. Let me go back to Settings > Searches and re-enable the search I disabled earlier; the next schedule is at twelve o'clock, so we have another 13 minutes. The point is this: whatever time range that scheduled search is configured to cover per run, the fill_summary_index script works on windows of the same size; that is why we pass it the scheduled report's name, so it can automatically determine that report's time range. Then, whatever earliest and latest times you give the script, it divides that whole range by the scheduled report's time range, creating a series of small windows; for each window it runs the search, picks the data from the index, and pushes it into the summary index. While pushing, if you set dedup to true, then for any window where the summary index already has data, it will not push. Note, though, that it only checks by time; it has no capability to check at the data level. That is important.

To run the script we need the Splunk CLI, so I open a shell and go to my $SPLUNK_HOME/bin folder, where the script lives. It is a Python script, and we run it through the Splunk command: splunk cmd python, the script name, then the arguments. For the app name I give my tmdb app; for the search name, "populate tmdb_summary" (we have another 11 minutes before that search would fire and push data itself). For earliest I give two days back, and today being the 31st that means the 29th; for latest, one day back. For -j, the number of concurrent searches, I give 8; owner admin; for authentication my Splunk user ID and password; and dedup equals true. Actually, just to be on the safe side, I will give three days for the earliest.

Let's see how it works. Running the command, you can watch what it does: it takes the query, looks up the saved search's schedule timing, divides the whole period into one-day windows based on it, and pulls the data window by window, running eight concurrent searches with this authentication. Once it completes, check the summary index: we had 19 events, now we have 36, so it did push data into the summary index. Don't worry about the exact numbers; the point I wanted to show is that if you give the precise time range, this pulls the same data into your summary index as we did with the collect command. So this is yet another way to push data into a summary index, usable for backfilling or for filling in historical data.

Now, we also talked about data overlap. What if I run the same command again? Ideally it would pick up the same data and push it into our summary index again, but since we passed dedup=true I do not think it will; the index should still show 36 events. And yes, it does, because of dedup=true. So we have seen the dedup use case too; but next I actually want to push duplicate data, to show you something.
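The invocation from this demo, sketched (the option names follow the script's documented flags; the password placeholder and the relative times are illustrative):

    cd $SPLUNK_HOME/bin
    ./splunk cmd python fill_summary_index.py -app tmdb -name "populate tmdb_summary" \
        -et -3d -lt -1d -j 8 -owner admin -auth admin:<password> -dedup true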
So I set dedup to false and push again; this should push duplicate data into the summary index. (Before that, I disable the scheduled search once more so it does not interfere with us.) And indeed, now we can see that duplicates have been pushed to the summary index. To find out what duplicate data the summary index holds and over what range, or whether there are any duplicates or any gaps at all, there is a command called overlap. You can use it to check the status of your summary index: it will report "found a gap" or "found overlap" for the saved search, along with the search ID and the time range concerned. For a gap, you can use the fill_summary_index Python script or the collect command to push in the missing data; for an overlap, you can delete that particular date range of data from the summary index. So the overlap command is useful for getting the status of your summary index. I believe the overlap command also runs a Python script, which you can find under the default search app that ships with Splunk: if I go to apps, then the search app, and into its bin folder, there is the script, sumindexoverlap.py; that is what the overlap command runs.

One last operational note on fill_summary_index.py: it can happen that the script halts, or because of some issue cannot complete. To fix that, go to your particular app, in our case etc/apps/tmdb, where there is a log folder; if the script stopped midway, a file will have been created there. Delete that file and rerun the script, and it will run fine.

We also talked about summary-index gaps, and there are other ways a gap can happen beyond the historical-data case: Splunk itself can go down, which creates a gap; or the summary-populating search for a particular schedule runs longer than its interval, so the next search is already queuing up; or you mistakenly make the summary-populating search real-time. In all of those cases gaps can appear, and you now know what to do when a gap happens: we have the Python script and the collect command to fill it. So this is how the whole summary-index workflow fits together. Hopefully this was helpful to you guys; see you in the next video.
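To recap the health check from this walkthrough, a sketch (index and annotation names as used in this demo):

    index=tmdb_summary search_name="populate tmdb_summary" | overlap

Gaps it reports can be backfilled with collect or fill_summary_index.py; overlaps can be cleaned up by deleting the affected time range from the summary index.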
Info
Channel: Splunk & Machine Learning
Views: 14,653
Keywords: splunk, how-to, summary index, sitop, sistats, addinfo, collect, overlap
Id: joZ3jokt9qs
Length: 51min 18sec (3078 seconds)
Published: Thu Jan 31 2019