TCP: Retransmission & Congestion Control

Video Statistics and Information

Captions
Welcome to this lecture in our series on computer networks. We have been looking at the TCP/IP model, and within that at the TCP protocol and how TCP provides reliable data transfer. Of the reliable data transfer mechanisms we have already covered the connection-oriented setup, sequencing, and flow control, so what we will look at today is retransmission and congestion control. Retransmission takes care of lost or erroneous packets, and congestion control takes care of overload in the network, that is, when there is too much data in the network and the network cannot handle the amount of data that has to be transmitted. We will see how TCP handles that as well.

Let's start with the retransmission mechanism. How do we determine when a retransmission has to be done? Retransmission is needed only when something is lost or received in error, in which case you will not get back an acknowledgement. So a lack of acknowledgement becomes the cause for retransmission: each transmission starts a timer, and if no acknowledgement arrives before the timer expires, the packet is retransmitted. This is the basic retransmission mechanism used in most networks, and it is what TCP uses as well. The main challenge is how to set the timer value, and this is where TCP uses some carefully engineered mechanisms.

Let's first look at some retransmission scenarios. The first, typical case is a lost or erroneous packet: a packet is sent and lost, no acknowledgement comes back, the sender times out, retransmits the same packet, and gets an acknowledgement for it. Once the acknowledgement arrives, the sender knows the packet got through and continues transmitting. A second scenario is that the packet reaches the other side correctly but the acknowledgement is lost. Again the sender has not received an acknowledgement, so it times out and retransmits the packet. The complication here is that the receiver now has two copies of the same packet, both received correctly; the retransmission could have been avoided, but since no acknowledgement arrived it happened anyway, and the resulting duplicate packet is something the receiver has to handle. The third scenario is that the packet is delivered and the acknowledgement is also on its way, but the acknowledgement is delayed, or the timeout fired earlier than the actual round-trip time. In that case the sender unnecessarily times out and retransmits the packet even though everything was actually going fine: because of the delayed acknowledgement, or because the timeout value was less than the actual round-trip time, it ends up sending a duplicate packet.
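To make the timer-driven mechanism concrete, here is a minimal stop-and-wait sketch: send a segment, wait for its acknowledgement, and retransmit if the timer expires first. This is not TCP itself; the UDP-style socket, the fixed timeout, and the ACK format are assumptions for illustration (TCP keeps many segments in flight and adapts its timeout, as discussed next).

```python
import socket

def send_with_retransmission(sock, packet, dest, seq, timeout=1.0, max_tries=8):
    """Send one segment and retransmit on timeout until an ACK covering `seq` arrives.

    A simplified stop-and-wait sketch of timer-driven retransmission; the ACK
    payload is assumed to carry the cumulative sequence number as text.
    """
    sock.settimeout(timeout)
    for _attempt in range(max_tries):
        sock.sendto(packet, dest)            # (re)transmit the segment
        try:
            ack, _ = sock.recvfrom(2048)     # wait for an acknowledgement
            if int(ack.decode()) >= seq:     # cumulative ACK covers our segment
                return True                  # delivered; stop the "timer"
        except socket.timeout:
            continue                         # no ACK before the timeout: retransmit
    return False                             # give up after max_tries attempts
```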
The third scenario, the spurious retransmission, is something we definitely want to avoid, and the second one is also something we would like to handle better; the first one, of course, has to be handled in any case. So how does the retransmission mechanism take care of all three scenarios?

How long should the sender wait? That is the basic question. If it waits too short a time, as we saw, it wastes retransmissions by retransmitting unnecessarily; if it waits too long, there are excessive delays when packets really are lost. What TCP does is set the timeout as a function of the round-trip time (RTT), which means it needs to know what the round-trip time is. We expect an ACK to arrive after one RTT, plus an adjustment factor to account for queuing delays and other variation, and that becomes the basis of the timeout calculation.

The next question is how the sender gets this round-trip time, because the RTT itself varies with the characteristics of the network: if the network is lightly loaded the RTT may be small, if it is congested the RTT may be high, and so on. TCP gets an estimate of the RTT by watching the ACKs, that is, it actually measures the RTT. The original retransmission algorithm used in TCP did the following: measure a SampleRTT for every segment/ACK pair, that is, record the time at which a segment is sent and the time at which its acknowledgement is received, and take the difference as the SampleRTT. You do not rely on a single sample; you keep taking samples and compute a weighted average over them, which becomes the EstimatedRTT. The weighted average is calculated as EstimatedRTT = α × EstimatedRTT + β × SampleRTT, where the previous estimate is weighted by a factor α and the current sample by a factor β, with α + β = 1. Normally α is chosen between 0.8 and 0.9 and β between 0.1 and 0.2, so the running estimate gets the larger weight and the new sample the smaller one. This is done because we do not want a slight variation in one sample to completely offset the estimate: if one particular packet alone was delayed for a long time, we do not want that single SampleRTT to upset the entire EstimatedRTT calculation. The timeout value is then set to twice the EstimatedRTT. This is basically how the original timeout value was set. If you plot the measured sample RTTs against the estimated RTT over time, you can see that the EstimatedRTT tracks the sample RTTs fairly closely.
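As a concrete illustration, here is a small sketch of the original exponentially weighted average estimator just described, with the timeout set to twice the estimate. The class and method names, and the particular choice α = 0.875, are my own for illustration.

```python
class OriginalRttEstimator:
    """Original TCP RTT estimation (a sketch):
    EstimatedRTT = alpha * EstimatedRTT + (1 - alpha) * SampleRTT,
    Timeout = 2 * EstimatedRTT."""

    def __init__(self, alpha=0.875):
        self.alpha = alpha          # weight of the running estimate (beta = 1 - alpha)
        self.estimated_rtt = None   # seconds

    def on_ack(self, sample_rtt):
        """Feed one measured SampleRTT (send time to ACK arrival) into the average."""
        if self.estimated_rtt is None:
            self.estimated_rtt = sample_rtt              # first sample seeds the estimate
        else:
            self.estimated_rtt = (self.alpha * self.estimated_rtt
                                  + (1 - self.alpha) * sample_rtt)
        return self.estimated_rtt

    def timeout(self):
        """Retransmission timeout: twice the estimated round-trip time."""
        return 2 * self.estimated_rtt
```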
But there is a flaw in this approach. Look at what can happen in these two scenarios. Suppose there was an original transmission, a timeout occurred, the segment was retransmitted, and then an acknowledgement came back. Assume the acknowledgement was actually for the retransmitted packet and not for the original one. Since the sender never matched an acknowledgement to the original transmission, the SampleRTT calculation takes the time at which the ACK was received minus the time of the original transmission, and treats that entire long duration as the SampleRTT, when really the SampleRTT should have been measured from the time of the retransmission to the ACK. Or it could be the other way around: the original packet got through and an acknowledgement was on its way, but before that acknowledgement arrived the packet was retransmitted. If you now assume the acknowledgement was for the retransmission, you measure a SampleRTT that is too small, whereas the actual round-trip time is the interval between the original transmission and the ACK. So whenever a retransmission takes place there is an ambiguity: when an acknowledgement is received, we do not know whether it is for the original transmission or for the retransmitted packet. This can completely throw off the SampleRTT measurement and hence distort the EstimatedRTT.

What do we do about this? If you look at it carefully, the ACK does not really acknowledge a transmission; it only acknowledges the receipt of the data. In the first case, if you assume the ACK belongs with the first transmission, the SampleRTT comes out too large; if you assume it belongs with the second transmission, the SampleRTT comes out too small. A solution was given by Karn and Partridge, and it is very simple: only take RTT samples for segments that were sent exactly once; if a segment had to be retransmitted, do not use that sample in the EstimatedRTT calculation at all. Automatically, both of the ambiguous cases above are excluded from the calculation, because in both of them a retransmission occurred.
But when a retransmission does occur, something is clearly going on with the round-trip times or with the network, perhaps congestion. So what Karn and Partridge additionally proposed is that in those cases you simply double the timeout after each retransmission. This Karn/Partridge algorithm was introduced into TCP once these kinds of problems started showing up.

Even this approach has a limitation: the simple weighted average we have been computing for the EstimatedRTT does not take the variance of the RTT into account. If the variance among the sample RTTs is small, the estimate is quite accurate; but if the variance is large, the estimate is not all that good. So instead of calculating only the average, it would be good to also take the variance into account. This is what was proposed by Jacobson and Karels, and the Jacobson/Karels algorithm is what current TCP implementations use. The idea is to look at the difference between the SampleRTT and the EstimatedRTT, that is, how much the samples deviate, keep a running average of that deviation, and boost the timeout by that factor depending on whether the variation is large or small.

Here are the calculations that take the deviation into account. First compute the difference between the sample and the estimate: Difference = SampleRTT − EstimatedRTT. The estimate is then updated as EstimatedRTT = EstimatedRTT + δ × Difference, which you can see is very similar to the earlier weighted-average calculation, just rewritten with weights δ and (1 − δ). Next we keep a running average of the deviation: Deviation = Deviation + δ × (|Difference| − Deviation). The Difference can be positive (the sample is above the estimate) or negative (below it), so we take its absolute value, see how far it is from the current Deviation, weight that by δ, and update. The Deviation is therefore an estimate of the variance of the sample RTTs around the EstimatedRTT, that is, of how much the samples vary from the mean. Finally, the timeout is set using both quantities: TimeOut = μ × EstimatedRTT + φ × Deviation, where typically μ = 1 and φ = 4. These values were determined empirically, by running experiments on real networks, and they track the RTT quite closely. The factor δ is always a value between 0 and 1.
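Below is a small sketch of the Jacobson/Karels estimator described above, with Karn's rule (skip samples for retransmitted segments) and the exponential timeout backoff folded in. The parameter values (δ = 0.125, μ = 1, φ = 4) and all names are illustrative assumptions, not copied from any particular implementation.

```python
class JacobsonKarelsEstimator:
    """Adaptive timeout: EstimatedRTT plus a multiple of the mean deviation.
    A sketch of the algorithm described in the lecture, not a specific TCP stack."""

    def __init__(self, delta=0.125, mu=1.0, phi=4.0, initial_timeout=1.0):
        self.delta, self.mu, self.phi = delta, mu, phi
        self.estimated_rtt = None
        self.deviation = 0.0
        self.rto = initial_timeout        # current retransmission timeout (seconds)

    def on_ack(self, sample_rtt, was_retransmitted):
        """Update the estimate from one ACK. Per Karn/Partridge, samples for
        segments that were retransmitted are ignored (the ACK is ambiguous)."""
        if was_retransmitted:
            return self.rto
        if self.estimated_rtt is None:
            self.estimated_rtt = sample_rtt
            self.deviation = sample_rtt / 2          # a common initial choice
        else:
            diff = sample_rtt - self.estimated_rtt
            self.estimated_rtt += self.delta * diff
            self.deviation += self.delta * (abs(diff) - self.deviation)
        self.rto = self.mu * self.estimated_rtt + self.phi * self.deviation
        return self.rto

    def on_timeout(self):
        """Per Karn/Partridge, double the timeout after each retransmission."""
        self.rto *= 2
        return self.rto
```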
You can see from this calculation that if there is not much deviation, the Deviation term will be small and the timeout value will be close to the EstimatedRTT, which makes sense. But if the deviation is large, then because the Deviation is multiplied by φ and added to the estimate, the timeout value becomes larger. So the timeout adapts depending on whether there is much deviation or not. This is how the Jacobson/Karels algorithm takes care of both the mean and the deviation, and it is what is used in current TCP/IP implementations. In the implementation, since there would otherwise be a lot of multiplications and divisions to carry out, the factors we multiply by are typically chosen to be powers of two, so that each multiplication can be implemented with just a shift operation; remember that multiplying by four is equivalent to shifting left by two bits. That is the kind of optimization done when the calculations are actually carried out.
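For example, here is a sketch of how the update can be done entirely with integer shifts when the gains are powers of two. The scaling choices here (gain 1/8 for the estimate, 1/4 for the deviation, timeout = estimate + 4 × deviation) follow the spirit of classic implementations but are written purely for illustration.

```python
def update_rto_fixed_point(srtt, rttvar, sample_rtt):
    """Integer-only RTT update using shifts instead of multiplications.

    All values are in the same time unit (e.g. milliseconds). Gains of 1/8 and
    1/4 become right shifts by 3 and 2; the factor of 4 on the deviation
    becomes a left shift by 2.
    """
    diff = sample_rtt - srtt
    srtt += diff >> 3                       # srtt += (1/8) * diff
    rttvar += (abs(diff) - rttvar) >> 2     # rttvar += (1/4) * (|diff| - rttvar)
    rto = srtt + (rttvar << 2)              # timeout = srtt + 4 * rttvar
    return srtt, rttvar, rto
```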
This is basically how the adaptive retransmission mechanism of TCP works. Another addition can be made to this retransmission mechanism. As we saw, a purely timeout-based retransmission is inefficient; let's take an example of what normally happens. Since TCP is a sliding-window protocol, once a retransmission is triggered you start transmitting again from the lost packet onwards. Say I transmitted packets 0, 1 and 2; packets 0 and 1 reached correctly but packet 2 was lost, and I also transmitted packet 3 after that, which reached the other side correctly. I will receive acknowledgements for packets 0 and 1, and when I receive them I move my window and send two more packets. When those packets arrive, the receiver sends back acknowledgements, but which packet do they acknowledge? They only acknowledge packet 1, because packet 2 has been lost and TCP acknowledgements are cumulative: you only acknowledge up to the last packet received in order. Eventually, even though those later packets were transmitted, the sender times out because it never sees an acknowledgement for packet 2, and when packet 2 times out it retransmits packets 2, 3, 4 and 5, of which 3, 4 and 5 had actually already reached the other side correctly. Only packet 2 needed retransmission, but we have done additional retransmissions purely because of the timeout mechanism.

How can this inefficiency be removed? Look at what is actually happening: packet n (here packet 2) may have been lost, but packets n+1, n+2 and so on are getting through, which means the receiver keeps receiving packets and keeps sending acknowledgements saying it is still waiting for the nth packet. These repeated, duplicate acknowledgements, each saying "I have received up to packet 1", are an indication for us: the network is delivering packets and later packets are being received, but one particular packet, the nth, must have been lost. So when you receive a certain number of such duplicate acknowledgements, you can take it as a hint that the packet was lost while others are going through, and retransmit it earlier, without waiting for the timeout to occur. What TCP normally does is use three duplicate ACKs as the trigger for a retransmission. In our example, if I had received two more duplicate acknowledgements after the first one, before the timeout, I could have retransmitted packet 2 without waiting for the timeout. This is the fast retransmit mechanism used in TCP to make retransmissions better.

How effectively does fast retransmit actually work? It works very well when you have long data transfers, which means there is a high likelihood of many packets in flight, so if just one packet is lost you will see several duplicate ACKs and can retransmit quickly. It also works well only when the window size is large, because otherwise the window itself restricts how much data can be in flight, and it works best when packet losses have low burstiness, that is, only one or two packets are lost at a time rather than long bursts. In those cases the fast retransmit strategy works well. But it is not very effective for web-style traffic, because most web transfers are short, maybe ten packets or so of short HTML files and small images, so there may not be many packets in flight and fast retransmit hardly gets a chance to kick in. In those cases the user tends to reload the page or tab more often and transfers are not particularly fast. But when large amounts of data are being transferred, this kind of mechanism is very useful.
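As an illustration of the duplicate-ACK trigger, here is a minimal sketch of the sender-side bookkeeping: count duplicate cumulative ACKs and retransmit the missing segment when the count reaches three. The `retransmit` callback, the ACK semantics (next segment expected), and the threshold constant are assumptions for the sketch.

```python
DUP_ACK_THRESHOLD = 3   # TCP's usual trigger: three duplicate ACKs

class FastRetransmitSender:
    """Minimal duplicate-ACK bookkeeping for fast retransmit (a sketch)."""

    def __init__(self, retransmit):
        self.retransmit = retransmit   # callback: retransmit(seq) resends one segment
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_seq):
        """Process one cumulative ACK; ack_seq is the segment the receiver still expects."""
        if ack_seq == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                # Later segments are clearly arriving but this one never did:
                # resend it now instead of waiting for the retransmission timer.
                self.retransmit(ack_seq)
        else:
            self.last_ack = ack_seq    # new data acknowledged: the window can advance
            self.dup_count = 0
```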
Next we will look at the congestion control mechanism employed in TCP. Remember that congestion is really a phenomenon of the network: it is the network that gets congested, and one would normally expect the network to handle congestion. But TCP, which runs on top of IP, does not have that luxury, because IP does not do congestion control; so TCP, in order to provide reliable data delivery, also tries to handle congestion in the network.

Let's look at how TCP does that. First of all, TCP has to identify that congestion has occurred. It has no direct mechanism for that, so it has to rely on some kind of implicit feedback from the network. TCP assumes that the IP network is a best-effort network with some particular capacity at any given time, and each TCP source tries to determine what the network capacity is for itself. It does so using implicit feedback, and the only feedback we have is the acknowledgement, so the same acknowledgements are used as feedback about congestion as well. What has been observed in current networks is that most of the time, when acknowledgements stop coming back, it is because of congestion in the network and not because an erroneous packet caused the receiving end to drop it. So TCP assumes that a missing acknowledgement must be due to congestion, and it immediately starts taking congestion control steps.

To do this, TCP first needs to know the available capacity in the network, and once it has determined that, it has to adjust to changes: if more capacity becomes available it can send more data, and if less is available it has to reduce the amount of data it sends. TCP broadly uses two techniques for this: one called slow start and one called additive increase, multiplicative decrease. Let's look at these two mechanisms.

The first is the additive increase, multiplicative decrease (AIMD) technique. The idea is that you want to adapt to changes in the available capacity. To do this, TCP introduces a new per-connection variable called the congestion window: for every TCP connection, just as there is an advertised window, TCP also maintains a congestion window. The congestion window is not sent by anyone else; it is maintained by the sending TCP and adjusted based on its own view of the congestion in the network. The congestion window is used as an indication of how much data can be sent such that the data will get through without congestion. Before calculating the effective window size, you calculate a maximum window size, which is the minimum of the congestion window and the advertised window; remember the advertised window is what comes from the receiver side through the flow control mechanism. So let's say the advertised window is 800 bytes but the congestion window is 600 bytes.
Then the maximum window size used for the effective window calculation is the minimum of these two, that is 600 bytes; congestion control rather than flow control is now determining how much data can be transmitted. By taking the minimum of the two you decide whether it is congestion or flow control that limits the amount of data to be sent: whichever of the congestion window and the advertised window is smaller becomes the maximum window size. The effective window size is then this maximum window size minus (LastByteSent − LastByteAcked). If you remember how flow control works, you keep track of how many bytes have been sent but not yet acknowledged and subtract that from the advertised window size; we do the same thing here, and the result tells us how many more bytes can be transmitted into the network.

The idea, then, is to increase the congestion window when congestion goes down and decrease it when congestion goes up: we have one variable, the congestion window, which we increase when the network is doing well and decrease when it is congested. The question is, how do I know whether there is congestion or not? The answer, again, is a lack of acknowledgement. A lack of acknowledgement means a timeout will occur, a timeout is an indication that a packet was lost, and TCP assumes that a lost packet can only be due to congestion. So the moment a timeout occurs, you start reducing your congestion window.

By how much? This is where the additive increase, multiplicative decrease algorithm, AIMD, comes in. Whenever you have to reduce the window, you divide the congestion window by two; that is why it is called a multiplicative decrease. And when you find that packets are going through, that is, acknowledgements are coming back in time, you increase the congestion window by one packet per RTT. For example, suppose we start with a congestion window of one, so only one packet is sent into the network. When I get the acknowledgement for it within one RTT, before a timeout occurs, that is an indication that things are going fine, so I increase the congestion window by one packet and can now send two packets. When I get the acknowledgements for those two packets, I increase the congestion window by one more, so it becomes three. This is called the linear increase, or additive increase, phase.
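Here is a tiny sketch of the window bookkeeping just described, showing the maximum and effective window calculation; the variable names and the in-flight byte count in the example are illustrative assumptions.

```python
def effective_window(congestion_window, advertised_window,
                     last_byte_sent, last_byte_acked):
    """How many more bytes the sender may put into the network right now.

    MaxWindow is whichever of congestion control or flow control is the
    tighter limit; the bytes already in flight are then subtracted.
    """
    max_window = min(congestion_window, advertised_window)
    in_flight = last_byte_sent - last_byte_acked
    return max(0, max_window - in_flight)

# The lecture's 800/600-byte example, with an assumed 200 bytes in flight:
# min(600, 800) - 200 = 400 more bytes may be sent.
print(effective_window(600, 800, last_byte_sent=1200, last_byte_acked=1000))
```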
But if an acknowledgement is not received and a timeout occurs, I immediately reduce the congestion window by a factor of two. For instance, say acknowledgements for four packets have come in and, because I am doing additive increase, I have sent five packets into the network; if those five packets do not all get through and a timeout occurs, my congestion window becomes 5/2, that is 2.5, effectively two. In practice this is not done in units of whole packets but in terms of the number of bytes sent: the increment is calculated as MSS × (MSS / CongestionWindow), where MSS is the maximum segment size, and the congestion window is incremented by that amount for each acknowledgement. This is basically how the AIMD algorithm works.

You can see what the resulting behaviour looks like: a sawtooth. There is an additive increase phase where the congestion window (plotted on the y-axis) keeps growing linearly, then as soon as there is a packet loss the window drops suddenly; then, as packets again get through the network, it starts climbing, and when congestion occurs again it drops again. So the amount of data transmitted by TCP shows this sawtooth behaviour, because it takes the congestion into account.

Initially we said we would start with a congestion window of one. But when you actually start a connection you will have seen some advertised window from the receiver side, and ideally you would like to push that many bytes into the network; if the capacity were there you could. The trouble is we have no idea how much congestion there actually is, so initially we get a little aggressive in probing for it. When a connection starts, we begin with a congestion window of one and send one packet; when the acknowledgement comes back we send two packets, that is, we double the amount of data sent, so the congestion window becomes 2. When I get the acknowledgements for those two packets I double it again to 4, and when those four are acknowledged I double it again to 8. This slow start phase is an attempt to determine the initial capacity of the network: you ramp up aggressively, but it is called slow start because it is still much slower than directly sending an advertised window's worth of packets at once.
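Here is the byte-based AIMD update just described as a small sketch: in the linear-increase phase the window grows by MSS × MSS / cwnd on each ACK, which works out to roughly one MSS per round trip, and on a timeout it is halved. The names and the MSS value are assumptions.

```python
MSS = 1460  # assumed maximum segment size in bytes

def aimd_on_ack(cwnd):
    """Additive increase: add MSS*MSS/cwnd per ACK, i.e. about one MSS per RTT."""
    return cwnd + MSS * MSS // cwnd

def aimd_on_timeout(cwnd):
    """Multiplicative decrease: halve the congestion window (never below one MSS)."""
    return max(MSS, cwnd // 2)

# One round trip's worth of ACKs grows the window by roughly one MSS:
cwnd = 4 * MSS
for _ in range(4):              # four segments in flight, four ACKs arrive
    cwnd = aimd_on_ack(cwnd)
print(cwnd)                     # a little under 5 * MSS
```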
Once you hit congestion, you fall back into the additive increase, multiplicative decrease behaviour, and from there on you use linear increase and multiplicative decrease depending on whether congestion occurs or not. So it is this combination of slow start and AIMD that is used, and with slow start you can see what the growth in the amount of data sent into the network looks like.

Slow start is actually used in two different situations. One, as I said, is when you first start a connection. It is also used when the connection goes dead waiting for a timeout: sometimes you have transmitted a lot of packets into the network, there is then no activity, you time out and end up retransmitting. While waiting for that timeout you have no idea what has actually happened in the network, so in those cases too you start again with the slow start mechanism: begin with a window of one, ramp up quickly to a particular value, and go forward from there. In these cases we also make use of a congestion threshold, which is normally set to half of the previous congestion window size: up to this congestion threshold we use slow start, and from there on we use the linear increase, multiplicative decrease mechanism. So the combination of slow start and AIMD is what TCP uses for congestion control.

In addition to these two basic techniques, TCP has brought in a few other techniques, called fast retransmit and fast recovery, which are also useful for handling congestion. We already looked at the fast retransmit technique when we talked about retransmission schemes: if I get a certain number of duplicate acknowledgements, three duplicate ACKs, I treat that as the trigger and retransmit the packet right away. In this example, say packets 1 and 2 have gone through and packet 3 is lost; I have sent 4, 5 and 6 after that, for which I get back three acknowledgements, all duplicate ACKs for packet 2, indicating that packet 3 has not been received although the later ones have. So I retransmit packet 3, get back an acknowledgement for packet 6, because the others had already been received, and continue transmitting from packet 7 onwards. This fast retransmit is used along with the congestion control mechanism.

There is also a fast recovery mechanism, which essentially tries to avoid the slow start phase we would normally go through after a loss. To do a fast recovery, you skip the slow start phase and go directly to half of the last successful congestion window: you keep track of that threshold, start from half of the last congestion window, and from there onwards do additive increase, multiplicative decrease.
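To tie the pieces together, here is a hedged sketch of the sender-side congestion state described above: slow start up to a threshold, linear increase beyond it, a timeout dropping back to slow start with the threshold set to half the window, and three duplicate ACKs triggering fast recovery, which jumps straight to half of the last window instead. It is a simplification of the lecture's description, not a faithful model of any particular TCP version; all names and the initial threshold are assumptions.

```python
MSS = 1460  # assumed segment size in bytes

class CongestionControl:
    """Slow start + AIMD with fast recovery on triple duplicate ACKs (a sketch)."""

    def __init__(self, initial_ssthresh=64 * 1024):
        self.cwnd = MSS                    # start with one segment (slow start)
        self.ssthresh = initial_ssthresh   # congestion threshold

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS                          # slow start: doubles roughly once per RTT
        else:
            self.cwnd += MSS * MSS // self.cwnd       # additive increase: ~1 MSS per RTT

    def on_timeout(self):
        # Multiplicative decrease plus a fresh slow start from one segment.
        self.ssthresh = max(2 * MSS, self.cwnd // 2)
        self.cwnd = MSS

    def on_triple_dup_ack(self):
        # Fast recovery: skip slow start and resume at half the last window.
        self.ssthresh = max(2 * MSS, self.cwnd // 2)
        self.cwnd = self.ssthresh
```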
With this mechanism you can recover faster and reduce the amount of delay that congestion control would otherwise introduce. Remember that because we are doing congestion control there is a certain reduction in performance, so techniques like fast retransmit and fast recovery try to improve performance and offset the problems we would otherwise have. Combinations of these mechanisms are what are used in the current versions of TCP; we will take a look at some of those versions later, but this is typically what TCP has in terms of its basic mechanisms for adaptive retransmission and congestion control.

To summarize, we have looked at two important techniques of TCP today. One is the retransmission strategy: we saw how the round-trip time is estimated and how retransmission is done based on that. We have also looked at the congestion control mechanisms, slow start and additive increase, multiplicative decrease, along with fast retransmit and fast recovery as additional mechanisms that are used. Thank you.
Info
Channel: Vidya-mitra
Views: 9,886
Rating: 4.8282208 out of 5
Id: Og4Br2Jog5Y
Length: 33min 28sec (2008 seconds)
Published: Fri May 20 2016