SF17US - 22 Understanding Throughput & TCP Windows (Kary Rogers)

Captions
[Applause] Thank you for coming to the afternoon, right-after-lunch session, where we talk about TCP. I'm Kary Rogers, director of engineering at Riverbed Technology. I came up through an acquisition about nine years ago, and I manage a team doing backline support, interfacing with engineering, dealing with the really hard problems. Even though I've moved into management, I like to keep sharp the skills I worked hard to learn. So, is that enough housekeeping? We've got about an hour of content today, so ask questions — we should have a little extra time at the end. We'll look at silly videos, or if we go a bit faster, you can run off and get coffee early. At the top I want to show you a quick video that demonstrates some of the key concepts we'll be talking about. [Music] Okay. You have to agree, right? She's kind of adorable. That was fantastic — and we're still doing the TCP talk.

All right, here's what we're going to talk about today. First, a quick, very high-level overview of TCP ownership — who's responsible for TCP. Then we'll go over a couple of terms just to level set on what we're actually talking about. Then we'll look at the receive-side factors, the sender-side factors, and other factors, and at the end there's a checklist you can use when you're looking at a pcap and trying to understand what the throughput limitation is: you can go down the checklist and try to determine what the issue is.

So first, why do we care about TCP — who owns it? Generally, performance issues tend to land in the network engineer's lap, or whoever is responsible for the network; that's where it ends up. Clearly, we network engineers own the gear, the cables, the hardware — the physical stuff — and in some instances making sure things are configured properly, that routing is working, that packets are getting from point A to point B efficiently. This is possibly a terrible analogy because I don't know much about plumbing, but: if you're responsible for the pipes, why should someone call you about the quality of the water? Maybe go call the county and talk to them about the quality of your water — I'm just here to make sure there are no leaks in the pipes. TCP kind of ends up being the redheaded stepchild of the protocol stack, because you've got network folks who are worried about layer 3 and down, you've got server guys worried about CPU and memory and the physical properties of the server, and application guys making sure the application is working — but TCP lives on the end hosts. So why is it the network guy's responsibility? Well, you should just go ahead and be resigned to the fact that this is going to land in your lap anyway. What I want to help you do is go through the data quickly, get it off your plate, and get it back to whoever can really solve the problem.

So first up: bandwidth. What is it? The dictionary I looked up online says it's a measurement of the ability of an electronic communications device or system, such as a computer network, to send and receive information. So we're talking about capacity — the potential. For our
discussion today, when we're talking about going from host A to host B, we're going to visualize it as a pipe. I know there are lots of interconnections and different bandwidths along the way, and what matters is the minimum link size, but just for today, think about it as a single pipe from host A to host B. You say, "I have a gig connection," or "I have a hundred megabits" — that's the bandwidth, the raw capacity of the link between the two hosts.

Today we're talking about throughput, which is the result you actually get in that system: the amount of material, data, etc. that passes through a machine or system. Within that bandwidth, the throughput is what you actually get. You have 100 megabits of bandwidth but you're only getting 20 megabits of throughput — why is that? Shouldn't it be higher? So that's the difference: bandwidth is the total raw capacity we're measuring against, and what you actually get, minus overhead, is throughput. We've got a gig connection here — why are we only getting a hundred megabits?

What we're going to focus on, again, is TCP: a single connection from host A to host B — download, upload, FTP, HTTP. We're not really going to talk about things above TCP at the application layer, where applications have different turns back and forth. It's just: I'm trying to send the most data I can in the shortest amount of time.

Another important term: BDP. What does that stand for? Bandwidth-delay product. When we talk about the bandwidth-delay product, we're talking about the amount of data TCP needs to have in flight to fill that pipe — to achieve the maximum throughput it can, given the characteristics of that link. I like to think about golf balls in a PVC pipe. If you have a short piece of PVC pipe, you can fit one golf ball in it; the BDP of that piece of pipe is one golf ball. If you extend the length of the pipe, you can fit more golf balls in it — maybe you've got six golf balls as the BDP of that piece of pipe. If you increase the diameter of the pipe, you can fit more golf balls across it: if you can now fit three golf balls across and the length is six, that's a BDP of 18 golf balls. We have some golf fans in the audience, so here's a question we'll be asking periodically, based on the information we've discussed up to that point. We've talked about a couple of terms and one characteristic of a pipe — so based on just that much, how much data can we expect to send in one go? You'll get the hang of it as we go through. What I'm looking for: to hit our maximum throughput with the factors we have so far, we're looking to send at least the BDP. That's bandwidth times latency: 100 megabits times 80 milliseconds, divided by 8, gives you bytes — the number of bytes it takes to fill the pipe.

As we go through the various factors today, we're going to use a reference frame: this pile of cat GIFs, 256 KB of data. Think about a file sitting on a hard drive: an FTP server is going to read that data and hand it off to TCP, which will then send it across the LAN, or the link, to the receiver — the person who gets to have those cat GIFs. That's a byte stream, and TCP will slice the byte stream into segments and send them across. We'll keep looking at this as we go. So what are the two main factors we're going to talk about today that bring down the amount of data? First: the amount of data
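As a quick sanity check of that arithmetic, here's the BDP calculation in a few lines of Python (using the talk's 100 Mbit / 80 ms example):

```python
# Bandwidth-delay product: the bytes that must be "in flight" to fill the pipe.
bandwidth_bps = 100_000_000   # 100 megabits per second
rtt_s = 0.080                 # 80 milliseconds round-trip latency

bdp_bytes = bandwidth_bps * rtt_s / 8   # bits per RTT, divided by 8 for bytes
print(f"BDP = {bdp_bytes:,.0f} bytes")  # 1,000,000 bytes, about 1 MB
```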
the receiver says it can receive. If I say I can receive 10 KB, that's it — that's your limit. If I say I can receive 100 KB, or zero, you can only send as much as the receiver said it can take. Second: the amount of data the sender thinks it can send. So we'll look at the receiver side first, and then the sender side — the factors that control each.

On the receiver side, people talk about "the TCP window." What is that referencing? There are receive windows and congestion windows and sliding windows — different windows. The receive window is the receive buffer — aka RWND, aka the socket receive buffer. You might see those terms referenced around, and they're all talking about the same thing: the amount of free space the receiver has in its socket buffer. An application asks the operating system for a socket — "I want to talk to, or receive data from, this IP on this port" — and when it does that, the OS allocates a certain amount of space to receive data from the other host. TCP puts incoming data into that buffer for the application to then read from and take the data out. The size can be the operating system default, whatever that may be, or the application can specify a particular size when it allocates the buffer. And this is advertised in every single TCP header that the host sends: if you look at a TCP header, you'll see "window size value," and it's right there, in bytes.

Just as a quick aside: what happens if data is coming in faster than the application can remove it from the receive buffer? I have an example of what that looks like — I hope you all know I Love Lucy. If the application is not taking data out of the buffer fast enough, you end up with chocolates on the floor, or stuffed in your mouth.

All right, so it's right there in the header — but there are a couple of things about calculating window size, and window scaling is one of those. The space allocated to the window size field is 16 bits — a 2^16 integer — so the maximum is 64 KB. If the BDP of the pipe we're talking about — the number of bytes it takes to fill it — is more than 64 KB, well, we're not going to reach our potential. So we have window scaling, which allows the window size to go up to one gig; you're really not going to see it at that size. See RFC 1323 for further reading. Now, this must be supported by both hosts, and it's advertised in the three-way handshake. It's very important, when you're capturing data, that you try to get the handshake: start your capture first, then reproduce the problem, or if it's a rolling capture, try to include a handshake in the data you're looking at, because the handshake carries the scaling factor. Scaling is really just a clever way to create a bigger integer: we're bit-shifting to the left, so a shift of 8 means you take the window size that's advertised in the header and multiply it by 2^8, which is 256.

So now, where are we? We have our cat GIFs, 256 KB, that we want to send across. Do we send it all in one go — just blast out 256 KB? No: we have a receive window of 64 KB, so the most we can send, based on what we know so far, is 64 KB, and then we have to stop. To recap: the window size field is 16 bits, so the most it can express is 64 KB. Then you have the scaling factor, which is only told in the three-way handshake. It takes those 16 bits — whatever the number is in binary — and the window scale factor is just another number, so
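For the application side of this, here is a minimal sketch of how a program requests a particular receive buffer when it allocates a socket (standard Python socket API; the 256 KB figure is just the talk's running example, not a recommendation):

```python
import socket

# The application asks the OS for a socket and requests a receive buffer;
# incoming data sits in this buffer until the application reads it out.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

# The OS may adjust the request (Linux typically reports roughly double,
# to account for bookkeeping overhead), so ask what we actually got.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer: {actual} bytes")
```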
if it's 8, we're shifting 8 bits to the left, and that tells you how much to multiply the original value by. So if we go back to the header, we can see the amount actually advertised is 3045 bytes, but the host knows to multiply that value by 256 to get the real effective window size, which is 700,000-something.

Okay, so now: how much data can we send, based on what we know? A window full — which is the receive window, which in our example is 64 KB. So the minimum of the receive window and the BDP is where we're at now.

Okay, should we look at a packet capture? Let me resize some of this to make it a little more visible. I recommend, if you don't have them, adding columns for the sequence number and acknowledgment number — that's better. Okay, so here's the pcap. Generally, when I'm looking at a pcap, I like to make sure I have the three-way handshake, and I like to look down and see what's in the TCP options of the handshake: the MSS, the window scale — which is zero in this one, so we're multiplying by one — and whether SACK is permitted. I also like to see the round-trip time, about 98 milliseconds here — always good to know when you're looking at throughput.

So how am I analyzing throughput on this pcap? The tool that I use most often, always and forever, is TCP Stream Graphs — tcptrace. What we're looking at here, if you've not looked at it before: the x-axis is time in seconds, and the y-axis is sequence number. I have relative sequence numbers on, which means it just starts at zero; the real sequence number starts off as a random 32-bit number. Sequence numbers are how we keep track of how much data is being sent, where we are in the byte stream, and how we keep it in order — sequence numbers 1, 2, 3 correspond directly to bytes 1, 2, 3. So really we're just talking bytes on the y-axis and time on the x-axis: bytes over time. That's throughput, right? Bytes over time — bits per second. What we want to see is the line going up and to the right: the steeper the line, the more throughput. We can zoom in and I'll show you. [Music]

These little lines are the actual packets — they represent the packets. The line on the bottom is the acknowledgments: as it goes up, the ACKs are acknowledging that data, moving higher up the y-axis. We just sent all this data, and this green line at the top is the receive window — the number the receiver is advertising. The space between the packets and that green line is how much space is available in the receive buffer. As we look at the pcap, we can see that we're sending data and it's reaching the green line: we're filling the window, up to the receive buffer, and you can see it really plainly in this pretty picture. Then acknowledgments start to come in — those stair steps down there by the little hand icon are the acknowledgments coming in, acknowledging this data to the sender — and as the acknowledgments come in, we're able to send more data.

Another cool thing about the graph is that wherever you click, it takes you to that packet in the trace: I click on this spot where it goes up to the window, and it jumps right to that packet. So what's the story here? As acknowledgments come in, we send more data. This pcap shows a transfer bound by the receive window of the client that's receiving the data. Okay, cool — questions so far? So that's pretty much it for the most
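The shift arithmetic is easy to check yourself. A quick sketch, using the 3045-byte value and scale factor from the slide:

```python
def effective_window(advertised: int, scale: int) -> int:
    """Effective receive window: the 16-bit header value shifted
    left by the window scale factor from the three-way handshake."""
    return advertised << scale   # same as advertised * 2**scale

print(effective_window(3045, 8))   # 779520 bytes -- the "700,000 something"
```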
part. Yes, question? [audience question, hard to hear] Good question. The question is: when does window scaling kick in — does it matter, especially when we're talking about low-latency connections? That's a good point. Again, we're talking about BDP: the length of the pipe is the latency, how long it takes to send the data and get the acknowledgment. If we have low latency, then a larger window doesn't matter as much. It's really on the "long fat network" — where latency really comes into play — that scaling matters.

Now, the sender side. We'll talk about three things, and they're all interrelated. Bytes in flight, which we've touched on a little bit — we're going to go back and cover it in a little more detail. The congestion window — right, everyone kind of understands the receive window, the window size is right there in the header, but what is the congestion window, and where do I find that information? And then the send buffer — the buffer the application writes its data into.

Bytes in flight is sort of the other side of the coin from the receive window. If we're trying to fill the receive window, then, based on the latency, how much data can we send that is in flight — we've sent it, but it's not yet acknowledged? It's on the wire to the other host, or the acknowledgments are still traveling back; we haven't yet received the acknowledgment. How much outstanding data is there on the wire?

So if we go back here and zoom in a bit: we've got the receive window of 64 KB. The sender says, okay, great, 64 KB, and starts sending data. It sends 48 KB — the larger dotted line is unacknowledged data — so we still have 16 KB of usable window. We can continue to send, bytes in flight growing, until all the data up to 64 KB is outstanding, and then we have to stop.

Let's have a look at that, going back and looking a little closer at the pcap we were just examining. At this point in the capture, our bytes in flight is 1460. (I removed the TCP length column just to squeeze this in, but the amount of data being sent with this packet is 1460.) That means up to this point we're all caught up — there's no outstanding data. We sent one packet, and the bytes in flight is now 1460; we know we're starting from zero, and this is a brand-new packet train going out. Bytes in flight is available right here in Wireshark — it's under SEQ/ACK analysis. You can right-click on it and apply it as a column, which I recommend when you're dealing with throughput; it's a good one to have.

So we know we're going to send 64 KB, and we start sending: packet, packet, packet, and packet — and here we have just about filled the window; there's not enough room to send another full MSS. Then we get an acknowledgment. Now, TCP acknowledges every other packet, so each of these stair steps represents two packets. So now we've cleared two packets' worth of data, and we can send two more in flight — then we've got to stop; we've filled the window. And that's what the picture shows. [Music] We send 64 KB worth of data in one shot and we stop; then we have to wait a round-trip time for acknowledgments to start coming back to us. These stair-step lines going up are the acknowledgments. As they come in — an ACK comes in, we send two more and stop — we keep doing that until we finish the amount of data we're sending in this particular train, and once we're done sending the train, we wait for the rest of the acknowledgments to come in. So now
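Two quick calculations fall out of this, sketched below with the talk's numbers (64 KB window, 48 KB outstanding, ~98 ms RTT): the usable window left before the sender must stop, and the throughput ceiling a window-bound transfer can reach — one window per round trip:

```python
rwnd = 64 * 1024            # receiver's advertised window, in bytes
in_flight = 48 * 1024       # sent but not yet acknowledged
usable = rwnd - in_flight
print(f"usable window: {usable} bytes")   # 16384 -- then we must stop

# A window-bound sender moves at most one window of data per round trip.
rtt_s = 0.098
ceiling_bps = rwnd / rtt_s * 8
print(f"throughput ceiling: {ceiling_bps/1e6:.1f} Mbit/s")   # ~5.3 Mbit/s
```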
we're back to bytes in flight zero, and we can send another 64 KB worth of data in one shot — and then we have to stop. Yes? [audience question] So, when TCP receives data and acknowledges it, the data can still be in the receive buffer. As we were talking about earlier, TCP receives data, acknowledges it, and puts it to the side for the application to come get. If the application is busy and can't take the data out of the buffer, and we fill the buffer up, you're going to see that receive window get smaller and smaller and smaller and smaller, until it potentially hits zero, and then the sender has to just stop until it opens back up. In this case we're not seeing the receive window go down; we're just filling the available space with in-flight data. So we send 64 KB, we hit the window, and now we have to wait a round-trip time — right around 100 milliseconds, 98 milliseconds — for the ACKs to start rolling in. Here they come, rolling in every few packets, and we send two at a time until the entire packet train we have to send is complete, and then we can send another one. That is bytes in flight.

Okay, now the congestion window — cwnd, CWND, you'll see that a lot. The receive window is the receiver's flow control: "I can take this much," "I'm full, you need to stop" — it's telling the sender how much data it can receive. The congestion window is the sender-side flow control: based on the capacity and the conditions of the network, how much data do I think I can send in one go? Now, this is not in the TCP header; it's not visible, exactly, in the pcap. It's a state variable that's tied to each individual connection: I've got a connection to you, and a connection to you, and there's a congestion window for both of those. You can look at a pcap and kind of make a guess, but it's not visible in the data.

It's usually referred to in multiples of MSS, the maximum segment size. When we begin the connection and do the three-way handshake, I say, "Hey, I can only take 1460 bytes at one time, in one packet," and you tell me you can only take maybe 1380, whatever it is — that's the MSS. So we refer to the congestion window in multiples of MSS: two, four, eight, ten, whatever it is. At the beginning, I think the RFC way back in the day suggested two to four; these days the initial congestion window can be much higher — there are academic papers out, and I think some RFCs, advocating for ten, which is good for short-lived connections. The other implications of that are beyond my pay grade.

Probably everyone here has heard of TCP slow start. We're trying to probe the network to figure out the optimal amount of data to be sending. So we start small — start with two, or whatever it is, three, four — we send a few, and then we have to wait a round-trip time for the acknowledgments to come back. We got all the ACKs? It's all good. Okay, let me send some more. It increases — roughly doubles: for every segment that's acknowledged, I'm going to increase the congestion window by one. So if I send two and I get two acknowledgments, I go to four; if I get those four acknowledged, I'm going to go to eight. We continue that way — think of an exponential curve — until it hits a threshold, called the slow start threshold. We're ramping up and ramping up, and once we hit that threshold, we change the way we grow the congestion window to linear. Let me get a
picture for that. I grabbed something off Google Images — I apologize to whoever I stole it from. The congestion window will continue to increase; slow start transitions into linear growth, which continues to increase until it hits the receive window — which, again, is the ceiling you can't go past — or there's packet loss, in which case it will reduce. So here's an example graph, the dots being the current congestion window each sending round trip: we start with one, then two, then four; once we get to eight — the slow start threshold being at eight — we switch to linear growth: 9, 10, 11, 12. At 12, maybe there's a packet loss, and we drop back down. The behavior — how it behaves in the face of packet loss — completely depends on the algorithm being used, so don't take this graph as gospel truth.

So again: if the receiver says, "I can take 64 KB, give it to me," we may not be able to send 64 KB if we have not grown the congestion window to that point yet. We may only be at, you know, five packets, so that's the most we can send. That's the flow control imposed by the sender itself.

Now, the growth of this, as I mentioned, and the recovery from packet loss, completely depend on the host's congestion control algorithm. I was recently trying to figure out what Windows uses, and I came across — I could be wrong, but it was a Microsoft website — something that said Windows clients, like, say, Windows 10 (that's a client one, right? I'm not a Windows guy), use NewReno. There's Reno, NewReno, Vegas — a bunch of different ones named after cities in the Southwest where you can lose a bunch of money. Windows Server 2008 R2 and 2012, I think, default to DCTCP or CTCP. Again, the details of these are outside the scope of this conversation, but you can go google "NewReno versus CTCP" and find papers where people like us said, "you know what, I'm going to do some testing and graph the results." Why would you care about this? If you're dealing with a particular environment where one algorithm is more effective than another — say you have something like a satellite transmission, where a packet dropping off doesn't necessarily mean congestion; it could just be the hostile transmission environment — you don't really want to back off. If you lose a packet, you want to keep going, so you might use a different algorithm that doesn't reduce its congestion window as much in that environment. Again, the details of which we won't get into. On Linux, the default in more recent kernel versions is CUBIC.

Now, how do you see the congestion window? It's not in the packets, not in the header. You can look at the pcap and make some guesses, and I'm sure there are multiple tools out there; I'll just throw out two. There's a TCP Analyzer on Windows — it's part of some toolkit you can download; you can google it — and there's the socket statistics tool, ss, on Linux. Look up the man page on ss; you'll see that one of the flags will give you the congestion window of your connections.

So now, back to the cat GIFs. We have 256 KB we're trying to send, and we're aligned to the left here. The dark solid line is data that has been sent and acknowledged. When we talk about that sliding window in TCP: you have some constraint — in this case the biggest constraint is the receive window — and as we move along the byte stream, sending data, acknowledging data, sending data, acknowledging data, we have the acknowledged data off to the left, and the window slides to the right until we've transferred the entire byte stream.
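As a toy model of that graph — a minimal sketch only; as Kary says, real stacks differ by algorithm, and this ignores timeouts, fast recovery details, and pacing — here is the shape of slow start, the switch to linear growth at the threshold, and a Reno-style halving on loss:

```python
def next_cwnd(cwnd: int, ssthresh: int, loss: bool) -> tuple[int, int]:
    """One round trip of a simplified Reno-style congestion window,
    counted in MSS units. Returns (new_cwnd, new_ssthresh)."""
    if loss:
        # On loss, halve the window and remember it as the new threshold.
        ssthresh = max(cwnd // 2, 2)
        return ssthresh, ssthresh
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh     # slow start: exponential growth
    return cwnd + 1, ssthresh         # congestion avoidance: linear growth

cwnd, ssthresh = 1, 8
for rtt, loss in enumerate([False] * 7 + [True] + [False] * 3):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    print(f"rtt {rtt}: cwnd={cwnd} MSS")
# Prints 2, 4, 8, then 9..12, then the drop on loss -- the slide's shape.
```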
So here we have 64 KB as the most we can send, but at this point our congestion window has only opened up to 33 MSS, which is 48 KB. The last 16 KB we can't send — that is flow control imposed by the sender itself. Okay, now it's your time to shine: based on what we've covered so far, what's the most data we can send in one go? Just the congestion window? Well, yes, that's partly right: the minimum of the current state of the congestion window, the receive window, and the BDP. Let's have a look at the congestion window in action. [Music]

[audience question, paraphrasing] So the question is: the receive window is the theoretical max of what the receiver can accept, and TCP slow start is going to probe to find out what the actual limit is, which may be lower — and which one wins? Yes: the receiver says, "you can give me 64 KB max," but you don't know the network yet, right? So each connection starts off establishing the congestion window — finding out whether there is congestion — before we ever reach 64 KB. [follow-up question] So again, it depends on the threshold. You're saying the receiver says, "I can take 64 KB worth of data in one shot." Slow start is going to start off small and keep sending, keep doubling the amount it sends, until either it hits congestion and has to back off, or it hits the slow start threshold and changes to linear growth. If that threshold — the point where it changes from slow start to linear growth — is higher than the receive window, then yes, you will fill the receive window during slow start. [Music]

Okay, let me bring the graph back up. The congestion window is something that increases over time: it starts small and increases, increases, increases. Slow start is the method by which we increase the congestion window at the beginning of a connection, and we come out of slow start once we hit a configured slow start threshold. That line going across is the threshold, at eight — that's where we stop doing slow start. [audience question] Yes — where the threshold comes from is part of the operating system. I think it can come from previous connections that have probed the network, and I believe there's also an initial value that's an operating-system constant; I don't know for sure. [audience comment] Right — along the way the congestion window is one potential limit, and you don't really know exactly where it sits; it's the congestion window or the receive window, whichever comes first. Yep — or you run out of actual data to send. That's right.

All right, so we did this — let's look at the congestion window in action. I think we can just use this same capture. So if we go back and look at our graph and zoom in: at the very beginning, here's our receive window, up here by the hand, and this is the amount of data we sent in one round trip. Why didn't we send more? At the beginning of the connection, the congestion window is small — one, two. We sent that data, and when we zoom in you can see that it got an acknowledgment, the line went up, and we send more. We still just send a few, and we wait a round-trip time; once that ACK comes in, we send a few more — we're up to four. Then those four wait a round-trip time and get acknowledged; we increase the congestion window, and we're in at 8, 16, and then we start sending more — I think we're up to 32 or so. That's how it continues — that's how the congestion window grows
in the beginning of the connection. You can see that we're still far away from our receive window, way up here, but we're stopping sending data for a reason — and that's because we're not yet filling the network, we're probing the network. We're increasing the congestion window every round trip during slow start. Okay.

Now we've come to our final major factor. In my experience, when someone says, "why am I not getting throughput, here's a pcap," the things we're talking about today — these major factors — are going to cover at least eighty percent of cases. If you go down the checklist and look at these factors, you're probably going to find what it is. There are a few other ones, which we'll briefly touch on at the end.

So: the send buffer. This is the size of the socket buffer the application is writing data to. On the receiver side, we allocated a receive buffer to receive data into; on the sending side, we've allocated a buffer to write data to. The application reads however much of the cat GIFs and writes it into a buffer that TCP will then use to send across the link. If the application doesn't specify a size, it'll be an OS default, or the application can request a size when it allocates the buffer. And the size it should be depends on the BDP, just like the receive window: we have a BDP of a certain size, and if your receive window is less than the BDP, you're not reaching your potential — and on the sending side, if your send buffer is not at least the BDP, you're not reaching your potential.

When I was researching this a couple of years ago for an issue, I stumbled on an MSDN page which I thought was pretty good — let me bring it up; you'll get it later in the slides. It's a developer-type page about how to set socket parameters — how to set the send buffer — and down in the Remarks section it talks about the ISB, the "ideal send backlog," which is really just another way to talk about BDP: the ideal amount of data we should have outstanding to fully utilize the link. And down here it says: if the ISB — think BDP — value for the connection is larger than the send buffer limit, the throughput achieved will not be optimal. So if your BDP is 100 KB and your send buffer is 32 KB, you're not going to get optimal throughput. (I keep forgetting to show this page — the first time I gave this talk I forgot entirely, and I did it like three times in my last presentation. So: this is a page on MSDN that talks about send buffer size, and down in the Remarks section there's a lot of good information about BDP and how big your send buffer should be.)

So now we have another factor. We want to send our 256 KB; we have a receive window; we also have a congestion window; and now, if the person who wrote the application doing the sending — an FTP server, let's say, or a web server — set the send buffer size to only 32 KB, it doesn't matter that we have 64 KB of receive window, and it doesn't matter that our congestion window has grown to 48 KB: we're only going to be able to put 32 KB on the wire at once, because that's the size of the buffer that's holding the data we're sending. So this is another factor. Now you're getting the hang of it: how much data can we send? The minimum — whichever is smallest — of the send buffer, the congestion window, the receive window, and the BDP.
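Here's a minimal sketch of the sender-side counterpart to the earlier receive-buffer example (standard socket API; the 32 KB value is just the undersized figure from the talk — the point is that an application that requests too little here caps its own throughput regardless of the windows):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The application requests a send buffer; TCP can never have more data
# outstanding than fits in it. 32 KB < BDP means sub-optimal throughput.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 32 * 1024)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
```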
[audience question about defaults] Honestly, these days — like, if you run iperf — I think its default is 64 KB, at least iperf2; I'm not sure. You can use -w and set it to whatever you want. And I think Linux has settings in /proc — the rmem and wmem values that set the buffer sizes and what the defaults will be — but I'm not really sure across the different operating systems. Mostly, in my experience, if the application doesn't set it, the operating system usually handles it fine; where I see it being a bottleneck is when the application developer said "I want this much" and it wasn't enough.

Okay, let's look at a send buffer problem. So this is our pcap. We start off in slow start, then we start ramping up our congestion window, send more data, and we get to this point and stop. Okay — we're still growing the congestion window, we clearly haven't reached the receive window, and yet we wait for all the data to be acknowledged, and then we send another train of packets. And it doesn't really look much different: we haven't increased the amount of data we're sending — in fact, it's always about the same size. We send a train of packets, we wait for them all to be acknowledged, and then we send another train of packets, and we're getting nowhere near the receive window. So if we zoom in on any of these — let me click the last one — we've sent some data, here comes a bunch of ACKs — ack, ack, ack, ack — and we've still got, I think, about a meg of receive window. And then here we go, sending another packet train. Again, if bytes in flight starts at 1460, or whatever the MSS is, you know that all the previous data is acknowledged — we're starting from zero again in terms of bytes in flight.

So here goes a train of packets going out. Now, I have a blue marker — a blue line that denotes when the push bit is set, just because it helps me identify where the PSH flags land. Not everyone does this, but people generally think about the push bit on the receiver side: "when I see the push bit, I need to flush this data to the application." But it also has meaning on the sending side. Especially in bulk data transfer situations — a stream of data — it can be an indicator of a buffer size, or a write size: how much data the application is handing to TCP. So what's special about this one? It lands at 64 KB, exactly. If you see these kinds of numbers, especially with something like a push bit, that should be an indicator to you. We keep scrolling down, and we get to the next push bit — what's that number? 128 KB. And then we stop, and here come the ACKs. We get one packet out, but here we go again: we get to around — the exact numbers change a little — but we get to 128 KB again, and then we stop. This pattern repeats over and over again, and I don't think anywhere in this capture you'll see any packet loss. So why is the congestion window not growing? We look at the pattern repeating itself over and over a couple of times — we send 64 KB, here's a small gap — but over and over we're not increasing the amount in flight, we don't have packet loss, we don't have retransmissions. That, coupled with the fact that these push bits are landing exactly on powers of two — 64 KB, 128 KB — those should be flags to your troubleshooter mind. It's not a coincidence: the send buffer allocated by this application is probably 128 K; it could be 64 K. Sometimes what you request from the operating system, you don't get
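That power-of-two heuristic is easy to automate once you've pulled the byte counts between PSH flags out of a trace; a tiny illustrative sketch (the byte counts are from this capture):

```python
def is_power_of_two(n: int) -> bool:
    """True for 64 KB, 128 KB, ... -- typical buffer allocation sizes."""
    return n > 0 and n & (n - 1) == 0

# Exact powers of two between PSH flags hint at a fixed send buffer or
# application write size, rather than a network limitation.
for boundary in (65536, 131072, 131000):
    print(boundary, is_power_of_two(boundary))
```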
exactly that amount — it pads it a little bit. But the fact that we land exactly on a power of two, and then stop and don't send any more data until it all gets acknowledged, is the signature of a send buffer limitation. So what you want to do is go back to whoever owns the application or the server and tell them they need to increase the send buffer size. Where I first encountered this was a Tomcat download issue: the default send buffer size, I think, was 64 KB, but there was also some weird Windows issue — I think there's a Microsoft KB article — where if the send buffer size is a certain amount and you have certain conditions in the network, it will never grow beyond 64 KB. And the way you change it is just to increase it: in the configuration of Tomcat you can configure the send buffer and receive buffer sizes. Increasing the send buffer solved the problem. Okay — right at an hour; we're in good shape.

So what we've covered, I think, will handle the majority of throughput situations you're going to encounter. Certain ones aside — we're focusing again on the behavior of TCP, so things like packet loss may be part of the equation — but when we're not getting the expected throughput and it's not the network, these are the TCP factors that should handle most of the issues. So here's your checklist. First, you need to know the BDP: that's bandwidth times latency. Then: what is the advertised window from the receiver? Are we filling it? Is the window going down, down, down — meaning the application is not removing data from the buffer fast enough? And what is the advertised window in relation to the BDP? Next: are the bytes in flight increasing over time, each round trip — are we sending more and more data, either exponentially or linearly increasing? Or are we receiving ACKs but still not increasing? Okay — does the sender stop, send so much data, wait for the ACKs to come in, and then send the same amount of data, over and over and over, without ever increasing the amount it sends, even though we've not reached the receive window? Probably the send buffer — the example we just looked at.

Now, one thing we haven't talked about, but I'll touch on briefly: is there packet loss? Do you see retransmissions? Do you see dup ACKs? If so, the sender is going to reduce its congestion window: "oh, we have congestion, we have packet loss, I need to back off." That's how TCP is supposed to work. So let's see what that looks like. We'll look at the tcptrace of a download. It's going up and to the right, but it's got a wave to it, an undulation. If we zoom in at the beginning, we can see the slow start behavior ramping up. Remember, the slope — how steep the line is — indicates the throughput. So if we're getting steeper and steeper, and then that angle changes and it's a little flatter, that's a decrease in throughput. We can zoom in and see exactly what's happening. With this data we're sending, we haven't reached the receive window, and we were increasing our throughput. So if we zoom in here — remember, these lines going up mean we're moving through the byte stream; the sequence numbers going up are us sending more and more data, and moving to the right is time. So when you see something like this — a little dot that is to the right of, but below, something that has already been sent — what is that? A retransmission, probably, of this
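If it helps to make the checklist concrete, here's an illustrative helper (the names and structure are mine, not from the talk) that, given the four numbers you've now learned to find, names the factor currently capping a sender:

```python
def limiting_factor(bdp: int, rwnd: int, cwnd: int, sndbuf: int) -> str:
    """Return which factor caps the data a sender can put on the wire.
    All values in bytes; the cap is simply the smallest of the four."""
    factors = {"BDP": bdp, "receive window": rwnd,
               "congestion window": cwnd, "send buffer": sndbuf}
    return min(factors, key=factors.get)

# The talk's running example: 1 MB pipe, 64 KB rwnd, 48 KB cwnd, 32 KB sndbuf.
print(limiting_factor(1_000_000, 65536, 49152, 32768))   # -> send buffer
```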
packet — this one over here. Here's another one: a retransmission. When we have a retransmission, the congestion window will decrease. How it decreases depends on the algorithm, but you get to a certain point, a certain level away from this ramp, and you can see how it goes up and then flattens out. These here are TCP dup ACKs — this is the client side of the trace. The packet didn't show up: the server sent a packet, it didn't arrive, and we received the one after it, so it's out of order, and we start sending duplicate ACKs — "hey, I'm missing one, I'm missing one." After the third duplicate ACK, the server goes, "oh, you didn't get it, let me resend that to you." That's a fast retransmission, and it triggers the congestion window event: "oh, we had packet loss, I'd better lower my congestion window" — and that decreases the throughput. After that event you can see it start ramping up again — see how it's getting steeper and steeper — and then it flattens again. These changing angles are changing throughput. If we zoom in again, this one was even worse, right? These dup ACKs down here, and the retransmissions to the right of, and below, data that has already been sent. [Music]

[audience question] Did we reach a higher throughput before we lost packets? There's a couple of ways to tell. You can eyeball it and say "that looks steeper than the other one" — sometimes you kind of have to eyeball it. Another way: identify a section of the graph, because the steepness of the graph is the actual throughput — if you did the math and took the slope of that line, it would tell you the throughput. Okay. So that is the congestion window decreasing due to packet loss. [Music]

[audience question] If you have no packet loss, can you say the congestion window was not the issue? Well — if the congestion window appears to stop growing at a certain point, and we haven't reached the receive window, and there's no packet loss, I would probably go back to something like the send buffer being the limitation. If it's not the send buffer, what else would cause it to stop increasing? There are a couple of smaller factors. Someone had a case study this week: there's also a function of TCP where, if a connection has an idle period — we were sending data and then the connection goes idle — it will go back to slow start, so your congestion window drops and starts again. And hey, bring your pcaps to the "doctor is in" session later — if you have pcaps you're looking at, bring them on a thumb drive or Dropbox or something, and you'll have Jasper, Christian, and Hansang, I think, taking a crack at them.

So: it is not a given that the congestion window continues to increase. Great. Now, what have we not talked about? First, on the receiving side — again, we're focusing on the TCP layer, sending data as fast as we can. If there's a gap in the graph, we're going to figure out why that is. We know the congestion window might cause a gap, because we have to wait a round-trip time, and we've talked about the main factors. What's another factor on the receiving side that might cause delay? Window full, yes: if we fill the window, or the application can't remove the data from the receive buffer fast enough, the TCP window will go to zero. But what's another thing that might cause delay? The receiver delaying its acknowledgment of data it receives. What's a delayed ACK? Well, as I mentioned earlier, TCP generally ACKs every other packet. If I get one packet and there's no more coming, what do I do? I'm going to wait a bit, and if I don't get another one, then I'll go ahead and send the ACK. That's a delayed ACK. Couple that with something called the Nagle algorithm on the sending side — an algorithm that says, "should I send data if I don't have a full MSS worth of data?" — and you get these two things together, and you can have these little deadlocks.
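For reference, applications that run into this Nagle/delayed-ACK interaction commonly disable Nagle with the standard TCP_NODELAY socket option; a minimal sketch:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm: send small writes immediately instead of
# waiting to coalesce a full MSS (avoids the delayed-ACK deadlock).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```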
I won't go into the details of that, because there's a video — I'll bring it up so you can see the title, and I encourage you to watch it. Okay: look up the Wireshark tutorial on the Nagle and delayed ACK interaction, by Hansang Bae. Look that up later and watch it.

All right. Now, what if all the data that's been sent has been acknowledged — there are no bytes in flight, nothing outstanding — but there's a gap of maybe two seconds or something? What could that be? We're looking beyond TCP now: application processing. "Hey, give us the cat GIFs" — "okay, now I've got to go off and talk to the database, then go talk to the file system, and then I can respond to you." That could take who knows how much time, but there's no TCP issue there. So when you look at your graph and ask, "why is there this gap here," and you've been focusing on TCP, at some point you have to step back and go: okay, what else is going on here? And what's another reason for delay? Maybe, on the other end, the user. I mean, you've got your cat GIFs you want to watch — so you're watching a cat GIF, and then later you click on something else. User interaction.

Okay, so that's pretty much what I've got. I wanted to follow up the first video — I shot this one real quick on Saturday afternoon with my daughter; we had just gotten back from the movies, so you'll kind of know which movie it was. We had some popcorn, and I was trying to illustrate: you send the packet, you don't get a response, you send another packet — popcorn, no response — and then you finally get a server response. I've seen issues like that, where something in the path was eating, say, GET requests, and you had to send them multiple times before one got through and you got a response. But really, I'm making up an excuse just to show a video of my daughter. Aww — she's still adorable. They grow up so fast.

Okay, well, I'm happy to take questions — we're right about at time. You can find me any time; I'll be here for the rest of the conference, and I'm happy to talk with you about anything. And definitely: packetbomb.com is the website — I talk to people and look at their pcaps all the time, on Twitter and the website. So, questions? You've been a fantastic audience. Thank you for your attention.
Info
Channel: SharkFest Wireshark Developer and User Conference
Views: 5,233
Id: tyk2-0MY9p0
Length: 67min 9sec (4029 seconds)
Published: Mon Jun 26 2017