Riverbed SteelHead - WAN Optimization

Video Statistics and Information

Captions
Hello and welcome to the SteelHead WAN Optimizer video series. This series has seven sections; this one covers number one, an introduction to WAN optimization. In this section we'll talk about WAN optimization: we'll look at the need, demonstrate some of the issues, and then present Riverbed's approach and how we can help.

Waiting, waiting, waiting. We've all been here. Latency: it's the killer of performance. But what can you do about it? The reality is there's science involved here, folks. Light and electrons only go so fast. They may go really fast, but they're still affected by distance, by delay, and by things in your network. Light going through fiber has issues with refraction, and electrons going through copper have issues with impedance, things that slow down flow. So you have to figure out how to make round-trip time not such a big problem. It's there, and it's not going away. And guess what: a bigger pipe is not the answer. Even with a bigger pipe you still have the same delay in the network. TCP still has to do all of its acknowledgments across the same delay factor, so you still have the same round-trip time regardless of the size of the pipe. You might get some more sessions through, but you're not going to improve the performance of any of those sessions.

Distance really matters when you're looking at adjacent architectures like gateways and JRSS; everything that adds a hop adds more latency to your network, especially across transatlantic or transpacific links. Yes, just like in real estate, location matters. Looking at just a couple of examples, you can see how the delay factors add up as you move across geographies. Each country has a different quality of equipment, fiber, and copper, and whatever is happening that day on the internet or on your private network affects all of this too, and it's variable.

Now let's take a look at SATCOM latency. SATCOM is associated with lots of delay simply because of how the technology works. You want to squeeze the most bandwidth you can out of it, and tuning it appropriately is how you get the most out of it. One way to do that is the SCPS protocol, which Riverbed supports, along with tuning the buffers to keep the pipe full. When you look at transport options for SATCOM, there are lots of different types available and lots of different latencies involved, so everything requires a custom configuration. A SATCOM link is composed of several segments. You have the ground segment, which imposes its delay; the space segment, whose delay varies depending on where you're going; and the user segment, which could be an airframe, a Humvee, or a mobile terminal of some kind. All together they make a cumulative delay factor of somewhere around 550 milliseconds on average. You also have to include the backhaul across your ground network to where your data is, so on top of all that, consider the additional delay back to the data center, somewhere around 100 milliseconds.
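To make the delay budget concrete, here is a minimal Python sketch that simply sums the segments described above. The per-segment millisecond values are illustrative assumptions chosen to match the roughly 550 ms average quoted in the video, not measured figures.

    # Toy SATCOM delay budget: sum the per-segment delays described above.
    # The individual numbers are illustrative assumptions, not measurements.
    segments_ms = {
        "ground segment": 50,
        "space segment (varies by orbit and location)": 450,
        "user segment (airframe, vehicle, or terminal)": 50,
        "backhaul to the data center": 100,
    }

    total_ms = sum(segments_ms.values())
    print(f"cumulative delay: {total_ms} ms")  # 650 ms: ~550 ms SATCOM + 100 ms backhaul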
Now that we've looked at some sources of latency, let's look at how it actually affects things. We'll run a couple of scenarios, looking at client throughput and link aggregate bandwidth. One of the formulas we're going to use is the bandwidth-delay product, or BDP, which is bandwidth times delay. This helps us understand how latency affects the bandwidth available in a pipe. One major factor for dealing with delay in a pipe is the window size. Window sizes vary and are negotiated between sessions; 64 kilobytes was typical for standard TCP, and RFC 1323 window scaling now allows windows up to one gigabyte, but you need mechanisms in place that actually do this efficiently on your network.

Now let's take a look at a client throughput example. This example shows a 10-megabit link with 500 milliseconds of round-trip time. We take the window size and convert it from bytes to bits, which gives us 524,280 bits. We divide that by the delay factor of 0.5 seconds (500 milliseconds) and end up with roughly one megabit per second of throughput. That is the most this session could send with a negotiated window size of 64 kilobytes, so you can see how the window can be a limiting factor and how delay has a major effect on it. Looking at the same scenario with a lower round-trip time, the computation yields a much better throughput number, because there is now less delay affecting the round trips for acknowledgments, still at the same window size. And if we could negotiate larger window sizes, we could change these numbers as well.

Now let's take a look at link capacity. In this case we'll use the BDP formula, multiplying bytes per second by the round-trip time in seconds. First we convert the bandwidth from bits per second to bytes per second, which is 1,250,000 bytes per second. Then we convert the round-trip time of 500 milliseconds into seconds, which is 0.5. Multiplying the two together gives a bandwidth-delay product of 625,000 bytes. Dividing the window size by that number yields the fraction of the link we can actually utilize, which is about 10% in this case. In the same scenario with a lower delay of 100 milliseconds, we can achieve a higher throughput level because the delay has been lessened; now we get close to 50% of the bandwidth that's possible across this link. Remember, in all of these scenarios we used the 64-kilobyte window size. That's a variable we can leverage, now that we know what our delay factors are, to make the network work more efficiently.
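The client-throughput arithmetic above reduces to a one-line formula: a session can send at most one window of data per round trip, so throughput is bounded by window size divided by RTT. Here is a minimal Python sketch; the function name is ours for illustration, not anything from Riverbed.

    # Window-limited TCP throughput: at most one window of data per round trip.
    def max_throughput_bps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds

    # 64 KB window (65,535 bytes) across a 500 ms round trip:
    print(max_throughput_bps(65_535, 0.5))  # 1,048,560 bps -- about 1 Mbit/s
    # The same window across a 100 ms round trip:
    print(max_throughput_bps(65_535, 0.1))  # 5,242,800 bps -- about 5 Mbit/s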
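And the link-capacity scenario in the same style: the bandwidth-delay product is how many bytes the pipe holds in flight, and dividing the window size by the BDP gives the fraction of the link a single session can fill. Again a sketch with our own helper names.

    # Bandwidth-delay product and the share of the link one session can use.
    def bdp_bytes(link_bps, rtt_seconds):
        return link_bps / 8 * rtt_seconds  # bytes in flight needed to fill the pipe

    def link_utilization(window_bytes, link_bps, rtt_seconds):
        return min(1.0, window_bytes / bdp_bytes(link_bps, rtt_seconds))

    print(bdp_bytes(10_000_000, 0.5))                # 625,000 bytes
    print(link_utilization(65_535, 10_000_000, 0.5)) # ~0.105 -- about 10%
    print(link_utilization(65_535, 10_000_000, 0.1)) # ~0.524 -- close to 50%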
At Riverbed, the SteelHead is our WAN optimization solution. We do three things to make your network work better. First, we're smart about what we send across the network: we deduplicate data, so we don't needlessly send it twice across your network. Second, we fix how TCP works: we stop it from being so chatty, with so many turns and so many acknowledgments, and we handle window sizing from a centralized standpoint to make it more efficient for your network. Third, we make applications behave like they're supposed to; lots of apps were never written to run across a WAN.

SteelHeads are deployed in pairs. One is installed on the client side to service the clients, and a second is installed on the server side to service the servers. They form a peering relationship and create an inner channel, which allows them to communicate about optimized applications. They then form outer channels on the client and server sides, allowing clients and servers to access the optimized services the SteelHeads provide. As shown in the diagram, we typically like to be deployed on the inside of your firewall and on the unencrypted side of any crypto device. Further options for deploying SteelHeads will be discussed later.

Now let's look at three major optimization features of the SteelHead. Scalable data referencing, or SDR for short, is a byte-level deduplication algorithm that runs on both SteelHeads. Both the client-side and the server-side SteelHead keep references to data patterns they've seen before and share them with each other, storing the patterns in their data store. When either SteelHead sees a pattern it has seen before, it just sends the reference, preventing the data bytes from being sent across the network a second time. Looking at SDR in action: we send a green data byte, which is being encountered for the first time, so it is cached and sent to the remote SteelHead, which does the same; we now have reference number one for that pattern. We do the same when the red byte goes across, so it is referenced on both SteelHeads as well. The third time, we send another packet, and this time it's one we've seen before, so we use the reference instead: we just send the reference number over to the remote SteelHead and say, you've seen this before, satisfy the request locally. The bytes have been kept off the network, which frees up significant bandwidth for other operations.
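The cache-and-reference exchange described above can be sketched in a few lines. This is a toy illustration of reference-based deduplication under our own simplifications (whole-message hashing, in-memory stores); it is not Riverbed's actual SDR algorithm or wire format.

    import hashlib

    # Toy reference-based deduplication: both peers keep a matching pattern store.
    sender_store = {}
    receiver_store = {}

    def send(data):
        """Return (reference, payload); payload is None when the peer has seen it."""
        ref = hashlib.sha256(data).hexdigest()
        if ref in sender_store:
            return ref, None            # seen before: ship only the short reference
        sender_store[ref] = data
        return ref, data                # first sighting: ship the bytes themselves

    def receive(ref, payload):
        if payload is not None:
            receiver_store[ref] = payload   # cache the new pattern for next time
        return receiver_store[ref]          # satisfy the request locally

    # The repeated "green" pattern crosses the network only once.
    for chunk in [b"green", b"red", b"green"]:
        data = receive(*send(chunk))
        print(data, "reconstructed on the far side")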
Another major optimization feature of the SteelHead is how we optimize TCP and its operations. As we saw previously, window sizing is critical to getting the most throughput out of your communications link. Window scaling deployed on the SteelHead allows us to dynamically adapt to network conditions and provide the optimal window size for maximum throughput; payload repackaging and connection pooling further add to the gains. Let's see it in action. A client makes a request to the data center for a file. With a 65-kilobyte window size, it sends its request to the SteelHead. The SteelHead looks at it and says, current conditions allow about one megabit, but let me see if I can do something else; I'll check with my peer. The peer looks and says, I can give you twice that window size, which gives you almost twice your bandwidth. Plus, with everything else SteelHeads do with TCP repackaging and connection pooling, that number will probably increase even further.

A third major optimization feature is our application optimization. We currently support well over 1,300 applications. One of the keys to optimizing an application is to avoid application turns, specifically to stop the turns from going across your network. The SteelHead knows how applications work and can complete transactions locally when possible, avoiding costly round trips across your WAN. Here's an example of how it works. A client needs to complete a transaction and sends the request to the SteelHead. The SteelHead looks at the request and says, I can optimize this app, let's do this locally. It then informs the peer SteelHead of the transaction, which completes the transaction locally with the server as well, again keeping the data off the network.

So, in summary: WAN optimization is really important. Latency is a real problem for the mission, and you have to find ways to deal with it. Legacy protocols: there's plenty of that stuff out there that was not meant to be used the way we use it, but we have to use it anyway, so we need ways to help it work better. SATCOM is part of the environment we have to deal with, and we need to find ways to deal with the latency it has. The SteelHead is going to help you with all of the above: data optimization will use SDR to keep data off the network, transport optimization will help you with your window sizing issues, and application optimization covers over 1,300 protocols to make them work better on your network. Thanks for your time; I appreciate it, and I hope you learned something about our products today. Please see the additional six videos for more information on Riverbed SteelHeads.
Info
Channel: dave maphis
Views: 4,857
Rating: 4.9298244 out of 5
Keywords: riverbed
Id: 7UtL3VmOae4
Length: 11min 21sec (681 seconds)
Published: Mon Mar 04 2019