NTP vs. PTP: How Do You Get Accuracy?

Captions
...so we can stay on schedule, it's my great pleasure to introduce a friend of mine, Dr. Doug Arnold from Meinberg. He's going to talk about NTP and PTP and how you get accuracy.

I want to thank my good friend Pat Diamond for signing me up for this presentation. He even went so far as to suggest the title for this talk a few months ago. It sounded great, and then recently, as I was preparing the slides, I realized this is actually two talks. So in this time slot, where you thought you'd get one talk, you get two: I'm going to compare and contrast NTP and PTP, and also talk about accuracy issues in packet-network time distribution. I'll tell you a little about the characteristics of these two protocols; each one has a very interesting feature that helps with the biggest source of time error in packet networks, which is the network itself. I'll also cover some best practices for accuracy.

I want to thank the previous speakers, specifically our distinguished scientists from NIST, Judah Levine and Mark Weiss, who covered the basics so I don't have to spend too much time on them. Mark talked a lot about two-way time transfer, where you transfer the time in both directions, from the clock that has the time to the clock that needs it and back again. In that process you can measure the propagation delay, so you don't have to pre-calibrate it out; you measure it as part of the protocol. Both NTP and PTP work essentially the same way: two-way time transfer through the packet network, gather the four timestamps, and calculate the offset.

Just to compare three of the more popular technologies for transferring time over a network, IRIG, NTP, and PTP: IRIG typically gives on the order of a microsecond
of accuracy, but you basically have to have a separate network for the IRIG, with coaxial cables, which are kind of a pain. It doesn't calibrate itself; you have to do that ahead of time if it matters to your accuracy. It requires specialized hardware, and it's a master/slave paradigm, which means the clock that has the time initiates the transfer: here, I've got the time, you need it, take it.

NTP, in contrast, the way it's typically used, is on the order of a millisecond, as Judah talked about, but it can run over a very wide network, all the way across the country or even to another country. It automatically calibrates the propagation delay, and it doesn't necessarily require any specialized hardware whatsoever. In fact, NTP has been around long enough that it's available in every operating system, router, and so on, which is one of the great things about it. And it's a client-server architecture, which differs from master/slave in that the whole thing is initiated by the clock that needs time: it goes to the server and says, give me the time.

PTP, in limited configurations, can be the best of both worlds. It can be as accurate as a dedicated timing network, and it has the self-calibrating property of NTP, but it does require specialized hardware and is generally limited to a smaller network.

To understand the strengths and weaknesses of the two, it's good to know where they came from. NTP was really designed, primarily at first, to set log-file event timing in servers and routers, and if those were good to a second that was usually good enough. Then it became important for authorization protocols like Kerberos, which issue tickets that are only good for a certain amount of time. When NTP first came out in the eighties, 10BASE-T bandwidth was the best case, and I think a lot of networks were actually slower than that, so the
idea was not to use much bandwidth. Another fundamental principle was: I've got a lot of clients but I don't want too many servers, so I want most of the thinking done at the client end, and I want the servers to have a simple task they can do over and over. There's also fault tolerance built in, through server redundancy and the ability to identify servers that are off in time.

PTP was invented by John Eidson at HP Labs, when HP was still a test-and-measurement company. Being a test-and-measurement guy, his natural inclination was that the time should always be traceable. That meant there would be one grandmaster per network, no ensembles of multiple masters, just because that makes traceability more complex; but it also has implications for robustness when you have only one source of time. It was standardized by the IEEE, which is actually coming out with a new edition in about a year or so. It was initially developed for industrial automation, the first industry that was ready to use it, so it was designed with them in mind at first. They work with small engineered networks, and they want their slaves to be as simple as possible; a slave might be just a temperature sensor that shouldn't have to do too much of the protocol itself. Now it's being used in a lot of industries. Telecommunications is a very big one that has put a lot of investment into this, but there are quite a few others as well, including my favorite, particle colliders.

Most of the accuracy of NTP or PTP doesn't actually come from the thing you buy from the person who builds a server or grandmaster. It comes from your network, because you have devices that queue. The fundamental assumption of two-way time transfer is that things take the same amount of time to go from the server to the client as from the client to the server, or from master to slave and slave to
master. What can happen is you get a big spike in the calculated offset, as shown in this top graph, because you have a lot of queuing in one direction but not the other. The way that manifests in your slave clock system: there's going to be a low-pass filter, and it's going to take that big spike and average it out over some time dictated by the time constant of your filter, and that creates an error. Those errors can become quite large.

What NTP has is a nice feature to help with this, which also helps with things like security: the ability to identify a bad source of time if you have multiple sources of time. A client can go to many servers in NTP, take all the calculated offsets from those different servers, put a range related to the delay to each server on them as a sort of error bound, line them all up, and see if one of the sources of time is really off from the rest. So it has a voting scheme, a kind of Byzantine-generals solution, that can identify when a minority of the time sources are bad compared to the majority. That helps a lot, both with security and with general robustness. That's a property of NTP, and it's been around for a long time; it is not a property of PTP as defined by IEEE 1588-2008, although the new edition will have something like it as well.

What PTP has that's helpful: hardware timestamping was assumed from the beginning, because PTP was designed for more precise timing needs, so it wasn't going to rely on software to do timestamping. It's usually done at the MII layer, between the MAC and the PHY, or somewhere close to that within the PHY of the network stack. And it isn't just at the endpoints, the master and slave devices; in the middle, Ethernet switches and IP routers can also have PTP support. What's shown here is a
transparent clock, which timestamps a packet as it goes through the device, both when it enters and when it leaves, so it can account for how long the packet was there and simply remove that queuing delay. There's also something called a boundary clock, which provides PTP support in a different way but likewise removes the effect of the queues. This is an advantage for PTP, especially at the switch level, because with NTP, while it's maybe not the typical application, you can buy hardware-timestamped NTP, but there's generally no support within the switches and routers themselves.

So, how do you get accuracy? This is the second, bonus talk you get today. There are a few things you want to do if you're using NTP. You can buy an NTP time server from a number of vendors that will do hardware timestamping of the NTP packets, so when the packet comes out of the server it's very precise, sub-microsecond precise, and if you put your client as close to that server as you can in terms of hops, that helps a lot. The other thing you can think about is what kind of hops: if you've got low-latency, lightly loaded switches that are very fast and don't carry too many big frames, like streaming video, that helps a lot. And that's good for this industry, because that's typically what you're doing already, just to make your trades faster. Monitoring is good, too: monitor what you can, turn on peer stats so the different servers can look at each other, and you can identify if one of them seems to be off for some reason.

A lot of this also applies to PTP, but with PTP you have the option of switches and routers with PTP support; they can be transparent clocks or boundary clocks. Even if you don't have those, there's technology developed by the telecommunications industry that will get the best
time-transfer performance you can through switches that have queuing and don't have PTP support: the telecom-profile technology, which I'll talk about in a second. Redundant grandmasters can monitor each other; you use the backup to monitor the active master. Hardware slaves are available, such as a PCI Express card with hardware timestamping and a good oscillator, designed by a company of timing experts.

In the case of the telecom profile, if you don't have boundary-clock or transparent-clock properties in your switches, you can still be in good shape if you only have a few hops and you use packet-selector technology. Here we have our big offset due to queuing, but it goes through this packet selector. What that does is take a whole bunch of PTP packets, because the telecom profile sends more of them, maybe 128 sync messages per second, so it has enough to look through, find the ones that had a small delay, use those for timing, and throw the rest away. Some kind of lucky-packet filter can really make a big difference, and that's good for a small network with no on-path support.

Monitoring, because networks change; they change, so I always recommend monitoring. Typically when you install PTP in your network you're going to have more than one grandmaster, because if one goes offline for some reason you want the other one available. The convenient thing for your IT staff is to put them next to each other in the rack so they can manage them that way, but that's not necessarily what's best. What's best is to put them as far away from each other as you can, at opposite ends of the network in terms of topology. That way you can use your backup grandmaster to look at the time it's receiving and say, well, it looks like I'm getting an 800-nanosecond offset through the network, so probably the slaves are getting that as well, and therefore they should be in spec if that's
within the tolerances for your application. The other important thing about monitoring: you put timing in, you make a bunch of measurements, you make sure you did it right and it's all working, and then you kind of forget about it. It's there for a year, two years, and then things start to change. Traffic patterns change, other parts of the network get swapped out, and something that worked initially maybe doesn't work anymore. So you always want to monitor your network. Or your timing requirements get more and more severe, and something that worked great when you first put it in is no longer good enough, and you want to know whether it still is or not.

Okay, let me finish with the takeaways. Vanilla NTP is probably not good enough for financial timing, where you have requirements like 100 microseconds, or maybe much more severe if you're doing high-frequency trading and you want to know how well your network is doing. And certainly just going out to some random pool server, where you don't know who's behind it, as Judah pointed out, is not the most robust thing you can do. NTP has this ability to identify bad servers; you want to make sure that's turned on, that you have peering, and that your clients look at more than one source of time. With PTP you have the option of either on-path support, transparent clocks or boundary clocks, or using the telecom-profile technology to do lucky-packet filtering. And lastly, you always want to monitor your networks once they're running.

Okay, anyone have a question?

[Audience member:] I just wanted to ask a question about NTP. You seem to mix up NTP and ntpd. ntpd is the free software implementation, and you're absolutely right that its quality is low; ntpd over the network is low quality. But NTP itself, as a protocol, isn't all that different from PTP, and there is NTP hardware timestamping, so there's no technical reason not to get the same level of quality. In fact, you
can get the same level of quality if you use the right technology for it.

[Doug:] So NTP, if it's standard NTP that matches the RFCs from the IETF, doesn't have to have poor timestamps. It can have hardware timestamping, for example, and that can help a lot. But it is still bound by relatively low packet rates, which make it harder to do things like lucky-packet filtering; it just wasn't designed for that. And I'm not saying you can't do non-standard NTP, or something that has the same packet formats, but then it's not the NTP defined by the IETF. [Applause]
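The four-timestamp exchange described in the talk can be sketched in a few lines of Python. This is a minimal illustration of the standard two-way time-transfer arithmetic; the t1..t4 naming follows the usual NTP convention, and the example numbers are made up:

```python
def offset_and_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay from one two-way exchange.

    t1: request leaves the client      t2: request arrives at the server
    t3: reply leaves the server        t4: reply arrives at the client

    Assumes the path delay is symmetric -- exactly the assumption that
    asymmetric queuing breaks, which the talk calls the biggest source
    of time error in packet networks.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the client is behind
    delay = (t4 - t1) - (t3 - t2)            # total network round trip
    return offset, delay

# Symmetric 5 ms path, client 2 ms behind the server:
# client sends at 0.000 s; the server clock reads 0.007 s on receipt
# and 0.008 s on reply; the client clock reads 0.011 s on arrival.
off, d = offset_and_delay(0.000, 0.007, 0.008, 0.011)
# off -> 0.002 s (client is 2 ms behind), d -> 0.010 s round trip
```

If queuing delays one direction more than the other, the error in `off` is half the asymmetry, which is why the filtering techniques discussed in the talk matter.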
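The server-selection feature the talk credits to NTP, offsets with delay-derived error bounds lined up so a majority can out-vote falsetickers, can be illustrated with a toy version. This is a simplified sketch of the idea, not the actual clock-selection algorithm from RFC 5905:

```python
def find_falsetickers(samples):
    """samples: list of (offset, bound) per server, where bound is an
    error bar derived from the measured delay to that server.

    A server is flagged when its interval [offset - bound, offset + bound]
    overlaps fewer than half of its peers' intervals."""
    intervals = [(off - b, off + b) for off, b in samples]
    flagged = []
    for i, (lo_i, hi_i) in enumerate(intervals):
        overlaps = sum(
            1 for j, (lo_j, hi_j) in enumerate(intervals)
            if i != j and lo_i <= hi_j and lo_j <= hi_i
        )
        if overlaps < len(intervals) / 2:
            flagged.append(i)
    return flagged

# Three servers agreeing near zero offset, one 250 ms off:
servers = [(0.001, 0.002), (0.000, 0.002), (-0.001, 0.002), (0.250, 0.002)]
# find_falsetickers(servers) -> [3]: the outlier is out-voted
```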
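The transparent-clock mechanism the talk describes has a very small core: timestamp the event packet on ingress and egress, and accumulate the residence time in the packet's correction field so the slave can subtract the switches' queuing delay. A sketch of that bookkeeping (the function name and nanosecond units are illustrative, not the IEEE 1588 wire format):

```python
def add_residence_time(correction_ns, ingress_ns, egress_ns):
    """What an end-to-end transparent clock contributes: the time the
    packet spent inside this switch, added to the running correction."""
    return correction_ns + (egress_ns - ingress_ns)

# A sync message crosses two switches on its way to the slave:
cf = 0
cf = add_residence_time(cf, ingress_ns=1000, egress_ns=4500)  # 3.5 us queued
cf = add_residence_time(cf, ingress_ns=9000, egress_ns=9200)  # 0.2 us queued
# cf -> 3700 ns of queuing delay the slave can now remove from its estimate
```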
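The telecom-profile packet selector ("lucky packet" filtering) can likewise be sketched: with on the order of 128 sync messages per second there are enough samples to keep only the ones that saw the least queuing. The keep fraction and the plain average below are illustrative choices, not values from any profile specification:

```python
import random

def lucky_packet_offset(exchanges, keep_fraction=0.1):
    """exchanges: list of (offset, measured_delay) from many sync
    exchanges. Keep the fraction with the smallest delay -- the
    'lucky' packets that likely sailed through empty queues -- and
    average their offsets."""
    ranked = sorted(exchanges, key=lambda e: e[1])
    keep = max(1, int(len(ranked) * keep_fraction))
    return sum(off for off, _ in ranked[:keep]) / keep

# Synthetic second of traffic: 128 syncs, about a quarter of them hit
# by random one-way queuing that biases the raw offset estimate.
random.seed(0)
true_offset, base_delay = 0.0005, 0.0001
samples = []
for _ in range(128):
    queuing = random.choice([0.0, 0.0, 0.0, random.uniform(0, 0.002)])
    samples.append((true_offset + queuing / 2, base_delay + queuing))
est = lucky_packet_offset(samples)
# est recovers true_offset; a plain average of all 128 would not
```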
Info
Channel: Alliance for Telecommunications Industry Solutions
Id: wPGRXQYDAyQ
Length: 19min 25sec (1165 seconds)
Published: Fri Feb 03 2017