
Network Time Protocol (NTP) General Overview


Presentation Transcript


  1. Network Time Protocol (NTP) General Overview David L. Mills University of Delaware http://www.eecis.udel.edu/~mills mailto:mills@udel.edu

  2. Introduction • Network Time Protocol (NTP) synchronizes clocks of hosts and routers in the Internet • Well over 100,000 NTP peers deployed in the Internet and its tributaries all over the world • Provides nominal accuracies of low tens of milliseconds on WANs, submilliseconds on LANs, and submicroseconds using a precision time source such as a cesium oscillator or GPS receiver • Unix NTP daemon ported to almost every workstation and server platform available today - from PCs to Crays - Unix, Windows, VMS and embedded systems • Following is a general overview of the NTP architecture, protocol and algorithms • Data are included from a 1997 survey of NTP clients and servers in the Internet

  3. Needs for synchronized time • Stock market sale and buy orders and confirmation timestamps • Network fault isolation, reporting and restoral • Network monitoring, measurement and control • Distributed multimedia stream synchronization • RPC at-most-once transactions; replay defenses; sequence-number disambiguation • Research experiment setup, measurement and control • Cryptographic key management and lifetime control

  4. NTP capsule summary • Primary (stratum 1) servers synchronize to national time standards via radio, satellite and modem • Secondary (stratum 2, ...) servers and clients synchronize to primary servers via hierarchical subnet • Clients and servers operate in master/slave, symmetric or multicast modes with or without cryptographic authentication • Reliability assured by redundant servers and diverse network paths • Engineered algorithms reduce jitter, mitigate multiple sources and avoid improperly operating servers • System clock is disciplined in time and frequency using an adaptive algorithm responsive to network time jitter and clock oscillator frequency wander

  5. NTP configurations • (a) Workstations use multicast mode with multiple department servers • (b) Department servers use client/server modes with multiple campus servers and symmetric modes with each other • (c) Campus servers use client/server modes with up to six different external primary servers and symmetric modes with each other and external secondary (buddy) servers (see the configuration sketch below) • [Figure: example subnet topologies for cases (a), (b) and (c), showing stratum 1-4 (S1-S4) servers, clients and buddy (S2) links]
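
As a concrete illustration of these modes, here is a minimal NTPv4-style ntp.conf sketch for a campus server in case (c); the host names are hypothetical placeholders, not real time servers:

    # client/server mode toward two external primary (stratum 1) servers
    server  tick.example.gov  iburst
    server  tock.example.gov  iburst
    # symmetric mode with an external secondary (buddy) server
    peer    buddy.example.edu
    # multicast service toward department servers and workstations
    broadcast 224.0.1.1 ttl 4

    # a department workstation in case (a) would instead configure only:
    # multicastclient 224.0.1.1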

  6. How NTP works • Multiple synchronization peers provide redundancy and diversity • Clock filters select best from a window of eight clock offset samples • Intersection and clustering algorithms pick best subset of servers believed to be accurate and fault-free • Combining algorithm computes weighted average of offsets for best accuracy (see the sketch below) • Phase/frequency-lock feedback loop disciplines local clock time and frequency to maximize accuracy and stability • [Figure: NTP messages and timestamps from peers 1-3 flow through clock filters 1-3 into the intersection, clustering and combining algorithms, then through the loop filter of the phase/frequency-lock loop that steers the VFO]
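
A minimal C sketch of the combining step, assuming (as one common choice) weights inversely proportional to each survivor's synchronization distance; the peer values are invented for illustration:

    /* Sketch of an NTP-style combining step: the final offset is a weighted
     * average of the surviving peers' offsets, here weighted by the
     * reciprocal of each peer's synchronization distance. */
    #include <stdio.h>

    struct survivor {
        double offset;    /* peer clock offset, seconds */
        double distance;  /* synchronization distance, seconds */
    };

    int main(void)
    {
        struct survivor s[] = {
            { 0.0021, 0.015 },
            { 0.0034, 0.042 },
            { 0.0018, 0.011 },
        };
        double num = 0.0, den = 0.0;
        for (int i = 0; i < 3; i++) {
            double w = 1.0 / s[i].distance;  /* closer peers count more */
            num += w * s[i].offset;
            den += w;
        }
        printf("combined offset = %.6f s\n", num / den);
        return 0;
    }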

  7. NTP process decomposition (NTPv4) • Each peer process runs independently at poll intervals determined by the system process and the remote server • System process runs at poll intervals determined by the measured network phase jitter and local clock oscillator frequency stability • Clock adjust process runs at 1-s intervals to discipline the VFO phase and frequency • [Figure: remote servers drive peer processes (clock filters 1-3), whose outputs feed the system process (selection, clustering and combining algorithms), the loop filter and the clock adjust process controlling the VFO]

  8. NTP dataflow analysis • Each server calculates server variables offset Θ, delay Δ and dispersion E relative to the root of the synchronization subtree • At each NTP message arrival, the peer process updates peer offset θ, delay δ, dispersion ε and filter error φ_r (NTPv4) from timestamps and the clock filter algorithm • At system poll intervals, the clock selection and combining algorithms update system variables Θ, Δ, E and φ • Dispersions ε and E increase with time at a rate depending on specified frequency tolerance φ • [Figure: servers 1-3 present (Δ, E); peer processes 1-3 compute (θ, δ, ε, φ); the system combines them into (Θ, Δ, E, φ)]
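
As a rough sketch of the accumulation just described, in this slide's notation (the exact terms differ slightly between NTPv3 and NTPv4, so treat this as an approximation rather than the specification formulas): each peer's root values are its server's root values plus the path to that server, and dispersions age at the tolerance rate,

    $$\Delta \approx \Delta_B + \delta, \qquad E \approx E_B + \varepsilon, \qquad \varepsilon(t_0 + \tau) = \varepsilon(t_0) + \varphi\,\tau$$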

  9. Clock filter algorithm • The most accurate offset θ0 is measured at the lowest delay δ0 (apex of the wedge scattergram) • The correct time θ must lie within the wedge θ0 ± (δ − δ0)/2 • The δ0 is estimated as the minimum of the last eight delay measurements and (δ0, θ0) becomes the delay and offset output • Each output can be used only once and must be more recent than the previous output • The distance metric λ is based on delay, frequency tolerance and time since the last measurement • [Figure: wedge scattergram bounded by client timestamps T1, T4 and server timestamps T2, T3; a code sketch follows below]
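
A compact C sketch of the ideas above: the standard NTP on-wire calculation of offset and delay from the timestamps T1-T4, and a minimum-delay pick over the last eight samples; all sample values are invented:

    /* Offset/delay from the four protocol timestamps, plus the minimum-delay
     * clock filter pick over an eight-stage window. */
    #include <stdio.h>

    #define NSTAGE 8

    static void on_wire(double t1, double t2, double t3, double t4,
                        double *offset, double *delay)
    {
        *offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* theta */
        *delay  = (t4 - t1) - (t3 - t2);          /* delta */
    }

    int main(void)
    {
        /* Pretend shift register of the last eight (offset, delay) samples. */
        double offs[NSTAGE]   = { 0.012, 0.003, 0.007, 0.002, 0.009, 0.004, 0.015, 0.003 };
        double delays[NSTAGE] = { 0.090, 0.031, 0.055, 0.028, 0.070, 0.034, 0.120, 0.030 };

        /* The filter picks the sample measured at the lowest delay (delta0);
         * its offset (theta0) becomes the peer offset estimate. */
        int best = 0;
        for (int i = 1; i < NSTAGE; i++)
            if (delays[i] < delays[best])
                best = i;
        printf("theta0 = %.3f s at delta0 = %.3f s\n", offs[best], delays[best]);

        double theta, delta;
        on_wire(1000.000, 1000.050, 1000.051, 1000.101, &theta, &delta);
        printf("on-wire sample: offset %.6f s, delay %.6f s\n", theta, delta);
        return 0;
    }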

  10. Performance of clock filter algorithm • These plots show the absolute clock offset in semilog coordinates for a path between the US east and west coasts over six days • (left) Raw absolute data offset samples • (right) Data offset samples processed by the clock filter algorithm • The algorithm reduces offset errors by a factor of about ten • The algorithm is particularly effective at removing spikes

  11. Intersection algorithm • DTS correctness interval is the intersection which contains points from the largest number of correctness intervals • NTP algorithm additionally requires the midpoint of the intervals to be in the intersection • Initially, set falsetickers f and counters c and d to zero • Scan from the far left endpoint: add one to c for every lower endpoint, subtract one for every upper endpoint, add one to d for every midpoint • If c ≥ m − f and d ≤ f, declare success and exit the procedure • Do the same starting from the far right endpoint • If success is not declared, increase f by one and try all over again • If f ≥ m/2, declare failure • correctness interval: θ − λ ≤ θ0 ≤ θ + λ, where m = number of clocks and f = number of presumed falsetickers • [Figure: overlapping correctness intervals for clocks A, B, C (truechimers) and D (falseticker), with the correct DTS and correct NTP intersections marked; a code sketch follows below]
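
A C sketch of this procedure in the spirit of RFC 1305, using invented offsets and distances for three truechimers and one falseticker; it illustrates the scan described above, not the ntpd implementation:

    /* Find an interval containing the midpoints of at least m - f of the m
     * correctness intervals [theta - lambda, theta + lambda], allowing up to
     * f falsetickers. */
    #include <stdio.h>
    #include <stdlib.h>

    struct edge { double value; int type; };  /* -1 lower, 0 midpoint, +1 upper */

    static int cmp(const void *a, const void *b)
    {
        double d = ((const struct edge *)a)->value - ((const struct edge *)b)->value;
        return (d > 0) - (d < 0);
    }

    int main(void)
    {
        /* Three truechimers and one falseticker: offsets and distances. */
        double theta[]  = { 0.010, 0.012, 0.008, 0.300 };
        double lambda[] = { 0.020, 0.015, 0.025, 0.010 };
        int m = 4, n = 0;
        struct edge e[12];                       /* three edges per clock */

        for (int i = 0; i < m; i++) {
            e[n++] = (struct edge){ theta[i] - lambda[i], -1 };
            e[n++] = (struct edge){ theta[i],              0 };
            e[n++] = (struct edge){ theta[i] + lambda[i], +1 };
        }
        qsort(e, n, sizeof e[0], cmp);

        for (int f = 0; 2 * f < m; f++) {        /* give up when f >= m/2 */
            double low = 0, high = 0;
            int c, d = 0;

            c = 0;                               /* scan from the far left */
            for (int i = 0; i < n; i++) {
                c -= e[i].type;                  /* +1 lower, -1 upper endpoint */
                if (c >= m - f) { low = e[i].value; break; }
                if (e[i].type == 0) d++;         /* midpoint left outside */
            }
            c = 0;                               /* scan from the far right */
            for (int i = n - 1; i >= 0; i--) {
                c += e[i].type;
                if (c >= m - f) { high = e[i].value; break; }
                if (e[i].type == 0) d++;
            }
            if (d <= f && low < high) {
                printf("f = %d falsetickers; correct interval [%g, %g]\n", f, low, high);
                return 0;
            }
        }
        printf("failure: no majority clique\n");
        return 1;
    }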

  12. Clustering algorithm • Sort the survivors of the intersection algorithm by increasing synchronization distance; let n be the number of survivors and nmin a lower limit • For each survivor si, compute the select dispersion (weighted sum of clock offset difference squares) between si and all the others • Let smax be the survivor with maximum select dispersion (relative to all other survivors) and smin the survivor with minimum sample dispersion (clock offset differences relative to past samples of the same survivor) • If smax ≤ smin or n ≤ nmin, stop; otherwise delete the survivor smax, reduce n by one and repeat • The resulting survivors are processed by the combining algorithm to produce a weighted average used as the final clock offset adjustment (see the sketch below)
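
A simplified C sketch of this pruning loop, using an unweighted RMS offset difference in place of the weighted select dispersion and an invented per-survivor sample jitter; it illustrates the termination test (smax ≤ smin or n ≤ nmin) described above:

    /* Repeatedly cast off the survivor whose offset disagrees most with the
     * rest, until the worst disagreement is no larger than the best
     * survivor's own sample jitter, or a minimum number remains. */
    #include <math.h>
    #include <stdio.h>

    #define NMIN 3

    struct survivor {
        double offset;  /* filtered clock offset, s */
        double jitter;  /* sample jitter of this survivor's own filter, s */
    };

    int main(void)
    {
        struct survivor s[] = {
            { 0.0020, 0.0005 }, { 0.0030, 0.0007 }, { 0.0010, 0.0004 },
            { 0.0200, 0.0006 }, { 0.0025, 0.0008 },
        };
        int n = 5;

        for (;;) {
            int worst = 0;
            double smax = 0.0, smin = 1e9;

            for (int i = 0; i < n; i++) {
                /* "select dispersion": RMS offset difference to the others */
                double sum = 0.0;
                for (int j = 0; j < n; j++)
                    sum += (s[i].offset - s[j].offset) * (s[i].offset - s[j].offset);
                double sel = sqrt(sum / (n - 1));
                if (sel > smax) { smax = sel; worst = i; }
                if (s[i].jitter < smin) smin = s[i].jitter;
            }
            if (smax <= smin || n <= NMIN)
                break;
            s[worst] = s[n - 1];   /* cast off the outlier and continue */
            n--;
        }
        for (int i = 0; i < n; i++)
            printf("survivor %d: offset %.4f s\n", i, s[i].offset);
        return 0;
    }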

  13. Error budget - notation • Constants (peers A and B): ρ maximum reading error; φ maximum frequency error; w dispersion normalizer: 0.5 • Packet variables: Δ_B peer root delay; E_B peer root dispersion • Sample variables: T1, T2, T3, T4 protocol timestamps; x clock offset; y roundtrip delay; z dispersion; τ interval since last update • System variables: Θ clock offset; Δ root delay; E root dispersion; φ_s selection jitter; φ jitter; τ interval since last update • Peer variables: θ clock offset; δ roundtrip delay; ε dispersion; φ_r filter jitter; n filter stages: 8; τ interval since last update

  14. Error budget - calculations • [Figure: NTP Version 4 error budget, showing how the sample variables from peers A and B are accumulated into peer variables and then into the system variables]
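
The calculation diagram itself did not survive the transcript; as a hedged summary in the notation of the previous slide (following the NTPv4 specification only approximately), the dispersion of a single sample and the total synchronization distance accumulated from the root, which bounds the maximum error, are roughly

    $$\varepsilon = \rho_A + \rho_B + \varphi\,(T_4 - T_1), \qquad \Lambda \approx \tfrac{1}{2}(\Delta_B + \delta) + E_B + \varepsilon + \varphi\,\tau + \varphi_r$$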

  15. Clock discipline algorithm • Vd is a function of the phase difference between NTP and the VFO • Vs depends on the stage chosen on the clock filter shift register • x and y are the phase update and frequency update, respectively, computed by the prediction functions • Clock adjust process runs once per second to compute Vc, which controls the frequency of the local clock oscillator • VFO phase is compared to NTP phase to close the feedback loop • [Figure: feedback loop with NTP reference phase θr+ and VFO phase θc− entering the phase detector (Vd), then the clock filter (Vs), the loop filter with phase/frequency prediction (x, y) and the clock adjust process (Vc) driving the VFO; a toy sketch follows below]
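
A toy C sketch of a phase/frequency discipline in the spirit of the figure; the gains, the per-update step correction and the simulated drift are illustrative choices, not ntpd's actual constants or behavior:

    /* Each measured offset yields a phase step (x) and a frequency nudge (y),
     * driving a simulated local clock toward the server. */
    #include <stdio.h>

    int main(void)
    {
        const double poll = 64.0;    /* update interval, s */
        const double a = 0.5;        /* phase gain */
        const double b = 0.25;       /* frequency gain */
        double drift = 50e-6;        /* true oscillator error: +50 ppm */
        double freq = 0.0;           /* accumulated frequency correction, s/s */
        double err = 5e-3;           /* local clock error: starts 5 ms fast */

        for (int n = 0; n < 10; n++) {
            double offset = -err;            /* measured offset (server - local) */
            printf("poll %2d: offset %+9.6f s  freq %+8.1f ppm\n",
                   n, offset, freq * 1e6);
            err += a * offset;               /* phase correction (x) */
            freq += b * offset / poll;       /* frequency correction (y) */
            err += (drift + freq) * poll;    /* clock free-runs until next update */
        }
        return 0;
    }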

  16. NTP protocol header and timestamp formats • NTP protocol header format (32-bit words): LI | VN | Mode | Strat | Poll | Prec, Root Delay, Root Dispersion, Reference Identifier, Reference Timestamp (64), Originate Timestamp (64), Receive Timestamp (64), Transmit Timestamp (64), then optional NTPv4 extension fields and an optional authenticator • LI leap warning indicator, VN version number (4), Strat stratum (0-15), Poll poll interval (log2), Prec precision (log2) • NTP timestamp format (64 bits): Seconds (32) and Fraction (32); value is in seconds and fraction since 0h 1 January 1900 • NTPv4 extension fields (optional, NTPv4 only, authentication only): Field Length, Field Type, value padded to a 32-bit boundary; last field padded to a 64-bit boundary • NTPv3 and NTPv4 authenticator (optional): Key/Algorithm Identifier and Message Hash cryptosum (64 or 128); authenticator uses DES-CBC or MD5 cryptosum of the NTP header plus extension fields (NTPv4) • (a header layout sketch follows below)
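
A C sketch of the 48-byte header layout and the 1900-epoch timestamp conversion; the structure is an illustration using standard types (the wire format is big-endian), not the layout used in the ntpd sources:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    struct ntp_packet {
        uint8_t  li_vn_mode;      /* 2-bit LI, 3-bit VN, 3-bit mode */
        uint8_t  stratum;         /* 0-15 */
        int8_t   poll;            /* poll interval, log2 seconds */
        int8_t   precision;       /* clock precision, log2 seconds */
        uint32_t root_delay;      /* 16.16 fixed point, seconds */
        uint32_t root_dispersion; /* 16.16 fixed point, seconds */
        uint32_t reference_id;    /* refclock ID or server address */
        uint64_t reference_ts;    /* 32.32 fixed point since 0h 1 Jan 1900 */
        uint64_t originate_ts;
        uint64_t receive_ts;
        uint64_t transmit_ts;
    };                            /* 48 bytes on the wire */

    #define JAN_1900_TO_1970 2208988800UL   /* seconds between NTP and Unix epochs */

    int main(void)
    {
        /* Convert current Unix time to the upper 32 bits of an NTP timestamp. */
        time_t now = time(NULL);
        uint32_t ntp_seconds = (uint32_t)(now + JAN_1900_TO_1970);

        struct ntp_packet p = { 0 };
        p.li_vn_mode = (0 << 6) | (4 << 3) | 3;   /* LI=0, VN=4, mode=3 (client) */
        p.transmit_ts = (uint64_t)ntp_seconds << 32;
        printf("mode byte 0x%02x, transmit seconds since 1900 = %u\n",
               p.li_vn_mode, (unsigned)ntp_seconds);
        return 0;
    }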

  17. A day in the life of a busy NTP server • NTP primary (stratum 1) server rackety is a Sun IPC running SunOS 4.1.3 and supporting 734 clients scattered all over the world • This machine supports NFS, NTP, RIP, IGMP and a mess of printers, radio clocks and an 8-port serial multiplexor • The mean input packet rate is 6.4 packets/second, which corresponds to a mean poll interval of 157 seconds for each client • Each input packet generates an average of 0.64 output packets and requires a total of 2.4 ms of CPU time for the input/output transaction • In total, the NTP service requires 1.54% of the available CPU time and generates 10.5 608-bit packets per second, or 0.41% of a T1 line • The conclusion drawn is that even a slow machine can support substantial numbers of clients with no significant degradation of other network services
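
The utilization figures follow from simple arithmetic (assuming the standard 1.544 Mb/s T1 rate):

    $$6.4\ \text{packets/s} \times 2.4\ \text{ms} \approx 15.4\ \text{ms/s} \approx 1.54\%\ \text{CPU}$$
    $$6.4 \times (1 + 0.64) \approx 10.5\ \text{packets/s}, \qquad \frac{10.5 \times 608\ \text{bits}}{1.544\ \text{Mb/s}} \approx 0.41\%$$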

  18. Server population by stratum (from survey)

  19. Client population by stratum (from survey)

  20. Reference clock sources • In a survey of 36,479 peers, found 1,733 primary and backup external reference sources • 231 radio/satellite/modem primary sources • 47 GPS satellite (worldwide), GOES satellite (western hemisphere) • 57 WWVB radio (US) • 17 WWV radio (US) • 63 DCF77 radio (Europe) • 6 MSF radio (UK) • 5 CHU radio (Canada) • 7 modem time service (NIST and USNO (US), PTB (Germany), NPL (UK)) • 25 other (precision PPS sources, etc.) • 1,502 local clock backup sources (used only if all other sources fail) • For some reason or other, 88 of the 1,733 sources appeared down at the time of the survey

  21. Current progress and status • NTP Version 4 protocol, architecture and algorithms • Backwards compatible protocol algorithm implemented and tested • Improved local clock model completed and tested • Nanokernel precision time kernel modifications simulated, implemented and tested with SPARC, Alpha and Intel architectures • IETF pulse-per-second application program interface implemented and tested for SPARC and Intel architectures • Autonomous configuration autoconfigure • Multicast discovery with propagation correction completed and tested • Manycast discovery largely completed • Distributed add/drop greedy heuristic designed and simulated • Span-limited, hierarchical multicast groups using NTP distributed mode and add/drop heuristics under study • Autonomous authentication autokey • Implemented and in test

  22. Future plans • Complete autoconfigure and autokey implementation in NTP Version 4 • Deploy, test and evaluate NTP Version 4 daemon in DARTnet II testbed, then at friendly sites in the US, Europe and Asia • Revise the NTP formal specification and launch on standards track • Participate in deployment strategies with NIST, USNO, others • Prosecute standards agenda in IETF, ANSI, ITU, POSIX • Develop scenarios for other applications such as web caching, DNS servers and other multicast services

  23. NTP online resources • NTP specification documents • Internet (Draft) NTP standard specification RFC-1305 • Simple NTP (SNTP) RFC-2030 • NTP Version 4 papers and reports at http://www.eecis.udel.edu/~mills • Under consideration in ANSI, ITU, POSIX • NTP web page http://www.ntp.org/ • NTP Version 3 and Version 4 software and HTML documentation • Utility programs for remote monitoring, control and performance evaluation • Ported to over two dozen architectures and operating systems • Supporting resources • List of public NTP time servers (primary and secondary) • NTP newsgroup and FAQ compendia • Tutorials, hints and bibliographies • Links to other NTP software

  24. Further information • Network Time Protocol (NTP): http://www.ntp.org/ • Current NTP Version 3 and 4 software and documentation • FAQ and links to other sources and interesting places • David L. Mills: http://www.eecis.udel.edu/~mills • Papers, reports and memoranda in PostScript and PDF formats • Briefings in HTML, PostScript, PowerPoint and PDF formats • Collaboration resources hardware, software and documentation • Songs, photo galleries and after-dinner speech scripts • FTP server ftp.udel.edu (pub/ntp directory) • Current NTP Version 3 and 4 software and documentation repository • Collaboration resources repository • Related project descriptions and briefings • See “Current Research Project Descriptions and Briefings” at http://www.eecis.udel.edu/~mills/status.htm
