TimeNL comes of age

Increased capacity and reliability

The original blog is in Dutch. This is the English translation.

In 2019, we launched our public NTP service, TimeNL. Our aim was to make the NTP ecosystem more transparent and to emphasise the importance of time synchronisation. The initiative was well received, so we decided to develop TimeNL further. We began with an NTS (Network Time Security) pilot, before building an anycast variant with twenty-eight (1) globally distributed nodes in 2020. Since then, TimeNL has been very well used, and we have continued our NTP research. We also formulated plans for version 2, with extensions designed to make TimeNL more resilient and reliable. With most of those plans now realised, it seems like a good time for an update.

TimeNL version 1: the problem of a diffuse ecosystem

Before looking at TimeNL version 2, it's useful to recall why TimeNL was originally set up.

The first version of TimeNL was introduced because the NTP ecosystem is very diffuse. Most NTP service providers aren't transparent about the characteristics of the services they offer, such as what time sources they use (e.g. atomic clock, GPS or DCF77). Nor do they usually provide information about service availability, reliability and redundancy, or about ISO 27001 certification, for example. It's hard to establish how well NTP servers are managed and monitored, or what service levels the operators aspire to. And few service providers offer proper user support or contact mechanisms.

Lack of transparency means it's hard to make a sound, well-informed choice about which public NTP service to use. And that introduces risks for time service users. After all, reliable timing, with at least microsecond accuracy in some cases, is very important for certain applications. An inaccurate timestamp can have legal implications where 'legally traceable time' is required, as in the financial services sector. Accurate timestamps are also important in the analysis of log files for forensic and other purposes. Closer to home, we have the first-come, first-served principle used in domain name registrations. Other applications for which time is significant include TLS certificates, 'high-frequency trading' (on stock markets that have to be MiFID II-compliant), digital signing (e.g. DNSSEC), power grids, OAuth tokens and Kerberos, SCADA systems, CCTV security systems, and broadcasting (mixing, mastering in studios, etc.). Cryptocurrencies rely on accurate timing as well. And, of course, countless more mundane applications are time-dependent, including simple diary applications and the synchronisation of domestic clocks. In short, accurate time is needed for all sorts of reasons.

We therefore set up TimeNL to draw attention to the significance of time services, while also helping to address some of the shortcomings we had observed with public NTP services. Our public NTP service was accordingly designed to be transparent, considered, responsibly operated (e.g. with respect for privacy), and based on modern standards, current software and state-of-the-art equipment.

For example, we don't rely exclusively on the American GPS system for our reference clock, but use the European Galileo system as well, with DCF77 as a backup. TimeNL is also accessible using IPv6.

For the full backstory to TimeNL version 1, see our website.

TimeNL version 2: increased reliability

We gradually came to the conclusion that our first version of TimeNL wasn't the finished article, because it was potentially vulnerable to certain types of fault. We therefore set about designing a new version capable of satisfying two further requirements.

First, that it should be able to cope with radio signal interference. GNSS (and DCF77) signals sometimes exhibit inaccuracies due to reflection (e.g. off buildings), exceptional atmospheric and other conditions, jamming or spoofing. At least as significantly, the receiving antennas can be affected by storm damage and other physical problems.

Our second requirement was increased availability levels for the TimeNL servers. We wanted to make the single NTP server we started out with horizontally scalable, to enable us to cope with sudden peaks in NTP traffic. Such peaks can be caused by unintended user-side errors, or theoretically by malicious intervention. For example, there have been a few occasions when the flow of NTP queries rose considerably, from roughly 2,000 per second to 75,000 per second (see figure 1).

Figure 1: Sudden peak in NTP queries.

Design of TimeNL version 2

Figure 2 shows the schematic design of TimeNL version 2, based on our two additional reliability requirements. The main components are:

  • Two core clocks: the primary core clock (TimeNL version 1) provides a very accurate time signal based on the GNSS and DCF77 reference clocks and associated antennas. In our case, the antennas are located on the roof of the SIDN building in Arnhem. The secondary core clock also provides an accurate time signal, but via a connected Rubidium atomic 'holdover clock'. That clock is normally synchronised with the primary clock, but, in the event of a GNSS and DCF77 signal outage, can assume the role of reference clock for the primary and secondary core clocks and maintain that role for an extended period.

  • Multiple frontend clocks, which use the Precision Time Protocol (PTP) to obtain time checks from one of the two core clocks. The idea of a frontend system of this kind is that it allows for the rapid expansion of capacity, simply by adding more systems. Another advantage is that the frontend clocks can offer NTS (Network Time Security) and even provide PTP services to external users.

  • A PTP backbone: a network based on the Precision Time Protocol, which connects all our clock systems and synchronises them on the basis of the PTP protocol. PTP allows for a very high level of accuracy.

The primary and secondary core clocks are, respectively, a Meinberg M3000 and a Meinberg M1000. They act as stratum-1 NTP servers and as PTP GMs (grandmasters). The frontend clocks are Linux-based servers.

Figure 2: Design of TimeNL version 2.

Our realisation of the various components is described below.

PTP backbone

The time servers for TimeNL version 2 (core clocks and frontend clocks) are linked by an IEEE 1588 PTPv2 connection. PTP stands for Precision Time Protocol, an extremely accurate synchronisation protocol whose field of application differs from that of NTP. We have found PTP to be ideal for our purpose, namely the highly precise synchronisation of the frontend clocks with the core clocks.

One reason for PTP's accuracy is the use of 'hardware timestamping' on the physical network interfaces (the PHY layer in figure 3). Generating the timestamp as low as possible in the protocol stack avoids the timing variations that can occur higher up the stack.
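
As an aside for readers who want to experiment at home: on Linux, you can check whether a network card supports hardware timestamping with ethtool (the interface name below is an example, not our set-up):

    # Show the time-stamping capabilities of a network interface.
    ethtool -T eth0
    # Look for 'hardware-transmit', 'hardware-receive' and a
    # 'PTP Hardware Clock' index in the output; without them,
    # PTP falls back to much less accurate software timestamping.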

Figure 3: Hardware timestamping on the PHY layer prevents protocol stack delays.

Rubidium atomic holdover clock

For the secondary core clock, we use a Meinberg M1000 linked to a Meinberg XHERb chassis. Via the PTP backbone, that core clock communicates with the TimeNL version 1 hardware (2), a Meinberg M3000 (the primary core clock). However, the two clocks' roles can be reversed – if, for example, the M3000 loses a reliable GNSS or DCF77 reference clock signal.

Figure 4 shows our server rack assembly at the time of set-up and testing.

Figure 4: TimeNL v2 test set-up.

Using the M1000's ultra-precise PRS10 Rubidium oscillator and Meinberg's Multi-Reference Source (MRS) and Intelligent Reference Switching Algorithm (IRSA) technologies, we can constantly check the reliability of the GPS/Galileo and DCF77 reference clocks. If reliability cannot be confirmed, the system automatically switches to the holdover clock, which is permanently on stand-by as a Trusted Source (TRS).

The M3000 and M1000 core clocks keep each other 'in sync' via the PTP backbone (reversing roles where desirable). The Best Master Clock Algorithm (BMCA), which forms part of the PTP standard, determines which of the two clocks ultimately serves as the PTP grandmaster (GM) clock, by comparing the attributes that each clock announces (priority1 first, then clock class, clock accuracy and so on). The GM clock is then used by the stratum-1 NTP frontend clocks for time synchronisation.
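
Our Meinberg units are configured through their own management interface, but for readers running LinuxPTP, the equivalent BMCA knobs look roughly like this. A minimal sketch; the values are illustrative, not our production settings:

    # ptp4l configuration sketch (e.g. /etc/linuxptp/ptp4l.conf).
    [global]
    domainNumber   0
    # The BMCA compares priority1 first; a lower value wins. Giving
    # the preferred grandmaster the lowest priority1 means the backup
    # only takes over when the preferred GM stops announcing or
    # degrades its announced clock class.
    priority1      10
    priority2      10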

Stratum-1 NTP frontend clocks

As figure 2 shows, the internet traffic directed to us via the NTP Pool is currently received by two frontend clocks, both of which act as stratum-1 NTP servers. That functional separation enables us to develop the capacity of the PTP backbone and of the NTP servers separately. That is important in the context of our implementation, partly because the global FPGA chip shortage (3) meant that our M1000 was supplied with a reduced-capacity CPU module. That isn't ideal, given the volume of traffic we sometimes get from the NTP Pool, but fortunately the two NTP frontend clocks are easily able to handle it.

It will be straightforward to add further NTP frontend clocks in due course. As mentioned earlier, one of our requirements was that the set-up should be easily horizontally scalable. We have realised that by implementing the NTP frontend clocks with off-the-shelf Linux servers and linking them in the 'PTP cloud'. We have also optimised the Linux servers for their particular task.

The Linux NTP servers additionally enable us to offer NTS (Network Time Security). We have previously run an NTS pilot (nts.time.nl), but NTS is now supported by the production implementation.
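
For readers who want to try NTS with Chrony themselves: enabling it takes only a few directives (Chrony 4.0 or later). The certificate paths below are placeholders, not our actual configuration:

    # Server side (chrony.conf): serve NTS on the default port, 4460.
    # Certificate and key paths are placeholders.
    ntsservercert /etc/chrony/certs/example.crt
    ntsserverkey  /etc/chrony/certs/example.key

    # Client side (chrony.conf): the 'nts' option requests
    # authenticated time from our NTS service.
    server nts.time.nl iburst nts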

Initial experience with TimeNL version 2

Our new set-up has now been running for more than two months without a hitch. The frontend clocks (Linux systems) are performing very well, coping comfortably with the traffic peaks (even better than the Meinbergs). They also increase our flexibility, for example by enabling us to compartmentalise functionality. For instance, we can have an interface with Network A and another with Network B, each served by its own autonomous instance of the NTP software. We use Chrony for that purpose (the Meinbergs run their own version of the classic NTP software), integrated with LinuxPTP as the stratum-0 time source.
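
In outline, that integration looks like this. A sketch, assuming the NIC exposes its PTP hardware clock as /dev/ptp0; the interface name is an example:

    # Run ptp4l as a PTP client with hardware timestamping; it
    # disciplines the NIC's PTP hardware clock (PHC) from the backbone.
    ptp4l -i eth0 -s -H -m

    # chrony.conf: read the disciplined PHC as a stratum-0 reference,
    # so that chronyd itself serves time as a stratum-1 NTP server.
    refclock PHC /dev/ptp0 poll 0 dpoll -2 offset 0 refid PTP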

As well as 'hardening' (i.e. securing) the two frontend clocks, we also wanted to optimise their performance. After all, every nanosecond of improvement counts with PTP. We therefore configured Chrony to use hardware timestamping. We run a low-latency kernel and have used tools such as 'chrt' to configure real-time scheduling attributes and the like. We have also taken the unusual step of disabling EEE (Energy-Efficient Ethernet) on a number of network cards, because EEE can adversely affect accuracy, particularly if the interface has relatively little to do (and goes to sleep).
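
A few of those tweaks in concrete form (a sketch; the interface name and priority value are illustrative):

    # chrony.conf: enable hardware timestamping on the NIC.
    hwtimestamp eth0

    # Give ptp4l real-time (SCHED_FIFO) scheduling priority so timing
    # tasks aren't delayed by ordinary workloads.
    chrt -f -p 50 "$(pidof ptp4l)"

    # Disable Energy-Efficient Ethernet: a PHY that dozes off on a
    # quiet link adds jitter to hardware timestamps.
    ethtool --set-eee eth0 eee off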

PTP was new to us, implying a steep learning curve. Unlike the NTP protocol, which is maintained under the IETF umbrella that we know well, the PTP standard comes from the IEEE stable. As a result, there was a lot to get used to. PTP's field of application was originally closed and aimed at local, trusted network environments. In consequence, PTP leaves a lot to be desired in terms of security. For example, you need to ensure that a 'rogue' clock, whether malicious or merely misconfigured, can't trick the BMCA and start acting as your grandmaster (GM), since the consequences could be serious. The PTP working group is aware of the shortcomings and is working on improvements. And, as PTP gradually gains ground, some PTP hardware vendors are looking to improve security as well. In the meantime, we have tackled the problems by setting up a sort of ringfenced PTP Boundary Clock (BC), whose operational performance we'll be evaluating.
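
One simple safeguard on the receiving side is to make sure a frontend clock can never be elected grandmaster itself. A minimal ptp4l sketch; this limits the blast radius of a rogue clock, but doesn't remove the need to isolate the PTP segment at network level:

    # ptp4l configuration sketch for a frontend clock.
    [global]
    # Never act as a candidate master in the GM election.
    slaveOnly      1
    # Only synchronise within the expected PTP domain.
    domainNumber   0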

Various practical tasks had to be taken care of as well, such as redesigning our monitoring set-up and making arrangements for knowledge assurance and appropriate operational management.

Next up: pilots with network operators

Various parties have expressed an interest in knowledge exchange and/or collaboration on the basis of TimeNL. For example, a network operator could offer its customers 'time-as-a-service', using TimeNL as one of its supporting time providers. Providing an NTP service has become a routine activity for us, and a PTP service may become equally straightforward in due course. There is certainly interest: we've seen other companies take similar steps recently.

In the period ahead, we'll look into the possibility of running pilots with network operators, in order to explore the potential of such services for contributing to our goal of a secure and stable internet for all.

If our first small steps are successful, we can start to think bigger. What we ultimately envisage is a pan-European 'time backbone' via which multiple European time providers make time services available across a wide area network. The multi-provider backbone design would promote diversity in the NTP and PTP ecosystem, increasing its resilience. The advantage of the approach would be that network operators wouldn't need to acquire and maintain their own infrastructures (antennas, time servers, PTP networks and holdover clocks).

We regard this initiative as an investment in the internet community and infrastructure, and we see ourselves as the facilitators of developments that others will hopefully take forward.

Feedback welcome!

If you've got any suggestions about how we can continue improving TimeNL or about NTP/PTP-related research issues, or if you want to bounce any other ideas around, please get in touch.


  1. Now reduced to twenty-seven nodes distributed across nineteen locations, handling a total of 100,000 to 175,000 NTP qps.

  2. In addition to the hardware we use for our anycast pilot 'any.time.nl' and our NTS pilot 'nts.time.nl'.

  3. We have agreed an exchange/upgrade procedure and will schedule the exchange/upgrade as soon as the superior CPU module becomes available again.