Re: Predictable Internet Time

On 03/01/2017 17:42, Phillip Hallam-Baker wrote:
Agree 100%

Hence my proposal for supporting multiple time scales for different purposes:


1) TAI: Use this and only this for any and all purposes that involve recording the time an event took place, including forensic and scientific purposes.

2) PIT: Use this for inter-machine communications. It may also be used to present TAI in a human-readable form, because the mapping from TAI to PIT is fixed.

It would, in my view, be better to use TAI for all inter-machine communications, and then allow applications that care, including those doing non-scientific display of time, to run the PIT algorithm locally.

There was a draft, which I think Yaakov Stein and I wrote in the early days of TICTOC, distinguishing transferred time from presentation time.

As far as I can see, the only application that needs the PIT-adapted version of time is an astronomer's wall clock. After all, most humans live with an error of 0 to 2 hours between local astronomical time and time-zone time, and most machines would be quite happy to live with whatever time they are using, with no further leap seconds.

- Stewart


3) Local Time Zones: Use these for human display purposes.


From the conversation, it seems that the best definition for PIT would be:

PIT = TAI + Smear(Lag(UTC, 50 years))
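
To make the intended mapping concrete, here is a minimal Python sketch of one plausible reading of that formula: each historical UTC leap step is replayed 50 years later and smeared linearly over 24 hours, so the TAI-to-PIT mapping is known decades in advance. The table format, the exact lag and smear lengths, and all names below are illustrative assumptions, not anything specified in this thread.

    # Hypothetical sketch, not a specification. Assumes: the lag is exactly
    # 50 Julian years, each replayed step is smeared linearly over 24 hours,
    # and leap history is a chronological list of
    # (TAI second of the UTC step, cumulative offset after that step).
    LAG = 50 * 365.25 * 86400     # "50 years", in seconds
    SMEAR = 86400                 # smear window for each replayed step

    LEAP_TABLE: list[tuple[float, float]] = []   # fill from IERS Bulletin C

    def pit_from_tai(tai: float) -> float:
        """PIT = TAI + Smear(Lag(UTC, 50 years)), sketched."""
        offset = 0.0
        for step_tai, cumulative in LEAP_TABLE:
            start = step_tai + LAG               # step applied 50 years late
            if tai >= start + SMEAR:             # step fully absorbed
                offset = cumulative
            elif tai > start:                    # mid-smear: interpolate
                offset += (cumulative - offset) * (tai - start) / SMEAR
                break
            else:                                # step still in the future
                break
        return tai + offset

Because every input is known at least 50 years in advance, any two implementations that agree on the table and the two constants compute identical PIT, which is what makes the TAI-to-PIT mapping "fixed".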



On Tue, Jan 3, 2017 at 12:27 PM, Stewart Bryant <stewart.bryant@xxxxxxxxx> wrote:

Smearing worries me.

If you have an application where tiny fractions of a second make no difference, then
a slow smear is a good approximation to no leap second.

However, there are some highly accurate implementations of NTP, and some highly
sensitive applications that use them, and a long-term interval error, which is what
a smear introduces, is harmful to those applications. For example, smearing one
leap second over 24 hours imposes a frequency offset of 1 s / 86400 s, roughly
11.6 parts per million, on every clock that follows the smear.

It seems to me that it might be better to freeze NTP on the current leap second
offset. Provide the current leap second offset to the application as a parameter
and let the application deal with it as it chooses.
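
As a rough illustration of that parameterisation (the type and field names below are hypothetical, not from any NTP specification): the timestamps on the wire would never step or smear, and the current TAI-UTC offset would travel alongside for whichever applications want it.

    from dataclasses import dataclass

    @dataclass
    class FrozenTimestamp:
        seconds: float        # continuous timescale, frozen at some offset
        tai_utc_offset: int   # current TAI-UTC offset in seconds (37 as of
                              # January 2017)

    def elapsed(a: FrozenTimestamp, b: FrozenTimestamp) -> float:
        # Interval measurement ignores the offset entirely: the timescale
        # is continuous, so subtraction is always safe.
        return b.seconds - a.seconds

    def utc_like(ts: FrozenTimestamp, frozen_at: int = 37) -> float:
        # Display use: apply only the part of the offset accrued since the
        # freeze; frozen_at is the TAI-UTC offset at the moment of freezing.
        return ts.seconds - (ts.tai_utc_offset - frozen_at)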

- Stewart


On 03/01/2017 14:08, Tony Finch wrote:
Joe Touch <touch@xxxxxxx> wrote:
Smearing leads to differing interpretations of elapsed time for two reasons:

1) smearing isn't unambiguously specified;
2) smearing doesn't match the clock standards set by the ITU (which defines UTC).
Since leap smear is becoming more popular, it would be sensible to try to
get a consensus on the best way to do it, if you do it at all. Clearly the
organizations that do leap smear have decided, on point (2), that leap
seconds are too much trouble and that it's better to diverge from official
time in a controlled manner.

To clear up (1), there are a few technical choices on which people seem to
be working towards some kind of agreement...

* If you centre the smear period over the leap second, your maximum error
   from UTC is 0.5 s, which seems to be preferable to starting or ending the
   smear period on the leap second.

* Linear smear works better than sigmoid smear, since it minimizes the
   rate divergence for a given smear period, and NTP's algorithms react
   better.

* Longer smear periods are better, because they give NTP more time to
   react to the rate change, and they minimize the rate difference.

It looks to me like a 24h leap smear from 12:00 UTC before the leap to
12:00 UTC after the leap has a good chance of becoming more popular than
other leap smear models.
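
For concreteness, here is a small sketch of that model: a linear smear centred on the leap second, running from 12:00 UTC the day before to 12:00 UTC the day after, assuming a positive leap second and expressing the smeared clock's offset against a leap-free count of seconds. It is an illustration of the model described above, not any vendor's documented implementation.

    SMEAR_HALF = 12 * 3600   # 12 hours either side of the leap second

    def smear_offset(t: float, leap_epoch: float) -> float:
        # Offset (seconds) of the smeared clock relative to a continuous,
        # leap-free timescale, for a positive leap second at leap_epoch.
        if t <= leap_epoch - SMEAR_HALF:
            return 0.0                 # smear not yet started
        if t >= leap_epoch + SMEAR_HALF:
            return -1.0                # the whole second has been absorbed
        # Linear ramp across 24 h: rate offset of 1/86400, about 11.6 ppm.
        return -(t - (leap_epoch - SMEAR_HALF)) / (2 * SMEAR_HALF)

Because the ramp is centred, the smeared clock never differs from UTC by more than 0.5 s, and the constant slope is what lets NTP clients track the rate change.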

Tony.



