On 21 Jun 2008, at 20:38, Chad Giffin wrote:
> Make time_t 64 bits wide. Make the most significant bit (bit 63) the
> sign bit. Make the next 50 bits store the number of seconds elapsed
> since January 1st 2000 GMT. Let the last 13 bits be fractions of a
> second.
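(For concreteness, that layout amounts to a 64-bit signed fixed-point
count of 1/8192ths of a second; a minimal C sketch, with the type and
helper names being my own invention, not part of any proposal:)

    #include <stdint.h>

    /* The proposed value, read as a signed 64-bit fixed-point number:
     *   bit 63       sign
     *   bits 62..13  whole seconds since 2000-01-01 00:00 GMT
     *   bits 12..0   fraction of a second, in units of 1/8192 s
     */
    typedef int64_t newtime_t;

    #define FRAC_BITS 13
    #define FRAC_UNIT 8192          /* 2^13 fractional steps per second */

    /* Pack seconds and a fraction (0..8191); multiply instead of
     * shifting so the result stays defined for negative seconds. */
    static inline newtime_t newtime_make(int64_t secs, unsigned frac)
    {
        return secs * FRAC_UNIT + (int64_t)(frac % FRAC_UNIT);
    }

    /* Whole seconds, using floor division so the fractional part is
     * always non-negative. */
    static inline int64_t newtime_whole_secs(newtime_t t)
    {
        return (t >= 0 || t % FRAC_UNIT == 0) ? t / FRAC_UNIT
                                              : t / FRAC_UNIT - 1;
    }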
Compute time in 1/8192ths of a second, i.e. 2^13 steps of roughly
122 µs each? That seems highly unworkable.
Also, what kind of time are we talking about? We all like to live
under the assumption that a day is 86400 seconds long, but
unfortunately the definitions of "day", "86400" and "second" are such
that this is not the case. So there are different kinds of time that
make different kinds of compromises, such as leap seconds for UTC.
Note that leap seconds make converting between UTC and the actual
number of seconds since any chosen epoch a non-trivial exercise for
the present and the past, and impossible for the future without
deviating from the definition of "second".
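(To see why the conversion is non-trivial, consider what it takes in
C. The table below is an abbreviated subset of the published leap
second list, and the function name is mine:)

    #include <stddef.h>
    #include <stdint.h>

    /* UTC second counts (since 1972-01-01, pretending every day has
     * 86400 s) at which a leap second was inserted.  Abbreviated to
     * the first three entries; the full table has dozens and grows
     * whenever the IERS announces another one. */
    static const int64_t leap_insertions[] = {
        15724800,   /* end of 1972-06-30 */
        31622400,   /* end of 1972-12-31 */
        63158400,   /* end of 1973-12-31 */
    };

    /* Number of leap seconds to add when turning a UTC-style count
     * into actually elapsed SI seconds.  Only valid for the past: for
     * future instants the table entries simply do not exist yet. */
    static int64_t leap_correction(int64_t utc_secs)
    {
        int64_t n = 0;
        for (size_t i = 0;
             i < sizeof leap_insertions / sizeof leap_insertions[0];
             i++)
            if (utc_secs >= leap_insertions[i])
                n++;
        return n;
    }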
> Last but not least, common C APIs already support microsecond
> resolution for timing; I forget the exact name, though.
> What do you think?
I think using a single value for high-precision timing as well as
calendar timing is asking for trouble. Then again, there are also
people who think changing your clock is a good way to get up a bit
earlier in the summer.
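(The microsecond-resolution API alluded to above is presumably POSIX's
gettimeofday(2), which, notably, keeps calendar seconds and sub-second
precision in two separate fields rather than one packed value. A
minimal usage sketch:)

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval tv;

        /* gettimeofday(2) fills in seconds and microseconds since
         * the Unix epoch as two separate fields. */
        if (gettimeofday(&tv, NULL) != 0) {
            perror("gettimeofday");
            return 1;
        }
        printf("%lld.%06ld seconds since 1970-01-01 UTC\n",
               (long long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }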