Re: [PATCH v3] date: detect underflow/overflow when parsing dates with timezone offset

Phillip Wood <phillip.wood123@xxxxxxxxx> writes:

>>> +/* timestamp of 2099-12-31T23:59:59Z, including 32 leap days */
>>> +static const time_t timestamp_max = ((2100L - 1970) * 365 + 32) * 24 * 60 * 60 - 1;
>>>
>> Nit: but since we're calculating the number of years here (2100L -
>> 1970), shouldn't we also be calculating the number of leap days instead
>> of hardcoding it?
>
> I'm happy with a hard coded constant for the number of leap days - I
> think it is probably easier to check that (which I have done) than it
> would be to check the calculation as I'm not sure off the top of my
> head if is it safe to do (2100-1970)/4 or whether we need something
> more complicated.

It's even OK to use a hard coded constant for the number of days
since the epoch to the git-end-of-time ;-)

The timestamp of the git-end-of-time would not fit in time_t on
32-bit systems, I would presume?  If our tests are trying to see if
timestamps around the beginning of year 2100 are handled
"correctly", the definition of the correctness needs to be
conditional on the platform.

On systems with TIME_T_IS_64BIT, we'd want to see such a timestamp
represented fine.  On systems without, we'd want to see the
"Timestamp too large for this system" error when we feed such a
timestamp to be parsed.

Thanks.
