Hi Paul,
On 10/16/21 2:20 AM, Paul Eggert wrote:
> On 10/15/21 3:03 PM, Alejandro Colomar (man-pages) wrote:
>> Actually, since timegm(3) is implemented in terms of mktime(3), as far
>> as I could read from glibc code, the problem will be the same, I think.
> No, because another thread could setenv ("TZ", ...) between the time
> that you call mktime and the time you look at the 'timezone' variable.
> So even though mktime itself is thread-safe, the expression 'mktime(tm)
> - timezone' is not.

Yes, there are probably many bugs in that sample code I wrote, which
glibc solves in its timegm(3) implementation. That probably gives more
force to the original point: timegm(3) is the only non-error-prone
solution in glibc for using UTC times, so it should not be marked as
"avoid using". The standards should get a function that does that, be
it timegm(), mktime_z(), or both.
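
For reference, the error-prone pattern we're discussing is roughly the
following (a minimal sketch, not my exact earlier code; the function
name is made up):

#define _DEFAULT_SOURCE
#include <time.h>

/* Naive "portable timegm()": interpret *tmp as UTC by undoing the
 * current UTC offset.  Not thread-safe: another thread may call
 * setenv("TZ", ...) and tzset() between the mktime() call and the read
 * of the global 'timezone', and the subtraction may also overflow. */
static time_t
naive_timegm(struct tm *tmp)
{
	return mktime(tmp) - timezone;	/* race window is here */
}

With timegm(3), none of that is the caller's problem.
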
Just out of curiosity, I'm not sure about this, but from what I've
seen, the only lock in glibc is in gmtime(), which is called
repeatedly from within mktime().  If another thread calls
setenv("TZ", ...) in between those calls, wouldn't that produce the
same problems?

>> But timegm(3) shouldn't need to depend on environment variables.
> It does depend, if leap seconds are involved.

Okay. (I don't know too much about those.)
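
Out of curiosity, here is a tiny test to see whether timegm(3) depends
on TZ on a given system (my assumptions: timegm(3) exposed by <time.h>
with _DEFAULT_SOURCE, and an installed tzdata that ships the
leap-second-aware "right/" zones; the output is system-dependent):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(void)
{
	struct tm	tm = {
		.tm_year = 2021 - 1900, .tm_mon = 10 - 1, .tm_mday = 16,
		.tm_hour = 2, .tm_min = 20,
	};

	printf("default TZ:   %lld\n", (long long) timegm(&tm));

	setenv("TZ", "right/UTC", 1);	/* zone with a leap second table */
	tzset();
	printf("TZ=right/UTC: %lld\n", (long long) timegm(&tm));

	/* If leap seconds are taken into account, the two values should
	 * differ (by 27 s for 2021 dates, if I counted correctly). */
	exit(EXIT_SUCCESS);
}
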

>>> and the subtraction might overflow.
>> Yup, casting to int64_t needed.
> That would help, but it still wouldn't suffice.  It'd mishandle -1
> returns, for example.

Ahh, yes.
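
Just to make sure I get the point: even with a wider intermediate
type, the error return would need separate handling, since (time_t) -1
is also a valid timestamp.  Something like this, I guess (only a
sketch; the errno dance is the usual idiom, with EOVERFLOW as
documented in mktime(3)):

#define _DEFAULT_SOURCE
#include <errno.h>
#include <time.h>

static time_t
naive_timegm_checked(struct tm *tmp)	/* made-up name */
{
	time_t	t;

	errno = 0;
	t = mktime(tmp);
	if (t == (time_t) -1 && errno != 0)
		return -1;		/* genuine failure; errno is set */

	return t - timezone;	/* still racy, and may still overflow */
}
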

> Plus, we're better off not putting today's hardware assumptions into
> code (suppose int is 64 bits in future machines?).

>> BTW, I had a look at the mktime source code, and it uses long, which
>> might be 32 bits, and then there's a lot of checking for overflow.
> mktime uses long_int, which is not necessarily 'long'.  And no matter
> what type you pick, it could overflow on some platform, even if it's
> an only-hypothetical platform now.

I think that's not a problem for the following reasons:
- int is unlikely to be >32 bits.  If it were, we would lose one of the
"conventional" widths: int8_t, int16_t, and int32_t couldn't all map to
fundamental types, unless a new type were added (short short int?),
which is also unlikely, because scanf() already uses %hhi for signed
char.  I think it's more likely that we'd see something like
'long long long int'.
- The current types can already handle 128-bit archs (just keep the
same mapping as on amd64 and make long long int 128 bits, i.e. a
hypothetical int128_t), so maybe we'll need the triple long only when
we get to 256-bit archs.  Very hypothetically, that is.
- Even if int ever happened to be 64 bits, this problem would still be
very theoretical, since INT64_MAX seconds is way more than the age of
the universe, and well beyond the expected lifespan of the Sun (see
the rough numbers after this list), so the concepts of leap years,
months, ydays, wdays, and so on would be meaningless for such values.
How many seconds since the Epoch will have elapsed at 11:30 on the 2nd
of March of the year in which the Milky Way collides with Andromeda?
I think the correct answer to that question should be undefined
behavior; or an error, if you want to be nice.
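
(The rough numbers behind that, as a back-of-the-envelope check of my
own, not anything from the glibc sources:)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	const int64_t	secs_per_year = 31556952;  /* avg Gregorian year */

	/* INT64_MAX seconds is roughly 292 billion years: about 20
	 * times the age of the universe (~13.8 billion years), and far
	 * past the Sun's expected total lifetime (~10 billion years). */
	printf("%" PRId64 " years\n", INT64_MAX / secs_per_year);
	exit(EXIT_SUCCESS);
}
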
So I wouldn't care for now, and maybe just add some initial check such
as:

if (tm->tm_year > SOME_ARBITRARY_HUGE_VALUE
    || tm->tm_mon > SOME_ARBITRARY_HUGE_VALUE
    || ...) {
	errno = EOVERFLOW;
	return -1;
}

and then go on.
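
(Tangentially, if we ever did want to avoid the width assumption, the
subtraction itself can be checked without picking a fixed-width type;
a sketch using the GCC/Clang checked-arithmetic builtin, which is a
compiler extension rather than standard C:)

#define _DEFAULT_SOURCE
#include <errno.h>
#include <stdbool.h>
#include <time.h>

/* Subtract the current UTC offset from t; return false and set errno
 * if the result doesn't fit in time_t, whatever its width. */
static bool
sub_utc_offset(time_t t, time_t *res)
{
	if (__builtin_sub_overflow(t, timezone, res)) {
		errno = EOVERFLOW;
		return false;
	}
	return true;
}
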
Thanks,
Alex
--
Alejandro Colomar
Linux man-pages comaintainer; https://www.kernel.org/doc/man-pages/
http://www.alejandro-colomar.es/