On Thu, 2023-09-21 at 12:46 -0700, Linus Torvalds wrote:
> On Thu, 21 Sept 2023 at 12:28, Linus Torvalds
> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > And that's ok when we're talking about times that are kernel running
> > times and we have a couple of centuries to say "ok, we'll need to make
> > it be a bigger type",
>
> Note that the "couple of centuries" here is mostly the machine uptime,
> not necessarily "we'll need to change the time in the year 2292".
>

Right. On-disk formats are really a different matter anyway, so that
value is only relevant within a single running instance.

> Although we do also have "ktime_get_real()" which is encoding the
> whole "nanoseconds since 1970". That *will* break in 2292.
>

Still pretty much SEP, unless we all end up as cyborgs after this.

> Anyway, regardless, I am *not* suggesting that ktime_t would be useful
> for filesystems, because of this issue.
>
> I *do* suspect that we might consider a "tenth of a microsecond", though.
>
> Resolution-wise, it's pretty much in the "system call time" order of
> magnitude, and if we have Linux filesystems around in the year-31k,
> I'll happily consider it to be a SEP thing at that point ("somebody
> else's problem").
>

FWIW, I'm reworking the multigrain ctime patches for internal consumers.
As part of that, when we present multigrain timestamps to userland via
statx, we'll truncate them at a granularity of (NSEC_PER_SEC / HZ). So,
we could easily do that today, since we're already going to be
truncating off more than that for external uses.

Having a single word to deal with would sure be simpler too,
particularly since we're using atomic operations here. I'll have to
think about it. The first step is to get all of the timestamp handling
wrappers in place anyway.

Cheers,
--
Jeff Layton <jlayton@xxxxxxxxxx>