On Mon, May 01, 2023 at 12:05:17PM -0400, Jeff Layton wrote:
> On Mon, 2023-05-01 at 22:09 +0800, kernel test robot wrote:
> The test does this:
>
> 	SAFE_CLOCK_GETTIME(CLOCK_REALTIME_COARSE, &before_time);
> 	clock_wait_tick();
> 	tc->operation();
> 	clock_wait_tick();
> 	SAFE_CLOCK_GETTIME(CLOCK_REALTIME_COARSE, &after_time);
>
> ...and with that, I usually end up with before/after_times that are 1ns
> apart, since my machine is reporting a 1ns granularity.
>
> The first problem is that the coarse-grained timestamps represent the
> lower bound of what time could end up in the inode. With multigrain
> ctimes, we can end up grabbing a fine-grained timestamp to store in the
> inode that will be later than either coarse-grained time that was
> fetched.
>
> That's easy enough to fix -- grab a coarse time for "before" and a
> fine-grained time for "after".
>
> The clock_getres() function, though, reports a 1ns granularity (since
> the clock does have one). With multigrain ctimes, we no longer have
> that at the filesystem level. It's a 2ns granularity now (as we need
> the lowest bit for the flag).

Why are you even using the low bit for this?

Nanosecond resolution only uses 30 bits, leaving the upper two bits of a
32-bit tv_nsec field available for internal status bits. As long as we
mask out the internal bits when reading the VFS timestamp tv_nsec field,
then we don't need to change the timestamp resolution, right?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
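A minimal sketch of the test fix Jeff describes -- coarse-grained
"before", fine-grained "after" -- assuming the test's existing
clock_wait_tick() and tc->operation() helpers and LTP's
SAFE_CLOCK_GETTIME() macro:

	/* lower bound: coarse clock, same as before */
	SAFE_CLOCK_GETTIME(CLOCK_REALTIME_COARSE, &before_time);
	clock_wait_tick();
	tc->operation();
	clock_wait_tick();
	/*
	 * upper bound: fine-grained clock, so a fine-grained multigrain
	 * ctime stored by the operation can never be later than it
	 */
	SAFE_CLOCK_GETTIME(CLOCK_REALTIME, &after_time);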
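And a sketch of the bit layout Dave suggests: nanoseconds only span
0..999999999, which fits in 30 bits, so the top two bits of a 32-bit
tv_nsec field are free for internal flags. The names below are
illustrative only, not actual kernel macros:

	#include <stdint.h>
	#include <stdbool.h>

	/* hypothetical flag in one of the two spare high bits */
	#define CTIME_QUERIED		(1U << 31)
	/* both spare high bits, reserved for internal use */
	#define CTIME_FLAGS_MASK	(3U << 30)

	/* mask out the internal bits when reading the VFS tv_nsec */
	static inline uint32_t ctime_nsec(uint32_t raw_nsec)
	{
		return raw_nsec & ~CTIME_FLAGS_MASK;
	}

	static inline bool ctime_was_queried(uint32_t raw_nsec)
	{
		return raw_nsec & CTIME_QUERIED;
	}

	static inline uint32_t ctime_mark_queried(uint32_t raw_nsec)
	{
		return raw_nsec | CTIME_QUERIED;
	}

With this layout the stored nanosecond value keeps its full 1ns
resolution, so clock_getres() would not need to report anything coarser.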