On Wed, 2024-07-10 at 10:38 +0200, Arnd Bergmann wrote:
> On Tue, Jul 9, 2024, at 20:27, Jeff Layton wrote:
> > On Tue, 2024-07-09 at 19:06 +0200, Arnd Bergmann wrote:
> > > On Tue, Jul 9, 2024, at 17:27, Jeff Layton wrote:
> > > > On Tue, 2024-07-09 at 17:07 +0200, Arnd Bergmann wrote:
> > > > 
> > Yes, I had considered it on an earlier draft, but my attempt was pretty
> > laughable. You inspired me to take another look though...
> > 
> > If we go that route, what I think we'd want to do is add a new floor
> > value to the timekeeper and a couple of new functions:
> > 
> > ktime_get_coarse_floor - fetch the max of current coarse time and floor
> > ktime_get_fine_floor - fetch a fine-grained time and update the floor
> 
> I was thinking of keeping a name that is specific to the vfs
> usage instead of the ktime_get_* namespace. I'm sure the timekeeping
> maintainers will have an opinion on this though, one way or another.
> 

Fair enough.

> > The variety of different offsets inside the existing timekeeper code is
> > a bit bewildering, but I guess we'd want ktime_get_fine_floor to call
> > timekeeping_get_ns(&tk->tkr_mono) and keep the latest return cached.
> > When the coarse time is updated we'd zero out that cached floor value.
> 
> Why not update the cached value during the timekeeping update as well
> instead of setting it to zero? That way you can just always use the
> cached value for VFS and simplify the common code path for reading
> that value.
> 

You mean just update it to the coarse time on the update? That seems
like it would also work.

> > Updating that value in ktime_get_fine_floor will require locking or
> > (more likely) some sort of atomic op. timekeeping_get_ns returns u64
> > though, so I think we're still stuck needing to do a cmpxchg64.
> 
> Right, or atomic64_cmpxchg() to make it work on 32-bit.
> 

I think that's the catch. Without being able to move to a 32-bit
cmpxchg for the floor update, we're not buying much by bringing it into
the timekeeper. Is there some big benefit that I'm missing?
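
For concreteness, here's a rough sketch of the shape I'm picturing.
This is not actual kernel code: tk_floor is a made-up global,
ktime_get_ns() stands in for the timekeeping_get_ns(&tk->tkr_mono)
call above, and the update-side reset of the floor on each timekeeping
tick is left out, since that would need to be serialized with readers
inside the timekeeper. It mainly shows why the ratchet forces a 64-bit
cmpxchg:

static atomic64_t tk_floor;	/* monotonic ns; raised by fine-grained fetches */

/* Fetch a fine-grained time and ratchet the floor up to it. */
u64 ktime_get_fine_floor(void)
{
	u64 now = ktime_get_ns();	/* stand-in for timekeeping_get_ns() */
	s64 old = atomic64_read(&tk_floor);

	/*
	 * Racing callers may also be raising the floor. Never move it
	 * backwards: if the cmpxchg fails, retry against the newer value.
	 */
	while ((u64)old < now) {
		s64 cur = atomic64_cmpxchg(&tk_floor, old, now);
		if (cur == old)
			return now;	/* we installed @now */
		old = cur;
	}
	return old;	/* the floor was already past @now */
}

/* Fetch the max of the current coarse time and the floor. */
u64 ktime_get_coarse_floor(void)
{
	u64 coarse = ktime_get_coarse_ns();
	u64 floor = atomic64_read(&tk_floor);

	return max(coarse, floor);
}

-- 
Jeff Layton <jlayton@xxxxxxxxxx>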