Jeff!

On Wed, Oct 18 2023 at 13:41, Jeff Layton wrote:
> +void ktime_get_mg_fine_ts64(struct timespec64 *ts)
> +{
> +	struct timekeeper *tk = &tk_core.timekeeper;
> +	unsigned long flags;
> +	u32 nsecs;
> +
> +	WARN_ON(timekeeping_suspended);
> +
> +	raw_spin_lock_irqsave(&timekeeper_lock, flags);
> +	write_seqcount_begin(&tk_core.seq);

Depending on the usage scenario, this will end up as a scalability
issue which affects _all_ of timekeeping. The usage of timekeeper_lock
and the sequence count has been carefully crafted to be as
non-contended as possible. We went to great lengths to optimize that
because the ktime_get*() functions are really hotpath all over the
place.

Exposing an interface which wrecks that is a recipe for disaster down
the road. It might be a non-issue today, but once we hit the
bottleneck of that global lock, we are up the creek without a paddle.
Well, not really, but all we can do then is fall back to
ktime_get_real().

So let me ask the obvious question: Why don't we do that right away?

Many moons ago, when we added ktime_get_real_coarse(), the main reason
was that reading the time from the underlying hardware was insanely
expensive. Many moons later this is not true anymore, except for the
stupid case where the BIOS wrecked the TSC, but that's a hopeless case
for performance no matter what. Optimizing for that would be beyond
stupid.

I'm well aware that ktime_get_real_coarse() is still faster than
ktime_get_real() in micro-benchmarks, i.e. 5ns vs. 15ns on the
four-year-old laptop I'm writing this on. Many moons ago it was in the
ballpark of 40ns vs. 5us due to the TSC being useless, and even a TSC
read was way more expensive (factor 8-10x IIRC) in comparison. That
really mattered for FS, but does today's overhead still make a
difference in the real FS use case scenario?
I'm not in a position to run meaningful FS benchmarks to analyze that,
but I think the delta between ktime_get_real_coarse() and
ktime_get_real() on contemporary hardware is small enough that it
justifies this question.

The point is that both functions have pretty much the same D-cache
pattern because they access the same data in the very same cacheline.
The only difference is the actual TSC read and the extra conversion,
but that's it.

The TSC read has been massively optimized by the CPU vendors. I know
that the ARM64 counter has been optimized too, though I have no idea
about PPC64 and S390, but I would be truly surprised if they didn't
optimize the hell out of it, because time reads are heavily used both
in kernel and user space.

Does anyone have numbers on contemporary hardware to shed some light
on that in the context of FS and the problem at hand?

Thanks,

	tglx