Hi Thomas,

On Sun, Apr 10, 2022 at 01:29:32AM +0200, Thomas Gleixner wrote:
> But the below uncompiled hack gives you access to the 'best' clocksource
> of a machine, i.e. the one which the platform decided to be the one
> which is giving the best resolution. The minimal bitwidth of that is
> AFAICT 20 bits. In the jiffies case this will at least advance every
> tick.

Oh, huh, that's pretty cool. I can try to make a commit out of that. Are
you suggesting I use this as the fallback for all platforms that
currently return zero, or just for m68k per Arnd's suggestion, and then
use sched_clock() for the others? It sounds to me like you're saying
this would be best for all of them. If so, that'd be quite nice.

> The price, e.g. on x86 would be that RDTSC would be invoked via an
> indirect function call. Not the end of the world...

Well, on x86, random_get_entropy() is overridden in the arch/ code to
call get_cycles(). So this would really just be for 486 and for other
architectures with no cycle counter that currently return zero.

However, this brings up a good point: if your proposed
ktime_read_raw_clock() function really is so nice, should it be used
everywhere unconditionally, with no arch-specific overrides? On x86, is
it really guaranteed to be RDTSC, and not, say, some off-core HPET
situation? And is it acceptable to call from a hard irq handler?

Not yet having too much knowledge here, I'm tentatively leaning toward
the safe side: using ktime_read_raw_clock() only in the places that
currently return zero all the time -- that is, for the purpose this
patchset has.

Jason