Re: [PATCH RFC v1 00/10] archs/random: fallback to using sched_clock() if no cycle counter

Jason,

On Fri, Apr 08 2022 at 20:21, Jason A. Donenfeld wrote:
> Sometimes the next best thing is architecture-defined. For example,
> really old MIPS has the CP0 random register, which isn't a cycle
> counter, but is at least something. However, some platforms don't even
> have an architecture-defined fallback. In that case, what are we left
> with?
>
> By my first guess, we have ktime_get_boottime_ns(), jiffies, and
> sched_clock(). It seems like sched_clock() has already done a lot of
> work in being always available with some incrementing value, falling
> back to jiffies as necessary. So this series goes with that as a
> fallback, for when the architecture doesn't define random_get_entropy in
> its own way and when there's no working cycle counter.

sched_clock() is a halfway sane option, but yes, as Arnd pointed out:

> Another option would be falling back to different things on different
> platforms. For example, Arnd mentioned to me that on m68k,
> ktime_get_ns() might be better than sched_clock(), because sched_clock()
> there doesn't use CONFIG_GENERIC_SCHED_CLOCK and is therefore only as
> good as jiffies.

Both ktime_get_ns() and the slightly faster ktime_get_mono_fast_ns()
are usable. In the worst case they are jiffies-driven too, but systems
with a jiffies clocksource are pretty much museum pieces.
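
As an uncompiled sketch (the helper name and placement are purely
illustrative, not an existing interface), such a fallback could be as
simple as:

#include <linux/timekeeping.h>

/* Illustrative only: use the NMI-safe monotonic fast accessor. */
static inline unsigned long random_get_entropy_fallback_ns(void)
{
	return (unsigned long)ktime_get_mono_fast_ns();
}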

It's slightly slower than get_cycles() or a get_cycles()-based
sched_clock(), but you automatically get the most granular clock the
platform has, which has its charm too :)

The bad news is that, depending on the clocksource frequency, the lower
bits might never change. That is always the case for the jiffies
clocksource, and sched_clock() has the same issue.
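
As a back-of-the-envelope illustration, assuming HZ=1000 and a
jiffies-driven clock:

  NSEC_PER_SEC / HZ = 10^9 / 1000 = 1,000,000 ns per tick
  1,000,000 = 2^6 * 15625, so the low 6 bits of the value never change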

But the uncompiled hack below gives you access to the 'best' clocksource
of a machine, i.e. the one which the platform decided gives the best
resolution. The minimum bit width of such a clocksource is AFAICT 20
bits. In the jiffies case it will at least advance every tick.

The price, e.g. on x86, would be that RDTSC is invoked via an indirect
function call. Not the end of the world...

Thanks,

        tglx
---
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -646,6 +646,11 @@ static void halt_fast_timekeeper(const s
 	update_fast_timekeeper(&tkr_dummy, &tk_fast_raw);
 }
 
+u32 ktime_read_raw_clock(void)
+{
+	return tk_clock_read(&tk_core.timekeeper.tkr_mono);
+}
+
 static RAW_NOTIFIER_HEAD(pvclock_gtod_chain);
 
 static void update_pvclock_gtod(struct timekeeper *tk, bool was_set)
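
Just to illustrate how an architecture without a usable cycle counter
could consume that helper (sketch only, not part of the patch above;
the placement and wiring are merely a guess):

/* Sketch: use the raw clocksource read as the entropy timestamp. */
extern u32 ktime_read_raw_clock(void);

static inline unsigned long random_get_entropy(void)
{
	return ktime_read_raw_clock();
}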