On Sat, Oct 01, 2022 at 01:10:50AM +0200, Jason A. Donenfeld wrote:
> Rather than merely hoping that the callback gets called on another CPU,
> arrange for that to actually happen, by round robining which CPU the
> timer fires on. This way, on multiprocessor machines, we exacerbate
> jitter by touching the same memory from multiple different cores.
>
> Cc: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
> Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> Cc: Sultan Alsawaf <sultan@xxxxxxxxxxxxxxx>
> Signed-off-by: Jason A. Donenfeld <Jason@xxxxxxxxx>
> ---
>  drivers/char/random.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index fdf15f5c87dd..74627b53179a 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1209,6 +1209,7 @@ static void __cold try_to_generate_entropy(void)
>  	struct entropy_timer_state stack;
>  	unsigned int i, num_different = 0;
>  	unsigned long last = random_get_entropy();
> +	int cpu = -1;
>
>  	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
>  		stack.entropy = random_get_entropy();
> @@ -1223,8 +1224,17 @@ static void __cold try_to_generate_entropy(void)
>  	stack.samples = 0;
>  	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
>  	while (!crng_ready() && !signal_pending(current)) {
> -		if (!timer_pending(&stack.timer))
> -			mod_timer(&stack.timer, jiffies);
> +		if (!timer_pending(&stack.timer)) {
> +			preempt_disable();
> +			do {
> +				cpu = cpumask_next(cpu, cpu_online_mask);
> +				if (cpu == nr_cpumask_bits)
> +					cpu = cpumask_first(cpu_online_mask);
> +			} while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
> +			stack.timer.expires = jiffies;
> +			add_timer_on(&stack.timer, cpu);

Sultan points out that timer_pending() returns false before the timer
callback has actually run, while add_timer_on() enqueues the timer
directly onto the chosen CPU's timer base. That combination means
del_timer_sync() might fail to notice a pending timer, which means a
use-after-free of the on-stack timer state.

This seems like a somewhat hard problem to solve, so I think I'll just
drop this patch 2/2 here until a better idea comes around.

Jason
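For anyone who wants the race spelled out, below is a rough sketch of
the interleaving as described above. It reuses entropy_timer(),
crng_ready() and the on-stack timer pattern from the quoted patch; the
sketch function itself, its other_cpu parameter, and the empty callback
body are illustrative only, not the actual random.c code.

/*
 * Illustrative only: the helper names follow the quoted patch, but this
 * is a simplified sketch of the race, not the real try_to_generate_entropy().
 */
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/timer.h>

static void entropy_timer(struct timer_list *t)
{
	/*
	 * In random.c this credits entropy and touches the on-stack
	 * state that contains the timer itself.
	 */
}

static void race_sketch(int other_cpu)
{
	struct timer_list timer;	/* lives on this stack frame */

	timer_setup_on_stack(&timer, entropy_timer, 0);

	while (!crng_ready() && !signal_pending(current)) {
		/*
		 * The timer base dequeues the timer before calling
		 * entropy_timer(), so timer_pending() is already false
		 * while the callback is still about to run (or running)
		 * on the CPU it was last armed on.
		 */
		if (!timer_pending(&timer)) {
			timer.expires = jiffies;
			/*
			 * add_timer_on() enqueues directly onto other_cpu's
			 * base; unlike mod_timer(), it does not keep the
			 * timer on its old base while the previous callback
			 * is still running there.
			 */
			add_timer_on(&timer, other_cpu);
		}
		schedule();
	}

	/*
	 * del_timer_sync() deactivates the timer on the base it is queued
	 * on *now* and only waits for a callback running on that base.
	 * A callback instance still running on the old base is invisible
	 * to it, so this function can return -- freeing the stack frame --
	 * while entropy_timer() is still dereferencing it: the
	 * use-after-free described above.
	 */
	del_timer_sync(&timer);
	destroy_timer_on_stack(&timer);
}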