On 2022-02-11 17:50:34 [+0100], Jason A. Donenfeld wrote:
> Hi Sebastian,

Hi Jason,

> > I *think* we could drop that "fast_pool !=
> > this_cpu_ptr(&irq_randomness)" check at the top since that cmpxchg will
> > save us and redo the loop. But if I remember correctly you worried about
> > fast_pool->pool being modified (which is only a corner case if we are on
> > the other CPU while the orig CPU is back again). Either way, it would be
> > random and we would not consume more entropy.
>
> No, we cannot, and "it's all random anyway so who cares if we corrupt
> things!" is not rigorous, as entropy may actually be thrown away as
> it's moved between words on each mix. If we're not running on the same
> CPU, one CPU can corrupt the other's view of fast pool before updating
> count. We must keep this.

Okay, I assumed something like that.

> > So if we have to keep this then please swap that migrate_disable() with
> > local_irq_disable(). Otherwise PeterZ will yell at me.
>
> Okay, I'll do that then, and then in the process get rid of the
> cmpxchg loop since it's no longer required.

So the only reason why we have that atomic_t is for the rare case where
we run on the remote CPU and need to remove the upper bit in the
counter?

> > >  	if (unlikely(crng_init == 0)) {
> > > -		if (fast_pool->count >= 64 &&
> > > +		if (new_count >= 64 &&
> > >  		    crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
> > > -			fast_pool->count = 0;
> > > +			atomic_set(&fast_pool->count, 0);
> > >  			fast_pool->last = now;
> >
> > I'm fine if we keep this as is for now.
> > What do we do here vs RT? I suggested this
> >   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?id=a2d2d54409481aa23a3e11ab9559a843e36a79ec
> >
> > Is this doable?
>
> It might be, but last time I checked it seemed problematic. As I
> mentioned in an earlier thread, I'll take a look again at that next
> week after this patch here settles. Haven't forgotten.

Ah, cheers.

> v+1 coming up with irqs disabled.
>
> Jason

Sebastian
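
For readers following the thread, here is a minimal, self-contained
userspace sketch of the two patterns being debated. It is not the
kernel's random.c: the struct is stripped down, and the helper names
(try_consume_cmpxchg, try_consume_plain, demo_pool) are made up for
illustration; only "fast_pool", "count" and the 64-event threshold come
from the quoted diff. Variant A shows why an atomic count plus a
compare-and-swap is needed when another CPU may reset the counter
concurrently; variant B shows why a plain read-modify-write is enough
once the whole sequence runs with interrupts disabled on one CPU, which
is what makes the cmpxchg loop removable in v+1.

/*
 * Illustrative userspace analogue only -- not the kernel code.
 * Build: cc -std=c11 -Wall sketch.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fast_pool {
	uint32_t pool[4];	/* mixing of the pool itself is omitted here */
	atomic_uint count;	/* atomic because a remote CPU may reset it */
};

static struct fast_pool demo_pool;

/*
 * Variant A: consume the accumulated events only if nobody else touched
 * count in the meantime.  If the compare-exchange fails, another context
 * won the race and we give up (the caller would retry later).
 */
static bool try_consume_cmpxchg(struct fast_pool *fp)
{
	unsigned int old = atomic_load(&fp->count);

	if (old < 64)
		return false;
	return atomic_compare_exchange_strong(&fp->count, &old, 0);
}

/*
 * Variant B: with interrupts disabled around the whole read-check-reset
 * sequence (local_irq_disable() in kernel context), no concurrent writer
 * is possible on this CPU, so a plain counter would do; relaxed atomics
 * are used here only because the field is shared with variant A.
 */
static bool try_consume_plain(struct fast_pool *fp)
{
	if (atomic_load_explicit(&fp->count, memory_order_relaxed) < 64)
		return false;
	atomic_store_explicit(&fp->count, 0, memory_order_relaxed);
	return true;
}

int main(void)
{
	atomic_store(&demo_pool.count, 64);
	printf("cmpxchg path consumed: %d\n", try_consume_cmpxchg(&demo_pool));

	atomic_store(&demo_pool.count, 64);
	printf("plain path consumed:   %d\n", try_consume_plain(&demo_pool));
	return 0;
}

The remaining use of the atomic in the discussion above is the rare
remote-CPU case Sebastian asks about, where the counter still has to be
reset (its high bit cleared) from a CPU that does not own the pool.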