Hi Sebastian,

On Thu, Feb 10, 2022 at 7:04 PM Sebastian Andrzej Siewior
<bigeasy@xxxxxxxxxxxxx> wrote:
> So.
> - CPU1 schedules a worker
> - CPU1 goes offline before the worker gets on the CPU.
> - The worker runs on CPU2
> - CPU2 is back online
> - and now
>   CPU1                                  CPU2
>   new_count = ++fast_pool->count;
>                                         reg = fast_pool->count   (FAST_POOL_MIX_INFLIGHT | 64)
>                                         incl reg                 (FAST_POOL_MIX_INFLIGHT | 65)
>   WRITE_ONCE(fast_pool->count, 0);
>                                         fast_pool->count = reg   (FAST_POOL_MIX_INFLIGHT | 65)
>
> So we lost the WRITE_ONCE(, 0), FAST_POOL_MIX_INFLIGHT is still set and
> the worker is not scheduled. Not easy to trigger, not by an ordinary user.
> Just wanted to mention…

Thanks for pointing this out. I'll fix this using atomics, fix another
minor issue the same way at the same time, and move to making sure the
worker runs on the right CPU, as we originally discussed. I'm going to
send that as an additional patch so that we can narrow in on the issue
there. It's a little involved but not too bad. I'll have that for you
shortly.

> crng_fast_load() does spin_trylock_irqsave() in hardirq context. It does
> not produce any warning on RT but is still wrong IMHO:
> If we could just move this, too.
> I don't know how timing critical this is, but the first backtrace from
> crng_fast_load() came (to my surprise) from hwrng_fillfn() (a kthread)
> and added 64 bytes in one go.

I'll look into whether I can do that. On my first pass a few days ago it
seemed a bit too tricky, but I'll revisit once this part settles.

Thanks for your benchmarks, by the way. That's useful.

Jason
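
P.S. In case it helps to picture what I mean by "using atomics": below is
a rough userspace sketch with C11 atomics showing why an atomic increment
on the irq side plus an exchange-to-zero in the worker cannot lose the
reset. It's only an illustration, not the actual random.c change; the
names (pool_count, irq_side_needs_worker(), worker_side()) and the
INFLIGHT bit value are made up for the example.

/* Illustrative sketch only (C11 atomics, userspace), not kernel code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define FAST_POOL_MIX_INFLIGHT (1U << 31)	/* illustrative bit value */

static atomic_uint pool_count;

/* "hardirq" side: count the event and decide whether a worker is needed */
static bool irq_side_needs_worker(void)
{
	/*
	 * A single atomic RMW: the increment can no longer be compiled into
	 * a load/inc/store that straddles the worker's reset, so the reset
	 * below cannot be silently undone by a stale register value.
	 */
	unsigned int new_count = atomic_fetch_add(&pool_count, 1) + 1;

	if (new_count & FAST_POOL_MIX_INFLIGHT)
		return false;			/* worker already pending */
	if ((new_count & ~FAST_POOL_MIX_INFLIGHT) < 64)
		return false;			/* not enough events yet */

	/*
	 * Set the in-flight bit; the returned old value tells us whether a
	 * racing CPU beat us to it, so only one caller gets "true".
	 */
	if (atomic_fetch_or(&pool_count, FAST_POOL_MIX_INFLIGHT) &
	    FAST_POOL_MIX_INFLIGHT)
		return false;
	return true;
}

/* worker side: consume the batch and clear both the count and the bit */
static void worker_side(void)
{
	/*
	 * Exchange instead of read/modify/plain-store: a concurrent irq-side
	 * increment lands either before the exchange (captured in "old") or
	 * after it (counted toward the next batch), but the clear itself
	 * cannot be overwritten with a stale value.
	 */
	unsigned int old = atomic_exchange(&pool_count, 0);

	printf("mixed %u events\n", old & ~FAST_POOL_MIX_INFLIGHT);
}

int main(void)
{
	/* single-threaded driver just to exercise the two sides */
	for (int i = 0; i < 64; i++) {
		if (irq_side_needs_worker())
			worker_side();
	}
	return 0;
}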