Hi Dominik,

Does this introduce a lock nesting inversion situation? With your
patch, crng_fast_load() now does:

    lock(primary_crng)
    invalidate_batched_entropy()
        lock(batch_lock)
        unlock(batch_lock)
    unlock(primary_crng)

While get_random_{u32,u64}() does:

    lock(batch_lock)
    extract_crng()
        lock(primary_crng)
        unlock(primary_crng)
    unlock(batch_lock)

Is this correct? If so, we might have to defer this patch until after
<https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git/commit/?id=2dfab1b1>
or something like it lands, which attempts to get rid of the batched
entropy lock. If that analysis seems right to you, I could pull this
patch into that development branch for poking and prodding.

Jason
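
P.S. In case it helps to see the ABBA ordering concretely, here is a rough
userspace sketch, with pthread mutexes standing in for the two spinlocks.
The names and sleeps are purely illustrative and not the kernel's actual
structures; it is only meant to show the inverted acquisition orders above.

/*
 * Two threads take the same pair of locks in opposite order, mirroring
 * the crng_fast_load() vs. get_random_u32() paths described above.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t primary_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t batch_lock   = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of crng_fast_load() -> invalidate_batched_entropy(): primary, then batch. */
static void *fast_load_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&primary_lock);
        usleep(1000);                   /* widen the race window */
        pthread_mutex_lock(&batch_lock);
        pthread_mutex_unlock(&batch_lock);
        pthread_mutex_unlock(&primary_lock);
        return NULL;
}

/* Analogue of get_random_u32() -> extract_crng(): batch, then primary. */
static void *get_random_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&batch_lock);
        usleep(1000);                   /* widen the race window */
        pthread_mutex_lock(&primary_lock);
        pthread_mutex_unlock(&primary_lock);
        pthread_mutex_unlock(&batch_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, fast_load_path, NULL);
        pthread_create(&b, NULL, get_random_path, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        puts("no deadlock this run"); /* rarely reached with the sleeps above */
        return 0;
}

Built with -pthread, the two threads usually wedge against each other:
one holds "primary" waiting for "batch", the other holds "batch"
waiting for "primary", which is exactly the inversion in question.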