From: David Keisar Schmidt <david.keisarschm@xxxxxxxxxxxxxxx>

Hi,

The security improvements for prandom_u32, made in commits c51f8f88d705
(October 2020) and d4150779e60f (May 2022), did not cover the cases where
prandom_bytes_state() and prandom_u32_state() are used.

Specifically, this weak randomization takes place in three cases:

1. mm/slab.c
2. mm/slab_common.c
3. arch/x86/mm/kaslr.c

The first two invocations (mm/slab.c, mm/slab_common.c) randomize the slab
allocator freelists, to make sure attackers cannot obtain information about
the heap state. The last invocation, in arch/x86/mm/kaslr.c, randomizes the
virtual address space of kernel memory regions.

Hence, we have made the necessary changes to strengthen these
randomizations, switching the prandom_u32 instances to siphash.

Changes since v5:

* Fixed coding style issues in mm/slab and mm/slab_common.
* Dropped unrelated changes that were accidentally included in the
  arch/x86/mm/kaslr patch.

Changes since v4:

* Changed only the arch/x86/mm/kaslr patch. In particular, we replaced the
  use of prandom_bytes_state() and prandom_seed_state() with siphash in
  arch/x86/mm/kaslr.c.

Changes since v3:

* Edited commit messages.

Changes since v2:

* Edited the commit message.
* Replaced instances of get_random_u32 with get_random_u32_below in
  mm/slab.c and mm/slab_common.c.

Regards,

David Keisar Schmidt (3):
  mm/slab: Replace invocation of weak PRNG
  mm/slab_common: Replace invocation of weak PRNG
  arch/x86/mm/kaslr: use siphash instead of prandom_bytes_state

 arch/x86/mm/kaslr.c | 21 +++++++++++++++------
 mm/slab.c           | 29 +++++++++--------------------
 mm/slab_common.c    | 11 +++--------
 3 files changed, 27 insertions(+), 34 deletions(-)

-- 
2.37.3
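
Note: to illustrate the nature of the slab change described above, here is a
rough sketch only, not the exact hunks from the patches; the helper name
freelist_shuffle and its surrounding context are illustrative. The idea is
that the freelist shuffle draws its indices from get_random_u32_below()
(backed by the kernel's stronger random interface) instead of a locally
seeded prandom state:

	#include <linux/random.h>	/* get_random_u32_below() */

	/*
	 * Fisher-Yates shuffle of the freelist index array.
	 * Previously the random index came from a per-cache prandom state,
	 * e.g. rand = prandom_u32_state(&state) % (i + 1);
	 */
	static void freelist_shuffle(unsigned int *list, unsigned int count)
	{
		unsigned int i, rand, tmp;

		for (i = 0; i < count; i++)
			list[i] = i;

		for (i = count - 1; i > 0; i--) {
			/* uniform index in [0, i] from the stronger RNG */
			rand = get_random_u32_below(i + 1);
			tmp = list[i];
			list[i] = list[rand];
			list[rand] = tmp;
		}
	}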