I adapted the v2 patch to the latest linux-next tree and made the v3 patch
without "RFC", since this idea seems to be acceptable in general based on
previous discussion with the mm and hardening folks. Please check the links
below for more details of the discussion; further suggestions are welcome.

v3:
- Replace SLAB_RANDOMSLAB with the new SLAB_NO_MERGE flag.
- Shorten long code lines by wrapping and renaming.
- Update the commit message with the latest perf benchmark and additional
  theoretical explanation.

v2:
- Use hash_64() and a per-boot random seed to select kmalloc() caches
  (a rough sketch of the idea follows at the end of this mail).
- Change the acceptable number of caches from [4,16] to {2,4,8,16}, which
  is more compatible with hashing.
- Supplement results of performance and memory overhead tests.
- Link: https://lore.kernel.org/all/20230508075507.1720950-1-gongruiqi1@xxxxxxxxxx/

v1:
- Link: https://lore.kernel.org/all/20230315095459.186113-1-gongruiqi1@xxxxxxxxxx/

GONG, Ruiqi (1):
  Randomized slab caches for kmalloc()

 include/linux/percpu.h  | 12 ++++++---
 include/linux/slab.h    | 20 ++++++++++++---
 mm/Kconfig              | 49 ++++++++++++++++++++++++++++++++++++
 mm/kfence/kfence_test.c |  6 +++--
 mm/slab.c               |  2 +-
 mm/slab.h               |  2 +-
 mm/slab_common.c        | 55 +++++++++++++++++++++++++++++++++++++----
 7 files changed, 130 insertions(+), 16 deletions(-)

-- 
2.25.1
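
For illustration only, here is a minimal standalone C sketch of the v2 idea
above: combining a per-boot random seed with hash_64() of the caller address
to pick one of several kmalloc() cache copies. This is not the kernel code;
hash_64() is re-implemented here the way include/linux/hash.h defines it
(multiply by the 64-bit golden ratio, keep the top bits), and the names
RANDOM_KMALLOC_CACHES_NR, random_kmalloc_seed and kmalloc_cache_index() are
placeholders for this example, not the identifiers used in the patch.

/*
 * Userspace sketch (NOT kernel code) of per-boot, per-call-site
 * kmalloc() cache selection via hash_64().
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define GOLDEN_RATIO_64 0x61C8864680B583EBull

#define RANDOM_KMALLOC_CACHES_NR   16	/* one of {2, 4, 8, 16} */
#define RANDOM_KMALLOC_CACHES_BITS 4	/* log2 of the above */

/* Would be initialised once per boot in the kernel */
static uint64_t random_kmalloc_seed;

/* Same scheme as the kernel's hash_64(): the top bits are best mixed */
static inline uint32_t hash_64(uint64_t val, unsigned int bits)
{
	return (uint32_t)((val * GOLDEN_RATIO_64) >> (64 - bits));
}

/* Derive a cache index from the caller's address and the boot seed */
static unsigned int kmalloc_cache_index(uintptr_t caller)
{
	return hash_64((uint64_t)caller ^ random_kmalloc_seed,
		       RANDOM_KMALLOC_CACHES_BITS);
}

int main(void)
{
	srand((unsigned int)time(NULL));
	random_kmalloc_seed = ((uint64_t)rand() << 32) | (uint64_t)rand();

	/* Two different "call sites" usually map to different cache copies */
	printf("site A -> cache %u\n",
	       kmalloc_cache_index((uintptr_t)0xffffffff81234567ull));
	printf("site B -> cache %u\n",
	       kmalloc_cache_index((uintptr_t)0xffffffff8189abcdull));
	return 0;
}

Because the seed changes every boot, the mapping from call site to cache copy
is not predictable across boots, while within one boot each call site keeps
hitting the same copy.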