The quilt patch titled
     Subject: mm/slab: limit kmalloc() minimum alignment to dma_get_cache_alignment()
has been removed from the -mm tree.  Its filename was
     mm-slab-limit-kmalloc-minimum-alignment-to-dma_get_cache_alignment.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Catalin Marinas <catalin.marinas@xxxxxxx>
Subject: mm/slab: limit kmalloc() minimum alignment to dma_get_cache_alignment()
Date: Mon, 12 Jun 2023 16:31:48 +0100

Do not create kmalloc() caches which are not aligned to
dma_get_cache_alignment().  There is no functional change since for
current architectures defining ARCH_DMA_MINALIGN, ARCH_KMALLOC_MINALIGN
equals ARCH_DMA_MINALIGN (and dma_get_cache_alignment()).  On
architectures without a specific ARCH_DMA_MINALIGN,
dma_get_cache_alignment() is 1, so no change to the kmalloc() caches.
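To make the redirection concrete, here is a minimal userspace sketch of
the same idea, assuming a 64-byte DMA cache alignment.  ALIGN() mirrors
the kernel macro, but bucket_index() is a hypothetical stand-in for the
kernel's __kmalloc_index() (it ignores the real table's 96/192-byte
buckets), and the kernel obtains the alignment from
dma_get_cache_alignment() rather than a hard-coded value:

#include <stdio.h>

/* round x up to the next multiple of a (a must be a power of two) */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* assumed DMA cache alignment, e.g. a 64-byte cache line */
static unsigned int dma_minalign = 64;

/* smallest power-of-two bucket >= size; stand-in for __kmalloc_index() */
static int bucket_index(unsigned int size)
{
	int idx = 3;	/* index 3 == 8 bytes, as in the kernel's table */

	while ((1u << idx) < size)
		idx++;
	return idx;
}

int main(void)
{
	unsigned int sizes[] = { 8, 16, 32, 96, 128 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int aligned = ALIGN(sizes[i], dma_minalign);

		/* e.g. kmalloc-8 ends up backed by the kmalloc-64 cache */
		printf("kmalloc-%u -> backed by kmalloc-%u (index %d)\n",
		       sizes[i], aligned, bucket_index(aligned));
	}
	return 0;
}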
Wysocki" <rafael@xxxxxxxxxx> Cc: Saravana Kannan <saravanak@xxxxxxxxxx> Cc: Will Deacon <will@xxxxxxxxxx> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> --- mm/slab_common.c | 24 +++++++++++++++++++++--- 1 file changed, 21 insertions(+), 3 deletions(-) --- a/mm/slab_common.c~mm-slab-limit-kmalloc-minimum-alignment-to-dma_get_cache_alignment +++ a/mm/slab_common.c @@ -17,6 +17,7 @@ #include <linux/cpu.h> #include <linux/uaccess.h> #include <linux/seq_file.h> +#include <linux/dma-mapping.h> #include <linux/proc_fs.h> #include <linux/debugfs.h> #include <linux/kasan.h> @@ -862,9 +863,18 @@ void __init setup_kmalloc_cache_index_ta } } +static unsigned int __kmalloc_minalign(void) +{ + return dma_get_cache_alignment(); +} + void __init new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags) { + unsigned int minalign = __kmalloc_minalign(); + unsigned int aligned_size = kmalloc_info[idx].size; + int aligned_idx = idx; + if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) { flags |= SLAB_RECLAIM_ACCOUNT; } else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) { @@ -877,9 +887,17 @@ new_kmalloc_cache(int idx, enum kmalloc_ flags |= SLAB_CACHE_DMA; } - kmalloc_caches[type][idx] = create_kmalloc_cache( - kmalloc_info[idx].name[type], - kmalloc_info[idx].size, flags); + if (minalign > ARCH_KMALLOC_MINALIGN) { + aligned_size = ALIGN(aligned_size, minalign); + aligned_idx = __kmalloc_index(aligned_size, false); + } + + if (!kmalloc_caches[type][aligned_idx]) + kmalloc_caches[type][aligned_idx] = create_kmalloc_cache( + kmalloc_info[aligned_idx].name[type], + aligned_size, flags); + if (idx != aligned_idx) + kmalloc_caches[type][idx] = kmalloc_caches[type][aligned_idx]; /* * If CONFIG_MEMCG_KMEM is enabled, disable cache merging for _ Patches currently in -mm which might be from catalin.marinas@xxxxxxx are