On Thu, Apr 07, 2022 at 03:46:37AM +0000, Hyeonggon Yoo wrote:
> On Tue, Apr 05, 2022 at 02:57:56PM +0100, Catalin Marinas wrote:
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -838,9 +838,18 @@ void __init setup_kmalloc_cache_index_table(void)
> >  	}
> >  }
> >  
> > -static void __init
> > +unsigned int __weak arch_kmalloc_minalign(void)
> > +{
> > +	return ARCH_KMALLOC_MINALIGN;
> > +}
> > +
> 
> As ARCH_KMALLOC_MINALIGN and arch_kmalloc_minalign() may not be the
> same after patch 10, I think s/ARCH_KMALLOC_MINALIGN/arch_kmalloc_minalign/g
> for every user of it would be more correct?

Not if the code currently using ARCH_KMALLOC_MINALIGN needs a constant.
Yes, there probably are a few places where the code can cope with a
dynamic arch_kmalloc_minalign() but there are two other cases where a
constant is needed:

1. As a BUILD_BUG check because the code is storing some flags in the
   bottom bits of a pointer. A smaller ARCH_KMALLOC_MINALIGN works just
   fine here.

2. As a static alignment for DMA requirements. That's where the newly
   exposed ARCH_DMA_MINALIGN should be used.

Note that this series doesn't make the situation any worse than before
since ARCH_DMA_MINALIGN stays at 128 bytes for arm64. Current users can
evolve to use a dynamic alignment in future patches. My main aim with
this series is to be able to create kmalloc-64 caches on arm64.

> > @@ -851,10 +860,17 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
> >  		flags |= SLAB_ACCOUNT;
> >  	}
> >  
> > -	kmalloc_caches[type][idx] = create_kmalloc_cache(
> > -					kmalloc_info[idx].name[type],
> > -					kmalloc_info[idx].size, flags, 0,
> > -					kmalloc_info[idx].size);
> > +	if (minalign > ARCH_KMALLOC_MINALIGN) {
> > +		aligned_size = ALIGN(aligned_size, minalign);
> > +		aligned_idx = __kmalloc_index(aligned_size, false);
> > +	}
> > +
> > +	if (!kmalloc_caches[type][aligned_idx])
> > +		kmalloc_caches[type][aligned_idx] = create_kmalloc_cache(
> > +					kmalloc_info[aligned_idx].name[type],
> > +					aligned_size, flags, 0, aligned_size);
> > +	if (idx != aligned_idx)
> > +		kmalloc_caches[type][idx] = kmalloc_caches[type][aligned_idx];
> 
> I would prefer detecting minimum kmalloc size in create_kmalloc_caches()
> in runtime instead of changing behavior of new_kmalloc_cache().

That was my initial attempt, but we have a couple of direct
create_kmalloc_cache() (not *_caches) calls, one of them in mm/slab.c
kmem_cache_init(). So I wanted all the minalign logic in a single
place, hence I replaced the explicit create_kmalloc_cache() call with
new_kmalloc_cache(). See this patch and patch 9 for some clean-up.

-- 
Catalin
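
For context, the __weak definition quoted above only provides the
default; an architecture is expected to supply its own
arch_kmalloc_minalign(). A minimal sketch of what such an override
could look like on arm64 (an assumption for illustration, not the
actual hunk from patch 10) is:

	/*
	 * Sketch only: return the runtime-probed cache line size,
	 * capped at ARCH_DMA_MINALIGN, as the minimum kmalloc()
	 * alignment.
	 */
	#include <linux/cache.h>
	#include <linux/minmax.h>
	#include <linux/slab.h>

	unsigned int arch_kmalloc_minalign(void)
	{
		return min_t(unsigned int, cache_line_size(),
			     ARCH_DMA_MINALIGN);
	}

On arm64 cache_line_size() reports the CWG-derived value, so small-CWG
systems would get smaller kmalloc caches while the compile-time
ARCH_DMA_MINALIGN stays at 128 bytes.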
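
To make case 1 concrete, here is a hypothetical example (the
OBJ_FLAG_BITS macro and obj_pack() helper are made up) of a
compile-time check that only works with a constant; a smaller
ARCH_KMALLOC_MINALIGN still satisfies it:

	#include <linux/build_bug.h>
	#include <linux/slab.h>

	#define OBJ_FLAG_BITS	2	/* flags packed into the pointer */

	static inline void *obj_pack(void *obj, unsigned long flags)
	{
		/* needs a compile-time constant, hence ARCH_KMALLOC_MINALIGN */
		BUILD_BUG_ON(ARCH_KMALLOC_MINALIGN < (1UL << OBJ_FLAG_BITS));
		return (void *)((unsigned long)obj | flags);
	}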
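
Similarly for case 2, a hypothetical driver structure (struct foo_dev
is invented for illustration) where the alignment expresses a DMA
constraint and should therefore use the static ARCH_DMA_MINALIGN rather
than the now potentially smaller kmalloc() alignment:

	#include <linux/cache.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	struct foo_dev {
		spinlock_t	lock;
		unsigned long	cpu_state;
		/* keep DMA data away from the cache lines the CPU writes */
		u8		dma_buf[64] __aligned(ARCH_DMA_MINALIGN);
	};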