On Mon, Sep 30, 2019 at 11:23:34AM +0200, Michal Hocko wrote:
> On Mon 23-09-19 18:36:32, Vlastimil Babka wrote:
> > On 8/26/19 1:16 PM, Vlastimil Babka wrote:
> > > In most configurations, kmalloc() happens to return naturally aligned
> > > (i.e. aligned to the block size itself) blocks for power-of-two sizes.
> > > That means some kmalloc() users might unknowingly rely on that
> > > alignment, until stuff breaks when the kernel is built with e.g.
> > > CONFIG_SLUB_DEBUG or CONFIG_SLOB, and blocks stop being aligned. Then
> > > developers have to devise workarounds such as their own kmem caches
> > > with specified alignment [1], which is not always practical, as
> > > recently evidenced in [2].
> > >
> > > The topic has been discussed at LSF/MM 2019 [3]. Adding a
> > > 'kmalloc_aligned()' variant would not help with code unknowingly
> > > relying on the implicit alignment. For slab implementations it would
> > > either require creating more kmalloc caches, or allocating a larger
> > > size and only giving back part of it. That would be wasteful,
> > > especially with a generic alignment parameter (in contrast with a
> > > fixed alignment to size).
> > >
> > > Ideally we should provide mm users what they need without difficult
> > > workarounds or own reimplementations, so let's make the kmalloc()
> > > alignment to size explicitly guaranteed for power-of-two sizes under
> > > all configurations. What does this mean for the three available
> > > allocators?
> > >
> > > * SLAB object layout happens to be mostly unchanged by the patch. The
> > >   implicitly provided alignment could be compromised with
> > >   CONFIG_DEBUG_SLAB due to redzoning; however, SLAB disables redzoning
> > >   for caches with alignment larger than unsigned long long.
> > >   Practically, on at least x86 this includes kmalloc caches, as they
> > >   use cache-line alignment, which is larger than that. Still, this
> > >   patch ensures alignment on all arches and cache sizes.
> > >
> > > * SLUB layout is also unchanged unless redzoning is enabled through
> > >   CONFIG_SLUB_DEBUG and a boot parameter for the particular kmalloc
> > >   cache. With this patch, explicit alignment is guaranteed with
> > >   redzoning as well. This will result in more memory being wasted, but
> > >   that should be acceptable in a debugging scenario.
> > >
> > > * SLOB has no implicit alignment, so this patch adds it explicitly for
> > >   kmalloc(). The potential downside is increased fragmentation. While
> > >   pathological allocation scenarios are certainly possible, in my
> > >   testing, after booting an x86_64 kernel+userspace with virtme, around
> > >   16MB of memory was consumed by slab pages both before and after the
> > >   patch, with the difference in the noise.
> > >
> > > [1] https://lore.kernel.org/linux-btrfs/c3157c8e8e0e7588312b40c853f65c02fe6c957a.1566399731.git.christophe.leroy@xxxxxx/
> > > [2] https://lore.kernel.org/linux-fsdevel/20190225040904.5557-1-ming.lei@xxxxxxxxxx/
> > > [3] https://lwn.net/Articles/787740/
> > >
> > > Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> >
> > So if anyone thinks this is a good idea, please express it (preferably
> > in a formal way such as Acked-by), otherwise it seems the patch will be
> > dropped (due to a private NACK, apparently).
>
> Sigh.
>
> Existing code working around the lack of an alignment guarantee just
> shows that this is necessary. And there wasn't any real technical
> argument against it, except for highly theoretical optimizations/a new
> allocator that would be constrained by the guarantee.
>
> Therefore
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>

Agreed.

Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>

--
 Kirill A. Shutemov
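
[Editor's note: below is a minimal, illustrative sketch of the kind of
dedicated-cache workaround referenced in [1] above, which this patch aims to
make unnecessary for power-of-two kmalloc() sizes. It is not taken from the
patch or the referenced drivers; the names (buf_cache, buf_cache_init,
buf_alloc) and the 512-byte size are hypothetical.]

#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/init.h>

/* Hypothetical driver that needs 512-byte aligned buffers. */
static struct kmem_cache *buf_cache;

static int __init buf_cache_init(void)
{
	/*
	 * Explicitly request 512-byte alignment via a dedicated cache
	 * instead of assuming kmalloc(512) returns an aligned block,
	 * which could break with SLOB or SLUB redzoning before this patch.
	 */
	buf_cache = kmem_cache_create("buf_cache", 512, 512, 0, NULL);
	if (!buf_cache)
		return -ENOMEM;
	return 0;
}

static void *buf_alloc(void)
{
	void *p = kmem_cache_alloc(buf_cache, GFP_KERNEL);

	/* With the dedicated cache, this alignment always holds. */
	WARN_ON(p && !IS_ALIGNED((unsigned long)p, 512));
	return p;
}

[With the guarantee this patch adds, a plain kmalloc(512, GFP_KERNEL) would
return a 512-byte aligned block under SLAB, SLUB and SLOB alike, so a
dedicated cache would not be needed for this case.]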