On Wed, Aug 28, 2019 at 06:45:07PM +0000, Christopher Lameter wrote:
> > Ideally we should provide to mm users what they need without
> > difficult workarounds or own reimplementations, so let's make the
> > kmalloc() alignment to size explicitly guaranteed for power-of-two
> > sizes under all configurations.
>
> The objection remains that this will create exceptions for the
> general notion that all kmalloc caches are aligned to
> KMALLOC_MINALIGN which may

Hmm?  kmalloc caches will be aligned to both KMALLOC_MINALIGN and the
natural alignment of the object.

> be surprising and it limits the optimizations that slab allocators
> may use for optimizing data use.  The SLOB allocator was designed in
> such a way that data wastage is limited.  The changes here sabotage
> that goal and show that future slab allocators may be similarly
> constrained with the exceptional alignments implemented.  Additional
> debugging features etc etc must all support the exceptional alignment
> requirements.

While I sympathise with the poor programmer who has to write the
fourth implementation of the sl*b interface, it's more for the pain of
picking a new letter than the pain of needing to honour the alignment
of allocations.  There are many places in the kernel which assume
alignment, and they break when it's not supplied.  I believe we have a
better overall system if the MM developers provide stronger guarantees
than if the MM consumers have to work around weak guarantees.

> > * SLOB has no implicit alignment so this patch adds it explicitly
> >   for kmalloc().  The potential downside is increased
> >   fragmentation.  While pathological allocation scenarios are
> >   certainly possible, in my testing, after booting an x86_64
> >   kernel+userspace with virtme, around 16MB of memory was consumed
> >   by slab pages both before and after the patch, with the
> >   difference in the noise.
>
> This change to slob will cause a significant additional use of
> memory.  The advertised advantage of SLOB is that *minimal* memory
> will be used since it is targeted for embedded systems.  Different
> types of slab objects of varying sizes can be allocated in the same
> memory page to reduce allocation overhead.

Did you not read the part where he said the difference was in the
noise?

> The result of this patch is just to use more memory to be safe from
> certain pathologies where one subsystem was relying on an alignment
> that was not specified.  That is why this approach should not be
> called "natural" but "implicit alignment".  The one using the slab
> cache is not aware that the slab allocator provides objects aligned
> in a special way (which is in general not needed.  There seems to be
> a single pathological case that needs to be addressed and I thought
> that was due to some brokenness in the hardware?).

It turns out there are lots of places which assume this, including the
pmem driver, the ramdisk driver and a few other similar drivers.

> It is better to ensure that subsystems that require special alignment
> explicitly tell the allocator about this.

But the subsystems which have this limitation aren't the ones doing
the allocation; other subsystems allocate the memory and then pass it
to the subsystems with the limitation.  So you're forcing
communication of these limits up & down the stack.

> I still think implicit exceptions to alignments are a bad idea.
> Those need to be explicitly specified and that is possible using
> kmem_cache_create().
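For concreteness, the explicit route being advocated means every such
caller setting up its own cache, along these lines (a sketch only;
"sector_cache" and the 512-byte figure are made up for illustration):

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/slab.h>

    /* A cache whose objects are guaranteed to be 512-byte aligned. */
    static struct kmem_cache *sector_cache;

    static int __init sector_cache_init(void)
    {
            /* The third argument is the explicit alignment, in bytes. */
            sector_cache = kmem_cache_create("sector_cache", 512, 512,
                                             0, NULL);
            if (!sector_cache)
                    return -ENOMEM;
            return 0;
    }

Every allocation site then has to call
kmem_cache_alloc(sector_cache, GFP_KERNEL) instead of plain
kmalloc(512, GFP_KERNEL), which is exactly the per-caller plumbing in
dispute here.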
I swear we covered this last time the topic came up, but XFS would
need to create special slab caches for each size between 512 and
PAGE_SIZE.  Potentially larger, depending on whether the MM developers
are willing to guarantee that kmalloc(PAGE_SIZE * 2, GFP_KERNEL) will
return a PAGE_SIZE-aligned block of memory indefinitely.
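Spelled out, that workaround would look something like the sketch
below: one cache per power-of-two size from 512 up to PAGE_SIZE, each
with explicit alignment equal to its size.  The names and structure
are hypothetical, not actual XFS code:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/slab.h>

    /* One cache per power-of-two size from 2^9 (512) to 2^PAGE_SHIFT. */
    static struct kmem_cache *xb_caches[PAGE_SHIFT - 9 + 1];

    static int __init xb_caches_init(void)
    {
            unsigned int shift;
            char name[24];

            for (shift = 9; shift <= PAGE_SHIFT; shift++) {
                    /* current kernels duplicate the name string */
                    snprintf(name, sizeof(name), "xb-%u", 1U << shift);
                    xb_caches[shift - 9] = kmem_cache_create(name,
                                    1U << shift,  /* object size */
                                    1U << shift,  /* alignment */
                                    0, NULL);
                    if (!xb_caches[shift - 9])
                            return -ENOMEM;
            }
            return 0;
    }

And every buffer allocation then has to pick the right cache by size,
while anything larger than PAGE_SIZE still depends on the kmalloc()
guarantee mentioned above.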