RE: [PATCH v4] slob: add size header to all allocations

From: Vlastimil Babka
> Sent: 30 November 2021 14:56
> 
> On 11/23/21 11:18, David Laight wrote:
> > From: Vlastimil Babka
> >> Sent: 22 November 2021 10:46
> >>
> >> On 11/22/21 11:36, Christoph Lameter wrote:
> >> > On Mon, 22 Nov 2021, Vlastimil Babka wrote:
> >> >
> >> >> But it seems there's no reason we couldn't do better? I.e. use the value of
> >> >> SLOB_HDR_SIZE only to align the beginning of actual object (and name the
> >> >> define different than SLOB_HDR_SIZE). But the size of the header, where we
> >> >> store the object length could be just a native word - 4 bytes on 32bit, 8 on
> >> >> 64bit. The address of the header shouldn't have a reason to be also aligned
> >> >> to ARCH_KMALLOC_MINALIGN / ARCH_SLAB_MINALIGN as only SLOB itself processes
> >> >> it and not the slab consumers which rely on those alignments?
> >> >
> >> > Well the best way would be to put it at the end of the object in order to
> >> > avoid the alignment problem. This is a particular issue with SLOB because
> >> > it allows multiple types of objects in a single page frame.
> >> >
> >> > If only one type of object would be allowed then the object size etc can
> >> > be stored in the page struct.
> >
> > Or just a single byte that is the index of the associated free list structure.
> > For 32bit and for the smaller kmalloc() area it may be reasonable to have
> > a separate array indexed by the page of the address.
> >
> >> > So I guess placement at the beginning cannot be avoided. That in turn runs
> >> > into trouble with the DMA requirements on some platforms where the
> >> > beginning of the object has to be cache line aligned.
> >>
> >> It's no problem to have the real beginning of the object aligned, and the
> >> prepended header not.
> >
> > I'm not sure that helps.
> > The header can't share a cache line with the previous item (because it
> > might be mapped for DMA) so will always take a full cache line.
> 
> So if this is true, then I think we already have a problem with SLOB today
> (and AFAICS it's not even due to changes done by my 2019 commit 59bb47985c1d
> ("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)" but
> older).
> 
> Let's say we are on arm64 where (AFAICS):
> ARCH_KMALLOC_MINALIGN = ARCH_DMA_MINALIGN = 128
> ARCH_SLAB_MINALIGN = 64

Is that valid?
Isn't SLAB used to implement kmalloc(), so the architecture-defined
alignment must apply?

> The point is that ARCH_SLAB_MINALIGN is smaller than ARCH_DMA_MINALIGN.
> 
> Let's say we call kmalloc(64) and get a completely fresh page.
> In SLOB, alloc() or rather __do_kmalloc_node() will calculate minalign to
> max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN) thus 128.
> It will call slob_alloc() with size = size + minalign = 64 + 128 = 192, and
> align = align_offset = 128.
> Thus the allocation will use 128 bytes for the header, 64 for the object.
> Both the header and object aligned to 128 bytes.
> But the remaining 64 bytes of the second 128 bytes will remain free, as the
> allocated size is 192 bytes:
> 
> | 128B header, aligned | 64B object | 64B free | rest also free |

That is horribly wasteful on memory :-)

> If there's another kmalloc allocation, the 128-byte alignment due to
> ARCH_KMALLOC_MINALIGN will prevent it from using these 64 bytes, so that's
> fine. But if there's a kmem_cache_alloc() from a cache serving <=64B
> objects, it will be aligned to ARCH_SLAB_MINALIGN and happily use those 64
> bytes that share the 128-byte block where the previous kmalloc allocation lies.

If the memory returned by kmem_cache_alloc() can be used for DMA then
ARCH_DMA_MINALIGN has to apply to the returned buffers.
So, maybe, that cache can't exist?

I'd expect that ARCH_DMA_MINALIGN forces allocations to be a multiple
of that size.
More particularly the rest of the area can't be allocated to anything else.
So it ought to be valid to return the 2nd half of a 128 byte cache line
provided the first half isn't written while the allocation is active.

But does ARCH_KMALLOC_MINALIGN only apply to 'large' items?
Small items only need aligning to the power of 2 below their size.
So 8 bytes items only need 8 byte alignment even though a larger
item might need (say) 64 byte alignment.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



