Re: [PATCH 06/25] slab: make kmem_cache_create() work with 32-bit sizes

On Thu, Apr 05, 2018 at 02:48:33PM -0700, Andrew Morton wrote:
> On Tue, 6 Mar 2018 12:37:49 -0600 (CST) Christopher Lameter <cl@xxxxxxxxx> wrote:
> 
> > On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> > 
> > > struct kmem_cache::size and ::align were always 32-bit.
> > >
> > > Out of curiosity I created 4GB kmem_cache, it oopsed with division by 0.
> > > kmem_cache_create(1UL<<32+1) created 1-byte cache as expected.
> > 
> > Could you add a check to avoid that in the future?
> > 
> > > size_t doesn't work and never did.
> > 
> > Its not so simple. Please verify that the edge cases of all object size /
> > alignment etc calculations are doable with 32 bit entities first.
> > 
> > And size_t makes sense as a parameter.
> 
> Alexey, please don't let this stuff dangle on.
> 
> I think I'll merge this as-is but some fixups might be needed as a
> result of Christoph's suggestion?

I see this email in public archives, but not in my mailbox :-\

Anyway,

I think the answer is in fact simple.

1)
"int size" proves that 4GB+ caches were always broken on both SLUB
and SLAB. I could audit calculate_sizes() and friends, but why bother
when create_cache() already truncated everything.

You wrote:

	that the edge cases of all object size ...
	... are doable with 32 bit entities

AS IF they were doable with 64-bit. They weren't.

2)
Dynamically allocated kernel data structures are in fact small.
The biggest one I know of is "struct kvm_vcpu", which is 20KB on my
machine.

kmalloc is limited to 64MB; past that it falls back to the page
allocator. Which means that a cache for some huge structure would have
to be created explicitly, or the allocation is not affected by the
conversion at all because it still goes through the page allocator.

3)
->size and ->align were signed ints; making them unsigned makes
overflows twice as unlikely :^)

> And size_t makes sense as a parameter.

size_t doesn't make sense for the kernel, as 4GB+ objects are few and
far between.

I remember such patches could shrink SLUB by ~1KB; SLUB is 30KB total.
So that's a 2-3% reduction simply by using 32-bit arithmetic instead of
"unsigned long" and "size_t".

Userspace shifted to size_t, people copy that, and it bloats the kernel
for no reason.



