Re: [PATCH 23/25] slub: make struct kmem_cache_order_objects::x unsigned int

On Thu, Apr 05, 2018 at 02:51:08PM -0700, Andrew Morton wrote:
> On Tue, 6 Mar 2018 12:51:47 -0600 (CST) Christopher Lameter <cl@xxxxxxxxx> wrote:
> 
> > On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> > 
> > > struct kmem_cache_order_objects is for mixing order and number of objects,
> > > and orders aren't big enough to warrant 64-bit width.
> > >
> > > Propagate unsignedness down so that everything fits.
> > >
> > > !!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!
> > 
> > PAGE_SIZE could be a couple of megs on some platforms (256M or so on
> > Itanium/PowerPC???). So what are the worst-case scenarios here?
> > 
> > I think both order and # object should fit in a 32 bit number.
> > 
> > A page with 256M size and 4 byte objects would have 64M objects.
> 
> Another dangling review comment.  Alexey, please respond?

PowerPC's maximum PAGE_SIZE is 256KB, IA64's is 64KB.

So "PAGE_SIZE << order" overflows if order is 14 (or 13 if signed int
slips in somewhere. Highest safe order is 12, which should be enough.
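
A quick userspace sketch of that arithmetic, assuming the 256KB PowerPC
worst case above (WORST_PAGE_SIZE and the program itself are illustrative
only, not kernel code):

#include <stdio.h>

/* Worst-case PAGE_SIZE from above: 256KB = 2^18 bytes. */
#define WORST_PAGE_SIZE (256u * 1024u)

int main(void)
{
	unsigned int order;

	for (order = 12; order <= 14; order++) {
		/* Do the shift in 64 bits so we can see where 32 bits would overflow. */
		unsigned long long bytes = (unsigned long long)WORST_PAGE_SIZE << order;

		printf("order %2u: %llu bytes -> %s unsigned int, %s signed int\n",
		       order, bytes,
		       bytes > 0xffffffffULL ? "overflows" : "fits in",
		       bytes > 0x7fffffffULL ? "overflows" : "fits in");
	}
	return 0;
}

It prints that order 12 fits both types, order 13 overflows a signed int,
and order 14 overflows even an unsigned int.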

When was the last time you saw a 2GB slab?
It never happens, as the costly order is 3(?), and 256KB << 3 is only 2MB.



