On Wed, 21 Mar 2018, Mikulas Patocka wrote:

> > +	s->allocflags = allocflags;
>
> I'd also use "WRITE_ONCE(s->allocflags, allocflags)" here and when writing
> s->oo and s->min to avoid some possible compiler misoptimizations.

It only matters that 0 etc. is never written.

> Another problem is that it updates s->oo and later it updates s->max:
> 	s->oo = oo_make(order, size, s->reserved);
> 	s->min = oo_make(get_order(size), size, s->reserved);
> 	if (oo_objects(s->oo) > oo_objects(s->max))
> 		s->max = s->oo;
> --- so, the concurrently running code could see s->oo > s->max, which
> could trigger some memory corruption.

Well, s->max is only relevant for code that analyses the details of slab
structures for diagnostics.

> s->max is only used in memory allocations -
> kmalloc(BITS_TO_LONGS(oo_objects(s->max)) * sizeof(unsigned long)), so
> perhaps we could fix the bug by removing s->max at all and always
> allocating enough memory for the maximum possible number of objects?
>
> -	kmalloc(BITS_TO_LONGS(oo_objects(s->max)) * sizeof(unsigned long), GFP_KERNEL);
> +	kmalloc(BITS_TO_LONGS(MAX_OBJS_PER_PAGE) * sizeof(unsigned long), GFP_KERNEL);

MAX_OBJS_PER_PAGE is 32k, so you are looking at contiguous allocations of
256 kbytes. Not good.

The simplest measure would be to disallow changing the order while the
slab contains objects.


Subject: slub: Disallow order changes when objects exist in a slab

There seem to be a couple of races that would have to be addressed if the
slab order were changed during active use. Let's disallow this in the same
way that we already disallow other changes of slab characteristics when
objects are active.
Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -4919,6 +4919,9 @@ static ssize_t order_store(struct kmem_c
 	unsigned long order;
 	int err;
 
+	if (any_slab_objects(s))
+		return -EBUSY;
+
 	err = kstrtoul(buf, 10, &order);
 	if (err)
 		return err;