Hi Namhyung,

On Wed, Feb 29, 2012 at 05:54:34PM +0900, Namhyung Kim wrote:
> Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
> calls free_pages() incorrectly on an object in a tail page, she will
> get confused by the undefined result. Setting the flag would help her
> by emitting a warning from bad_page() in such a case.
>
> Reported-by: Sangseok Lee <sangseok.lee@xxxxxxx>
> Signed-off-by: Namhyung Kim <namhyung.kim@xxxxxxx>

I have read this thread, and I don't think we have reached the real
point yet. This is not a compound-page problem: we can hit the same
situation whenever we allocate a high-order page without __GFP_COMP
and then free a middle page of it. Fortunately, put_page_testzero()
in __free_pages() can already catch such a problem if CONFIG_DEBUG_VM
is enabled. Did you try reproducing it with CONFIG_DEBUG_VM?

> ---
>  mm/slub.c |   12 ++++++++++--
>  1 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 33bab2aca882..575baacbec9b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1287,6 +1287,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	struct page *page;
>  	struct kmem_cache_order_objects oo = s->oo;
>  	gfp_t alloc_gfp;
> +	int i;
>
>  	flags &= gfp_allowed_mask;
>
> @@ -1320,6 +1321,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	if (!page)
>  		return NULL;
>
> +	for (i = 0; i < 1 << oo_order(oo); i++)
> +		__SetPageSlab(page + i);
> +
>  	if (kmemcheck_enabled
>  		&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
>  		int pages = 1 << oo_order(oo);
> @@ -1369,7 +1373,6 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>
>  	inc_slabs_node(s, page_to_nid(page), page->objects);
>  	page->slab = s;
> -	page->flags |= 1 << PG_slab;
>
>  	start = page_address(page);
>
> @@ -1396,6 +1399,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  {
>  	int order = compound_order(page);
>  	int pages = 1 << order;
> +	int i;
>
>  	if (kmem_cache_debug(s)) {
>  		void *p;
> @@ -1413,7 +1417,11 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  			NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE,
>  			-pages);
>
> -	__ClearPageSlab(page);
> +	for (i = 0; i < pages; i++) {
> +		BUG_ON(!PageSlab(page + i));
> +		__ClearPageSlab(page + i);
> +	}
> +
>  	reset_page_mapcount(page);
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += pages;
> --
> 1.7.9
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@xxxxxxxxx. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
> Don't email: <a href=mailto:"dont@xxxxxxxxx"> email@xxxxxxxxx </a>
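To make the failure mode concrete, here is a rough, hypothetical sketch of the misuse under discussion (kernel-context code, not from the thread; the allocation size and cache layout are made up for illustration):

	/* Illustrative only: suppose a SLUB cache backs its objects with
	 * an order-2 (4-page) slab, so an object near the end of the slab
	 * sits in a tail page.
	 */
	void *obj = kmalloc(10000, GFP_KERNEL);	/* may land in a tail page */

	/* Buggy: obj is slab memory, so free_pages() is the wrong API.
	 * Today, if obj's page is a tail page, PG_slab is clear there and
	 * the misuse proceeds silently with undefined results; with this
	 * patch, bad_page() would warn.  (The same interior-page free can
	 * also be caught by put_page_testzero() under CONFIG_DEBUG_VM.)
	 */
	free_pages((unsigned long)obj, 0);

	/* Correct: */
	kfree(obj);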