On Fri, 18 Nov 2011, Stanislaw Gruszka wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index 7d2a996..a66be56 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3645,6 +3645,9 @@ void __init kmem_cache_init(void)
> 	struct kmem_cache *temp_kmem_cache_node;
> 	unsigned long kmalloc_size;
>
> +	if (debug_guardpage_minorder())
> +		slub_max_order = 0;
> +
> 	kmem_size = offsetof(struct kmem_cache, node) +
> 		nr_node_ids * sizeof(struct kmem_cache_node *);
>

I'd recommend at least printing a warning about why slub_max_order was reduced, because otherwise users may be confused about why they can no longer change any cache's order via /sys/kernel/slab/cache/order.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
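As a rough sketch of what that could look like on top of the hunk above (the exact message wording is just a suggestion, and pr_warn() could equally be an explicit printk(KERN_WARNING ...)):

```c
	if (debug_guardpage_minorder()) {
		slub_max_order = 0;
		/* Suggested addition: tell the user why higher-order
		 * allocations were disabled, so the read-only behavior of
		 * /sys/kernel/slab/<cache>/order is not a surprise.
		 * Message text is illustrative, not from the patch. */
		pr_warn("SLUB: limiting slub_max_order to 0 due to debug_guardpage_minorder\n");
	}
```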