To avoid locking and per-cpu overhead, SLUB optimistically uses
high-order allocations up to order-3 by default and falls back to
lower-order allocations if they fail. While care is taken that the
caller and kswapd take no unusual steps in response to this, there are
further consequences, such as shrinkers having to free more objects
before any memory is released. There is anecdotal evidence that
significant time is being spent looping in shrinkers with insufficient
progress being made (https://lkml.org/lkml/2011/4/28/361), keeping
kswapd awake.

SLUB is now the default allocator, and some bug reports have been
pinned down to SLUB using high orders during operations like copying
large amounts of data. SLUB's use of high orders benefits applications
that are sized appropriately to memory, but this does not necessarily
apply to large file servers or desktops. This patch makes SLUB use
order-0 pages by default, as SLAB does. There is further evidence that
this keeps kswapd's usage lower (https://lkml.org/lkml/2011/5/10/383).

Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
---
 Documentation/vm/slub.txt |    2 +-
 mm/slub.c                 |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/vm/slub.txt b/Documentation/vm/slub.txt
index 07375e7..778e9fa 100644
--- a/Documentation/vm/slub.txt
+++ b/Documentation/vm/slub.txt
@@ -117,7 +117,7 @@ can be influenced by kernel parameters:
 
 slub_min_objects=x	(default 4)
 slub_min_order=x	(default 0)
-slub_max_order=x	(default 1)
+slub_max_order=x	(default 0)
 
 slub_min_objects allows to specify how many objects must at least fit
 into one slab in order for the allocation order to be acceptable.

diff --git a/mm/slub.c b/mm/slub.c
index 1071723..23a4789 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2198,7 +2198,7 @@ EXPORT_SYMBOL(kmem_cache_free);
  * take the list_lock.
  */
 static int slub_min_order;
-static int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static int slub_max_order;
 static int slub_min_objects;
 
 /*
-- 
1.7.3.4
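
P.S. As context for reviewers unfamiliar with the fallback the
changelog refers to: SLUB first attempts a slab allocation at the
preferred (possibly high) order and, if that fails, retries at the
cache's minimum order rather than forcing reclaim. Below is a minimal
userspace sketch of that behaviour; it is an illustration only, not
the actual allocate_slab() code in mm/slub.c, and model_alloc_pages()
and the hard-coded orders are stand-ins.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the page allocator; models a fragmented
 * system where contiguous high-order allocations always fail. */
static void *model_alloc_pages(int order)
{
	if (order > 0)
		return NULL;
	return malloc(4096);	/* one order-0 page */
}

/* Simplified model of SLUB's fallback: try the preferred order
 * first, then retry at the minimum order if that fails. */
static void *model_allocate_slab(int preferred_order, int min_order)
{
	void *page = model_alloc_pages(preferred_order);

	if (!page)
		page = model_alloc_pages(min_order);
	return page;
}

int main(void)
{
	/* With slub_max_order=3 the first attempt is order-3; with
	 * this patch the default first attempt is order-0. */
	void *slab = model_allocate_slab(3, 0);

	printf("slab %sallocated\n", slab ? "" : "not ");
	free(slab);
	return 0;
}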