+ fs-buffer-move-allocation-failure-loop-into-the-allocator.patch added to -mm tree

Subject: + fs-buffer-move-allocation-failure-loop-into-the-allocator.patch added to -mm tree
To: hannes@xxxxxxxxxxx,azurit@xxxxxxxx,mhocko@xxxxxxx,stable@xxxxxxxxxx
From: akpm@xxxxxxxxxxxxxxxxxxxx
Date: Fri, 11 Oct 2013 13:52:35 -0700


The patch titled
     Subject: fs: buffer: move allocation failure loop into the allocator
has been added to the -mm tree.  Its filename is
     fs-buffer-move-allocation-failure-loop-into-the-allocator.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/fs-buffer-move-allocation-failure-loop-into-the-allocator.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/fs-buffer-move-allocation-failure-loop-into-the-allocator.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: fs: buffer: move allocation failure loop into the allocator

Buffer allocation has a very crude indefinite loop around waking the
flusher threads and performing global NOFS direct reclaim because it
cannot handle allocation failures.

The most immediate problem with this is that the allocation may fail due
to a memory cgroup limit, in which case flushers plus global direct
reclaim might not make any progress towards resolving the situation at
all.  Unlike the global case, a memory cgroup may have no page cache at
all, only anonymous pages and no swap, so there is nothing for the loop
to reclaim.  The result is a reclaim livelock with insane IO from waking
the flushers and thrashing unrelated filesystem cache in a tight loop.
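
For reference, the loop in question lives in __getblk_slow() and
free_more_memory() in fs/buffer.c.  The sketch below is abridged for
illustration and is not the verbatim source of that era:

	/* fs/buffer.c (pre-patch, abridged) */
	static void free_more_memory(void)
	{
		struct zone *zone;
		int nid;

		/* Improvised global reclaim: kick the flushers... */
		wakeup_flusher_threads(1024, WB_REASON_FREE_MORE_MEM);
		yield();

		/* ...and run GFP_NOFS direct reclaim on every node. */
		for_each_online_node(nid) {
			(void)first_zones_zonelist(node_zonelist(nid, GFP_NOFS),
						   gfp_zone(GFP_NOFS), NULL,
						   &zone);
			if (zone)
				try_to_free_pages(node_zonelist(nid, GFP_NOFS),
						  0, GFP_NOFS, NULL);
		}
	}

	static struct buffer_head *
	__getblk_slow(struct block_device *bdev, sector_t block,
		      unsigned size)
	{
		for (;;) {
			struct buffer_head *bh;
			int ret;

			bh = __find_get_block(bdev, block, size);
			if (bh)
				return bh;

			ret = grow_buffers(bdev, block, size);
			if (ret < 0)
				return NULL;
			if (ret == 0)
				/* Page allocation failed: reclaim, retry forever. */
				free_more_memory();
		}
	}

None of this is guaranteed to make progress against a memcg limit, which
is how the tight loop described above comes about.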

Use __GFP_NOFAIL allocations for buffers for now.  This makes sure that
any looping happens in the page allocator, which knows how to orchestrate
kswapd, direct reclaim, and the flushers sensibly.  It also allows memory
cgroups to detect allocations that can't handle failure and will allow
them to ultimately bypass the limit if reclaim cannot make progress.
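
For context, "looping in the allocator" means the __GFP_NOFAIL check in
the page allocator's slowpath retry decision, which at this time looked
roughly like the following in mm/page_alloc.c (abridged here for
illustration):

	/* mm/page_alloc.c (abridged): should the slowpath try again? */
	static inline int
	should_alloc_retry(gfp_t gfp_mask, unsigned int order,
			   unsigned long did_some_progress,
			   unsigned long pages_reclaimed)
	{
		/* Do not loop if specifically requested */
		if (gfp_mask & __GFP_NORETRY)
			return 0;

		/* Always retry if specifically requested */
		if (gfp_mask & __GFP_NOFAIL)
			return 1;

		/* Orders up to PAGE_ALLOC_COSTLY_ORDER retry implicitly */
		if (order <= PAGE_ALLOC_COSTLY_ORDER)
			return 1;

		return 0;
	}

Each retry goes back through the slowpath, which wakes kswapd and runs
direct reclaim between attempts -- exactly the orchestration the
open-coded loop in fs/buffer.c lacked.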

Reported-by: azurIt <azurit@xxxxxxxx>
Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: <stable@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/buffer.c     |   14 ++++++++++++--
 mm/memcontrol.c |    2 ++
 2 files changed, 14 insertions(+), 2 deletions(-)

diff -puN fs/buffer.c~fs-buffer-move-allocation-failure-loop-into-the-allocator fs/buffer.c
--- a/fs/buffer.c~fs-buffer-move-allocation-failure-loop-into-the-allocator
+++ a/fs/buffer.c
@@ -1005,9 +1005,19 @@ grow_dev_page(struct block_device *bdev,
 	struct buffer_head *bh;
 	sector_t end_block;
 	int ret = 0;		/* Will call free_more_memory() */
+	gfp_t gfp_mask;
 
-	page = find_or_create_page(inode->i_mapping, index,
-		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
+	gfp_mask |= __GFP_MOVABLE;
+	/*
+	 * XXX: __getblk_slow() can not really deal with failure and
+	 * will endlessly loop on improvised global reclaim.  Prefer
+	 * looping in the allocator rather than here, at least that
+	 * code knows what it's doing.
+	 */
+	gfp_mask |= __GFP_NOFAIL;
+
+	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
 	if (!page)
 		return ret;
 
diff -puN mm/memcontrol.c~fs-buffer-move-allocation-failure-loop-into-the-allocator mm/memcontrol.c
--- a/mm/memcontrol.c~fs-buffer-move-allocation-failure-loop-into-the-allocator
+++ a/mm/memcontrol.c
@@ -2766,6 +2766,8 @@ done:
 	return 0;
 nomem:
 	*ptr = NULL;
+	if (gfp_mask & __GFP_NOFAIL)
+		return 0;
 	return -ENOMEM;
 bypass:
 	*ptr = root_mem_cgroup;
_

Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

mm-memcg-protect-mem_cgroup_read_events-for-cpu-hotplug.patch
mm-vmscanc-dont-forget-to-free-shrinker-nr_deferred.patch
mm-memcg-handle-non-error-oom-situations-more-gracefully.patch
fs-buffer-move-allocation-failure-loop-into-the-allocator.patch
mm-nobootmemc-have-__free_pages_memory-free-in-larger-chunks.patch
memcg-refactor-mem_control_numa_stat_show.patch
memcg-support-hierarchical-memorynuma_stats.patch
mm-avoid-increase-sizeofstruct-page-due-to-split-page-table-lock.patch
mm-rename-use_split_ptlocks-to-use_split_pte_ptlocks.patch
mm-convert-mm-nr_ptes-to-atomic_long_t.patch
mm-introduce-api-for-split-page-table-lock-for-pmd-level.patch
mm-thp-change-pmd_trans_huge_lock-to-return-taken-lock.patch
mm-thp-move-ptl-taking-inside-page_check_address_pmd.patch
mm-thp-do-not-access-mm-pmd_huge_pte-directly.patch
mm-hugetlb-convert-hugetlbfs-to-use-split-pmd-lock.patch
mm-convert-the-rest-to-new-page-table-lock-api.patch
mm-implement-split-page-table-lock-for-pmd-level.patch
x86-mm-enable-split-page-table-lock-for-pmd-level.patch
memblock-factor-out-of-top-down-allocation.patch
memblock-introduce-bottom-up-allocation-mode.patch
x86-mm-factor-out-of-top-down-direct-mapping-setup.patch
x86-mem-hotplug-support-initialize-page-tables-in-bottom-up.patch
x86-acpi-crash-kdump-do-reserve_crashkernel-after-srat-is-parsed.patch
mem-hotplug-introduce-movable_node-boot-option.patch
swap-add-a-simple-detector-for-inappropriate-swapin-readahead-fix.patch
linux-next.patch
debugging-keep-track-of-page-owners-fix-2-fix-fix-fix.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



