This is a note to let you know that I've just added the patch titled

    shmem: fix init_page_accessed use to stop !PageLRU bug

to the 3.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     shmem-fix-init_page_accessed-use-to-stop-pagelru-bug.patch
and it can be found in the queue-3.14 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 66d2f4d28cd030220e7ea2a628993fcabcb956d1 Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd@xxxxxxxxxx>
Date: Wed, 2 Jul 2014 15:22:38 -0700
Subject: shmem: fix init_page_accessed use to stop !PageLRU bug

From: Hugh Dickins <hughd@xxxxxxxxxx>

commit 66d2f4d28cd030220e7ea2a628993fcabcb956d1 upstream.

Under shmem swapping load, I sometimes hit the VM_BUG_ON_PAGE(!PageLRU)
in isolate_lru_pages() at mm/vmscan.c:1281!

Commit 2457aec63745 ("mm: non-atomically mark page accessed during page
cache allocation where possible") looks like interrupted work-in-progress.

mm/filemap.c's call to init_page_accessed() is fine, but not mm/shmem.c's -
shmem_write_begin() is clearly wrong to use it after shmem_getpage(), when
the page is always visible in radix_tree, and often already on LRU.

Revert change to shmem_write_begin(), and use init_page_accessed() or
mark_page_accessed() appropriately for SGP_WRITE in shmem_getpage_gfp().

SGP_WRITE also covers shmem_symlink(), which did not mark_page_accessed()
before; but since many other filesystems use [__]page_symlink(), which did
and does mark the page accessed, consider this as rectifying an oversight.

Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Acked-by: Mel Gorman <mgorman@xxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: Prabhakar Lad <prabhakar.csengg@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 mm/shmem.c |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1035,6 +1035,9 @@ repeat:
 		goto failed;
 	}
 
+	if (page && sgp == SGP_WRITE)
+		mark_page_accessed(page);
+
 	/* fallocated page? */
 	if (page && !PageUptodate(page)) {
 		if (sgp != SGP_READ)
@@ -1116,6 +1119,9 @@ repeat:
 		shmem_recalc_inode(inode);
 		spin_unlock(&info->lock);
 
+		if (sgp == SGP_WRITE)
+			mark_page_accessed(page);
+
 		delete_from_swap_cache(page);
 		set_page_dirty(page);
 		swap_free(swap);
@@ -1142,6 +1148,9 @@ repeat:
 
 		__SetPageSwapBacked(page);
 		__set_page_locked(page);
+		if (sgp == SGP_WRITE)
+			init_page_accessed(page);
+
 		error = mem_cgroup_cache_charge(page, current->mm,
 						gfp & GFP_RECLAIM_MASK);
 		if (error)
@@ -1438,13 +1447,9 @@ shmem_write_begin(struct file *file, str
 			loff_t pos, unsigned len, unsigned flags,
 			struct page **pagep, void **fsdata)
 {
-	int ret;
 	struct inode *inode = mapping->host;
 	pgoff_t index = pos >> PAGE_CACHE_SHIFT;
-	ret = shmem_getpage(inode, index, pagep, SGP_WRITE, NULL);
-	if (ret == 0 && *pagep)
-		init_page_accessed(*pagep);
-	return ret;
+	return shmem_getpage(inode, index, pagep, SGP_WRITE, NULL);
 }
 
 static int


Patches currently in stable-queue which might be from hughd@xxxxxxxxxx are

queue-3.14/mm-page_alloc-use-jump-labels-to-avoid-checking-number_of_cpusets.patch
queue-3.14/mm-non-atomically-mark-page-accessed-during-page-cache-allocation-where-possible.patch
queue-3.14/mm-page_alloc-convert-hot-cold-parameter-and-immediate-callers-to-bool.patch
queue-3.14/mm-page_alloc-only-check-the-zone-id-check-if-pages-are-buddies.patch
queue-3.14/mm-page_alloc-only-check-the-alloc-flags-and-gfp_mask-for-dirty-once.patch
queue-3.14/mm-page_alloc-take-the-alloc_no_watermark-check-out-of-the-fast-path.patch
queue-3.14/fs-buffer-do-not-use-unnecessary-atomic-operations-when-discarding-buffers.patch
queue-3.14/mm-do-not-use-atomic-operations-when-releasing-pages.patch
queue-3.14/mm-page_alloc-reduce-number-of-times-page_to_pfn-is-called.patch
queue-3.14/mm-page_alloc-use-unsigned-int-for-order-in-more-places.patch
queue-3.14/mm-shmem-avoid-atomic-operation-during-shmem_getpage_gfp.patch
queue-3.14/shmem-fix-init_page_accessed-use-to-stop-pagelru-bug.patch
queue-3.14/mm-memory.c-use-entry-access_once-pte-in-handle_pte_fault.patch
queue-3.14/mm-do-not-use-unnecessary-atomic-operations-when-adding-pages-to-the-lru.patch
queue-3.14/include-linux-jump_label.h-expose-the-reference-count.patch
queue-3.14/mm-page_alloc-calculate-classzone_idx-once-from-the.patch
queue-3.14/mm-page_alloc-lookup-pageblock-migratetype-with-irqs-enabled-during-free.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html