+ mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page.patch added to -mm tree

The patch titled
     Subject: mm: reuse only-pte-mapped KSM page in do_wp_page()
has been added to the -mm tree.  Its filename is
     mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Subject: mm: reuse only-pte-mapped KSM page in do_wp_page()

Add an optimization for KSM pages almost identical to the one we
already have for ordinary anonymous pages: if a write fault occurs in
a page that is mapped by only one pte and is not in the swap cache,
the page may be reused without copying its content.
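
As an illustration only (not part of the patch), a minimal userspace
program that can exercise this path might look like the following.
The merge itself is best-effort: it assumes a kernel with CONFIG_KSM
and ksmd running (echo 1 > /sys/kernel/mm/ksm/run).

	#define _DEFAULT_SOURCE
	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>

	int main(void)
	{
		long psz = sysconf(_SC_PAGESIZE);
		/* Two anonymous mappings with identical contents. */
		char *a = mmap(NULL, psz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		char *b = mmap(NULL, psz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (a == MAP_FAILED || b == MAP_FAILED)
			return 1;
		memset(a, 0x5a, psz);
		memset(b, 0x5a, psz);

		/* Ask KSM to consider both ranges for merging. */
		if (madvise(a, psz, MADV_MERGEABLE) ||
		    madvise(b, psz, MADV_MERGEABLE))
			return 1;

		sleep(5);	/* crude: give ksmd time to merge */

		/* Drop one mapping, leaving the (possibly merged)
		 * KSM page mapped by a single pte. */
		munmap(b, psz);

		/* Write fault on a KSM page mapped by only one pte
		 * and not in the swap cache: with this patch it is
		 * reused in place instead of being copied. */
		a[0] = 0;
		return 0;
	}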

[Note that we do not consider PageSwapCache() pages, at least for now,
 since we don't want to complicate get_ksm_page(), which has a nice
 optimization based on this (for the migration case).  Currently it
 spins on PageSwapCache() pages, waiting for their counters to become
 unfrozen (i.e., for the migration to finish).  But we don't want it to
 also spin on swap cache pages that we are trying to reuse, since the
 probability of reusing them is not very high.  So, for now, we leave
 PageSwapCache() pages out of consideration.]
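
For reference, the wait loop in get_ksm_page() that the note above
refers to looks roughly like this (a simplified sketch of the current
mm/ksm.c, not code added by this patch):

	while (!get_page_unless_zero(page)) {
		/*
		 * A zero refcount normally means the stable node is
		 * stale, but a swapcache page can have its refcount
		 * frozen transiently during migration, so keep
		 * spinning until the count is unfrozen (i.e. until
		 * the migration finishes).
		 */
		if (!PageSwapCache(page))
			goto stale;
		cpu_relax();
	}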

So, in reuse_ksm_page() we check 1) PageSwapCache() and 2)
page_stable_node(), to skip a page that KSM is currently trying to
link into the stable tree.  Then we do page_ref_freeze() to prevent
KSM from merging one more page into the page we are reusing.  After
that, nobody can take a new reference to the page being reused: KSM
skips !PageSwapCache() pages with a zero refcount; and the protection
against all other participants is the same as for reuse of ordinary
anon pages: pte lock, page lock and mmap_sem.
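
To illustrate the exclusion (an annotated sketch, not part of the
patch): page_ref_freeze(page, 1) atomically replaces a refcount of
exactly 1 with 0, so it succeeds only while the faulting task holds
the sole reference, and a concurrent get_ksm_page() then backs off:

	reuse_ksm_page()                get_ksm_page()
	----------------                --------------
	page_ref_freeze(page, 1)
	  /* refcount 1 -> 0 */
	                                get_page_unless_zero(page)
	                                  /* fails: refcount is 0 */
	                                !PageSwapCache(page), so the
	                                  stable node is treated as
	                                  stale; no new reference taken
	page_move_anon_rmap(page, vma)
	page->index = linear_page_index(vma, address)
	page_ref_unfreeze(page, 1)
	  /* refcount 0 -> 1 */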

Link: http://lkml.kernel.org/r/154471491016.31352.1168978849911555609.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Reviewed-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Christian Koenig <christian.koenig@xxxxxxx>
Cc: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---


--- a/include/linux/ksm.h~mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page
+++ a/include/linux/ksm.h
@@ -53,6 +53,8 @@ struct page *ksm_might_need_to_copy(stru
 
 void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
 void ksm_migrate_page(struct page *newpage, struct page *oldpage);
+bool reuse_ksm_page(struct page *page,
+			struct vm_area_struct *vma, unsigned long address);
 
 #else  /* !CONFIG_KSM */
 
@@ -86,6 +88,11 @@ static inline void rmap_walk_ksm(struct
 static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage)
 {
 }
+static inline bool reuse_ksm_page(struct page *page,
+			struct vm_area_struct *vma, unsigned long address)
+{
+	return false;
+}
 #endif /* CONFIG_MMU */
 #endif /* !CONFIG_KSM */
 
--- a/mm/ksm.c~mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page
+++ a/mm/ksm.c
@@ -706,8 +706,9 @@ again:
 	 * case this node is no longer referenced, and should be freed;
 	 * however, it might mean that the page is under page_ref_freeze().
 	 * The __remove_mapping() case is easy, again the node is now stale;
-	 * but if page is swapcache in migrate_page_move_mapping(), it might
-	 * still be our page, in which case it's essential to keep the node.
+	 * the same is in reuse_ksm_page() case; but if page is swapcache
+	 * in migrate_page_move_mapping(), it might still be our page,
+	 * in which case it's essential to keep the node.
 	 */
 	while (!get_page_unless_zero(page)) {
 		/*
@@ -2644,6 +2645,26 @@ again:
 		goto again;
 }
 
+bool reuse_ksm_page(struct page *page,
+		    struct vm_area_struct *vma,
+		    unsigned long address)
+{
+	VM_BUG_ON_PAGE(is_zero_pfn(page_to_pfn(page)), page);
+	VM_BUG_ON_PAGE(!page_mapped(page), page);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+
+	if (PageSwapCache(page) || !page_stable_node(page))
+		return false;
+	/* Prohibit parallel get_ksm_page() */
+	if (!page_ref_freeze(page, 1))
+		return false;
+
+	page_move_anon_rmap(page, vma);
+	page->index = linear_page_index(vma, address);
+	page_ref_unfreeze(page, 1);
+
+	return true;
+}
 #ifdef CONFIG_MIGRATION
 void ksm_migrate_page(struct page *newpage, struct page *oldpage)
 {
--- a/mm/memory.c~mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page
+++ a/mm/memory.c
@@ -2510,8 +2510,11 @@ static vm_fault_t do_wp_page(struct vm_f
 	 * Take out anonymous pages first, anonymous shared vmas are
 	 * not dirty accountable.
 	 */
-	if (PageAnon(vmf->page) && !PageKsm(vmf->page)) {
+	if (PageAnon(vmf->page)) {
 		int total_map_swapcount;
+		if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
+					   page_count(vmf->page) != 1))
+			goto copy;
 		if (!trylock_page(vmf->page)) {
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2526,6 +2529,15 @@ static vm_fault_t do_wp_page(struct vm_f
 			}
 			put_page(vmf->page);
 		}
+		if (PageKsm(vmf->page)) {
+			bool reused = reuse_ksm_page(vmf->page, vmf->vma,
+						     vmf->address);
+			unlock_page(vmf->page);
+			if (!reused)
+				goto copy;
+			wp_page_reuse(vmf);
+			return VM_FAULT_WRITE;
+		}
 		if (reuse_swap_page(vmf->page, &total_map_swapcount)) {
 			if (total_map_swapcount == 1) {
 				/*
@@ -2546,7 +2558,7 @@ static vm_fault_t do_wp_page(struct vm_f
 					(VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
 	}
-
+copy:
 	/*
 	 * Ok, we need to copy. Oh, well..
 	 */
_

Patches currently in -mm which might be from ktkhai@xxxxxxxxxxxxx are

scripts-tags-add-more-declarations.patch
mm-remove-useless-check-in-pagecache_get_page.patch
ksm-react-on-changing-sleep_millisecs-parameter-faster.patch
mm-remove-__hugepage_set_anon_rmap.patch
mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page.patch



