[withdrawn] ksm-assist-buddy-allocator-to-assemble-1-order-pages.patch removed from -mm tree

The patch titled
     Subject: mm/ksm.c: assist buddy allocator to assemble 1-order pages
has been removed from the -mm tree.  Its filename was
     ksm-assist-buddy-allocator-to-assemble-1-order-pages.patch

This patch was dropped because it was withdrawn

------------------------------------------------------
From: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Subject: mm/ksm.c: assist buddy allocator to assemble 1-order pages

try_to_merge_two_pages() merges two pages: one is a page of the currently
scanned mm, the other is a page with an identical hash from the unstable
tree.  Currently, we merge the page from the unstable tree into the first
one, and then free it.

The idea of this patch is to prefer freeing whichever of the two pages has
a free neighbour (i.e., a neighbour with zero page_count()).  This allows
the buddy allocator to assemble at least a 1-order set from the freed page
and its neighbour; this is a kind of cheap passive compaction.

AFAIK, a 1-order page set consists of pages with PFNs [2n, 2n+1] (even,
odd), so the neighbour's PFN is calculated via XOR with 1.  We check that
the resulting PFN is valid and look at its page_count(), and prefer
merging into @tree_page if the neighbour's usage count is zero.

There is a small difference from the current behaviour in the error path.
If the second try_to_merge_with_ksm_page() fails, we return from
try_to_merge_two_pages() with @tree_page removed from the unstable tree.
It does not seem to matter, but if we do not want any change at all, it is
not a problem to move remove_rmap_item_from_tree() from
try_to_merge_with_ksm_page() to its callers.

Link: http://lkml.kernel.org/r/153995241537.4096.15189862239521235797.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Andy Shevchenko <andriy.shevchenko@xxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx>
Cc: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Nick Desaulniers <ndesaulniers@xxxxxxxxxx>
Cc: Dave Jiang <dave.jiang@xxxxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: Jia He <jia.he@xxxxxxxxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Cc: Colin Ian King <colin.king@xxxxxxxxxxxxx>
Cc: Jiang Biao <jiang.biao2@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/ksm.c |   17 +++++++++++++++++
 1 file changed, 17 insertions(+)

--- a/mm/ksm.c~ksm-assist-buddy-allocator-to-assemble-1-order-pages
+++ a/mm/ksm.c
@@ -1335,6 +1335,23 @@ static struct page *try_to_merge_two_pag
 {
 	int err;
 
+	if (IS_ENABLED(CONFIG_COMPACTION)) {
+		unsigned long pfn;
+
+		/*
+		 * Find the neighbour of @page forming a 1-order pair in the
+		 * buddy allocator and check whether its count is 0. If so,
+		 * we consider the neighbour a free page (this is more
+		 * probable than it being frozen via page_ref_freeze()), and
+		 * we try to use @tree_page as the ksm page and free @page.
+		 */
+		pfn = page_to_pfn(page) ^ 1;
+		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
+			swap(rmap_item, tree_rmap_item);
+			swap(page, tree_page);
+		}
+	}
+
 	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
 	if (!err) {
 		err = try_to_merge_with_ksm_page(tree_rmap_item,
_

Patches currently in -mm which might be from ktkhai@xxxxxxxxxxxxx are

mm-remove-useless-check-in-pagecache_get_page.patch



