- page-migration-cleanup-extract-try_to_unmap-from-migration-functions.patch removed from -mm tree

The patch titled

     page migration cleanup: extract try_to_unmap from migration functions

has been removed from the -mm tree.  Its filename is

     page-migration-cleanup-extract-try_to_unmap-from-migration-functions.patch

This patch was dropped because it was merged into mainline or a subsystem tree.

------------------------------------------------------
Subject: page migration cleanup: extract try_to_unmap from migration functions
From: Christoph Lameter <clameter@xxxxxxx>


Extract try_to_unmap and rename remove_references -> move_mapping

try_to_unmap() may significantly change the page state, for example by
setting the dirty bit.  It is therefore best to unmap in migrate_pages()
before calling any migration functions.
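
As an illustration of the intended ordering, here is a hand-written
sketch (not code from this patch; the helper name is made up, and page
locking, writeback checks and the retry loop are omitted):

	#include <linux/mm.h>	/* page_mapped() */
	#include <linux/rmap.h>	/* try_to_unmap(), SWAP_FAIL */
	#include <linux/swap.h>	/* migrate_page() */

	/* Hypothetical helper: unmap first, then migrate */
	static int migrate_one_page(struct page *newpage, struct page *page)
	{
		/*
		 * Unmap before migrating so that any page state changed by
		 * try_to_unmap(), such as the dirty bit transferred from
		 * the ptes, is final when the migration function runs.
		 */
		if (try_to_unmap(page, 1) == SWAP_FAIL)
			return -EPERM;	/* a vma has VM_LOCKED set */

		if (page_mapped(page))
			return -EAGAIN;	/* could not remove all mappings */

		/*
		 * The migration function now only has to move the page
		 * contents and the mapping entry.
		 */
		return migrate_page(newpage, page);
	}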

migrate_page_remove_references() will then only replace the old page
with the new page in the mapping.  Rename the function to
migrate_page_move_mapping().
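
For reference, a minimal sketch of what the renamed function is left
doing (the reference count checks and the setup of the new page are
elided; the radix tree arguments are assumed from kernels of this era,
not quoted from the patch below):

	#include <linux/fs.h>		/* struct address_space */
	#include <linux/mm.h>		/* page_mapping(), get_page() */
	#include <linux/pagemap.h>	/* page_index() */
	#include <linux/radix-tree.h>	/* radix_tree_lookup_slot() */

	/* Hypothetical simplified form of migrate_page_move_mapping() */
	static int move_mapping_sketch(struct page *newpage, struct page *page)
	{
		struct address_space *mapping = page_mapping(page);
		struct page **radix_pointer;

		if (!mapping)
			return -EAGAIN;

		write_lock_irq(&mapping->tree_lock);

		radix_pointer = (struct page **)radix_tree_lookup_slot(
					&mapping->page_tree, page_index(page));

		/*
		 * A real implementation verifies the remaining reference
		 * count here (1 for anon, 2 with a mapping, 3 with a
		 * mapping and PagePrivate) before committing.
		 */
		get_page(newpage);
		*radix_pointer = newpage;	/* newpage replaces page */

		write_unlock_irq(&mapping->tree_lock);
		return 0;
	}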

This allows us to get rid of the special unmapping for the fallback path.

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 mm/migrate.c |   76 +++++++++++++++++++------------------------------
 1 file changed, 31 insertions(+), 45 deletions(-)

diff -puN mm/migrate.c~page-migration-cleanup-extract-try_to_unmap-from-migration-functions mm/migrate.c
--- a/mm/migrate.c~page-migration-cleanup-extract-try_to_unmap-from-migration-functions
+++ a/mm/migrate.c
@@ -166,15 +166,14 @@ retry:
 }
 
 /*
- * Remove references for a page and establish the new page with the correct
- * basic settings to be able to stop accesses to the page.
+ * Replace the page in the mapping.
  *
  * The number of remaining references must be:
  * 1 for anonymous pages without a mapping
  * 2 for pages with a mapping
  * 3 for pages with a mapping and PagePrivate set.
  */
-static int migrate_page_remove_references(struct page *newpage,
+static int migrate_page_move_mapping(struct page *newpage,
 				struct page *page)
 {
 	struct address_space *mapping = page_mapping(page);
@@ -183,35 +182,6 @@ static int migrate_page_remove_reference
 	if (!mapping)
 		return -EAGAIN;
 
-	/*
-	 * Establish swap ptes for anonymous pages or destroy pte
-	 * maps for files.
-	 *
-	 * In order to reestablish file backed mappings the fault handlers
-	 * will take the radix tree_lock which may then be used to stop
-  	 * processses from accessing this page until the new page is ready.
-	 *
-	 * A process accessing via a swap pte (an anonymous page) will take a
-	 * page_lock on the old page which will block the process until the
-	 * migration attempt is complete. At that time the PageSwapCache bit
-	 * will be examined. If the page was migrated then the PageSwapCache
-	 * bit will be clear and the operation to retrieve the page will be
-	 * retried which will find the new page in the radix tree. Then a new
-	 * direct mapping may be generated based on the radix tree contents.
-	 *
-	 * If the page was not migrated then the PageSwapCache bit
-	 * is still set and the operation may continue.
-	 */
-	if (try_to_unmap(page, 1) == SWAP_FAIL)
-		/* A vma has VM_LOCKED set -> permanent failure */
-		return -EPERM;
-
-	/*
-	 * Give up if we were unable to remove all mappings.
-	 */
-	if (page_mapcount(page))
-		return -EAGAIN;
-
 	write_lock_irq(&mapping->tree_lock);
 
 	radix_pointer = (struct page **)radix_tree_lookup_slot(
@@ -310,7 +280,7 @@ int migrate_page(struct page *newpage, s
 
 	BUG_ON(PageWriteback(page));	/* Writeback must be complete */
 
-	rc = migrate_page_remove_references(newpage, page);
+	rc = migrate_page_move_mapping(newpage, page);
 
 	if (rc)
 		return rc;
@@ -349,7 +319,7 @@ int buffer_migrate_page(struct page *new
 
 	head = page_buffers(page);
 
-	rc = migrate_page_remove_references(newpage, page);
+	rc = migrate_page_move_mapping(newpage, page);
 
 	if (rc)
 		return rc;
@@ -482,6 +452,33 @@ redo:
 		lock_page(newpage);
 
 		/*
+		 * Establish swap ptes for anonymous pages or destroy pte
+		 * maps for files.
+		 *
+		 * In order to reestablish file backed mappings the fault handlers
+		 * will take the radix tree_lock which may then be used to stop
+		 * processes from accessing this page until the new page is ready.
+		 *
+		 * A process accessing via a swap pte (an anonymous page) will take a
+		 * page_lock on the old page which will block the process until the
+		 * migration attempt is complete. At that time the PageSwapCache bit
+		 * will be examined. If the page was migrated then the PageSwapCache
+		 * bit will be clear and the operation to retrieve the page will be
+		 * retried which will find the new page in the radix tree. Then a new
+		 * direct mapping may be generated based on the radix tree contents.
+		 *
+		 * If the page was not migrated then the PageSwapCache bit
+		 * is still set and the operation may continue.
+		 */
+		rc = -EPERM;
+		if (try_to_unmap(page, 1) == SWAP_FAIL)
+			/* A vma has VM_LOCKED set -> permanent failure */
+			goto unlock_both;
+
+		rc = -EAGAIN;
+		if (page_mapped(page))
+			goto unlock_both;
+		/*
 		 * Pages are properly locked and writeback is complete.
 		 * Try to migrate the page.
 		 */
@@ -501,17 +498,6 @@ redo:
 			goto unlock_both;
                 }
 
-		/* Make sure the dirty bit is up to date */
-		if (try_to_unmap(page, 1) == SWAP_FAIL) {
-			rc = -EPERM;
-			goto unlock_both;
-		}
-
-		if (page_mapcount(page)) {
-			rc = -EAGAIN;
-			goto unlock_both;
-		}
-
 		/*
 		 * Default handling if a filesystem does not provide
 		 * a migration function. We can only migrate clean
_

Patches currently in -mm which might be from clameter@xxxxxxx are

origin.patch
mm-remove-vm_locked-before-remap_pfn_range-and-drop-vm_shm.patch
page-migration-support-a-vma-migration-function.patch
allow-migration-of-mlocked-pages.patch
zoned-vm-counters-create-vmstatc-h-from-page_allocc-h.patch
zoned-vm-counters-basic-zvc-zoned-vm-counter-implementation.patch
zoned-vm-counters-basic-zvc-zoned-vm-counter-implementation-tidy.patch
zoned-vm-counters-convert-nr_mapped-to-per-zone-counter.patch
zoned-vm-counters-conversion-of-nr_pagecache-to-per-zone-counter.patch
zoned-vm-counters-remove-nr_file_mapped-from-scan-control-structure.patch
zoned-vm-counters-remove-nr_file_mapped-from-scan-control-structure-fix.patch
zoned-vm-counters-split-nr_anon_pages-off-from-nr_file_mapped.patch
zoned-vm-counters-zone_reclaim-remove-proc-sys-vm-zone_reclaim_interval.patch
zoned-vm-counters-conversion-of-nr_slab-to-per-zone-counter.patch
zoned-vm-counters-conversion-of-nr_pagetables-to-per-zone-counter.patch
zoned-vm-counters-conversion-of-nr_dirty-to-per-zone-counter.patch
zoned-vm-counters-conversion-of-nr_writeback-to-per-zone-counter.patch
zoned-vm-counters-conversion-of-nr_unstable-to-per-zone-counter.patch
zoned-vm-counters-conversion-of-nr_bounce-to-per-zone-counter.patch
zoned-vm-counters-remove-useless-struct-wbs.patch
cpuset-remove-extra-cpuset_zone_allowed-check-in-__alloc_pages.patch
corrections-to-memory-barrier-doc.patch

