- swapless-v2-try_to_unmap-rename-ignrefs-to-migration.patch removed from -mm tree

The patch titled

     Swapless V2: try_to_unmap() - Rename ignrefs to "migration"

has been removed from the -mm tree.  Its filename is

     swapless-v2-try_to_unmap-rename-ignrefs-to-migration.patch

This patch was probably dropped from -mm because
it has now been merged into a subsystem tree or
into Linus's tree, or because it was folded into
its parent patch in the -mm tree.


From: Christoph Lameter <clameter@xxxxxxx>

Currently page migration depends on the ability to assign swap entries to
pages.  However, those entries are only used to identify anonymous pages.
Page migration therefore does not work without swap, although swap space is
never really used.

This patchset removes that dependency by introducing a special type of swap
entry that encodes the pfn of the page being migrated.  If such a swap pte
(a migration entry) is encountered then do_swap_page() will redo the fault
until the migration entry has been removed.

Migration entries have a very short lifetime and exist only while the page is
locked.  Only a few supporting functions are needed.
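
For illustration only (not part of this patch): a minimal sketch of how such
an entry could be encoded, assuming one swap "type" is reserved for migration
and using the generic swp_entry()/swp_type()/swp_offset() helpers.  The names
SWP_MIGRATION_TYPE, make_migration_entry(), is_migration_entry() and
migration_entry_to_page() are placeholders for the helpers that
swapless-v2-add-migration-swap-entries.patch introduces later in this series.

/*
 * Sketch only: reserve one swap "type" for migration and store the pfn
 * of the page being migrated in the swap offset bits.
 */
#define SWP_MIGRATION_TYPE	MAX_SWAPFILES	/* assumed reserved slot */

static inline swp_entry_t make_migration_entry(struct page *page)
{
	return swp_entry(SWP_MIGRATION_TYPE, page_to_pfn(page));
}

static inline int is_migration_entry(swp_entry_t entry)
{
	return swp_type(entry) == SWP_MIGRATION_TYPE;
}

static inline struct page *migration_entry_to_page(swp_entry_t entry)
{
	return pfn_to_page(swp_offset(entry));
}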

To some extent this covers the same ground as Marcelo's migration cache.
However, I hope that this approach is simpler and less intrusive.

The migration functions will still be able to use swap entries if a page is
already in the swap cache.  But the migration functions will no longer assign
swap entries to pages or remove them.  Maybe lazy migration can then manage
its own swap cache or migration cache if needed?

Efficiency of migration is increased by:

1. Avoiding useless retries
   Migration entries keep do_swap_page() from raising the page count while a
   page is being migrated.  With the existing approach, the page count can be
   increased between the unmapping of a page's ptes and the page count check
   done by page migration, forcing a retry even though all accesses to the
   page have been stopped.  (A sketch of the do_swap_page() handling follows
   this list.)

2. Swap entries no longer have to be assigned to and removed from pages.

3. No swap space has to be set up for page migration.  Page migration
   will never use swap.
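
A rough sketch of the do_swap_page() side described in point 1, under the
assumption that the fragment sits where do_swap_page() decodes the swap entry
from the faulting pte; the actual change is made by
page-migration-make-do_swap_page-redo-the-fault.patch, and
migration_entry_wait() is an assumed helper here.

	entry = pte_to_swp_entry(orig_pte);
	if (is_migration_entry(entry)) {
		/*
		 * Do not take a reference on the page.  Wait until the
		 * migration entry has been replaced by a real pte and
		 * return, so that the retried fault finds the new pte.
		 */
		migration_entry_wait(mm, pmd, address);	/* assumed helper */
		return VM_FAULT_MINOR;			/* redo the fault */
	}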

The patchset will allow later patches to enable migration of VM_LOCKED vmas,
to exempt vmas from page migration, and to implement another userland
migration API for handling batches of pages.

This patchset was first discussed here:

http://marc.theaimsgroup.com/?l=linux-mm&m=114413402522102&w=2



This patch:

try_to_unmap: Rename ignore_refs to migration

"migration" is a better name since special handling for page migration will
be implemented later.

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 mm/rmap.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff -puN mm/rmap.c~swapless-v2-try_to_unmap-rename-ignrefs-to-migration mm/rmap.c
--- devel/mm/rmap.c~swapless-v2-try_to_unmap-rename-ignrefs-to-migration	2006-04-13 17:09:50.000000000 -0700
+++ devel-akpm/mm/rmap.c	2006-04-13 17:10:01.000000000 -0700
@@ -578,7 +578,7 @@ void page_remove_rmap(struct page *page)
  * repeatedly from either try_to_unmap_anon or try_to_unmap_file.
  */
 static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
-				int ignore_refs)
+				int migration)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -602,7 +602,7 @@ static int try_to_unmap_one(struct page 
 	 */
 	if ((vma->vm_flags & VM_LOCKED) ||
 			(ptep_clear_flush_young(vma, address, pte)
-				&& !ignore_refs)) {
+				&& !migration)) {
 		ret = SWAP_FAIL;
 		goto out_unmap;
 	}
@@ -736,7 +736,7 @@ static void try_to_unmap_cluster(unsigne
 	pte_unmap_unlock(pte - 1, ptl);
 }
 
-static int try_to_unmap_anon(struct page *page, int ignore_refs)
+static int try_to_unmap_anon(struct page *page, int migration)
 {
 	struct anon_vma *anon_vma;
 	struct vm_area_struct *vma;
@@ -747,7 +747,7 @@ static int try_to_unmap_anon(struct page
 		return ret;
 
 	list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
-		ret = try_to_unmap_one(page, vma, ignore_refs);
+		ret = try_to_unmap_one(page, vma, migration);
 		if (ret == SWAP_FAIL || !page_mapped(page))
 			break;
 	}
@@ -764,7 +764,7 @@ static int try_to_unmap_anon(struct page
  *
  * This function is only called from try_to_unmap for object-based pages.
  */
-static int try_to_unmap_file(struct page *page, int ignore_refs)
+static int try_to_unmap_file(struct page *page, int migration)
 {
 	struct address_space *mapping = page->mapping;
 	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
@@ -778,7 +778,7 @@ static int try_to_unmap_file(struct page
 
 	spin_lock(&mapping->i_mmap_lock);
 	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
-		ret = try_to_unmap_one(page, vma, ignore_refs);
+		ret = try_to_unmap_one(page, vma, migration);
 		if (ret == SWAP_FAIL || !page_mapped(page))
 			goto out;
 	}
@@ -863,16 +863,16 @@ out:
  * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
  */
-int try_to_unmap(struct page *page, int ignore_refs)
+int try_to_unmap(struct page *page, int migration)
 {
 	int ret;
 
 	BUG_ON(!PageLocked(page));
 
 	if (PageAnon(page))
-		ret = try_to_unmap_anon(page, ignore_refs);
+		ret = try_to_unmap_anon(page, migration);
 	else
-		ret = try_to_unmap_file(page, ignore_refs);
+		ret = try_to_unmap_file(page, migration);
 
 	if (!page_mapped(page))
 		ret = SWAP_SUCCESS;
_

Patches currently in -mm which might be from clameter@xxxxxxx are

origin.patch
page-migration-make-do_swap_page-redo-the-fault.patch
slab-extract-cache_free_alien-from-__cache_free.patch
migration-remove-unnecessary-pageswapcache-checks.patch
swapless-v2-try_to_unmap-rename-ignrefs-to-migration.patch
swapless-v2-add-migration-swap-entries.patch
swapless-v2-make-try_to_unmap-create-migration-entries.patch
swapless-v2-rip-out-swap-portion-of-old-migration-code.patch
swapless-v2-revise-main-migration-logic.patch
wait-for-migrating-page-after-incr-of-page-count-under-anon_vma-lock.patch
preserve-write-permissions-in-migration-entries.patch
migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock.patch
read-write-migration-entries-implement-correct-behavior-in-copy_one_pte.patch
read-write-migration-entries-make-mprotect-convert-write-migration.patch
read-write-migration-entries-make-mprotect-convert-write-migration-fix.patch

