- migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock.patch removed from -mm tree

The patch titled

     migration_entry_wait: Use the pte lock instead of the anon_vma lock.

has been removed from the -mm tree.  Its filename is

     migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock.patch

This patch was probably dropped from -mm because
it has now been merged into a subsystem tree or
into Linus's tree, or because it was folded into
its parent patch in the -mm tree.


From: Christoph Lameter <clameter@xxxxxxx>

Use of the pte lock allows much finer-grained locking and avoids the
complexity that comes with locking via the anon_vma.  It also makes
fetching of the pte value cleaner.  A couple of other improvements are
included as well.
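
In outline, the pte-lock pattern this patch adopts looks as follows.  This
is only an illustrative sketch of the idiom (kernel context, not a
standalone-runnable fragment); the actual change is in the mm/migrate.c
hunk of the patch itself:

```c
/*
 * Sketch of the pte-lock idiom: pte_offset_map_lock() maps the pte
 * and takes the page-table spinlock, serializing against concurrent
 * removal of the migration entry.  The page is pinned before the
 * lock is dropped, and the sleep happens outside the lock.
 */
static void wait_pattern(struct mm_struct *mm, pmd_t *pmd,
			 unsigned long address)
{
	spinlock_t *ptl;
	pte_t *ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
	pte_t pte = *ptep;

	if (is_swap_pte(pte) && is_migration_entry(pte_to_swp_entry(pte))) {
		struct page *page =
			migration_entry_to_page(pte_to_swp_entry(pte));

		get_page(page);			/* pin before unlocking */
		pte_unmap_unlock(ptep, ptl);
		wait_on_page_locked(page);	/* sleep without the ptl */
		put_page(page);
		return;
	}
	pte_unmap_unlock(ptep, ptl);
}
```

Because the per-page-table lock covers only the pte being examined, other
faults in the same mm proceed concurrently, which is the finer granularity
the changelog refers to.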

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 include/linux/swapops.h |    6 ++--
 mm/memory.c             |    2 -
 mm/migrate.c            |   53 ++++++++++++++------------------------
 3 files changed, 25 insertions(+), 36 deletions(-)

diff -puN include/linux/swapops.h~migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock include/linux/swapops.h
--- devel/include/linux/swapops.h~migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock	2006-04-21 04:25:53.000000000 -0700
+++ devel-akpm/include/linux/swapops.h	2006-04-21 04:26:19.000000000 -0700
@@ -98,13 +98,15 @@ static inline struct page *migration_ent
 	return p;
 }
 
-extern void migration_entry_wait(swp_entry_t, pte_t *);
+extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+					unsigned long address);
 #else
 
 #define make_migration_entry(page, write) swp_entry(0, 0)
 #define is_migration_entry(swp) 0
 #define migration_entry_to_page(swp) NULL
-static inline void migration_entry_wait(swp_entry_t entry, pte_t *ptep) { }
+static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+					 unsigned long address) { }
 
 #endif
 
diff -puN mm/memory.c~migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock mm/memory.c
--- devel/mm/memory.c~migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock	2006-04-21 04:25:53.000000000 -0700
+++ devel-akpm/mm/memory.c	2006-04-21 04:25:53.000000000 -0700
@@ -1881,7 +1881,7 @@ static int do_swap_page(struct mm_struct
 	entry = pte_to_swp_entry(orig_pte);
 
 	if (is_migration_entry(entry)) {
-		migration_entry_wait(entry, page_table);
+		migration_entry_wait(mm, pmd, address);
 		goto out;
 	}
 
diff -puN mm/migrate.c~migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock mm/migrate.c
--- devel/mm/migrate.c~migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock	2006-04-21 04:25:53.000000000 -0700
+++ devel-akpm/mm/migrate.c	2006-04-21 04:25:53.000000000 -0700
@@ -183,48 +183,35 @@ out:
  *
  * This function is called from do_swap_page().
  */
-void migration_entry_wait(swp_entry_t entry, pte_t *ptep)
+void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long address)
 {
-	struct page *page = migration_entry_to_page(entry);
-	unsigned long mapping = (unsigned long)page->mapping;
-	struct anon_vma *anon_vma;
-	pte_t pte;
-
-	if (!mapping ||
-		(mapping & PAGE_MAPPING_ANON) == 0)
-			return;
-	/*
-	 * We hold the mmap_sem lock.
-	 */
-	anon_vma = (struct anon_vma *) (mapping - PAGE_MAPPING_ANON);
+	pte_t *ptep, pte;
+	spinlock_t *ptl;
+	swp_entry_t entry;
+	struct page *page;
 
-	/*
-	 * The anon_vma lock is also taken while removing the migration
-	 * entries. Take the lock here to insure that the migration pte
-	 * is not modified while we increment the page count.
-	 * This is similar to find_get_page().
-	 */
-	spin_lock(&anon_vma->lock);
+	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	pte = *ptep;
-	if (pte_present(pte) || pte_none(pte) || pte_file(pte)) {
-		spin_unlock(&anon_vma->lock);
-		return;
-	}
+	if (!is_swap_pte(pte))
+		goto out;
+
 	entry = pte_to_swp_entry(pte);
-	if (!is_migration_entry(entry) ||
-		migration_entry_to_page(entry) != page) {
-			/* Migration entry is gone */
-			spin_unlock(&anon_vma->lock);
-			return;
-	}
-	/* Pages with migration entries must be locked */
+	if (!is_migration_entry(entry))
+		goto out;
+
+	page = migration_entry_to_page(entry);
+
+	/* Pages with migration entries are always locked */
 	BUG_ON(!PageLocked(page));
 
-	/* Phew. Finally we can increment the refcount */
 	get_page(page);
-	spin_unlock(&anon_vma->lock);
+	pte_unmap_unlock(ptep, ptl);
 	wait_on_page_locked(page);
 	put_page(page);
+	return;
+out:
+	pte_unmap_unlock(ptep, ptl);
 }
 
 /*
_

Patches currently in -mm which might be from clameter@xxxxxxx are

origin.patch
page-migration-make-do_swap_page-redo-the-fault.patch
slab-extract-cache_free_alien-from-__cache_free.patch
migration-remove-unnecessary-pageswapcache-checks.patch
migration_entry_wait-use-the-pte-lock-instead-of-the-anon_vma-lock.patch
read-write-migration-entries-implement-correct-behavior-in-copy_one_pte.patch
read-write-migration-entries-make-mprotect-convert-write-migration.patch
read-write-migration-entries-make-mprotect-convert-write-migration-fix.patch

