[PATCH 1/2] mm: Allow single pagefault on mmap-write with VM_MIXEDMAP

Until now, vma->vm_page_prot defined how a page/pfn is inserted into
the page table (see vma_wants_writenotify() in mm/mmap.c).

This meant the pfn was always inserted read-only, under the assumption
that we want to be notified when a write access occurs. That is not
always true, and it adds an unnecessary page fault on every first
mmap-write.

This patch adds a more granular approach and lets the fault handler
decide how it wants to map the mixedmap pfn.

The old vm_insert_mixed() now receives a new pgprot_t prot argument
and is renamed to vm_insert_mixed_prot().
A new inline vm_insert_mixed() is defined as a wrapper around
vm_insert_mixed_prot(), passing vma->vm_page_prot as before, so all
current callers keep their existing behavior.
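To illustrate the intended use (this is a sketch, not part of the
patch): a driver's ->fault handler could insert the pfn writable
right away on a write fault, so the task never takes the follow-up
write-notify fault. my_fault() and my_lookup_pfn() below are
hypothetical; only vm_insert_mixed_prot() comes from this patch.

#include <linux/mm.h>	/* vm_insert_mixed_prot(), vm_get_page_prot() */

static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	unsigned long addr = (unsigned long)vmf->virtual_address;
	unsigned long pfn = my_lookup_pfn(vma, vmf->pgoff); /* hypothetical */
	pgprot_t prot = vma->vm_page_prot;
	int ret;

	/*
	 * On a write fault, insert the pfn with the full (writable)
	 * protection for this vma's flags instead of the read-only
	 * write-notify protection, avoiding a second fault.
	 */
	if (vmf->flags & FAULT_FLAG_WRITE)
		prot = vm_get_page_prot(vma->vm_flags);

	ret = vm_insert_mixed_prot(vma, addr, pfn, prot);
	if (ret == -ENOMEM)
		return VM_FAULT_OOM;
	/* -EBUSY means a racing fault already installed the pte; fine */
	if (ret < 0 && ret != -EBUSY)
		return VM_FAULT_SIGBUS;
	return VM_FAULT_NOPAGE;
}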

CC: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
CC: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
CC: Oleg Nesterov <oleg@xxxxxxxxxx>
CC: Mel Gorman <mgorman@xxxxxxx>
CC: Johannes Weiner <hannes@xxxxxxxxxxx>
CC: Matthew Wilcox <willy@xxxxxxxxxxxxxxx>
CC: linux-mm@xxxxxxxxx (open list:MEMORY MANAGEMENT)

Reviewed-by: Yigal Korman <yigal@xxxxxxxxxxxxx>
Signed-off-by: Boaz Harrosh <boaz@xxxxxxxxxxxxx>
---
 include/linux/mm.h |  8 +++++++-
 mm/memory.c        | 10 +++++-----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80001de..46a9a19 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2108,8 +2108,14 @@ int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
+int vm_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+			 unsigned long pfn, pgprot_t prot);
+static inline
 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
-			unsigned long pfn);
+		    unsigned long pfn)
+{
+	return vm_insert_mixed_prot(vma, addr, pfn, vma->vm_page_prot);
+}
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
 
 
diff --git a/mm/memory.c b/mm/memory.c
index deb679c..c716913 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1589,8 +1589,8 @@ int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_insert_pfn);
 
-int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
-			unsigned long pfn)
+int vm_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+			 unsigned long pfn, pgprot_t prot)
 {
 	BUG_ON(!(vma->vm_flags & VM_MIXEDMAP));
 
@@ -1608,11 +1608,11 @@ int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		struct page *page;
 
 		page = pfn_to_page(pfn);
-		return insert_page(vma, addr, page, vma->vm_page_prot);
+		return insert_page(vma, addr, page, prot);
 	}
-	return insert_pfn(vma, addr, pfn, vma->vm_page_prot);
+	return insert_pfn(vma, addr, pfn, prot);
 }
-EXPORT_SYMBOL(vm_insert_mixed);
+EXPORT_SYMBOL(vm_insert_mixed_prot);
 
 /*
  * maps a range of physical memory into the requested pages. the old
-- 
1.9.3




