Re: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults

On 08/02/2018 16:00, Matthew Wilcox wrote:
> On Thu, Feb 08, 2018 at 03:35:58PM +0100, Laurent Dufour wrote:
>> I reviewed that part of the code, and I think I could now change the
>> way pte_unmap_same() checks the pte's value. Since we now have all the
>> needed details in the vm_fault structure, I will pass it to
>> pte_unmap_same() and deal with the VMA checks when locking the pte, as
>> is done in the other part of the page fault handler by calling
>> pte_spinlock().
> 
> This does indeed look much better!  Thank you!
> 
>> This means that this patch will be dropped, and pte_unmap_same() will become :
>>
>> static inline int pte_unmap_same(struct vm_fault *vmf, int *same)
>> {
>> 	int ret = 0;
>>
>> 	*same = 1;
>> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>> 	if (sizeof(pte_t) > sizeof(unsigned long)) {
>> 		if (pte_spinlock(vmf)) {
>> 			*same = pte_same(*vmf->pte, vmf->orig_pte);
>> 			spin_unlock(vmf->ptl);
>> 		} else {
>> 			ret = VM_FAULT_RETRY;
>> 		}
>> 	}
>> #endif
>> 	pte_unmap(vmf->pte);
>> 	return ret;
>> }
> 
> I'm not a huge fan of auxiliary return values.  Perhaps we could do this
> instead:
> 
> 	ret = pte_unmap_same(vmf);
> 	if (ret != VM_FAULT_NOTSAME) {
> 		if (page)
> 			put_page(page);
> 		goto out;
> 	}
> 	ret = 0;
> 
> (we have a lot of unused bits in VM_FAULT_, so adding a new one shouldn't
> be a big deal)

I do agree; using an auxiliary return value is not a good idea.

What about the following changes, based on your suggestion?

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7de4323b9e89..0cd31a37bb3d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1212,6 +1212,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
 #define VM_FAULT_NEEDDSYNC  0x2000     /* ->fault did not modify page tables
                                         * and needs fsync() to complete (for
                                         * synchronous page faults in DAX) */
+#define VM_FAULT_PTNOTSAME 0x4000      /* Page table entries have changed */
 
 #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
                         VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
diff --git a/mm/memory.c b/mm/memory.c
index b7da99c74fef..c9b419f8e4c5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2433,21 +2433,30 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
  * parts, do_swap_page must check under lock before unmapping the pte and
  * proceeding (but do_wp_page is only called after already making such a check;
  * and do_anonymous_page can safely check later on).
+ *
+ * pte_unmap_same() returns:
+ *     0                       if the PTEs are the same
+ *     VM_FAULT_PTNOTSAME      if the PTEs are different
+ *     VM_FAULT_RETRY          if the VMA has changed behind our back
+ *                             during speculative page fault handling.
  */
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-                               pte_t *page_table, pte_t orig_pte)
+static inline int pte_unmap_same(struct vm_fault *vmf)
 {
-       int same = 1;
+       int ret = 0;
+
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
        if (sizeof(pte_t) > sizeof(unsigned long)) {
-               spinlock_t *ptl = pte_lockptr(mm, pmd);
-               spin_lock(ptl);
-               same = pte_same(*page_table, orig_pte);
-               spin_unlock(ptl);
+               if (pte_spinlock(vmf)) {
+                       if (!pte_same(*vmf->pte, vmf->orig_pte))
+                               ret = VM_FAULT_PTNOTSAME;
+                       spin_unlock(vmf->ptl);
+               } else {
+                       ret = VM_FAULT_RETRY;
+               }
        }
 #endif
-       pte_unmap(page_table);
-       return same;
+       pte_unmap(vmf->pte);
+       return ret;
 }
 
 static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
@@ -3037,7 +3046,7 @@ int do_swap_page(struct vm_fault *vmf)
        pte_t pte;
        int locked;
        int exclusive = 0;
-       int ret = 0;
+       int ret;
        bool vma_readahead = swap_use_vma_readahead();
 
        if (vma_readahead) {
@@ -3045,9 +3054,16 @@ int do_swap_page(struct vm_fault *vmf)
                swapcache = page;
        }
 
-       if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
+       ret = pte_unmap_same(vmf);
+       if (ret) {
                if (page)
                        put_page(page);
+               /*
+                * If the PTEs are different, the page has already been
+                * processed by another CPU, so return 0.
+                */
+               if (ret == VM_FAULT_PTNOTSAME)
+                       ret = 0;
                goto out;
        }
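
For reference, pte_spinlock() used above is the helper introduced by the
earlier patches in this series. To make the VM_FAULT_RETRY case clearer,
here is a simplified sketch of what it is expected to do -- assuming the
FAULT_FLAG_SPECULATIVE flag and the vma_has_changed() helper from those
patches; the real implementation differs in details (for instance it also
disables local interrupts around the checks):

	static bool pte_spinlock(struct vm_fault *vmf)
	{
		/* Regular page fault: just take the PTE lock. */
		if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
			vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
			spin_lock(vmf->ptl);
			return true;
		}

		/*
		 * Speculative path: check that the VMA has not changed
		 * behind our back, both before and after taking the PTE
		 * lock. vma_has_changed() compares the VMA sequence count
		 * against the one sampled when the fault started.
		 */
		if (vma_has_changed(vmf))
			return false;

		vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
		spin_lock(vmf->ptl);

		if (vma_has_changed(vmf)) {
			spin_unlock(vmf->ptl);
			return false;
		}

		return true;
	}

So when pte_spinlock() returns false, the speculative handling cannot
safely proceed, and pte_unmap_same() now reports VM_FAULT_RETRY so that
the fault is retried with mmap_sem held.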

Thanks,
Laurent.
