[PATCH 4/5] mm/swap_state: fix potential fault-in race in swap_ra_info()

While the pte lock was released, somebody else might have faulted in this pte.
So we should first check whether it is still a swap pte, to guard against such
a race; otherwise the swp_type extracted from it would be unexpected. This also
saves the CPU cycles that would be spent on pointless readahead in that case.
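
For context, the check added below relies on is_swap_pte()
(include/linux/swapops.h), which treats a pte as a swap entry only when it is
populated but not present, roughly:

	static inline int is_swap_pte(pte_t pte)
	{
		/* Populated (not none) but not mapped to a present page. */
		return !pte_none(pte) && !pte_present(pte);
	}

Without this check, a pte that was concurrently faulted in would be fed to
pte_to_swp_entry(), which would reinterpret the bits of a present pte as an
arch-dependent swap entry and yield a bogus swp_type.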

Fixes: ec560175c0b6 ("mm, swap: VMA based swap readahead")
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
---
 mm/swap_state.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 709c260d644a..3bf0d0c297bc 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -724,10 +724,10 @@ static void swap_ra_info(struct vm_fault *vmf,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long ra_val;
-	swp_entry_t entry;
+	swp_entry_t swap_entry;
 	unsigned long faddr, pfn, fpfn;
 	unsigned long start, end;
-	pte_t *pte, *orig_pte;
+	pte_t *pte, *orig_pte, entry;
 	unsigned int max_win, hits, prev_win, win, left;
 #ifndef CONFIG_64BIT
 	pte_t *tpte;
@@ -742,8 +742,13 @@ static void swap_ra_info(struct vm_fault *vmf,
 
 	faddr = vmf->address;
 	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
-	entry = pte_to_swp_entry(*pte);
-	if ((unlikely(non_swap_entry(entry)))) {
+	entry = *pte;
+	if (unlikely(!is_swap_pte(entry))) {
+		pte_unmap(orig_pte);
+		return;
+	}
+	swap_entry = pte_to_swp_entry(entry);
+	if (unlikely(non_swap_entry(swap_entry))) {
 		pte_unmap(orig_pte);
 		return;
 	}
-- 
2.19.1