The patch titled
     Subject: mm: convert page_mapped_in_vma() to use page_vma_mapped_walk()
has been removed from the -mm tree.  Its filename was
     mm-convert-page_mapped_in_vma-to-use-page_vma_mapped_walk.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Subject: mm: convert page_mapped_in_vma() to use page_vma_mapped_walk()

For consistency, it is worth converting all page_check_address() users to
page_vma_mapped_walk(), so that we can drop the former.

Link: http://lkml.kernel.org/r/20170129173858.45174-11-kirill.shutemov@xxxxxxxxxxxxxxx
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Acked-by: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Srikar Dronamraju <srikar@xxxxxxxxxxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_vma_mapped.c |   30 ++++++++++++++++++++++++++++++
 mm/rmap.c            |   26 --------------------------
 2 files changed, 30 insertions(+), 26 deletions(-)

diff -puN mm/page_vma_mapped.c~mm-convert-page_mapped_in_vma-to-use-page_vma_mapped_walk mm/page_vma_mapped.c
--- a/mm/page_vma_mapped.c~mm-convert-page_mapped_in_vma-to-use-page_vma_mapped_walk
+++ a/mm/page_vma_mapped.c
@@ -186,3 +186,33 @@ next_pte:	do {
 		}
 	}
 }
+
+/**
+ * page_mapped_in_vma - check whether a page is really mapped in a VMA
+ * @page: the page to test
+ * @vma: the VMA to test
+ *
+ * Returns 1 if the page is mapped into the page tables of the VMA, 0
+ * if the page is not mapped into the page tables of this VMA.  Only
+ * valid for normal file or anonymous VMAs.
+ */
+int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.page = page,
+		.vma = vma,
+		.flags = PVMW_SYNC,
+	};
+	unsigned long start, end;
+
+	start = __vma_address(page, vma);
+	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
+
+	if (unlikely(end < vma->vm_start || start >= vma->vm_end))
+		return 0;
+	pvmw.address = max(start, vma->vm_start);
+	if (!page_vma_mapped_walk(&pvmw))
+		return 0;
+	page_vma_mapped_walk_done(&pvmw);
+	return 1;
+}
diff -puN mm/rmap.c~mm-convert-page_mapped_in_vma-to-use-page_vma_mapped_walk mm/rmap.c
--- a/mm/rmap.c~mm-convert-page_mapped_in_vma-to-use-page_vma_mapped_walk
+++ a/mm/rmap.c
@@ -756,32 +756,6 @@ check:
 	return NULL;
 }
 
-/**
- * page_mapped_in_vma - check whether a page is really mapped in a VMA
- * @page: the page to test
- * @vma: the VMA to test
- *
- * Returns 1 if the page is mapped into the page tables of the VMA, 0
- * if the page is not mapped into the page tables of this VMA.  Only
- * valid for normal file or anonymous VMAs.
- */
-int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
-{
-	unsigned long address;
-	pte_t *pte;
-	spinlock_t *ptl;
-
-	address = __vma_address(page, vma);
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
-		return 0;
-	pte = page_check_address(page, vma->vm_mm, address, &ptl, 1);
-	if (!pte)			/* the page is not in this mm */
-		return 0;
-	pte_unmap_unlock(pte, ptl);
-
-	return 1;
-}
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /*
  * Check that @page is mapped at @address into @mm.
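
For readers new to the API: page_mapped_in_vma() above is about the simplest
possible user of page_vma_mapped_walk().  The sketch below shows the general
calling pattern; it is illustrative only and not part of this patch, and the
variables page, vma, address and the done_early condition are placeholders for
whatever a real caller supplies:

	struct page_vma_mapped_walk pvmw = {
		.page = page,		/* page (possibly a THP) to look up */
		.vma = vma,		/* VMA to search for mappings of @page */
		.address = address,	/* address of @page within @vma */
		.flags = PVMW_SYNC,	/* like the old page_check_address() 'sync' argument */
	};

	/*
	 * Each successful call returns with one mapping found: pvmw.pte (or
	 * pvmw.pmd for a PMD-mapped THP) points at the entry and pvmw.ptl is
	 * held.  Once the walk is exhausted it returns false with the lock
	 * already dropped.
	 */
	while (page_vma_mapped_walk(&pvmw)) {
		/* ... inspect or modify the mapping here ... */
		if (done_early) {
			/* breaking out early must release the lock explicitly */
			page_vma_mapped_walk_done(&pvmw);
			break;
		}
	}

page_mapped_in_vma() only cares whether at least one mapping exists, which is
why it calls page_vma_mapped_walk() once and follows a successful return
immediately with page_vma_mapped_walk_done().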
Patches currently in -mm which might be from kirill.shutemov@xxxxxxxxxxxxxxx are