When using remap_pfn_range() from a fault handler, we are exposed to
races between concurrent faults. Rather than hitting a BUG, report the
error back to the caller, like vm_insert_pfn().

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Cyrill Gorcunov <gorcunov@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
---
 mm/memory.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 037b812a9531..6603a9e6a731 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2306,19 +2306,23 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 {
 	pte_t *pte;
 	spinlock_t *ptl;
+	int ret = 0;
 
 	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
 	arch_enter_lazy_mmu_mode();
 	do {
-		BUG_ON(!pte_none(*pte));
+		if (!pte_none(*pte)) {
+			ret = -EBUSY;
+			break;
+		}
 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte - 1, ptl);
-	return 0;
+	return ret;
 }
 
 static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
-- 
2.0.0
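
For illustration, a driver fault handler built on remap_pfn_range() could
then cope with the new error along the following lines. This is only a
hypothetical sketch, not part of the patch: example_fault() and
example_base_pfn() are made-up names, and it assumes the driver's mmap()
handler has already set up the vma and that every racing fault remaps the
same full range, so losing the race means the winner has installed the
PTEs.

	#include <linux/mm.h>

	static int example_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
	{
		unsigned long pfn = example_base_pfn(vma); /* made-up helper */
		int err;

		/*
		 * Populate the whole vma in one go; with this patch a lost
		 * race against a concurrent fault now surfaces as -EBUSY
		 * instead of hitting BUG_ON() in remap_pte_range().
		 */
		err = remap_pfn_range(vma, vma->vm_start, pfn,
				      vma->vm_end - vma->vm_start,
				      vma->vm_page_prot);
		switch (err) {
		case 0:
		case -EBUSY:	/* another thread installed the PTEs first */
			return VM_FAULT_NOPAGE;
		case -ENOMEM:
			return VM_FAULT_OOM;
		default:
			return VM_FAULT_SIGBUS;
		}
	}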