pfn_t_to_page() honors the flags in the pfn_t value to determine if a
pfn is backed by a page. However, vm_insert_mixed() was originally
written to use pfn_valid() to make this determination. To restore the
old/correct behavior, ignore the pfn_t flags in the !pfn_t_devmap()
case and fall back to trusting pfn_valid().

Fixes: 01c8f1c44b83 ("mm, dax, gpu: convert vm_insert_mixed to pfn_t")
Cc: Dave Hansen <dave@xxxxxxxx>
Cc: David Airlie <airlied@xxxxxxxx>
Reported-by: Julian Margetson <runaway@xxxxxxxx>
Reported-by: Tomi Valkeinen <tomi.valkeinen@xxxxxx>
Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
---
 mm/memory.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 30991f83d0bf..93ce37989471 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1591,10 +1591,15 @@ int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
	 * than insert_pfn). If a zero_pfn were inserted into a VM_MIXEDMAP
	 * without pte special, it would there be refcounted as a normal page.
	 */
-	if (!HAVE_PTE_SPECIAL && pfn_t_valid(pfn)) {
+	if (!HAVE_PTE_SPECIAL && !pfn_t_devmap(pfn) && pfn_t_valid(pfn)) {
 		struct page *page;
 
-		page = pfn_t_to_page(pfn);
+		/*
+		 * At this point we are committed to insert_page()
+		 * regardless of whether the caller specified flags that
+		 * result in pfn_t_has_page() == false.
+		 */
+		page = pfn_to_page(pfn_t_to_pfn(pfn));
		return insert_page(vma, addr, page, vma->vm_page_prot);
 	}
 	return insert_pfn(vma, addr, pfn, vma->vm_page_prot);
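
As a side note, a minimal standalone sketch of the decision this patch
changes, for readers without the tree handy: the PFN_DEV/PFN_MAP bit
positions and the takes_insert_page_path() helper below are simplified
stand-ins for illustration only, not the kernel's actual definitions;
only pfn_t_devmap()'s "both flags set" semantics are taken from the
real pfn_t helpers.

/* Hypothetical userspace model of the post-patch check (not kernel code). */
#include <stdbool.h>
#include <stdio.h>

#define PFN_DEV	(1ULL << 62)	/* stand-in flag bit */
#define PFN_MAP	(1ULL << 63)	/* stand-in flag bit */

typedef struct { unsigned long long val; } pfn_t;

/* devmap means both PFN_DEV and PFN_MAP are set. */
static bool pfn_t_devmap(pfn_t pfn)
{
	return (pfn.val & (PFN_DEV | PFN_MAP)) == (PFN_DEV | PFN_MAP);
}

/* Stand-in for the pfn_valid() check; assume every pfn here is valid. */
static bool pfn_t_valid(pfn_t pfn)
{
	(void)pfn;
	return true;
}

/*
 * Hypothetical helper: returns true when vm_insert_mixed() would take
 * the insert_page() path after this patch, i.e. the pfn_t flags are
 * ignored unless the pfn is devmap.
 */
static bool takes_insert_page_path(pfn_t pfn, bool have_pte_special)
{
	return !have_pte_special && !pfn_t_devmap(pfn) && pfn_t_valid(pfn);
}

int main(void)
{
	pfn_t plain = { 0x1000 };		/* ordinary memory, no flags */
	pfn_t dev = { 0x1000 | PFN_DEV };	/* device pfn, not devmap */

	printf("plain pfn -> insert_page: %d\n",
	       takes_insert_page_path(plain, false));
	printf("PFN_DEV pfn -> insert_page: %d\n",
	       takes_insert_page_path(dev, false));
	return 0;
}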