Allow hmm_range_fault() to return success (0) when the CPU pagetable
entry points to the special shared zero page. The caller can then
handle the zero page, for example by clearing device private memory
instead of DMAing a zero page.

Signed-off-by: Ralph Campbell <rcampbell@xxxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Cc: "Jérôme Glisse" <jglisse@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxxxx>
---
 mm/hmm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 5df0dbf77e89..f62b119722a3 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -530,7 +530,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		return -EBUSY;
 	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
 		*pfn = range->values[HMM_PFN_SPECIAL];
-		return -EFAULT;
+		if (!is_zero_pfn(pte_pfn(pte)))
+			return -EFAULT;
+		return 0;
 	}
 
 	*pfn = hmm_device_entry_from_pfn(range, pte_pfn(pte)) | cpu_flags;
-- 
2.20.1
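
Not part of the patch, just for reviewers: a minimal sketch of the
caller-side handling this change enables. dev_zero_fill() and
dev_dma_copy() are hypothetical driver helpers, not existing kernel
APIs; only HMM_PFN_SPECIAL and hmm_device_entry_to_page() come from
the HMM interface.

	/*
	 * Copy one HMM-snapshotted page into device private memory.
	 * With this patch, hmm_range_fault() reports the shared zero
	 * page as HMM_PFN_SPECIAL and returns success, so the caller
	 * can clear the device page instead of DMAing zeroes over.
	 */
	static int drv_copy_one(struct hmm_range *range, unsigned long i,
				void *dst, unsigned long size)
	{
		uint64_t entry = range->pfns[i];

		if (entry == range->values[HMM_PFN_SPECIAL]) {
			dev_zero_fill(dst, size);	/* hypothetical */
			return 0;
		}
		return dev_dma_copy(dst,		/* hypothetical */
				    hmm_device_entry_to_page(range, entry),
				    size);
	}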