From: Ralph Campbell <rcampbell@xxxxxxxxxx>

The mmotm patch [1] adds hugetlbfs support for HMM, but the initial PFN
used to fill the HMM range->pfns[] array doesn't properly compute the
starting PFN offset. This can be tested by running test-hugetlbfs-read
from [2].

Fix the PFN offset by shifting the byte offset within the huge page by
the device's page shift, so a PFN offset is added rather than a byte
offset.

Andrew, this should probably be squashed into Jerome's patch.

[1] https://marc.info/?l=linux-mm&m=155432003506068&w=2
    ("mm/hmm: mirror hugetlbfs (snapshoting, faulting and DMA mapping)")
[2] https://gitlab.freedesktop.org/glisse/svm-cl-tests

Signed-off-by: Ralph Campbell <rcampbell@xxxxxxxxxx>
Cc: Jérôme Glisse <jglisse@xxxxxxxxxx>
Cc: Ira Weiny <ira.weiny@xxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Balbir Singh <bsingharora@xxxxxxxxx>
Cc: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Souptick Joarder <jrdr.linux@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index def451a56c3e..fcf8e4fb5770 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -868,7 +868,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 		goto unlock;
 	}
 
-	pfn = pte_pfn(entry) + (start & mask);
+	pfn = pte_pfn(entry) + ((start & mask) >> range->page_shift);
 	for (; addr < end; addr += size, i++, pfn += pfn_inc)
 		range->pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
 				 cpu_flags;
-- 
2.20.1
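
For readers not familiar with the arithmetic above, here is a minimal
standalone sketch of why the shift is needed. The huge page shift, device
page shift, PFN, and address below are made-up illustrative values, not
taken from the kernel code: pte_pfn(entry) yields a page frame number,
while (start & mask) is a byte offset into the huge page, so it has to be
converted to a count of device-sized pages before being added.

/*
 * Standalone illustration of the PFN offset arithmetic (not kernel code).
 * With a 2MB huge page and a 4KB device page size, (start & mask) is a
 * byte offset into the huge page; it must be shifted right by the device
 * page shift to become a PFN offset, as the fix above does.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long page_shift = 12;		/* assumed device page shift (4KB) */
	const unsigned long huge_shift = 21;		/* assumed huge page shift (2MB) */
	const unsigned long mask = (1UL << huge_shift) - 1;

	unsigned long huge_pfn = 0x100000;		/* hypothetical pte_pfn(entry) */
	unsigned long start = 0x40000000UL + 0x3000;	/* hypothetical start address */

	/* Buggy form: adds a byte offset to a page frame number. */
	unsigned long bad = huge_pfn + (start & mask);
	/* Fixed form: converts the byte offset to a PFN offset first. */
	unsigned long good = huge_pfn + ((start & mask) >> page_shift);

	printf("byte offset in huge page: 0x%lx\n", start & mask);
	printf("buggy pfn: 0x%lx\n", bad);
	printf("fixed pfn: 0x%lx\n", good);
	return 0;
}

With these example values the buggy form lands 0x3000 frames past the
intended page (0x103000), while the fixed form yields huge_pfn + 3
(0x100003), i.e. the third 4KB device page inside the huge page.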