On Wed, Jun 08, 2022 at 08:57:24PM +0800, Miaohe Lin wrote:
...
> > I think that most of page table walker for user address space should first
> > check is_vm_hugetlb_page() and call hugetlb specific walking code for vma
> > with VM_HUGETLB.
> > copy_page_range() is a good example. It calls copy_hugetlb_page_range()
> > for vma with VM_HUGETLB and the function should support hwpoison entry.
> > But I feel that I need testing for confirmation.
>
> Sorry, I missed that it should be called from the hugetlb variants.
>
> > And I'm not sure that all others are prepared for non-present pud-mapping,
> > so I'll need some code inspection and testing for each.
>
> I browsed the code again, and there still might be some problematic code paths:
>
> 1. For follow_pud_mask(), !pud_present will mostly reach follow_pmd_mask().
>    This can be called for a hugetlb page. (Note that gup_pud_range() was
>    fixed in 15494520b776 ("mm: fix gup_pud_range").)
>
> 2. Even for huge_pte_alloc(), pud_offset() will be called in pud_alloc(),
>    so pudp will be an invalid pointer, and it will be dereferenced later.

Yes, these paths need to support non-present pud entries, so I'll update/add
the patches. It seems that I did similar work for pmd a few years ago
(cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage")).

Thanks,
Naoya Horiguchi
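
For context, the pmd-side commit referenced above changed the x86 pmd_huge()
so that a non-present (e.g. hwpoison or migration) hugetlb entry is still
recognized as huge. A minimal sketch of that pattern, plus a hypothetical
analogous pud_huge() change (not taken from any posted patch, shown only to
illustrate the non-present pud handling discussed above), might look like
this in arch/x86/mm/hugetlbpage.c:

	/*
	 * Sketch following commit cbef8478bee5 ("mm/hugetlb: pmd_huge()
	 * returns true for non-present hugepage"): a hwpoison/migration
	 * hugetlb entry has _PAGE_PRESENT cleared but is not none, so
	 * "huge" must not be keyed off the present bit alone.
	 */
	int pmd_huge(pmd_t pmd)
	{
		return !pmd_none(pmd) &&
			(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
	}

	/*
	 * Hypothetical analogous change for pud: treat a non-none,
	 * non-present pud entry as huge as well, so that paths like
	 * follow_pud_mask() and huge_pte_alloc() can recognize a
	 * hwpoisoned 1GB hugepage instead of walking into it.
	 */
	int pud_huge(pud_t pud)
	{
		return !pud_none(pud) &&
			(pud_val(pud) & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
	}

The point of the pattern is that pud_none() and pud_present() become distinct
states, so walkers that currently assume !pud_present() means "nothing mapped"
need the kind of audit described in the reply above.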