On 5/3/22 03:03, Gerald Schaefer wrote:
> On Tue, 3 May 2022 10:19:46 +0800
> Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
>
>> On 5/2/2022 10:02 PM, Gerald Schaefer wrote:
>>> On Sat, 30 Apr 2022 11:22:33 +0800
>>> Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
>>>
>>>> On 4/30/2022 4:02 AM, Gerald Schaefer wrote:
>>>>> On Fri, 29 Apr 2022 16:14:43 +0800
>>>>> Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
>>>>>
>>>>>> On some architectures (like ARM64), it can support CONT-PTE/PMD size
>>>>>> hugetlb, which means it can support not only PMD/PUD size hugetlb
>>>>>> (2M and 1G), but also CONT-PTE/PMD size (64K and 32M) if a 4K page
>>>>>> size is specified.
>>>>>>
>>>>>> When unmapping a hugetlb page, we will get the relevant page table
>>>>>> entry by huge_pte_offset() only once to nuke it. This is correct
>>>>>> for PMD or PUD size hugetlb, since they always contain only one
>>>>>> pmd entry or pud entry in the page table.
>>>>>>
>>>>>> However this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>>>>>> since they can contain several contiguous pte or pmd entries with
>>>>>> the same page table attributes, so we will nuke only one pte or pmd
>>>>>> entry for such a CONT-PTE/PMD size hugetlb page.
>>>>>>
>>>>>> And now we only use try_to_unmap() to unmap a poisoned hugetlb page,
>>>>>> which means we will unmap only one pte entry for a CONT-PTE or
>>>>>> CONT-PMD size poisoned hugetlb page, and the other subpages of that
>>>>>> poisoned hugetlb page can still be accessed, which can possibly
>>>>>> cause serious issues.
>>>>>>
>>>>>> So we should change to use huge_ptep_clear_flush() to nuke the
>>>>>> hugetlb page table to fix this issue, since it already handles
>>>>>> CONT-PTE and CONT-PMD size hugetlb.
>>>>>>
>>>>>> Note we've already used set_huge_swap_pte_at() to set a poisoned
>>>>>> swap entry for a poisoned hugetlb page.
>>>>>>
>>>>>> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>>>>>> ---
>>>>>>  mm/rmap.c | 34 +++++++++++++++++-----------------
>>>>>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>>>> index 7cf2408..1e168d7 100644
>>>>>> --- a/mm/rmap.c
>>>>>> +++ b/mm/rmap.c
>>>>>> @@ -1564,28 +1564,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>>>  					break;
>>>>>>  				}
>>>>>>  			}
>>>>>> +			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>>>>>
>>>>> Unlike in your patch 2/3, I do not see that this (huge) pteval would
>>>>> later be used again with set_huge_pte_at() instead of set_pte_at().
>>>>> Not sure if this (huge) pteval could end up at a set_pte_at() later,
>>>>> but if yes, then this would be broken on s390, and you'd need to use
>>>>> set_huge_pte_at() instead of set_pte_at() like in your patch 2/3.
>>>>
>>>> IIUC, as I said in the commit message, we will only unmap a poisoned
>>>> hugetlb page by try_to_unmap(), and the poisoned hugetlb page will be
>>>> remapped with a poisoned entry by set_huge_swap_pte_at() in
>>>> try_to_unmap_one(). So I think there is no need to change to use
>>>> set_huge_pte_at() instead of set_pte_at() for other cases, since the
>>>> hugetlb page will not hit those other cases.
>>>>
>>>> if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
>>>> 	pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
>>>> 	if (folio_test_hugetlb(folio)) {
>>>> 		hugetlb_count_sub(folio_nr_pages(folio), mm);
>>>> 		set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
>>>> 				     vma_mmu_pagesize(vma));
>>>> 	} else {
>>>> 		dec_mm_counter(mm, mm_counter(&folio->page));
>>>> 		set_pte_at(mm, address, pvmw.pte, pteval);
>>>> 	}
>>>> }
>>>
>>> OK, but wouldn't the pteval be overwritten here with
>>> pteval = swp_entry_to_pte(make_hwpoison_entry(subpage))?
>>> IOW, what sense does it make to save the returned pteval from
>>> huge_ptep_clear_flush(), when it is never being used anywhere?
>>
>> Please see the previous code: we'll use the original pte value to
>> check if it is uffd-wp armed, and whether we need to mark the folio
>> dirty, even though hugetlbfs sets noop_dirty_folio().
>>
>> pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
>
> Uh, ok, that wouldn't work on s390, but we also don't have
> CONFIG_PTE_MARKER_UFFD_WP / HAVE_ARCH_USERFAULTFD_WP set, so
> I guess we will be fine (for now).
>
> Still, I find it a bit unsettling that pte_install_uffd_wp_if_needed()
> would work on a potential hugetlb *pte, directly de-referencing it
> instead of using huge_ptep_get().
>
> The !pte_none(*pte) check at the beginning would be broken in the
> hugetlb case for s390 (not sure about other archs, but I think s390
> might be the only exception strictly requiring huge_ptep_get()
> for de-referencing hugetlb *pte pointers).
>

Adding Peter Xu mostly for the above, as he is working on uffd_wp.

>>
>> /* Set the dirty flag on the folio now the pte is gone. */
>> if (pte_dirty(pteval))
>> 	folio_mark_dirty(folio);
>
> Ok, that should work fine: huge_ptep_clear_flush() will return a
> pteval properly de-referenced and converted with huge_ptep_get(),
> and that will contain the hugetlb pmd/pud dirty information.

-- 
Mike Kravetz
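
For reference, the flow under discussion, pieced together from the
fragments quoted in this thread, looks roughly like the sketch below.
It shows the hugetlb path of try_to_unmap_one() with the proposed
huge_ptep_clear_flush() change applied; it is an approximation for
illustration, not the exact upstream code, and the surrounding locals
(mm, address, subpage, flags, pvmw) come from the enclosing function.

	if (folio_test_hugetlb(folio)) {
		/*
		 * For a CONT-PTE/PMD size hugetlb page this clears and
		 * flushes all of the contiguous entries, not just one,
		 * and returns the original entry via huge_ptep_get(),
		 * so the pte_dirty() check below also works on s390.
		 */
		pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
	} else {
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}

	/*
	 * Re-install a uffd-wp marker if the original pte was uffd-wp
	 * armed. Note Gerald's concern above: this helper de-references
	 * *pte directly instead of going through huge_ptep_get().
	 */
	pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);

	/* Set the dirty flag on the folio now the pte is gone. */
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);

	if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
		pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
		if (folio_test_hugetlb(folio)) {
			hugetlb_count_sub(folio_nr_pages(folio), mm);
			/*
			 * Writes the poison entry across the whole
			 * contiguous range (vma_mmu_pagesize(vma) bytes),
			 * so no subpage stays mapped.
			 */
			set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
					     vma_mmu_pagesize(vma));
		} else {
			dec_mm_counter(mm, mm_counter(&folio->page));
			set_pte_at(mm, address, pvmw.pte, pteval);
		}
	}

The point of the fix is that both huge_ptep_clear_flush() and
set_huge_swap_pte_at() operate on the full contiguous range of
entries, so unmapping a poisoned CONT-PTE/PMD hugetlb page leaves
none of its subpages accessible.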