On Sat, 30 Apr 2022 11:22:33 +0800
Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:

> 
> 
> On 4/30/2022 4:02 AM, Gerald Schaefer wrote:
> > On Fri, 29 Apr 2022 16:14:43 +0800
> > Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
> > 
> >> On some architectures (like ARM64), we can support CONT-PTE/PMD size
> >> hugetlb, which means we can support not only PMD/PUD size hugetlb
> >> (2M and 1G), but also CONT-PTE/PMD size (64K and 32M) if a 4K page
> >> size is specified.
> >>
> >> When unmapping a hugetlb page, we will get the relevant page table
> >> entry by huge_pte_offset() only once to nuke it. This is correct
> >> for PMD or PUD size hugetlb, since they always contain only one
> >> pmd or pud entry in the page table.
> >>
> >> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
> >> since they can contain several contiguous pte or pmd entries with
> >> the same page table attributes, so we will nuke only one pte or pmd
> >> entry for such a CONT-PTE/PMD size hugetlb page.
> >>
> >> Currently we only use try_to_unmap() to unmap a poisoned hugetlb
> >> page, which means we will unmap only one pte entry for a CONT-PTE
> >> or CONT-PMD size poisoned hugetlb page, and the other subpages of
> >> that poisoned hugetlb page remain accessible, which can possibly
> >> cause serious issues.
> >>
> >> So we should switch to huge_ptep_clear_flush() to nuke the hugetlb
> >> page table entries, which already handles CONT-PTE and CONT-PMD
> >> size hugetlb, to fix this issue.
> >>
> >> Note that we already use set_huge_swap_pte_at() to set a poisoned
> >> swap entry for a poisoned hugetlb page.
> >>
> >> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> >> ---
> >>  mm/rmap.c | 34 +++++++++++++++++-----------------
> >>  1 file changed, 17 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index 7cf2408..1e168d7 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -1564,28 +1564,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>  					break;
> >>  				}
> >>  			}
> >> +			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> > 
> > Unlike in your patch 2/3, I do not see that this (huge) pteval would later
> > be used again with set_huge_pte_at() instead of set_pte_at(). Not sure if
> > this (huge) pteval could end up at a set_pte_at() later, but if yes, then
> > this would be broken on s390, and you'd need to use set_huge_pte_at()
> > instead of set_pte_at() like in your patch 2/3.
> 
> IIUC, as I said in the commit message, we only unmap a poisoned hugetlb
> page via try_to_unmap(), and the poisoned hugetlb page will be remapped
> with a poisoned entry by set_huge_swap_pte_at() in try_to_unmap_one().
> So I think there is no need to change set_pte_at() to set_huge_pte_at()
> for the other cases, since a hugetlb page will not hit those cases.
> 
> if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
>         pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
>         if (folio_test_hugetlb(folio)) {
>                 hugetlb_count_sub(folio_nr_pages(folio), mm);
>                 set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
>                                      vma_mmu_pagesize(vma));
>         } else {
>                 dec_mm_counter(mm, mm_counter(&folio->page));
>                 set_pte_at(mm, address, pvmw.pte, pteval);
>         }
> 
> }

OK, but wouldn't pteval be overwritten here by
pteval = swp_entry_to_pte(make_hwpoison_entry(subpage))?

IOW, what sense does it make to save the return value of
huge_ptep_clear_flush() in pteval, when it is never used anywhere?
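
To make my point more concrete, here is a trimmed, hypothetical sketch of
how the two fragments above would line up with your patch applied. It is
not the literal mm/rmap.c code (the non-hugetlb branches, locking and the
other exit paths are left out), it only shows the poisoned-hugetlb path:

        /*
         * Hypothetical, trimmed sketch -- not the actual mm/rmap.c code.
         * Only the poisoned-hugetlb path of try_to_unmap_one() is shown.
         */
        pte_t pteval;

        /* Nuke the (possibly contiguous) hugetlb entries, flush the TLB. */
        pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);

        if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
                /* The value saved above is overwritten before it is read. */
                pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
                hugetlb_count_sub(folio_nr_pages(folio), mm);
                set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
                                     vma_mmu_pagesize(vma));
        }

So unless some later code actually reads the pteval saved from
huge_ptep_clear_flush(), you could just as well call it without storing
its return value.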