On Fri, Jun 14, 2024 at 11:10 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> >> I don't think that is required? We are only working with anon folios. Or
> >> were you able to trigger this? (which would be weird)
> >
> > I didn't trigger this, but I am not sure if kfolio is always anon based on
> > the code context.
> >
> > For "page", it is 100% anon (otherwise "goto out"), but I am not quite
> > sure about "kpage" from the code context.
> >
> > static int try_to_merge_one_page(struct vm_area_struct *vma,
> >                                  struct page *page, struct page *kpage)
> > {
> >         pte_t orig_pte = __pte(0);
> >         int err = -EFAULT;
> >
> >         if (page == kpage)                      /* ksm page forked */
> >                 return 0;
> >
> >         if (!PageAnon(page))
> >                 goto out;
> >         ....
> > }
> >
> > Then I saw this:
> >
> > static int replace_page(struct vm_area_struct *vma, struct page *page,
> >                         struct page *kpage, pte_t orig_pte)
> > {
> >         ...
> >         VM_BUG_ON_PAGE(PageAnonExclusive(page), page);
> >         VM_BUG_ON_FOLIO(folio_test_anon(kfolio) && PageAnonExclusive(kpage),
> >                         kfolio);
> > }
> >
> > If kfolio is always anon, we should have used
> > VM_BUG_ON_FOLIO(PageAnonExclusive(kpage), kfolio)
> > just like
> > VM_BUG_ON_PAGE(PageAnonExclusive(page), page);
> > without "folio_test_anon(kfolio)".
> >
> > So I lost my way.
>
> try_to_merge_one_page() is either called with a KSM page (anon) from
> try_to_merge_with_ksm_page() or with the shared zeropage (!anon and must
> never become anon) from cmp_and_merge_page().
>
> So in replace_page(), we either have a KSM/anon page or the shared
> zeropage.
>
> We never updated the documentation of replace_page() to spell out that
> "kpage" can also be the shared zeropage.
>
> Note how replace_page() does not call folio_add_anon_rmap_pte() for the
> shared zeropage.
>
> If we had to craft a new anon page, things would be going terribly
> wrong.
>
> So no, this (!anon -> anon) must not happen, and if it were to happen,
> it would be a real bug and your check in folio_add_anon_rmap_pte()
> would catch it.

Thanks very much for the explanation. I wonder if the change below can help
improve the documentation. If so, I may add a separate patch for it in v2.

diff --git a/mm/ksm.c b/mm/ksm.c
index d2641bc2efc9..56b10265e617 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1411,14 +1411,13 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
                goto out_mn;
        }
        VM_BUG_ON_PAGE(PageAnonExclusive(page), page);
-       VM_BUG_ON_FOLIO(folio_test_anon(kfolio) && PageAnonExclusive(kpage),
-                       kfolio);
-
        /*
         * No need to check ksm_use_zero_pages here: we can only have a
         * zero_page here if ksm_use_zero_pages was enabled already.
         */
        if (!is_zero_pfn(page_to_pfn(kpage))) {
+               VM_BUG_ON_FOLIO(!folio_test_anon(kfolio) || PageAnonExclusive(kpage),
+                               kfolio);
                folio_get(kfolio);
                folio_add_anon_rmap_pte(kfolio, kpage, vma, addr, RMAP_NONE);
                newpte = mk_pte(kpage, vma->vm_page_prot);

>
> --
> Cheers,
>
> David / dhildenb
>

Thanks
Barry
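
P.S. As a sanity check on the relocated assertion, the tiny standalone C toy
below (not kernel code; "anon" and "anon_exclusive" are plain booleans
standing in for folio_test_anon() and PageAnonExclusive()) prints which
combinations the current check and the relocated one would complain about:

/*
 * Standalone sketch, not kernel code: compare what the existing
 * VM_BUG_ON_FOLIO() condition asserts with what the proposed one,
 * moved inside the !is_zero_pfn() branch, asserts.
 */
#include <stdbool.h>
#include <stdio.h>

/* A VM_BUG_ON-style check "fires" when its condition evaluates true. */
static bool old_check_fires(bool anon, bool anon_exclusive)
{
        /* VM_BUG_ON_FOLIO(folio_test_anon(kfolio) && PageAnonExclusive(kpage), ...) */
        return anon && anon_exclusive;
}

static bool new_check_fires(bool anon, bool anon_exclusive)
{
        /* VM_BUG_ON_FOLIO(!folio_test_anon(kfolio) || PageAnonExclusive(kpage), ...) */
        return !anon || anon_exclusive;
}

int main(void)
{
        printf("anon exclusive | old fires | new fires\n");
        for (int anon = 0; anon <= 1; anon++) {
                for (int excl = 0; excl <= 1; excl++) {
                        printf("   %d      %d    |     %d     |     %d\n",
                               anon, excl,
                               old_check_fires(anon, excl),
                               new_check_fires(anon, excl));
                }
        }
        return 0;
}

The only combination neither check fires on is anon && !anon_exclusive,
i.e. the KSM-page case; the shared zeropage never reaches the relocated
check at all, since it sits inside the !is_zero_pfn() branch.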