On 24/11/2023 17:40, David Hildenbrand wrote:
> On 22.11.23 17:29, Ryan Roberts wrote:
>> In preparation for supporting anonymous small-sized THP, improve
>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>> passed to it. In this case, all contained pages are accounted using the
>> order-0 folio (or base page) scheme.
>>
>> Reviewed-by: Yu Zhao <yuzhao@xxxxxxxxxx>
>> Reviewed-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>> ---
>>   mm/rmap.c | 28 ++++++++++++++++++++--------
>>   1 file changed, 20 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 49e4d86a4f70..b086dc957b0c 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>    * This means the inc-and-test can be bypassed.
>>    * The folio does not have to be locked.
>>    *
>> - * If the folio is large, it is accounted as a THP. As the folio
>> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>>    * is new, it's assumed to be mapped exclusively by a single process.
>>    */
>>   void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>   		unsigned long address)
>>   {
>> -	int nr;
>> +	int nr = folio_nr_pages(folio);
>>
>> -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>> +	VM_BUG_ON_VMA(address < vma->vm_start ||
>> +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>   	__folio_set_swapbacked(folio);
>> +	__folio_set_anon(folio, vma, address, true);
>
> Likely the changed order doesn't matter.

Yes; the reason I moved __folio_set_anon() up here is that
SetPageAnonExclusive() asserts that the page is anon, and
SetPageAnonExclusive() has to be called differently for the 3 cases. I
couldn't see any reason why it wouldn't be safe to call
__folio_set_anon() before setting up the mapcounts. (A sketch of how the
three cases end up looking is appended at the end of this mail.)

>
> LGTM
>
> Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
>

Thanks!
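
P.S. For anyone reading along without the full diff handy, here is
roughly how the function ends up looking with the three cases in place.
This is a sketch reconstructed from the hunk quoted above plus the
description, not the literal remainder of the diff; the mapcount and
stats details (_nr_pages_mapped, COMPOUND_MAPPED, NR_ANON_THPS) are
assumed to follow the existing order-0 and pmd-mapped accounting paths
in mm/rmap.c:

void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
		unsigned long address)
{
	int nr = folio_nr_pages(folio);

	VM_BUG_ON_VMA(address < vma->vm_start ||
			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
	__folio_set_swapbacked(folio);
	/* Must come first: SetPageAnonExclusive() asserts the page is anon. */
	__folio_set_anon(folio, vma, address, true);

	if (likely(!folio_test_large(folio))) {
		/* Case 1: order-0 page. Increment count (starts at -1). */
		atomic_set(&folio->_mapcount, 0);
		SetPageAnonExclusive(&folio->page);
	} else if (!folio_test_pmd_mappable(folio)) {
		/*
		 * Case 2: large but not pmd-mappable. Account each
		 * contained page using the order-0 scheme.
		 */
		int i;

		for (i = 0; i < nr; i++) {
			struct page *page = folio_page(folio, i);

			/* Increment count (starts at -1). */
			atomic_set(&page->_mapcount, 0);
			SetPageAnonExclusive(page);
		}

		atomic_set(&folio->_nr_pages_mapped, nr);
	} else {
		/* Case 3: pmd-mappable. Account as a THP, as before. */
		atomic_set(&folio->_entire_mapcount, 0);
		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
		SetPageAnonExclusive(&folio->page);
		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
	}

	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
}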