On 6/27/23 15:08, Yu Zhao wrote:
> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>>
>> Like folio_add_new_anon_rmap() but batch-rmaps a range of pages
>> belonging to a folio, for efficiency savings. All pages are accounted
>> as small pages.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>> ---
>>  include/linux/rmap.h |  2 ++
>>  mm/rmap.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 45 insertions(+)
>>
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index a3825ce81102..15433a3d0cbf 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -196,6 +196,8 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
>>  		unsigned long address);
>>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>>  		unsigned long address);
>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>> +		int nr, struct vm_area_struct *vma, unsigned long address);
>
> We should update folio_add_new_anon_rmap() to support large() &&
> !folio_test_pmd_mappable() folios instead.
>
> I double checked all places currently using folio_add_new_anon_rmap(),
> and as expected, none actually allocates large() &&
> !folio_test_pmd_mappable() and maps it one by one, which makes the
> cases simpler, i.e.,
>   if (!large())
>     // the existing basepage case
>   else if (!folio_test_pmd_mappable())
>     // our new case
>   else
>     // the existing THP case

I suppose we can merge the new case and the existing THP case.

Regards
Yin, Fengwei

>
>>  void page_add_file_rmap(struct page *, struct vm_area_struct *,
>>  		bool compound);
>>  void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 1d8369549424..4050bcea7ae7 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1305,6 +1305,49 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>  	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>  }
>>
>> +/**
>> + * folio_add_new_anon_rmap_range - Add mapping to a set of pages within a new
>> + *	anonymous potentially large folio.
>> + * @folio:	The folio containing the pages to be mapped
>> + * @page:	First page in the folio to be mapped
>> + * @nr:		Number of pages to be mapped
>> + * @vma:	the vm area in which the mapping is added
>> + * @address:	the user virtual address of the first page to be mapped
>> + *
>> + * Like folio_add_new_anon_rmap() but batch-maps a range of pages within a
>> + * folio using non-THP accounting. Like folio_add_new_anon_rmap(), the
>> + * inc-and-test is bypassed and the folio does not have to be locked. All
>> + * pages in the folio are individually accounted.
>> + *
>> + * As the folio is new, it's assumed to be mapped exclusively by a single
>> + * process.
>> + */
>> +void folio_add_new_anon_rmap_range(struct folio *folio, struct page *page,
>> +		int nr, struct vm_area_struct *vma, unsigned long address)
>> +{
>> +	int i;
>> +
>> +	VM_BUG_ON_VMA(address < vma->vm_start ||
>> +		      address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>
> BTW, VM_BUG_ON* shouldn't be used in new code:
> Documentation/process/coding-style.rst
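
On the VM_BUG_ON* point: Documentation/process/coding-style.rst asks new
code to warn and recover rather than kill the kernel. One possible
(untested) replacement for the range check above is the non-fatal
VM_WARN_ON_ONCE() from include/linux/mmdebug.h, which, like
VM_BUG_ON_VMA(), compiles away when CONFIG_DEBUG_VM is off:

	/*
	 * Warn once on a bad range instead of panicking; the mapping
	 * still proceeds, per the usual WARN-and-recover policy.
	 */
	VM_WARN_ON_ONCE(address < vma->vm_start ||
			address + (nr << PAGE_SHIFT) > vma->vm_end);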
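
For concreteness, here is a rough sketch of how folio_add_new_anon_rmap()
might be restructured along the three cases Yu lists (reading his large()
shorthand as folio_test_large()). This is illustrative only, not merged
code: it assumes the mm/rmap.c internals as they stood in this thread
(__page_set_anon_rmap(), folio->_nr_pages_mapped, COMPOUND_MAPPED) and
uses the VM_WARN_ON_ONCE() check from the sketch above:

	void folio_add_new_anon_rmap(struct folio *folio,
			struct vm_area_struct *vma, unsigned long address)
	{
		int nr = folio_nr_pages(folio);

		VM_WARN_ON_ONCE(address < vma->vm_start ||
				address + (nr << PAGE_SHIFT) > vma->vm_end);
		__folio_set_swapbacked(folio);

		if (likely(!folio_test_large(folio))) {
			/* the existing basepage case; count starts at -1 */
			atomic_set(&folio->_mapcount, 0);
		} else if (!folio_test_pmd_mappable(folio)) {
			/* our new case: PTE-map every subpage individually */
			int i;

			for (i = 0; i < nr; i++) {
				struct page *page = folio_page(folio, i);

				/* count starts at -1 */
				atomic_set(&page->_mapcount, 0);
			}
			atomic_set(&folio->_nr_pages_mapped, nr);
		} else {
			/* the existing THP case: one PMD-level mapping */
			atomic_set(&folio->_entire_mapcount, 0);
			atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
		}

		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
		__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
	}

Whether the second and third branches can then be folded together, as
suggested above, hinges on the mapcount accounting: the PMD case sets
_entire_mapcount plus the COMPOUND_MAPPED marker, while the PTE-mapped
case bumps each subpage's _mapcount, so this sketch keeps them apart.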