On 8 Aug 2024, at 0:14, Huang, Ying wrote:

> Zi Yan <ziy@xxxxxxxxxx> writes:
>
>> do_numa_page() and do_huge_pmd_numa_page() share a lot of common code. To
>> reduce redundancy, move common code to numa_migrate_prep() and rename
>> the function to numa_migrate_check() to reflect its functionality.
>>
>> Now do_huge_pmd_numa_page() also checks shared folios to set the
>> TNF_SHARED flag.
>>
>> Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
>> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
>> ---
>>  mm/huge_memory.c | 14 ++-------
>>  mm/internal.h    |  5 ++--
>>  mm/memory.c      | 76 ++++++++++++++++++++++++------------------------
>>  3 files changed, 44 insertions(+), 51 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index a3c018f2b554..9b312cae6775 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1699,18 +1699,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>>          if (!folio)
>>                  goto out_map;
>>
>> -        /* See similar comment in do_numa_page for explanation */
>> -        if (!writable)
>> -                flags |= TNF_NO_GROUP;
>> -
>>          nid = folio_nid(folio);
>
> NITPICK: It appears that we can remove the "nid" local variable.

I thought about it, but:

1. if folio is NULL from vm_normal_folio_pmd(), nid remains NUMA_NO_NODE;
2. if migration succeeds, nid is changed to target_nid;
3. if migration fails, nid is the folio node id.

All three cases are consumed by the "if (nid != NUMA_NO_NODE)" check before
task_numa_fault(). I will give it a try in the next version and see if I can
remove it.
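For reference, the resulting flow in do_huge_pmd_numa_page() is roughly the
following (a simplified sketch that elides the locking, PMD revalidation,
and remap details; not the exact code):

  int nid = NUMA_NO_NODE;
  ...
  folio = vm_normal_folio_pmd(vma, haddr, vmf->pmd);
  if (!folio)
          goto out_map;           /* case 1: nid stays NUMA_NO_NODE */

  nid = folio_nid(folio);         /* case 3: kept if migration fails */

  target_nid = numa_migrate_check(folio, vmf, haddr, &flags, writable,
                                  &last_cpupid);
  if (target_nid == NUMA_NO_NODE)
          goto out_map;
  ...
  if (!migrate_misplaced_folio(folio, vma, target_nid)) {
          nid = target_nid;       /* case 2: migration succeeded */
          flags |= TNF_MIGRATED;
          task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);
          return 0;
  }
  flags |= TNF_MIGRATE_FAIL;
  out_map:
  /* re-establish the mapping */
  if (nid != NUMA_NO_NODE)        /* consumes all three cases */
          task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);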
>
>> -        /*
>> -         * For memory tiering mode, cpupid of slow memory page is used
>> -         * to record page access time. So use default value.
>> -         */
>> -        if (!folio_use_access_time(folio))
>> -                last_cpupid = folio_last_cpupid(folio);
>> -        target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
>> +
>> +        target_nid = numa_migrate_check(folio, vmf, haddr, &flags, writable,
>> +                                        &last_cpupid);
>>          if (target_nid == NUMA_NO_NODE)
>>                  goto out_map;
>>          if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 52f7fc4e8ac3..fb16e18c9761 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1191,8 +1191,9 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
>>
>>  void __vunmap_range_noflush(unsigned long start, unsigned long end);
>>
>> -int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
>> -                      unsigned long addr, int page_nid, int *flags);
>> +int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
>> +                       unsigned long addr, int *flags, bool writable,
>> +                       int *last_cpupid);
>>
>>  void free_zone_device_folio(struct folio *folio);
>>  int migrate_device_coherent_page(struct page *page);
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 503d493263df..b093df652c11 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5368,16 +5368,43 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
>>          return ret;
>>  }
>>
>> -int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
>> -                      unsigned long addr, int page_nid, int *flags)
>> +int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
>> +                       unsigned long addr, int *flags,
>> +                       bool writable, int *last_cpupid)
>>  {
>>          struct vm_area_struct *vma = vmf->vma;
>>
>> +        /*
>> +         * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
>> +         * much anyway since they can be in shared cache state. This misses
>> +         * the case where a mapping is writable but the process never writes
>> +         * to it but pte_write gets cleared during protection updates and
>> +         * pte_dirty has unpredictable behaviour between PTE scan updates,
>> +         * background writeback, dirty balancing and application behaviour.
>> +         */
>> +        if (!writable)
>> +                *flags |= TNF_NO_GROUP;
>> +
>> +        /*
>> +         * Flag if the folio is shared between multiple address spaces. This
>> +         * is later used when determining whether to group tasks together.
>> +         */
>> +        if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
>> +                *flags |= TNF_SHARED;
>> +        /*
>> +         * For memory tiering mode, cpupid of slow memory page is used
>> +         * to record page access time. So use default value.
>> +         */
>> +        if (folio_use_access_time(folio))
>> +                *last_cpupid = (-1 & LAST_CPUPID_MASK);
>> +        else
>> +                *last_cpupid = folio_last_cpupid(folio);
>> +
>>          /* Record the current PID accessing VMA */
>>          vma_set_access_pid_bit(vma);
>>
>>          count_vm_numa_event(NUMA_HINT_FAULTS);
>> -        if (page_nid == numa_node_id()) {
>> +        if (folio_nid(folio) == numa_node_id()) {
>>                  count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
>>                  *flags |= TNF_FAULT_LOCAL;
>>          }
>> @@ -5442,13 +5469,13 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
>>  static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  {
>>          struct vm_area_struct *vma = vmf->vma;
>> +        pte_t old_pte = vmf->orig_pte;
>
> The usage of old_pte here is different from other use cases, where:
>
>   old_pte = *pte;
>   check old_pte and orig_pte
>   generate new_pte from old_pte
>   set new_pte
>
> We used to follow this in do_numa_page(), but we don't do that now. I
> still think that it's better to follow the convention at least partly if
> possible; it makes the code easier to read. I notice that we don't
> follow it in do_huge_pmd_numa_page() either; should we change that?

Sure, since I am trying to make do_numa_page() and do_huge_pmd_numa_page()
use the same pattern.
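Concretely, the fully-followed convention in do_numa_page() would look
something like this (a sketch only, eliding the writability and NUMA-policy
handling; not the exact code):

  spin_lock(vmf->ptl);
  /* re-read the PTE under the PTL */
  old_pte = ptep_get(vmf->pte);
  /* check old_pte against orig_pte */
  if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
          pte_unmap_unlock(vmf->pte, vmf->ptl);
          return 0;
  }
  ...
  /* generate new_pte from old_pte, then set it */
  old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
  pte = pte_modify(old_pte, vma->vm_page_prot);
  ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);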
>
>> +        pte_t pte;
>>          struct folio *folio = NULL;
>>          int nid = NUMA_NO_NODE;
>>          bool writable = false, ignore_writable = false;
>>          bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>> -        int last_cpupid;
>> -        int target_nid;
>> -        pte_t pte, old_pte;
>> +        int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
>
> Because we will initialize last_cpupid in numa_migrate_check(), we don't
> need to initialize it here?

Will remove it.

Thanks for the review.

Best Regards,
Yan, Zi