"Zi Yan" <ziy@xxxxxxxxxx> writes: > On Wed Jun 26, 2024 at 12:49 PM EDT, David Hildenbrand wrote: >> On 21.06.24 22:48, Zi Yan wrote: >> > On 21 Jun 2024, at 16:18, David Hildenbrand wrote: >> > >> >> On 21.06.24 15:44, Zi Yan wrote: >> >>> On 20 Jun 2024, at 17:29, David Hildenbrand wrote: >> >>> >> >>>> Currently we always take a folio reference even if migration will not >> >>>> even be tried or isolation failed, requiring us to grab+drop an additional >> >>>> reference. >> >>>> >> >>>> Further, we end up calling folio_likely_mapped_shared() while the folio >> >>>> might have already been unmapped, because after we dropped the PTL, that >> >>>> can easily happen. We want to stop touching mapcounts and friends from >> >>>> such context, and only call folio_likely_mapped_shared() while the folio >> >>>> is still mapped: mapcount information is pretty much stale and unreliable >> >>>> otherwise. >> >>>> >> >>>> So let's move checks into numamigrate_isolate_folio(), rename that >> >>>> function to migrate_misplaced_folio_prepare(), and call that function >> >>>> from callsites where we call migrate_misplaced_folio(), but still with >> >>>> the PTL held. >> >>>> >> >>>> We can now stop taking temporary folio references, and really only take >> >>>> a reference if folio isolation succeeded. Doing the >> >>>> folio_likely_mapped_shared() + golio isolation under PT lock is now similar >> >>>> to how we handle MADV_PAGEOUT. >> >>>> >> >>>> While at it, combine the folio_is_file_lru() checks. >> >>>> >> >>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx> >> >>>> --- >> >>>> include/linux/migrate.h | 7 ++++ >> >>>> mm/huge_memory.c | 8 ++-- >> >>>> mm/memory.c | 9 +++-- >> >>>> mm/migrate.c | 81 +++++++++++++++++++---------------------- >> >>>> 4 files changed, 55 insertions(+), 50 deletions(-) >> >>> >> >>> LGTM. Reviewed-by: Zi Yan <ziy@xxxxxxxxxx> >> >>> >> >>> One nit below: >> >>> >> >>> <snip> >> >>> >> >>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c >> >>>> index fc27dabcd8e3..4b2817bb2c7d 100644 >> >>>> --- a/mm/huge_memory.c >> >>>> +++ b/mm/huge_memory.c >> >>>> @@ -1688,11 +1688,13 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) >> >>>> if (node_is_toptier(nid)) >> >>>> last_cpupid = folio_last_cpupid(folio); >> >>>> target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags); >> >>>> - if (target_nid == NUMA_NO_NODE) { >> >>>> - folio_put(folio); >> >>>> + if (target_nid == NUMA_NO_NODE) >> >>>> + goto out_map; >> >>>> + if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) { >> >>>> + flags |= TNF_MIGRATE_FAIL; >> >>>> goto out_map; >> >>>> } >> >>>> - >> >>>> + /* The folio is isolated and isolation code holds a folio reference. 
*/ >> >>>> spin_unlock(vmf->ptl); >> >>>> writable = false; >> >>>> >> >>>> diff --git a/mm/memory.c b/mm/memory.c >> >>>> index 118660de5bcc..4fd1ecfced4d 100644 >> >>>> --- a/mm/memory.c >> >>>> +++ b/mm/memory.c >> >>> >> >>> <snip> >> >>> >> >>>> @@ -5345,10 +5343,13 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) >> >>>> else >> >>>> last_cpupid = folio_last_cpupid(folio); >> >>>> target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags); >> >>>> - if (target_nid == NUMA_NO_NODE) { >> >>>> - folio_put(folio); >> >>>> + if (target_nid == NUMA_NO_NODE) >> >>>> + goto out_map; >> >>>> + if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) { >> >>>> + flags |= TNF_MIGRATE_FAIL; >> >>>> goto out_map; >> >>>> } >> >>> >> >>> These two locations are repeated code, maybe just merge the ifs into >> >>> numa_migrate_prep(). Feel free to ignore if you are not going to send >> >>> another version. :) >> >> >> >> I went back and forth a couple of times and >> >> >> >> a) Didn't want to move numa_migrate_prep() into >> >> migrate_misplaced_folio_prepare(), because having that code in >> >> mm/migrate.c felt a bit odd. >> > >> > I agree after checking the actual code, since the code is just >> > updating NUMA fault stats and checking where the folio should be. >> > >> >> >> >> b) Didn't want to move migrate_misplaced_folio_prepare() because I enjoy >> >> seeing the migrate_misplaced_folio_prepare() and >> >> migrate_misplaced_folio() calls in the same callercontext. >> >> >> >> I also considered renaming numa_migrate_prep(), but wasn't really able to come up with a good name. >> > >> > How about numa_migrate_check()? Since it tells whether a folio should be >> > migrated or not. >> > >> >> >> >> But maybe a) is not too bad? >> >> >> >> We'd have migrate_misplaced_folio_prepare() consume &flags and &target_nid, and perform the "flags |= TNF_MIGRATE_FAIL;" internally. >> >> >> >> What would be your take? >> > >> > I would either rename numa_migrate_prep() or just do nothing. I have to admit >> > that the "prep" and "prepare" in both function names motivated me to propose >> > the merge, but now the actual code tells me they should be separate. >> >> Let's leave it like that for now. Renaming to numa_migrate_check() makes >> sense, and likely moving more numa handling stuff in there. >> >> Bit I yet have to figure out why some of the memory.c vs. huge_memory.c >> code differences exist, so we can unify them. >> >> For example, why did 33024536bafd9 introduce slightly different >> last_cpupid handling in do_huge_pmd_numa_page(), whereby it seems like >> some subtle difference in handling NUMA_BALANCING_MEMORY_TIERING? Maybe >> I am missing something obvious. :) > > It seems to me that a sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING > check is missing in do_huge_pmd_numa_page(). So the > > if (node_is_toptier(nid)) > > should be > > if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) || > node_is_toptier(nid)) > > to be consistent with other checks. Add Ying to confirm. Yes. It should be so. Sorry for my mistake and confusing. > I also think a function like > > bool folio_has_cpupid(folio) > { > return !(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) > || node_is_toptier(folio_nid(folio)); > } > > would be better than the existing checks. Yes. This looks better. Even better, we can add some comments to the function too. -- Best Regards, Huang, Ying
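For reference, a minimal sketch of the folio_has_cpupid() helper proposed
above, with the kind of comment Huang Ying asks for. The `static inline`
qualifier, the placement (e.g. near the other NUMA-balancing helpers), and
the comment wording are assumptions; only the name and return expression
come from the thread, and the final patch may differ.

	/*
	 * In memory tiering mode (NUMA_BALANCING_MEMORY_TIERING), the
	 * last_cpupid field of folios on slow-tier nodes is repurposed to
	 * record the page access time for hot-page detection, so it only
	 * contains a valid cpupid for folios on top-tier nodes.
	 */
	static inline bool folio_has_cpupid(struct folio *folio)
	{
		return !(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
		       node_is_toptier(folio_nid(folio));
	}

Callers such as do_huge_pmd_numa_page() and do_numa_page() could then use
"if (folio_has_cpupid(folio))" instead of open-coding the mode and
node_is_toptier() checks, which would also fix the missing
NUMA_BALANCING_MEMORY_TIERING check discussed above.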