On 2024/2/19 16:11, Oscar Salvador wrote:
On Mon, Feb 19, 2024 at 02:25:11PM +0800, Baolin Wang wrote:
Does this mean there is no memory on the target node? If so, we can add a
check at the beginning to avoid an unnecessary call to
migrate_misplaced_folio().
diff --git a/mm/memory.c b/mm/memory.c
index e95503d7544e..a64a1aac463f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5182,7 +5182,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	else
 		last_cpupid = folio_last_cpupid(folio);
 	target_nid = numa_migrate_prep(folio, vma, vmf->address, nid,
 				       &flags);
-	if (target_nid == NUMA_NO_NODE) {
+	if (target_nid == NUMA_NO_NODE || !node_state(target_nid, N_MEMORY)) {
 		folio_put(folio);
 		goto out_map;
 	}
(similar changes for do_huge_pmd_numa_page())
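For reference, the intended effect of the added condition can be sketched as a tiny standalone simulation. This is not kernel code: node_state_n_memory() and pick_migration_target() here are hypothetical stand-ins for the kernel's node_state(nid, N_MEMORY) test and the target-selection logic in do_numa_page().

```c
#include <assert.h>
#include <stdbool.h>

#define NUMA_NO_NODE (-1)
#define MAX_NUMNODES 4

/* Stand-in for the kernel's N_MEMORY nodemask: node 1 is memoryless. */
static bool node_has_memory[MAX_NUMNODES] = { true, false, true, true };

static bool node_state_n_memory(int nid)
{
	return nid >= 0 && nid < MAX_NUMNODES && node_has_memory[nid];
}

/*
 * Sketch of the early bail-out: if no target was chosen, or the chosen
 * target node has no memory, report NUMA_NO_NODE so the caller skips
 * migrate_misplaced_folio() entirely.
 */
static int pick_migration_target(int target_nid)
{
	if (target_nid == NUMA_NO_NODE || !node_state_n_memory(target_nid))
		return NUMA_NO_NODE;
	return target_nid;
}
```

With this shape, a memoryless target (node 1 above) is filtered out at the same point where NUMA_NO_NODE is already handled, so no new failure path is needed later in the migration code.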
With the check in place from [1], numa_migrate_prep() will also return
NUMA_NO_NODE, so no need for this one here.
And I did not check, but I assume that do_huge_pmd_numa_page() also ends
up calling numa_migrate_prep().
[1] https://lore.kernel.org/lkml/20240219041920.1183-1-byungchul@xxxxxx/
Right. I missed this patch before. So with the check in
should_numa_migrate_memory(), I guess the current changes in
numamigrate_isolate_folio() can also be dropped; it will never hit a
memoryless node after patch [1], no?