Add a guarantee for Anon pages that pin_user_page*() ensures the
user-mapping of these pages stays preserved. In order to ensure this
all rmap users have been audited:

 vmscan:	already fails eviction due to page_maybe_dma_pinned()

 migrate:	migration will fail on pinned pages due to
		expected_page_refs() not matching, however that is
		*after* try_to_migrate() has already destroyed the
		user mapping of these pages. Add an early exit for
		this case.

 numa-balance:	as per the above, pinned pages cannot be migrated,
		however numa balancing scanning will happily PROT_NONE
		them to get usage information on these pages. Avoid
		this for pinned pages.

None of the other rmap users (damon, page-idle, mlock, ..) unmap the
page, they mostly just muck about with reference, dirty flags etc.

This same guarantee cannot be provided for Shared (file) pages due to
dirty page tracking.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
 mm/migrate.c  | 10 +++++++++-
 mm/mprotect.c |  6 ++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1472,7 +1472,15 @@ int migrate_pages(struct list_head *from
 			nr_subpages = thp_nr_pages(page);
 			cond_resched();
 
-			if (PageHuge(page))
+			/*
+			 * If the page has a pin then expected_page_refs() will
+			 * not match and the whole migration will fail later
+			 * anyway, fail early and preserve the mappings.
+			 */
+			if (page_maybe_dma_pinned(page))
+				rc = -EAGAIN;
+
+			else if (PageHuge(page))
 				rc = unmap_and_move_huge_page(get_new_page,
 						put_new_page, private, page,
 						pass > 2, mode, reason,
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -106,6 +106,12 @@ static unsigned long change_pte_range(st
 				continue;
 
 			/*
+			 * Can't migrate pinned pages, avoid touching them.
+			 */
+			if (page_maybe_dma_pinned(page))
+				continue;
+
+			/*
 			 * Don't mess with PTEs if page is already on the node
 			 * a single-threaded process is running on.
 			 */