From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>

The xas_load() in question also originated in "e286781: mm: speculative
page references" as a radix_tree_deref_slot(), the only one in the tree
at the time.

I'm thoroughly confused why it is needed, though.  A page's slot in the
page cache should be stabilized by lock_page() being held.  So, first
of all, add a VM_WARN_ON_ONCE() to make it totally clear that the page
is locked.

But, even if the page was truncated, we normally check:

	page_mapping(page) != mapping

to check for truncation.  This would seem to imply that we are looking
for some kind of state change that can happen to the xarray slot for a
page, but without changing page->mapping.  I'm at a loss for what that
might be.  Stick a WARN_ON_ONCE() in there to see if we ever actually
hit this.

Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
---

 b/mm/migrate.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff -puN mm/migrate.c~remove_extra_xas_load_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_xas_load_check	2020-05-01 14:00:43.377525921 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:43.381525921 -0700
@@ -407,6 +407,8 @@ int migrate_page_move_mapping(struct add
 	int dirty;
 	int expected_count = expected_page_refs(mapping, page) + extra_count;
 
+	VM_WARN_ON_ONCE(!PageLocked(page));
+
 	if (!mapping) {
 		/* Anonymous page without mapping */
 		if (page_count(page) != expected_count)
@@ -425,7 +427,13 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
+	/*
+	 * 'mapping' was established under the page lock, which
+	 * prevents the xarray slot for 'page' from being changed.
+	 * Thus, xas_load() failure here is unexpected.
+	 */
 	if (xas_load(&xas) != page) {
+		WARN_ON_ONCE(1);
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
_
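The patch leans on the warn-once pattern: the check may be hit many times on a hot path, but we only want one report per call site. Below is a minimal userspace sketch of that pattern, not the kernel's actual WARN_ON_ONCE() implementation; the `WARN_ON_ONCE_SKETCH` macro, `check_slot()` helper, and `warn_count` counter are all hypothetical names invented for illustration (the counter exists only so the once-only behavior is observable).

```c
#include <stdio.h>

/* Test hook: how many warnings have actually fired. */
static int warn_count;

/*
 * Userspace sketch of a warn-once macro: each call site gets its own
 * static flag, so the warning fires at most once per site no matter
 * how often the condition is true.  Uses a GCC/Clang statement
 * expression, as the kernel's own macros do.
 */
#define WARN_ON_ONCE_SKETCH(cond) ({				\
	static int __warned;					\
	int __ret = !!(cond);					\
	if (__ret && !__warned) {				\
		__warned = 1;					\
		warn_count++;					\
		fprintf(stderr, "WARNING at %s:%d\n",		\
			__FILE__, __LINE__);			\
	}							\
	__ret;							\
})

/*
 * Hypothetical stand-in for the patched check: an unexpected slot
 * mismatch warns once, then takes the error path every time.
 */
static int check_slot(int slot_matches)
{
	if (!slot_matches) {
		WARN_ON_ONCE_SKETCH(1);
		return -1;	/* -EAGAIN in the real code */
	}
	return 0;
}
```

Repeated failing calls to `check_slot(0)` all return the error, but only the first one emits the warning, which is why a WARN_ON_ONCE() is safe to put on a path that might, in theory, be hit repeatedly.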