Quoting Kirill A. Shutemov (2019-06-12 02:46:34)
> On Sun, Jun 02, 2019 at 10:47:35PM +0100, Chris Wilson wrote:
> > Quoting Matthew Wilcox (2019-03-07 15:30:51)
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 404acdcd0455..aaf88f85d492 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -2456,6 +2456,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > >  			if (IS_ENABLED(CONFIG_SHMEM) && PageSwapBacked(head))
> > >  				shmem_uncharge(head->mapping->host, 1);
> > >  			put_page(head + i);
> > > +		} else if (!PageAnon(page)) {
> > > +			__xa_store(&head->mapping->i_pages, head[i].index,
> > > +					head + i, 0);
> >
> > Forgiving the ignorant copy'n'paste, this is required:
> >
> > +		} else if (PageSwapCache(page)) {
> > +			swp_entry_t entry = { .val = page_private(head + i) };
> > +			__xa_store(&swap_address_space(entry)->i_pages,
> > +				   swp_offset(entry),
> > +				   head + i, 0);
> >  		}
> >  	}
> >
> > The locking is definitely wrong.
>
> Does it help with the problem, or it's just a possible lead?

It definitely solves the problem we encountered of the bad VM_PAGE leading
to RCU stalls in khugepaged. The locking is definitely wrong though :)
-Chris
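
[For reference, a rough and untested sketch of one way the swap-cache branch
might take the correct lock. This is an illustration only, not the fix that
was applied in the thread; it assumes the loop already runs with interrupts
disabled (via the lru lock), so a plain xa_lock on the swap cache's i_pages
is used rather than xa_lock_irq, and a real fix might instead take the swap
cache lock once outside the loop.]

	} else if (!PageAnon(page)) {
		/* file-backed tail: head->mapping's i_pages lock is already held */
		__xa_store(&head->mapping->i_pages, head[i].index,
				head + i, 0);
	} else if (PageSwapCache(page)) {
		/* anon tail in swap cache: update the swap cache's own i_pages,
		 * under that address space's lock rather than head->mapping's */
		swp_entry_t entry = { .val = page_private(head + i) };
		struct address_space *swap_cache = swap_address_space(entry);

		xa_lock(&swap_cache->i_pages);
		__xa_store(&swap_cache->i_pages, swp_offset(entry),
				head + i, 0);
		xa_unlock(&swap_cache->i_pages);
	}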