When splitting a huge migrating PMD, we'll transfer the soft-dirty bit
from the huge page to the small pages.  However, we may be reading the
wrong data, since when fetching the bit we're calling pmd_soft_dirty()
on a migration entry.  Fix it up.

CC: Andrea Arcangeli <aarcange@xxxxxxxxxx>
CC: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
CC: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
CC: Matthew Wilcox <willy@xxxxxxxxxxxxx>
CC: Michal Hocko <mhocko@xxxxxxxx>
CC: Dave Jiang <dave.jiang@xxxxxxxxx>
CC: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
CC: Souptick Joarder <jrdr.linux@xxxxxxxxx>
CC: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
CC: linux-mm@xxxxxxxxx
CC: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
I noticed this during code reading.  Only compile tested.  I'm sending
the patch directly for review comments since it's relatively
straightforward and not easy to test.  Please have a look, thanks.
---
 mm/huge_memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f2d19e4fe854..fb0787c3dd3b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2161,7 +2161,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		SetPageDirty(page);
 		write = pmd_write(old_pmd);
 		young = pmd_young(old_pmd);
-		soft_dirty = pmd_soft_dirty(old_pmd);
+		if (unlikely(pmd_migration))
+			soft_dirty = pmd_swp_soft_dirty(old_pmd);
+		else
+			soft_dirty = pmd_soft_dirty(old_pmd);
 
 		/*
 		 * Withdraw the table only after we mark the pmd entry invalid.
-- 
2.17.1