migrate_vma_collect_pmd() will opportunistically install migration PTEs if
it is able to lock the migrating folio. This involves clearing the PTE,
which also requires updating page flags such as PageDirty based on the PTE
value when it was cleared. This was fixed by fd35ca3d12cc
("mm/migrate_device.c: copy pte dirty bit to page").

However that fix will also copy the pte dirty bit from a non-present PTE,
which is meaningless. It so happens that on a default x86 configuration
pte_dirty(make_writable_device_private_entry(0)) is true. This masks
issues where drivers may not be correctly setting the destination page as
dirty when migrating from a device-private page, because effectively the
device-private page is always considered dirty if it was mapped as
writable.

In practice not marking the pages correctly is unlikely to cause issues,
because currently only anonymous memory is supported for device private
pages. Therefore the dirty bit is only read when there is a swap file that
has an uptodate copy of a writable page.

Signed-off-by: Alistair Popple <apopple@xxxxxxxxxx>
Fixes: fd35ca3d12cc ("mm/migrate_device.c: copy pte dirty bit to page")
---
 mm/migrate_device.c | 15 ++++++++++-----
 mm/rmap.c           |  2 +-
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 9cf2659..afc033b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -215,10 +215,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 
 			migrate->cpages++;
 
-			/* Set the dirty flag on the folio now the pte is gone. */
-			if (pte_dirty(pte))
-				folio_mark_dirty(folio);
-
 			/* Setup special migration page table entry */
 			if (mpfn & MIGRATE_PFN_WRITE)
 				entry = make_writable_migration_entry(
@@ -232,8 +228,17 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			if (pte_present(pte)) {
 				if (pte_young(pte))
 					entry = make_migration_entry_young(entry);
-				if (pte_dirty(pte))
+				if (pte_dirty(pte)) {
+					/*
+					 * Mark the folio dirty now the pte is
+					 * gone because
+					 * make_migration_entry_dirty() won't
+					 * store the dirty bit if there isn't
+					 * room.
+					 */
+					folio_mark_dirty(folio);
 					entry = make_migration_entry_dirty(entry);
+				}
 			}
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_present(pte)) {
diff --git a/mm/rmap.c b/mm/rmap.c
index c6c4d4e..df88674 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2176,7 +2176,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 	}
 
 	/* Set the dirty flag on the folio now the pte is gone. */
-	if (pte_dirty(pteval))
+	if (pte_present(pteval) && pte_dirty(pteval))
 		folio_mark_dirty(folio);
 
 	/* Update high watermark before we lower rss */
-- 
git-series 0.9.1