> On 14 Jul 2021, at 17:08, Peter Xu <peterx@xxxxxxxxxx> wrote:
> 
> On Wed, Jul 14, 2021 at 03:24:26PM +0000, Tiberiu Georgescu wrote:
>> 
>>  static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
>>                  struct vm_area_struct *vma, unsigned long addr, pte_t pte)
>>  {
>>          u64 frame = 0, flags = 0;
>>          struct page *page = NULL;
>> 
>> +        if (vma->vm_flags & VM_SOFTDIRTY)
>> +                flags |= PM_SOFT_DIRTY;
>> +
>>          if (pte_present(pte)) {
>>                  if (pm->show_pfn)
>>                          frame = pte_pfn(pte);
>> @@ -1374,13 +1387,22 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
>>                          flags |= PM_SOFT_DIRTY;
>>                  if (pte_uffd_wp(pte))
>>                          flags |= PM_UFFD_WP;
>> -        } else if (is_swap_pte(pte)) {
>> +        } else if (is_swap_pte(pte) || shmem_file(vma->vm_file)) {
>>                  swp_entry_t entry;
>> -                if (pte_swp_soft_dirty(pte))
>> -                        flags |= PM_SOFT_DIRTY;
>> -                if (pte_swp_uffd_wp(pte))
>> -                        flags |= PM_UFFD_WP;
>> -                entry = pte_to_swp_entry(pte);
>> +                if (is_swap_pte(pte)) {
>> +                        entry = pte_to_swp_entry(pte);
>> +                        if (pte_swp_soft_dirty(pte))
>> +                                flags |= PM_SOFT_DIRTY;
>> +                        if (pte_swp_uffd_wp(pte))
>> +                                flags |= PM_UFFD_WP;
>> +                } else {
>> +                        void *xa_entry = get_xa_entry_at_vma_addr(vma, addr);
>> +
>> +                        if (xa_is_value(xa_entry))
>> +                                entry = radix_to_swp_entry(xa_entry);
>> +                        else
>> +                                goto out;
>> +                }
>>                  if (pm->show_pfn)
>>                          frame = swp_type(entry) |
>>                                  (swp_offset(entry) << MAX_SWAPFILES_SHIFT);
>> @@ -1393,9 +1415,8 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
>>                  flags |= PM_FILE;
>>          if (page && page_mapcount(page) == 1)
>>                  flags |= PM_MMAP_EXCLUSIVE;
>> -        if (vma->vm_flags & VM_SOFTDIRTY)
>> -                flags |= PM_SOFT_DIRTY;
> 
> IMHO moving this to the entry will only work for the initial iteration, however
> it won't really help anything, as soft-dirty should always be used in pair with
> clear_refs written with value "4" first otherwise all pages will be marked
> soft-dirty then the pagemap data is meaningless.
> 
> After the "write 4" op VM_SOFTDIRTY will be cleared and I expect the test case
> to see all zeros again even with the patch.

Indeed, the SOFT_DIRTY bit gets cleared, and it does not get set again when we
dirty the page and it is swapped out once more. However, the pagemap entries
are not completely zeroed out.

The patch is mainly about adding the swap frame offset to the pagemap entries
of swappable, non-syncable pages, even if they are MAP_SHARED. (Since the hunk
above only shows the call site, a sketch of the xarray lookup behind
get_xa_entry_at_vma_addr() is further down.)

Example output post-patch, after writing 4 to clear_refs and dirtying the
pages (see the P.S. for a quick way to decode these entries):

$ dd if=/proc/$PID/pagemap ibs=8 skip=$(($VADDR / $PAGESIZE)) count=256 | hexdump -C
00000000  80 13 01 00 00 00 00 40  a0 13 01 00 00 00 00 40  |.......@.......@|
          ...........more swapped-out entries............
000005e0  e0 2a 01 00 00 00 00 40  00 2b 01 00 00 00 00 40  |.*.....@.+.....@|
000005f0  20 2b 01 00 00 00 00 40  40 2b 01 00 00 00 00 40  | +.....@@+.....@|
00000600  72 6c 1d 00 00 00 80 a1  c1 34 12 00 00 00 80 a1  |rl.......4......|
          ...........more in-memory entries............
000007f0  3c 21 18 00 00 00 80 a1  69 ec 17 00 00 00 80 a1  |<!......i.......|

You may find the pre-patch example output in the RFC cover letter, for
reference: https://lkml.org/lkml/2021/7/14/594

> I think one way to fix this is to do something similar to uffd-wp: we leave a
> marker in pte showing that this is soft-dirtied pte even if swapped out.
> However we don't have a mechanism for that yet in current linux, and the
> uffd-wp series is the first one trying to introduce something like that.

I am taking a look at the uffd-wp patch today.
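For reference, here is the sketch promised above. Once shmem swaps a page out,
it clears the PTE and stashes the swap entry as a value entry in the mapping's
page cache xarray, which is why the diff cannot rely on the PTE alone.
Simplified (locking elided, and assuming the usual headers already pulled in by
fs/proc/task_mmu.c), the helper boils down to:

static void *get_xa_entry_at_vma_addr(struct vm_area_struct *vma,
                                      unsigned long addr)
{
        /* Shmem keeps swapped-out pages as value entries in i_pages */
        struct address_space *mapping = vma->vm_file->f_mapping;
        pgoff_t pgoff = linear_page_index(vma, addr);

        return xa_load(&mapping->i_pages, pgoff);
}

xa_is_value() on the result is what distinguishes a stored swap entry from a
page still in the cache (or no entry at all), hence the "goto out" fallback in
the caller.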
Hope the uffd-wp series gets upstreamed soon, so I can adapt one of its
mechanisms to keep track of the SOFT_DIRTY bit on the PTE after the page is
swapped out.

Kind regards,
Tibi
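P.S. For anyone decoding the hexdump above by hand: each pagemap entry is a
little-endian u64 with the bit layout documented in
Documentation/admin-guide/mm/pagemap.rst (bits 0-4 swap type and bits 5-54
swap offset when bit 62 "swapped" is set, bits 0-54 PFN when bit 63 "present"
is set, bit 55 soft-dirty, bit 61 file-page/shared-anon). A quick userspace
decoder, untested sketch only:

#include <stdint.h>
#include <stdio.h>

static void decode_pagemap_entry(uint64_t e)
{
        if (e >> 62 & 1)                        /* bit 62: swapped */
                printf("swapped, type %llu, offset 0x%llx",
                       (unsigned long long)(e & 0x1f),
                       (unsigned long long)(e >> 5 & ((1ULL << 50) - 1)));
        else if (e >> 63 & 1)                   /* bit 63: present */
                printf("present, pfn 0x%llx",
                       (unsigned long long)(e & ((1ULL << 55) - 1)));
        else
                printf("not present");

        printf("%s%s\n",
               e >> 55 & 1 ? ", soft-dirty" : "",       /* bit 55 */
               e >> 61 & 1 ? ", file/shared" : "");     /* bit 61 */
}

int main(void)
{
        /* First eight bytes of the dump above, read little-endian */
        decode_pagemap_entry(0x4000000000011380ULL);
        return 0;
}

For the first swapped-out entry this prints "swapped, type 0, offset 0x89c",
matching frame = swp_type | (swp_offset << MAX_SWAPFILES_SHIFT) with
MAX_SWAPFILES_SHIFT == 5.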