On 09/06/2023 21:11, Hugh Dickins wrote:
> On Fri, 9 Jun 2023, Andrew Morton wrote:
>> On Thu, 8 Jun 2023 18:43:38 -0700 (PDT) Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>>
>>> copy_pte_range(): use pte_offset_map_nolock(), and allow for it to fail;
>>> but with a comment on some further assumptions that are being made there.
>>>
>>> zap_pte_range() and zap_pmd_range(): adjust their interaction so that
>>> a pte_offset_map_lock() failure in zap_pte_range() leads to a retry in
>>> zap_pmd_range(); remove call to pmd_none_or_trans_huge_or_clear_bad().
>>>
>>> Allow pte_offset_map_lock() to fail in many functions.  Update comment
>>> on calling pte_alloc() in do_anonymous_page().  Remove redundant calls
>>> to pmd_trans_unstable(), pmd_devmap_trans_unstable(), pmd_none() and
>>> pmd_bad(); but leave pmd_none_or_clear_bad() calls in free_pmd_range()
>>> and copy_pmd_range(), those do simplify the next level down.
>>>
>>> ...
>>>
>>> @@ -3728,11 +3737,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  			vmf->page = pfn_swap_entry_to_page(entry);
>>>  			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>>>  					vmf->address, &vmf->ptl);
>>> -			if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
>>> -				spin_unlock(vmf->ptl);
>>> -				goto out;
>>> -			}
>>> -
>>> +			if (unlikely(!vmf->pte ||
>>> +				     !pte_same(*vmf->pte, vmf->orig_pte)))
>>> +				goto unlock;
>>>  			/*
>>>  			 * Get a page reference while we know the page can't be
>>>  			 * freed.
>>
>> This hunk falls afoul of
>> https://lkml.kernel.org/r/20230602092949.545577-5-ryan.roberts@xxxxxxx.
>>
>> I did this:
>>
>> @@ -3729,7 +3738,8 @@ vm_fault_t do_swap_page(struct vm_fault
>>  			vmf->page = pfn_swap_entry_to_page(entry);
>>  			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>>  					vmf->address, &vmf->ptl);
>> -			if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
>> +			if (unlikely(!vmf->pte ||
>> +				     !pte_same(*vmf->pte, vmf->orig_pte)))
>>  				goto unlock;
>>
>>  			/*
>
> Yes, that's exactly right: thanks, Andrew.

FWIW, I agree. Thanks, Ryan

>
> Hugh
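
For readers following the series, the calling convention being settled in the
hunks above is that pte_offset_map_lock() may now return NULL (the page table
was freed or replaced underneath us), so the map-and-validate step becomes a
single guard.  A minimal sketch of that caller pattern, assuming standard
kernel headers and the usual mm locking rules; example_fault_path() is a
hypothetical name for illustration, not a function from the series:

	#include <linux/mm.h>

	static vm_fault_t example_fault_path(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;

		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
					       vmf->address, &vmf->ptl);
		if (unlikely(!vmf->pte ||
			     !pte_same(*vmf->pte, vmf->orig_pte))) {
			/*
			 * NULL: the page table vanished under us;
			 * !pte_same: the pte changed after orig_pte was
			 * read.  Unlock only if the map succeeded, then
			 * give up: the access simply re-faults and the
			 * handler retries from the top if it still matters.
			 */
			if (vmf->pte)
				pte_unmap_unlock(vmf->pte, vmf->ptl);
			return 0;
		}

		/* ... operate on the locked, validated pte here ... */

		pte_unmap_unlock(vmf->pte, vmf->ptl);
		return 0;
	}

Note how the NULL check must come first: the || short-circuits, so *vmf->pte
is only dereferenced once the map is known to have succeeded, which is exactly
why Andrew's merged resolution folds both conditions into one unlikely().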