When handling faults for anon shmem, finish_fault() will attempt to
install ptes for the entire folio. Unfortunately, if it encounters a
single non-pte_none entry in that range it will bail, even if the pte
that triggered the fault is still pte_none. When this happens the
fault is retried endlessly, never making forward progress.

This patch fixes that behavior: if a pte in the range is found to be
non-pte_none, fall back to setting only the pte for the address that
triggered the fault.

Cc: stable@xxxxxxxxxxxxxxx
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
Reported-by: Marek Maslanka <mmaslanka@xxxxxxxxxx>
Signed-off-by: Brian Geffon <bgeffon@xxxxxxxxxx>
---
 mm/memory.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b4d3d4893267..32de626ec1da 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5258,9 +5258,22 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_NOPAGE;
 		goto unlock;
 	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
-		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
-		ret = VM_FAULT_NOPAGE;
-		goto unlock;
+		/*
+		 * We encountered a set pte, let's just try to install the
+		 * pte for the original fault if that pte is still pte_none.
+		 */
+		pgoff_t idx = (vmf->address - addr) / PAGE_SIZE;
+
+		if (!pte_none(ptep_get_lockless(vmf->pte + idx))) {
+			update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
+			ret = VM_FAULT_NOPAGE;
+			goto unlock;
+		}
+
+		vmf->pte = vmf->pte + idx;
+		page = folio_page(folio, idx);
+		addr = vmf->address;
+		nr_pages = 1;
 	}
 
 	folio_ref_add(folio, nr_pages - 1);
--
2.48.1.711.g2feabab25a-goog
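
Note (not part of the patch): the core of the fallback is the index
arithmetic -- the faulting address is converted into a page offset within
the folio-sized range, so that only that single pte (and the matching page
of the folio) is installed. Below is a minimal standalone sketch of that
computation; the addresses and the 16-page range size are purely
hypothetical illustration values, not taken from the patch.

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	/*
	 * Hypothetical example: a 16-page range starting at 'addr', with
	 * the fault landing on the fifth page of that range.
	 */
	unsigned long addr = 0x7f0000000000UL;		/* start of the pte range */
	unsigned long fault_address = addr + 4 * PAGE_SIZE;

	/*
	 * Same arithmetic as the fallback path: the offset of the faulting
	 * page within the range, used to index vmf->pte and the folio.
	 */
	unsigned long idx = (fault_address - addr) / PAGE_SIZE;

	printf("faulting page index within range: %lu\n", idx);	/* prints 4 */
	return 0;
}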