On s390x, we actually need a pte_mkyoung() / pte_mkdirty() instead of
going via the page and leaving the PTE unmodified. E.g., if we only mark
the page accessed via mark_page_accessed() when doing a FOLL_TOUCH, we'll
fail to clear the HW invalid bit in the pte and subsequent accesses via
the MMU would still require a pagefault.

Otherwise, buffered I/O will loop forever because it will keep stumbling
over the set HW invalid bit, requiring a page fault.

Reported-by: Andreas Gruenbacher <agruenba@xxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
 mm/gup.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a9d4d724aef7..de3311feb377 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -587,15 +587,33 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		}
 	}
 	if (flags & FOLL_TOUCH) {
-		if ((flags & FOLL_WRITE) &&
-		    !pte_dirty(pte) && !PageDirty(page))
-			set_page_dirty(page);
 		/*
-		 * pte_mkyoung() would be more correct here, but atomic care
-		 * is needed to avoid losing the dirty bit: it is easier to use
-		 * mark_page_accessed().
+		 * We have to be careful with updating the PTE on architectures
+		 * that have a HW dirty bit: while updating the PTE we might
+		 * lose that bit again and we'd need an atomic update: it is
+		 * easier to leave the PTE untouched for these architectures.
+		 *
+		 * s390x doesn't have a hw referenced / dirty bit and e.g., sets
+		 * the hw invalid bit in pte_mkold(), to catch further
+		 * references. We have to update the PTE here to e.g., clear the
+		 * invalid bit; otherwise, callers that rely on not requiring
+		 * an MMU fault once GUP(FOLL_TOUCH) succeeded will loop forever
+		 * because the page won't actually be accessible via the MMU.
 		 */
-		mark_page_accessed(page);
+		if (IS_ENABLED(CONFIG_S390)) {
+			pte = pte_mkyoung(pte);
+			if (flags & FOLL_WRITE)
+				pte = pte_mkdirty(pte);
+			if (!pte_same(pte, *ptep)) {
+				set_pte_at(vma->vm_mm, address, ptep, pte);
+				update_mmu_cache(vma, address, ptep);
+			}
+		} else {
+			if ((flags & FOLL_WRITE) &&
+			    !pte_dirty(pte) && !PageDirty(page))
+				set_page_dirty(page);
+			mark_page_accessed(page);
+		}
 	}
 	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
 		/* Do not mlock pte-mapped THP */
-- 
2.35.1


-- 
Thanks,

David / dhildenb
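
P.S.: for readers unfamiliar with the s390x quirk, below is a minimal,
self-contained user-space sketch of the failure mode described above. It
is only an illustration under simplified assumptions: toy_pte,
toy_pte_mkyoung(), mmu_access_ok() and PTE_INVALID are invented
stand-ins, not the real kernel helpers, and a single bit models the HW
invalid bit that s390x's pte_mkold() sets.

/*
 * Toy model (plain C, compile with "gcc sketch.c"): a FOLL_TOUCH that
 * only marks the struct page accessed leaves the (modeled) PTE invalid,
 * so an access through the MMU keeps faulting; making the PTE young
 * clears the invalid bit and the access finally succeeds.
 */
#include <stdbool.h>
#include <stdio.h>

#define PTE_INVALID 0x1u	/* models the s390x HW invalid bit */

struct toy_pte { unsigned int bits; };

/* models what pte_mkyoung() does on s390x: clear the HW invalid bit */
static struct toy_pte toy_pte_mkyoung(struct toy_pte pte)
{
	pte.bits &= ~PTE_INVALID;
	return pte;
}

/* models an access via the MMU: it faults while the invalid bit is set */
static bool mmu_access_ok(struct toy_pte pte)
{
	return !(pte.bits & PTE_INVALID);
}

int main(void)
{
	/* state after pte_mkold(): the HW invalid bit is set */
	struct toy_pte pte = { .bits = PTE_INVALID };

	/*
	 * Old FOLL_TOUCH behavior: only the struct page is touched
	 * (mark_page_accessed(page)); the PTE itself stays invalid.
	 */
	printf("without PTE update: access %s\n",
	       mmu_access_ok(pte) ? "succeeds" : "faults again");

	/* Patched behavior on s390x: update the PTE itself. */
	pte = toy_pte_mkyoung(pte);
	printf("with pte_mkyoung():  access %s\n",
	       mmu_access_ok(pte) ? "succeeds" : "faults again");
	return 0;
}

Run as-is, the first access keeps faulting while the second succeeds,
which is the loop-forever vs. make-progress distinction the patch is
about.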