On Tue, 9 May 2023 22:01:16 -0700 (PDT) Hugh Dickins <hughd@xxxxxxxxxx> wrote:
> In rare transient cases, not yet made possible, pte_offset_map() and
> pte_offset_map_lock() may not find a page table: handle appropriately.
>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> ---
>  arch/s390/kernel/uv.c  |  2 ++
>  arch/s390/mm/gmap.c    |  2 ++
>  arch/s390/mm/pgtable.c | 12 +++++++++---
>  3 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index cb2ee06df286..3c62d1b218b1 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -294,6 +294,8 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
>  	rc = -ENXIO;
>  	ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
> +	if (!ptep)
> +		goto out;
>  	if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
>  		page = pte_page(*ptep);
>  		rc = -EAGAIN;
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index dc90d1eb0d55..d198fc9475a2 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2549,6 +2549,8 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
>  		spinlock_t *ptl;
>
>  		ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +		if (!ptep)
> +			break;
So if pte_offset_map_lock() fails, we abort and skip both the failed entry and all the remaining entries?

Can pte_offset_map_lock() be retried immediately if it fails? (Consider that we currently don't allow THP with KVM guests.) Would something like this make sense?

	do {
		ptep = pte_offset_map_lock(...);
		mb();	/* maybe? */
	} while (!ptep);

Otherwise maybe it's better to return an error and retry the whole walk_page_range() in s390_enable_sie()? It's a slow path anyway; see the rough sketch below.
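To make that last option concrete, here is a minimal, untested sketch (not part of Hugh's patch: the -EAGAIN return value, the rc local and the retry loop in s390_enable_sie() are only illustrative assumptions), relying on walk_page_range() stopping the walk and propagating a non-zero return from the pmd_entry callback:

static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
			    unsigned long end, struct mm_walk *walk)
{
	unsigned long addr;

	for (addr = start; addr != end; addr += PAGE_SIZE) {
		pte_t *ptep;
		spinlock_t *ptl;

		ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
		if (!ptep)
			return -EAGAIN;	/* stop the walk; caller retries */
		if (is_zero_pfn(pte_pfn(*ptep)))
			ptep_xchg_direct(walk->mm, addr, ptep,
					 __pte(_PAGE_INVALID));
		pte_unmap_unlock(ptep, ptl);
	}
	return 0;
}

and then in s390_enable_sie():

	int rc;

	/* redo the whole (slow-path) walk if a page table vanished under us */
	do {
		rc = walk_page_range(mm, 0, TASK_SIZE, &zap_zero_walk_ops, NULL);
	} while (rc == -EAGAIN);

That would keep the common case untouched and only loop in the rare transient case the commit message describes.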
>  		if (is_zero_pfn(pte_pfn(*ptep)))
>  			ptep_xchg_direct(walk->mm, addr, ptep, __pte(_PAGE_INVALID));
>  		pte_unmap_unlock(ptep, ptl);
[...]