When ptep_get_and_clear_full is called for a mm teardown, we will now
attempt to destroy the secure pages. This is faster than an export.

In case it was not a teardown, or if for some reason the destroy page
UVC failed, we fall back to an export page, as before.

Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxx>
Acked-by: Janosch Frank <frankja@xxxxxxxxxxxxx>
Reviewed-by: Nico Boehr <nrb@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20220628135619.32410-11-imbrenda@xxxxxxxxxxxxx
Message-Id: <20220628135619.32410-11-imbrenda@xxxxxxxxxxxxx>
Signed-off-by: Janosch Frank <frankja@xxxxxxxxxxxxx>
---
 arch/s390/include/asm/pgtable.h | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index f16403ba81ec..cf81acf3879c 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1182,9 +1182,22 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 	} else {
 		res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
 	}
-	/* At this point the reference through the mapping is still present */
-	if (mm_is_protected(mm) && pte_present(res))
-		uv_convert_owned_from_secure(pte_val(res) & PAGE_MASK);
+	/* Nothing to do */
+	if (!mm_is_protected(mm) || !pte_present(res))
+		return res;
+	/*
+	 * At this point the reference through the mapping is still present.
+	 * The notifier should have destroyed all protected vCPUs at this
+	 * point, so the destroy should be successful.
+	 */
+	if (full && !uv_destroy_owned_page(pte_val(res) & PAGE_MASK))
+		return res;
+	/*
+	 * If something went wrong and the page could not be destroyed, or
+	 * if this is not a mm teardown, the slower export is used as
+	 * fallback instead.
+	 */
+	uv_convert_owned_from_secure(pte_val(res) & PAGE_MASK);
 	return res;
 }
 
-- 
2.36.1
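
For illustration only, and not part of the patch: the decision logic the
hunk above adds to the tail of ptep_get_and_clear_full() can be read as
the following standalone sketch. The helper name pv_cleanup_page() is
hypothetical; mm_is_protected(), pte_present(), uv_destroy_owned_page()
and uv_convert_owned_from_secure() are the helpers named in the hunk,
with the destroy/export calls taking the physical page address and
returning 0 on success (as implied by the !uv_destroy_owned_page() test).

	/* Sketch of the new cleanup path; not verbatim kernel code. */
	static inline void pv_cleanup_page(struct mm_struct *mm, pte_t res, int full)
	{
		unsigned long paddr = pte_val(res) & PAGE_MASK;

		/* Only protected guests with a present PTE need any cleanup. */
		if (!mm_is_protected(mm) || !pte_present(res))
			return;
		/* mm teardown (full != 0): try the fast destroy-page UVC first. */
		if (full && !uv_destroy_owned_page(paddr))
			return;
		/* Not a teardown, or the destroy failed: use the slower export. */
		uv_convert_owned_from_secure(paddr);
	}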