As soon as storage keys are enabled we need to stop working on zero page
mappings to prevent inconsistencies between storage keys and pgste.

Otherwise the following data corruption could happen:

1) guest enables storage keys
2) guest sets the storage key for a not yet mapped page X
   -> the change goes to the PGSTE
3) guest reads from page X
   -> as X was not dirty before, the page will be backed by the zero
      page, and the storage key from the PGSTE for X will go to the
      storage key of the zero page
4) guest sets the storage key for a not yet mapped page Y
   (same logic as above)
5) guest reads from page Y
   -> as Y was not dirty before, the page will be backed by the zero
      page, and the storage key from the PGSTE for Y will go to the
      storage key of the zero page, overwriting the storage key for X

While holding the mmap_sem, we are safe against changes on entries we
have already fixed, as every fault would need to take the mmap_sem
(read). As sske and host large pages are also mutually exclusive, we do
not even need to retry the fixup_user_fault.

As use_skey is already the condition on which we call s390_enable_skey,
we need to introduce a new flag in mm->context on which we decide
whether zero page mappings are allowed.

Signed-off-by: Dominik Dingel <dingel@xxxxxxxxxxxxxxxxxx>
---
 arch/s390/include/asm/mmu.h     |  2 ++
 arch/s390/include/asm/pgtable.h | 14 ++++++++++++++
 arch/s390/mm/pgtable.c          | 12 ++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/arch/s390/include/asm/mmu.h b/arch/s390/include/asm/mmu.h
index a5e6562..0f38469 100644
--- a/arch/s390/include/asm/mmu.h
+++ b/arch/s390/include/asm/mmu.h
@@ -18,6 +18,8 @@ typedef struct {
 	unsigned int has_pgste:1;
 	/* The mmu context uses storage keys. */
 	unsigned int use_skey:1;
+	/* The mmu context forbids zeropage mappings. */
+	unsigned int forbids_zeropage:1;
 } mm_context_t;
 
 #define INIT_MM_CONTEXT(name) \
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 1e991f6a..fe3cfdf 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -481,6 +481,20 @@ static inline int mm_has_pgste(struct mm_struct *mm)
 	return 0;
 }
 
+/*
+ * In the case that a guest uses storage keys,
+ * faults should no longer be backed by zero pages
+ */
+#define mm_forbids_zeropage mm_forbids_zeropage
+static inline int mm_forbids_zeropage(struct mm_struct *mm)
+{
+#ifdef CONFIG_PGSTE
+	if (mm->context.forbids_zeropage)
+		return 1;
+#endif
+	return 0;
+}
+
 static inline int mm_use_skey(struct mm_struct *mm)
 {
 #ifdef CONFIG_PGSTE
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index ab55ba8..1e06fbc 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -1309,6 +1309,15 @@ static int __s390_enable_skey(pte_t *pte, unsigned long addr,
 	pgste_t pgste;
 
 	pgste = pgste_get_lock(pte);
+	/*
+	 * Remove all zero page mappings; after establishing a policy
+	 * to forbid zero page mappings, following faults for that page
+	 * will get fresh anonymous pages.
+	 */
+	if (is_zero_pfn(pte_pfn(*pte))) {
+		ptep_flush_direct(walk->mm, addr, pte);
+		pte_val(*pte) = _PAGE_INVALID;
+	}
 	/* Clear storage key */
 	pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT |
 			      PGSTE_GR_BIT | PGSTE_GC_BIT);
@@ -1327,6 +1336,9 @@ void s390_enable_skey(void)
 	down_write(&mm->mmap_sem);
 	if (mm_use_skey(mm))
 		goto out_up;
+
+	mm->context.forbids_zeropage = 1;
+
 	walk.mm = mm;
 	walk_page_range(0, TASK_SIZE, &walk);
 	mm->context.use_skey = 1;
-- 
1.8.5.5
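
For reference, the mm_forbids_zeropage() override above only has an
effect together with the common-code side that consults it before
handing out the shared zero page. A minimal sketch of how that consumer
side is expected to look (this assumes the companion common-code patch
adding a default no-op to include/linux/mm.h and a check in
do_anonymous_page(); it is not part of this patch):

	/* include/linux/mm.h: default when an architecture does not
	 * provide its own mm_forbids_zeropage() */
	#ifndef mm_forbids_zeropage
	#define mm_forbids_zeropage(X)	(0)
	#endif

	/* mm/memory.c, do_anonymous_page(): hand out the shared zero
	 * page for a read fault only when the mm does not forbid it */
	if (!(flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(mm)) {
		entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
					      vma->vm_page_prot));
		...
	}

With that check in place, once forbids_zeropage is set and the page walk
above has invalidated the existing zero page mappings, every later read
fault allocates a fresh anonymous page, so each guest page keeps its own
storage key.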