On Wed, 15 Jan 2025 13:17:01 +0100
Janosch Frank <frankja@xxxxxxxxxxxxx> wrote:

> On 1/8/25 7:14 PM, Claudio Imbrenda wrote:
> > Shadow page tables use page->index to keep the g2 address of the guest
> > page table being shadowed.
> > 
> > Instead of keeping the information in page->index, split the address
> > and smear it over the 16-bit softbits areas of 4 PGSTEs.
> > 
> > This removes the last s390 user of page->index.
> > 
> > Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxx>
> > ---
> >  arch/s390/include/asm/gmap.h    |  1 +
> >  arch/s390/include/asm/pgtable.h | 15 +++++++++++++++
> >  arch/s390/kvm/gaccess.c         |  6 ++++--
> >  arch/s390/mm/gmap.c             | 22 ++++++++++++++++++++--
> >  4 files changed, 40 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
> > index 5ebc65ac78cc..28c5bf097268 100644
> > --- a/arch/s390/include/asm/gmap.h
> > +++ b/arch/s390/include/asm/gmap.h
> > @@ -177,4 +177,5 @@ static inline int s390_uv_destroy_range_interruptible(struct mm_struct *mm, unsi
> >  {
> >  	return __s390_uv_destroy_range(mm, start, end, true);
> >  }
> > +
> 
> Stray \n

yep, I had already noticed it myself (of course _after_ sending the
series)

> >  #endif /* _ASM_S390_GMAP_H */
> > diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> > index 151488bb9ed7..948100a8fa7e 100644
> > --- a/arch/s390/include/asm/pgtable.h
> > +++ b/arch/s390/include/asm/pgtable.h
> > @@ -419,6 +419,7 @@ static inline int is_module_addr(void *addr)
> >  #define PGSTE_HC_BIT	0x0020000000000000UL
> >  #define PGSTE_GR_BIT	0x0004000000000000UL
> >  #define PGSTE_GC_BIT	0x0002000000000000UL
> > +#define PGSTE_ST2_MASK	0x0000ffff00000000UL
> >  #define PGSTE_UC_BIT	0x0000000000008000UL	/* user dirty (migration) */
> >  #define PGSTE_IN_BIT	0x0000000000004000UL	/* IPTE notify bit */
> >  #define PGSTE_VSIE_BIT	0x0000000000002000UL	/* ref'd in a shadow table */
> > @@ -2001,4 +2002,18 @@ extern void s390_reset_cmma(struct mm_struct *mm);
> >  #define pmd_pgtable(pmd) \
> >  	((pgtable_t)__va(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE))
> >  
> > +static inline unsigned long gmap_pgste_get_index(unsigned long *pgt)
> > +{
> > +	unsigned long *pgstes, res;
> > +
> > +	pgstes = pgt + _PAGE_ENTRIES;
> > +
> > +	res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
> > +	res |= pgstes[1] & PGSTE_ST2_MASK;
> > +	res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
> > +	res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
> > +
> > +	return res;
> > +}
> 
> I have to think about that change for a bit before I post an opinion.

it's not pretty, but it can (and will, in upcoming patches) be
generalized to hold arbitrary data in the PGSTEs (up to 512 bytes per
page table)
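
[Editor's note: for reference, the write side of this scheme would look
roughly like the sketch below: split the 64-bit address into four 16-bit
chunks and store each chunk in the ST2 softbits area (bits 32-47, i.e.
PGSTE_ST2_MASK) of one of the first four PGSTEs, mirroring the shifts in
gmap_pgste_get_index() above. This is a minimal sketch only; the helper
name gmap_pgste_set_index and its exact form are assumptions, not taken
from the quoted diff.]

/*
 * Hypothetical inverse of gmap_pgste_get_index() (a sketch, not part of
 * the quoted patch): smear a 64-bit address over the ST2 softbits of the
 * first four PGSTEs.  The PGSTEs start _PAGE_ENTRIES entries after the
 * page table entries themselves.
 */
static inline void gmap_pgste_set_index(unsigned long *pgt, unsigned long addr)
{
	unsigned long *pgstes = pgt + _PAGE_ENTRIES;

	/* clear the previous contents of the ST2 softbits areas */
	pgstes[0] &= ~PGSTE_ST2_MASK;
	pgstes[1] &= ~PGSTE_ST2_MASK;
	pgstes[2] &= ~PGSTE_ST2_MASK;
	pgstes[3] &= ~PGSTE_ST2_MASK;

	/* store 16 bits of the address per PGSTE, most significant first */
	pgstes[0] |= (addr >> 16) & PGSTE_ST2_MASK;
	pgstes[1] |= addr & PGSTE_ST2_MASK;
	pgstes[2] |= (addr << 16) & PGSTE_ST2_MASK;
	pgstes[3] |= (addr << 32) & PGSTE_ST2_MASK;
}

[The capacity figure mentioned above follows from the layout: an s390
page table has 256 entries and thus 256 PGSTEs, and 16 soft bits per
PGSTE give 256 * 2 bytes = 512 bytes of storage per page table.]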