On Fri, Nov 30, 2018 at 05:59:59PM +0000, Catalin Marinas wrote:
> On Wed, Nov 14, 2018 at 01:39:19PM +0000, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 50b1ef8584c0..19736520b724 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -616,11 +616,21 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
> >  #define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd))
> >  
> >  /* to find an entry in a page-table-directory */
> > -#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> > +#define pgd_index(addr, ptrs) (((addr) >> PGDIR_SHIFT) & ((ptrs) - 1))
> > +#define _pgd_offset_raw(pgd, addr, ptrs) ((pgd) + pgd_index(addr, ptrs))
> > +#define pgd_offset_raw(pgd, addr) (_pgd_offset_raw(pgd, addr, PTRS_PER_PGD))
> >  
> > -#define pgd_offset_raw(pgd, addr) ((pgd) + pgd_index(addr))
> > +static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
> > +{
> > +	pgd_t *ret;
> > +
> > +	if (IS_ENABLED(CONFIG_ARM64_52BIT_VA) && (mm != &init_mm))
> > +		ret = _pgd_offset_raw(mm->pgd, addr, 1ULL << (vabits_user - PGDIR_SHIFT));
> 
> I think we can make this a constant since the additional 4 bits of the
> user address should be 0 on a 48-bit VA. Once we get the 52-bit kernel
> VA supported, we can probably revert back to a single macro.

Yeah, I see what you mean.

> Another option is to change PTRS_PER_PGD etc. to cover the whole
> 52-bit, including the swapper_pg_dir, but with offsetting the TTBR1_EL1
> setting to keep the 48-bit kernel VA (for the time being).

I've got a 52-bit PTRS_PER_PGD working now. I will clean things up, run
more tests and then post.

Cheers,
-- 
Steve
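
[Editorial sketch appended to this thread digest: a small userspace program, not kernel code, illustrating the pgd_index() arithmetic discussed above. It assumes 64K pages (PGDIR_SHIFT of 42, the granule where 52-bit VAs apply); the example addresses and the final TTBR1 offset calculation are only meant to show the idea, not the exact values used by the eventual patches.]

/*
 * Illustration of indexing a pgd with 48-bit vs 52-bit PTRS_PER_PGD.
 * Assumes 64K pages / PGDIR_SHIFT == 42; addresses are made up.
 */
#include <stdio.h>

#define PGDIR_SHIFT		42
#define PTRS_PER_PGD_48		(1UL << (48 - PGDIR_SHIFT))	/* 64   */
#define PTRS_PER_PGD_52		(1UL << (52 - PGDIR_SHIFT))	/* 1024 */

/* Mirrors pgd_index(addr, ptrs) from the patch quoted above. */
static unsigned long pgd_index(unsigned long addr, unsigned long ptrs)
{
	return (addr >> PGDIR_SHIFT) & (ptrs - 1);
}

int main(void)
{
	unsigned long addr48 = 0x00007fffdead0000UL;	/* fits in 48 bits */
	unsigned long addr52 = 0x000bffffdead0000UL;	/* needs 52 bits   */

	/*
	 * With bits 51:48 clear, the wider 52-bit mask selects the same pgd
	 * slot as the 48-bit one, which is why a single constant can serve
	 * both 48-bit and 52-bit user tasks.
	 */
	printf("48-bit addr: idx48=%lu idx52=%lu\n",
	       pgd_index(addr48, PTRS_PER_PGD_48),
	       pgd_index(addr48, PTRS_PER_PGD_52));

	/* A genuine 52-bit address only lands correctly with the wider mask. */
	printf("52-bit addr: idx48=%lu idx52=%lu\n",
	       pgd_index(addr52, PTRS_PER_PGD_48),
	       pgd_index(addr52, PTRS_PER_PGD_52));

	/*
	 * If PTRS_PER_PGD grows to the 52-bit size while the kernel VA stays
	 * at 48 bits, kernel addresses index only the top 64 entries of the
	 * larger table.  One way to keep the hardware walker in step, as
	 * suggested above, is to offset the TTBR1_EL1 base by the unused
	 * entries (8 bytes per pgd entry).
	 */
	printf("TTBR1 offset: %lu bytes\n",
	       (PTRS_PER_PGD_52 - PTRS_PER_PGD_48) * 8UL);

	return 0;
}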