On Wed, Nov 14, 2018 at 01:39:19PM +0000, Steve Capper wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 50b1ef8584c0..19736520b724 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -616,11 +616,21 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
>  #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
>  
>  /* to find an entry in a page-table-directory */
> -#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> +#define pgd_index(addr, ptrs)	(((addr) >> PGDIR_SHIFT) & ((ptrs) - 1))
> +#define _pgd_offset_raw(pgd, addr, ptrs)	((pgd) + pgd_index(addr, ptrs))
> +#define pgd_offset_raw(pgd, addr)	(_pgd_offset_raw(pgd, addr, PTRS_PER_PGD))
>  
> -#define pgd_offset_raw(pgd, addr)	((pgd) + pgd_index(addr))
> +static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
> +{
> +	pgd_t *ret;
> +
> +	if (IS_ENABLED(CONFIG_ARM64_52BIT_VA) && (mm != &init_mm))
> +		ret = _pgd_offset_raw(mm->pgd, addr,
> +				      1ULL << (vabits_user - PGDIR_SHIFT));

I think we can make this a constant, since the additional 4 bits of the
user address should be 0 on a 48-bit VA.

Once we gain 52-bit kernel VA support, we can probably revert to a single
macro. Another option is to change PTRS_PER_PGD etc. to cover the whole
52-bit range, including swapper_pg_dir, while offsetting the TTBR1_EL1
setting to keep the 48-bit kernel VA (for the time being).

-- 
Catalin