Hi,

On Wed, Jun 19, 2024 at 05:45:29PM +0100, Catalin Marinas wrote:
> On Tue, May 28, 2024 at 12:24:57PM +0530, Amit Daniel Kachhap wrote:
> > On 5/3/24 18:31, Joey Gouly wrote:
> > > diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
> > > index 5966ee4a6154..ecb2d18dc4d7 100644
> > > --- a/arch/arm64/include/asm/mman.h
> > > +++ b/arch/arm64/include/asm/mman.h
> > > @@ -7,7 +7,7 @@
> > >  #include <uapi/asm/mman.h>
> > >  
> > >  static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> > > -	unsigned long pkey __always_unused)
> > > +	unsigned long pkey)
> > >  {
> > >  	unsigned long ret = 0;
> > >  
> > > @@ -17,6 +17,12 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> > >  	if (system_supports_mte() && (prot & PROT_MTE))
> > >  		ret |= VM_MTE;
> > >  
> > > +#if defined(CONFIG_ARCH_HAS_PKEYS)
> > 
> > Should there be system_supports_poe() check like above?
> 
> I think it should, otherwise we end up with these bits in the pte even
> when POE is not supported.

I think it can't get here due to the flow of the code, but I will add it
to be defensive (since it's just an alternative that gets patched).

I still need the defined(CONFIG_ARCH_HAS_PKEYS) check, since the
VM_PKEY_BIT* are only defined then.

> > > +	ret |= pkey & 0x1 ? VM_PKEY_BIT0 : 0;
> > > +	ret |= pkey & 0x2 ? VM_PKEY_BIT1 : 0;
> > > +	ret |= pkey & 0x4 ? VM_PKEY_BIT2 : 0;
> > > +#endif
> > > +
> > >  	return ret;
> > >  }
> > >  
> > >  #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)

Thanks,
Joey