Re: [PATCH v4 02/14] arm64: Allow mismatched 32-bit EL0 support

On Fri, Nov 27, 2020 at 10:25:06AM +0000, Marc Zyngier wrote:
> On 2020-11-24 15:50, Will Deacon wrote:
> > When confronted with a mixture of CPUs, some of which support 32-bit
> > applications and others which don't, we quite sensibly treat the system
> > as 64-bit only for userspace and prevent execve() of 32-bit binaries.
> > 
> > Unfortunately, some crazy folks have decided to build systems like this
> > with the intention of running 32-bit applications, so relax our
> > sanitisation logic to continue to advertise 32-bit support to userspace
> > on these systems and track the real 32-bit capable cores in a cpumask
> > instead. For now, the default behaviour remains unchanged, but it
> > will be tied to a command-line option in a later patch.
> > 
> > Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> > ---
> >  arch/arm64/include/asm/cpucaps.h    |   2 +-
> >  arch/arm64/include/asm/cpufeature.h |   8 ++-
> >  arch/arm64/kernel/cpufeature.c      | 106 ++++++++++++++++++++++++++--
> >  3 files changed, 107 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> > index e7d98997c09c..e6f0eb4643a0 100644
> > --- a/arch/arm64/include/asm/cpucaps.h
> > +++ b/arch/arm64/include/asm/cpucaps.h
> > @@ -20,7 +20,7 @@
> >  #define ARM64_ALT_PAN_NOT_UAO			10
> >  #define ARM64_HAS_VIRT_HOST_EXTN		11
> >  #define ARM64_WORKAROUND_CAVIUM_27456		12
> > -#define ARM64_HAS_32BIT_EL0			13
> > +#define ARM64_HAS_32BIT_EL0_DO_NOT_USE		13
> >  #define ARM64_HARDEN_EL2_VECTORS		14
> >  #define ARM64_HAS_CNP				15
> >  #define ARM64_HAS_NO_FPSIMD			16
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index 97244d4feca9..f447d313a9c5 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -604,9 +604,15 @@ static inline bool cpu_supports_mixed_endian_el0(void)
> >  	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
> >  }
> > 
> > +const struct cpumask *system_32bit_el0_cpumask(void);
> > +DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
> > +
> >  static inline bool system_supports_32bit_el0(void)
> >  {
> > -	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
> > +	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > +
> > +	return id_aa64pfr0_32bit_el0(pfr0) ||
> > +	       static_branch_unlikely(&arm64_mismatched_32bit_el0);
> 
> nit: swapping the two sides of this expression has the potential
> to generate slightly better code, resulting in better performance on
> these asymmetric systems. Not a big deal, but since this lands
> on the fast path on vcpu exit, I'll take every bit of optimisation.

I'll swap 'em, thanks.
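
Presumably something along these lines (untested sketch, using only the
helpers from the hunk above, with the register read folded into the
fallback so the mismatched case never touches it):

static inline bool system_supports_32bit_el0(void)
{
	/*
	 * On mismatched systems the static key is enabled, so the
	 * patched branch returns true without reading the sanitised
	 * ID register; symmetric systems fall through to the
	 * ID_AA64PFR0_EL1 check as before.
	 */
	return static_branch_unlikely(&arm64_mismatched_32bit_el0) ||
	       id_aa64pfr0_32bit_el0(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1));
}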

Will


