Re: [PATCH v8 02/19] arm64: Allow mismatched 32-bit EL0 support

On Fri, Jun 04, 2021 at 10:38:08AM +0100, Mark Rutland wrote:
> On Thu, Jun 03, 2021 at 06:44:14PM +0100, Will Deacon wrote:
> > On Thu, Jun 03, 2021 at 01:37:15PM +0100, Mark Rutland wrote:
> > > On Wed, Jun 02, 2021 at 05:47:02PM +0100, Will Deacon wrote:
> > > > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > > > index 338840c00e8e..603bf4160cd6 100644
> > > > --- a/arch/arm64/include/asm/cpufeature.h
> > > > +++ b/arch/arm64/include/asm/cpufeature.h
> > > > @@ -630,9 +630,15 @@ static inline bool cpu_supports_mixed_endian_el0(void)
> > > >  	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
> > > >  }
> > > >  
> > > > +const struct cpumask *system_32bit_el0_cpumask(void);
> > > > +DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
> > > > +
> > > >  static inline bool system_supports_32bit_el0(void)
> > > >  {
> > > > -	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
> > > > +	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > > > +
> > > > +	return static_branch_unlikely(&arm64_mismatched_32bit_el0) ||
> > > > +	       id_aa64pfr0_32bit_el0(pfr0);
> > > >  }
> > > 
> > > Note that read_sanitised_ftr_reg() has to do a bsearch() to find the
> > > arm64_ftr_reg, so this will make system_32bit_el0_cpumask() a fair
> > > amount more expensive than it needs to be.
> > 
> > I seriously doubt that it matters, but it did come up before and I proposed
> > a potential solution if it's actually a concern:
> > 
> > https://lore.kernel.org/r/20201202172727.GC29813@willie-the-truck
> > 
> > so if you can show that it's a problem, we can resurrect something like
> > that.
> 
> I'm happy to leave that for the future. I raised this because elsewhere
> it's an issue when we need to avoid instrumentation; if that's not a
> concern here on any path, then I'm not aware of a functional issue.

I can't think of a reason why instrumentation would be an issue for any of
the current callers, but that's a good point to bear in mind.

> > > That said. I reckon this could be much cleaner if we maintained separate
> > > caps:
> > > 
> > > ARM64_ALL_CPUS_HAVE_32BIT_EL0
> > > ARM64_SOME_CPUS_HAVE_32BIT_EL0
> > > 
> > > ... and allow arm64_mismatched_32bit_el0 to be set dependent on
> > > ARM64_SOME_CPUS_HAVE_32BIT_EL0. With that, this can be simplified to:
> > > 
> > > static inline bool system_supports_32bit_el0(void)
> > > {
> > > 	return cpus_have_const_cap(ARM64_ALL_CPUS_HAVE_32BIT_EL0) ||
> > > 	       static_branch_unlikely(&arm64_mismatched_32bit_el0);
> > > }
> > 
> > Something similar was discussed in November last year, but this falls
> > apart with late onlining because it's not generally possible to tell
> > whether you've seen all the CPUs or not.
> 
> Ah; is that for when your boot CPU set is all AArch32-capable, but a
> late-onlined CPU is not?
> 
> I assume that we require at least one of the set of boot CPUs to be
> AArch32 capable, and don't settle the compat hwcaps after userspace has
> started.

Heh, you assume wrong :)

When we allow the mismatch, then we do actually defer initialisation of
the compat hwcaps until we see a 32-bit CPU. That's fine, as they won't
be visible to userspace until then anyway (PER_LINUX32 is unavailable).

Will
