Re: [PATCH V5 5/7] arm64: mm: Prevent mismatched 52-bit VA support

On Mon, Dec 10, 2018 at 01:36:40PM +0000, Will Deacon wrote:
> On Fri, Dec 07, 2018 at 05:28:58PM +0000, Suzuki K Poulose wrote:
> > 
> > 
> > On 07/12/2018 15:26, Will Deacon wrote:
> > > On Fri, Dec 07, 2018 at 10:47:57AM +0000, Suzuki K Poulose wrote:
> > > > On 12/06/2018 10:50 PM, Steve Capper wrote:
> > > > > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > > > > index f60081be9a1b..58fcc1edd852 100644
> > > > > --- a/arch/arm64/kernel/head.S
> > > > > +++ b/arch/arm64/kernel/head.S
> > > > > @@ -707,6 +707,7 @@ secondary_startup:
> > > > >    	/*
> > > > >    	 * Common entry point for secondary CPUs.
> > > > >    	 */
> > > > > +	bl	__cpu_secondary_check52bitva
> > > > >    	bl	__cpu_setup			// initialise processor
> > > > >    	adrp	x1, swapper_pg_dir
> > > > >    	bl	__enable_mmu
> > > > > @@ -785,6 +786,31 @@ ENTRY(__enable_mmu)
> > > > >    	ret
> > > > >    ENDPROC(__enable_mmu)
> > > > > +ENTRY(__cpu_secondary_check52bitva)
> > > > > +#ifdef CONFIG_ARM64_52BIT_VA
> > > > > +	ldr_l	x0, vabits_user
> > > > > +	cmp	x0, #52
> > > > > +	b.ne	2f
> > > > > +
> > > > > +	mrs_s	x0, SYS_ID_AA64MMFR2_EL1
> > > > > +	and	x0, x0, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
> > > > > +	cbnz	x0, 2f
> > > > > +
> > > > > +	adr_l	x0, va52mismatch
> > > > > +	mov	w1, #1
> > > > > +	strb	w1, [x0]
> > > > > +	dmb	sy
> > > > > +	dc	ivac, x0	// Invalidate potentially stale cache line
> > > > 
> > > > You may have to clear this variable before a CPU is brought up, to
> > > > avoid raising a false error message when another secondary CPU fails
> > > > to boot for some other reason (say, lack of granule support) after a
> > > > CPU has already failed due to missing 52-bit VA support. It is really
> > > > a crazy corner case.
> > > 
> > > Can't we just follow the example set by the EL2 setup in the way that is
> > > uses __boot_cpu_mode? In that case, we only need one variable and you can
> > > detect a problem by comparing the two halves.
> > 
> > The only difference here is that the support is committed to at boot-CPU
> > time, so we need to verify each and every CPU, unlike __boot_cpu_mode,
> > where we check for a mismatch only after the secondary CPUs have been
> > brought up. If we decide to make the choice later, something like that
> > could work. The only caveat is that a 52-bit kernel VA will have to do
> > something like the above.
> 
> So looking at this a bit more, I think we're better off repurposing the
> upper bits of the early boot status word to contain a reason code, rather
> than introducing new variables for every possible mismatch.
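> 
> For illustration, the encoding Will describes might look something like
> the sketch below: the low bits keep the existing boot status value and
> the upper bits carry a reason code. The macro names and field widths
> here are assumptions for the example, not the actual arm64 definitions.
> 
> ```c
> #include <assert.h>
> #include <stdint.h>
> 
> /*
>  * Hypothetical layout of a 64-bit early boot status word:
>  *   bits [31:0]  - boot status value (e.g. "stuck in kernel")
>  *   bits [63:32] - reason code explaining why the CPU is stuck
>  */
> #define BOOT_STATUS_MASK	0xffffffffUL
> #define BOOT_REASON_SHIFT	32
> 
> #define CPU_STUCK_IN_KERNEL	0x3UL	/* illustrative status value */
> #define REASON_52_BIT_VA	(1UL << 0)	/* illustrative reason bit */
> 
> static uint64_t set_stuck_reason(uint64_t status, uint64_t reason)
> {
> 	/* Preserve the status field, stash the reason in the upper half. */
> 	return (status & BOOT_STATUS_MASK) | (reason << BOOT_REASON_SHIFT);
> }
> 
> int main(void)
> {
> 	uint64_t status = set_stuck_reason(CPU_STUCK_IN_KERNEL,
> 					   REASON_52_BIT_VA);
> 
> 	/* Both halves can be recovered independently by the boot CPU. */
> 	assert((status & BOOT_STATUS_MASK) == CPU_STUCK_IN_KERNEL);
> 	assert((status >> BOOT_REASON_SHIFT) == REASON_52_BIT_VA);
> 	return 0;
> }
> ```
> 
> The appeal of this scheme is exactly what Will notes above: one status
> word covers every mismatch, and new failure causes only need a new
> reason code rather than a new per-cause variable.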
> 
> Does the untested diff below look remotely sane to you?
> 
> Will
> 

Thanks Will,
This looks good to me, I will test now and fold this into a patch.

Cheers,
-- 
Steve




