Re: [RESEND RFC PATCH v1 2/5] arm64: Add BBM Level 2 cpu feature

Ah, so this is where this is hiding. I missed it in my review of patch
#1 yesterday.

On Wed, 11 Dec 2024 16:01:38 +0000,
Mikołaj Lenczewski <miko.lenczewski@xxxxxxx> wrote:
> 
> The Break-Before-Make cpu feature supports multiple levels (levels 0-2),
> and this commit adds a dedicated BBML2 cpufeature to test against
> support for.
> 
> In supporting BBM level 2, we open ourselves up to potential TLB
> Conflict Abort Exceptions during expected execution, instead of only
> in exceptional circumstances. In the case of an abort, it is
> implementation defined at what stage the abort is generated, and

*IF* stage-2 is enabled. Also, in the case of the EL2&0 translation
regime, no stage-2 applies, so it can only be a stage-1 abort.

> the minimal set of required invalidations is also implementation
> defined. The maximal set of invalidations is to do a `tlbi vmalle1`
> or `tlbi vmalls12e1`, depending on the stage.
> 
> Such aborts should not occur on Arm hardware, and were not seen in
> benchmarked systems, so unless performance concerns arise, implementing

Which systems? Given that you have deny-listed *all* halfway-recent ARM
Ltd implementations, I'm a bit puzzled.

> the abort handlers with the worst-case invalidations seems like an
> alright hack.
> 
> Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@xxxxxxx>
> ---
>  arch/arm64/include/asm/cpufeature.h | 14 ++++++++++++++
>  arch/arm64/kernel/cpufeature.c      |  7 +++++++
>  arch/arm64/mm/fault.c               | 27 ++++++++++++++++++++++++++-
>  arch/arm64/tools/cpucaps            |  1 +
>  4 files changed, 48 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 8b4e5a3cd24c..a9f2ac335392 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -866,6 +866,20 @@ static __always_inline bool system_supports_mpam_hcr(void)
>  	return alternative_has_cap_unlikely(ARM64_MPAM_HCR);
>  }
>  
> +static inline bool system_supports_bbml2(void)
> +{
> +	/* currently, BBM is only relied on by code touching the userspace page
> +	 * tables, and as such we are guaranteed that caps have been finalised.
> +	 *
> +	 * if later we want to use BBM for kernel mappings, particularly early
> +	 * in the kernel, this may return 0 even if BBML2 is actually supported,
> +	 * which means unnecessary break-before-make sequences, but is still
> +	 * correct

Comment style, capitalisation, punctuation.

> +	 */
> +
> +	return alternative_has_cap_unlikely(ARM64_HAS_BBML2);
> +}
> +
>  int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
>  bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
>  
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 6ce71f444ed8..7cc94bd5da24 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2917,6 +2917,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.matches = has_cpuid_feature,
>  		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, EVT, IMP)
>  	},
> +	{
> +		.desc = "BBM Level 2 Support",
> +		.capability = ARM64_HAS_BBML2,
> +		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.matches = has_cpuid_feature,
> +		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, BBM, 2)
> +	},
>  	{
>  		.desc = "52-bit Virtual Addressing for KVM (LPA2)",
>  		.capability = ARM64_HAS_LPA2,
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index ef63651099a9..dc119358cbc1 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -844,6 +844,31 @@ static int do_tag_check_fault(unsigned long far, unsigned long esr,
>  	return 0;
>  }
>  
> +static int do_conflict_abort(unsigned long far, unsigned long esr,
> +			     struct pt_regs *regs)
> +{
> +	if (!system_supports_bbml2())
> +		return do_bad(far, esr, regs);
> +
> +	/* if we receive a TLB conflict abort, we know that there are multiple
> +	 * TLB entries that translate the same address range. the minimum set
> +	 * of invalidations to clear these entries is implementation defined.
> +	 * the maximum set is defined as either tlbi(vmalls12e1) or tlbi(alle1).
> +	 *
> +	 * if el2 is enabled and stage 2 translation enabled, this may be
> +	 * raised as a stage 2 abort. if el2 is enabled but stage 2 translation
> +	 * disabled, or if el2 is disabled, it will be raised as a stage 1
> +	 * abort.
> +	 *
> +	 * local_flush_tlb_all() does a tlbi(vmalle1), which is enough to
> +	 * handle a stage 1 abort.

Same comment about comments.

> +	 */
> +
> +	local_flush_tlb_all();

The elephant in the room: if TLBs are in such a sorry state, what
guarantees we can make it this far?

I honestly don't think you can reliably handle a TLB Conflict abort in
the same translation regime as the original fault, given that we don't
know the scope of that fault. You are probably making an educated
guess that it is good enough on the CPUs you know of, but I don't see
anything in the architecture that indicates the "blast radius" of a
TLB conflict.

Which makes me think that your KVM patch is equally broken on nVHE and
hVHE. Such a fault should probably be handled while at EL2, not after
returning to EL1.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
