Re: [PATCH v3 5/7] KVM: arm64: Allow user to set CCSIDR_EL1

On Sun, 18 Dec 2022 05:14:10 +0000,
Akihiko Odaki <akihiko.odaki@xxxxxxxxxx> wrote:
> 
> Allow the userspace to set CCSIDR_EL1 so that if the kernel changes the
> default values of CCSIDR_EL1, the userspace can restore the old values
> from an old saved VM context.
> 
> Suggested-by: Marc Zyngier <maz@xxxxxxxxxx>
> Signed-off-by: Akihiko Odaki <akihiko.odaki@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_host.h |   3 +
>  arch/arm64/kvm/reset.c            |   1 +
>  arch/arm64/kvm/sys_regs.c         | 116 ++++++++++++++++++++----------
>  3 files changed, 83 insertions(+), 37 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index cc2ede0eaed4..cfc6930efe1b 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -417,6 +417,9 @@ struct kvm_vcpu_arch {
>  		u64 last_steal;
>  		gpa_t base;
>  	} steal;
> +
> +	/* Per-vcpu CCSIDR override or NULL */
> +	u32 *ccsidr;
>  };
>  
>  /*
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 5ae18472205a..7980983dbad7 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -157,6 +157,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
>  	if (sve_state)
>  		kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
>  	kfree(sve_state);
> +	kfree(vcpu->arch.ccsidr);
>  }
>  
>  static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f4a7c5abcbca..f48a3cc38d24 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -87,11 +87,27 @@ static u32 cache_levels;
>  /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
>  #define CSSELR_MAX 14
>  
> +static u8 get_min_cache_line_size(u32 csselr)
> +{
> +	u64 ctr_el0;
> +	int field;
> +
> +	ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
> +	field = csselr & CSSELR_EL1_InD ? CTR_EL0_IminLine_SHIFT : CTR_EL0_DminLine_SHIFT;
> +
> +	return cpuid_feature_extract_unsigned_field(ctr_el0, field) - 2;
> +}
> +
>  /* Which cache CCSIDR represents depends on CSSELR value. */
> -static u32 get_ccsidr(u32 csselr)
> +static u32 get_ccsidr(struct kvm_vcpu *vcpu, u32 csselr)
>  {
> +	u32 ccsidr_index = csselr & (CSSELR_EL1_Level | CSSELR_EL1_InD);
>  	u32 ccsidr;
>  
> +	if (vcpu->arch.ccsidr && is_valid_cache(ccsidr_index) &&
> +	    !(kvm_has_mte(vcpu->kvm) && (csselr & CSSELR_EL1_TnD)))
> +		return vcpu->arch.ccsidr[ccsidr_index];
> +

I really don't understand this logic. If the requested cache level is
invalid, or the MTE setup doesn't match, you return something that is
part of the HW hierarchy, despite having a userspace-provided
hierarchy.

The other problem I can see here is that you're still relying on the
host CLIDR_EL1 (aka cache_levels), while a restored guest cache
hierarchy must come with its own CLIDR_EL1. Otherwise, you cannot
really evaluate the validity of that hierarchy, nor return consistent
results.

I was expecting something like (totally untested, but you'll get what
I mean):

	if (vcpu->arch.ccsidr) {
		if (!is_valid_cache(vcpu, csselr))
			return 0; // UNKNOWN value

		return vcpu->arch.ccsidr[ccsidr_index];
	}

and with is_valid_cache() written as:

bool is_valid_cache(struct kvm_vcpu *vcpu, u64 csselr)
{
	u64 clidr = __vcpu_sys_reg(vcpu, CLIDR_EL1);
	u64 idx = FIELD_GET(CSSELR_EL1_Level, csselr);
	u64 ttype = (clidr >> (CLIDR_EL1_Ttypen_SHIFT + idx * 2)) & 0x3;
	u64 ctype = (clidr >> (idx * 3)) & CLIDR_EL1_Ctype1;

	// !MTE or InD make TnD RES0
	if (!kvm_has_mte(vcpu->kvm) || (csselr & CSSELR_EL1_InD))
		csselr &= ~CSSELR_EL1_TnD;

	// If TnD is set, the cache level must be purely for tags
	if (csselr & CSSELR_EL1_TnD)
		return (ttype == 0b01);

	// Otherwise, check for a match against the InD value
	switch (ctype) {
	case 0: /* No cache */
		return false;
	case 1: /* Instruction cache only */
		return (csselr & CSSELR_EL1_InD);
	case 2: /* Data cache only */
	case 4: /* Unified cache */
		return !(csselr & CSSELR_EL1_InD);
	case 3: /* Separate instruction and data caches */
		return true;
	default: /* Reserved: we can't know instruction or data. */
		return false;
	}
}

which implies that CLIDR_EL1 isn't an invariant anymore. You have that
in your last patch, but it needs to be brought into this one.

It should be validated on userspace write, making sure that
LoU/LoUIS/LoC are compatible with the state of CTR+FWB+CLIDR on the
host.
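
Something along those lines, maybe (again completely untested, and
assuming helpers such as CLIDR_LOC()/CLIDR_LOUIS()/CLIDR_LOUU() that
extract the corresponding CLIDR_EL1 fields; the FWB angle is left out
for brevity):

static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
		     u64 val)
{
	u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
	/*
	 * A hierarchy claiming that nothing needs cleaning to the PoU
	 * (LoUU == LoUIS == 0, or no cache at all) can only be
	 * accepted if the host advertises CTR_EL0.IDC.
	 */
	u64 idc = !CLIDR_LOC(val) || (!CLIDR_LOUIS(val) && !CLIDR_LOUU(val));

	if ((val & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc))
		return -EINVAL;

	__vcpu_sys_reg(vcpu, rd->reg) = val;

	return 0;
}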

And then cache_levels disappears totally here.

[...]

> +static int set_ccsidr(struct kvm_vcpu *vcpu, u32 csselr, u32 val)
> +{
> +	u8 line_size = (val & CCSIDR_EL1_LineSize) >> CCSIDR_EL1_LineSize_SHIFT;

Better written as

	u8 line_size = FIELD_GET(CCSIDR_EL1_LineSize, val);

> +	u32 *ccsidr = vcpu->arch.ccsidr;
> +	u32 i;
> +
> +	if ((val & CCSIDR_EL1_RES0) || line_size < get_min_cache_line_size(csselr))
> +		return -EINVAL;
> +
> +	if (!ccsidr) {
> +		if (val == get_ccsidr(vcpu, csselr))
> +			return 0;
> +
> +		ccsidr = kmalloc_array(CSSELR_MAX, sizeof(u32), GFP_KERNEL);
> +		if (!ccsidr)
> +			return -ENOMEM;
> +
> +		for (i = 0; i < CSSELR_MAX; i++)
> +			if (is_valid_cache(i))
> +				ccsidr[i] = get_ccsidr(vcpu, i);
> +
> +		vcpu->arch.ccsidr = ccsidr;
> +	}
> +
> +	ccsidr[csselr] = val;
> +
> +	return 0;
> +}

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


