On 2022-11-08 05:36, Reiji Watanabe wrote:
> Hi Marc,
>
> > > BTW, if we have no intention of supporting a mix of vCPUs with and
> > > without PMU, I think it would be nice if we have a clear comment on
> > > that in the code. Or I'm hoping to disallow it if possible though.
>
> > I'm not sure we're in a position to do this right now. The current API
> > has always (for good or bad reasons) been per-vcpu, as it is tied to
> > the vcpu initialisation.
> Thank you for your comments!
>
> Then, for a guest that has a mix of vCPUs with and without a PMU,
> userspace can set kvm->arch.dfr0_pmuver to zero or IMPDEF, and the
> PMUVER for vCPUs with a PMU will become 0 or IMPDEF, as I mentioned.
>
> For instance, on a host with PMUVER==1, suppose vCPU#0 has no PMU
> (PMUVER==0) and vCPU#1 has a PMU (PMUVER==1). If the guest is migrated
> to another host with the same CPU features (PMUVER==1), and SET_ONE_REG
> of ID_AA64DFR0_EL1 for vCPU#0 is done after the one for vCPU#1,
> kvm->arch.dfr0_pmuver will be set to 0, and the guest will see
> PMUVER==0 even for vCPU#1.
>
> Should we be concerned about this case?
Yeah, this is a real problem. The issue is that we want to keep
track of two separate bits of information:

- what is the revision of the PMU when a PMU is supported?
- what PMUVer do we expose when the PMU is unsupported or IMPDEF?

and we use the same field for both, which clearly cannot work
if we allow vcpus with and without PMUs in the same VM.
I've now switched to an implementation where I track both
the architected version as well as the version exposed when
no PMU is supported, see below.
We still cannot track both no-PMU *and* impdef-PMU, nor can we
track multiple PMU revisions. But that's not a thing as far as
I am concerned.
Thanks,
M.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 90c9a2dd3f26..cc44e3bc528d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -163,7 +163,10 @@ struct kvm_arch {
 
 	u8 pfr0_csv2;
 	u8 pfr0_csv3;
-	u8 dfr0_pmuver;
+	struct {
+		u8 imp:4;
+		u8 unimp:4;
+	} dfr0_pmuver;
 
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 6b3ed524630d..f956aab438c7 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -168,7 +168,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	 * Initialise the default PMUver before there is a chance to
 	 * create an actual PMU.
 	 */
-	kvm->arch.dfr0_pmuver = kvm_arm_pmu_get_pmuver_limit();
+	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
 
 	return ret;
 
 out_free_stage2_pgd:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 95100896de72..615cb148e22a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1069,14 +1069,9 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_has_pmu(vcpu))
-		return vcpu->kvm->arch.dfr0_pmuver;
+		return vcpu->kvm->arch.dfr0_pmuver.imp;
 
-	/* Special case for IMPDEF PMUs that KVM has exposed in the past... */
-	if (vcpu->kvm->arch.dfr0_pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
-		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
-
-	/* The real "no PMU" */
-	return 0;
+	return vcpu->kvm->arch.dfr0_pmuver.unimp;
 }
 
 static u8 perfmon_to_pmuver(u8 perfmon)
static u8 perfmon_to_pmuver(u8 perfmon)
@@ -1295,7 +1290,10 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	if (val)
 		return -EINVAL;
 
-	vcpu->kvm->arch.dfr0_pmuver = pmuver;
+	if (valid_pmu)
+		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
+	else
+		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
 
 	return 0;
 }
@@ -1332,7 +1330,10 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	if (val)
 		return -EINVAL;
 
-	vcpu->kvm->arch.dfr0_pmuver = perfmon_to_pmuver(perfmon);
+	if (valid_pmu)
+		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
+	else
+		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
 
 	return 0;
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 3d526df9f3c5..628775334d5e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -93,7 +93,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
  * Evaluates as true when emulating PMUv3p5, and false otherwise.
  */
 #define kvm_pmu_is_3p5(vcpu)					\
-	(vcpu->kvm->arch.dfr0_pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
--
Jazz is not dead. It just smells funny...