Re: [PATCH 6/9] KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation

Hi Reiji,

Sorry it took so long to get back to this.

On Fri, 26 Aug 2022 07:02:21 +0100,
Reiji Watanabe <reijiw@xxxxxxxxxx> wrote:
> 
> Hi Marc,
> 
> On Thu, Aug 25, 2022 at 9:34 PM Reiji Watanabe <reijiw@xxxxxxxxxx> wrote:
> >
> > Hi Marc,
> >
> > On Fri, Aug 5, 2022 at 6:58 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> > >
> > > As further patches will enable the selection of a PMU revision
> > > from userspace, sample the supported PMU revision at VM creation
> > > time, rather than building each time the ID_AA64DFR0_EL1 register
> > > is accessed.
> > >
> > > This shouldn't result in any change in behaviour.
> > >
> > > Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h |  1 +
> > >  arch/arm64/kvm/arm.c              |  6 ++++++
> > >  arch/arm64/kvm/pmu-emul.c         | 11 +++++++++++
> > >  arch/arm64/kvm/sys_regs.c         | 26 +++++++++++++++++++++-----
> > >  include/kvm/arm_pmu.h             |  6 ++++++
> > >  5 files changed, 45 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index f38ef299f13b..411114510634 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -163,6 +163,7 @@ struct kvm_arch {
> > >
> > >         u8 pfr0_csv2;
> > >         u8 pfr0_csv3;
> > > +       u8 dfr0_pmuver;
> > >
> > >         /* Hypercall features firmware registers' descriptor */
> > >         struct kvm_smccc_features smccc_feat;
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 8fe73ee5fa84..e4f80f0c1e97 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -164,6 +164,12 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> > >         set_default_spectre(kvm);
> > >         kvm_arm_init_hypercalls(kvm);
> > >
> > > +       /*
> > > +        * Initialise the default PMUver before there is a chance to
> > > +        * create an actual PMU.
> > > +        */
> > > +       kvm->arch.dfr0_pmuver = kvm_arm_pmu_get_host_pmuver();
> > > +
> > >         return ret;
> > >  out_free_stage2_pgd:
> > >         kvm_free_stage2_pgd(&kvm->arch.mmu);
> > > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > > index ddd79b64b38a..33a88ca7b7fd 100644
> > > --- a/arch/arm64/kvm/pmu-emul.c
> > > +++ b/arch/arm64/kvm/pmu-emul.c
> > > @@ -1021,3 +1021,14 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> > >
> > >         return -ENXIO;
> > >  }
> > > +
> > > +u8 kvm_arm_pmu_get_host_pmuver(void)
> >
> > Nit: Since this function doesn't simply return the host's pmuver, but
> > rather the pmuver limit for guests, perhaps
> > "kvm_arm_pmu_get_guest_pmuver_limit" might be clearer (closer to what
> > it does)?

Maybe a bit verbose, but I'll work something out.

> >
> > > +{
> > > +       u64 tmp;
> > > +
> > > +       tmp = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > > +       tmp = cpuid_feature_cap_perfmon_field(tmp,
> > > +                                             ID_AA64DFR0_PMUVER_SHIFT,
> > > +                                             ID_AA64DFR0_PMUVER_8_4);
> > > +       return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), tmp);
> > > +}
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 333efddb1e27..55451f49017c 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1062,6 +1062,22 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
> > >         return true;
> > >  }
> > >
> > > +static u8 pmuver_to_perfmon(const struct kvm_vcpu *vcpu)
> > > +{
> > > +       if (!kvm_vcpu_has_pmu(vcpu))
> > > +               return 0;
> > > +
> > > +       switch (vcpu->kvm->arch.dfr0_pmuver) {
> > > +       case ID_AA64DFR0_PMUVER_8_0:
> > > +               return ID_DFR0_PERFMON_8_0;
> > > +       case ID_AA64DFR0_PMUVER_IMP_DEF:
> > > +               return 0;
> > > +       default:
> > > +               /* Anything ARMv8.4+ has the same value. For now. */
> > > +               return vcpu->kvm->arch.dfr0_pmuver;
> > > +       }
> > > +}
> > > +
> > >  /* Read a sanitised cpufeature ID register by sys_reg_desc */
> > >  static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > >                 struct sys_reg_desc const *r, bool raz)
> > > @@ -1112,10 +1128,10 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > >                 /* Limit debug to ARMv8.0 */
> > >                 val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
> > >                 val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6);
> > > -               /* Limit guests to PMUv3 for ARMv8.4 */
> > > -               val = cpuid_feature_cap_perfmon_field(val,
> > > -                                                     ID_AA64DFR0_PMUVER_SHIFT,
> > > -                                                     kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0);
> > > +               /* Set PMUver to the required version */
> > > +               val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER);
> > > +               val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER),
> > > +                                 kvm_vcpu_has_pmu(vcpu) ? vcpu->kvm->arch.dfr0_pmuver : 0);
> 
> I've just noticed one issue in this patch while reviewing patch-7.
> 
> I would think that this patch makes PMUVER and PERFMON inconsistent
> when the PMU is not enabled for the vCPU and the host's sanitised PMUVER
> is IMP_DEF.
> 
> Previously, when the PMU is not enabled for the vCPU and the host's
> sanitised value of PMUVER is IMP_DEF (0xf), the vCPU's PMUVER and PERFMON
> are set to IMP_DEF due to a bug in cpuid_feature_cap_perfmon_field().
> (https://lore.kernel.org/all/20220214065746.1230608-11-reijiw@xxxxxxxxxx/)
> 
> With this patch, the vCPU's PMUVER will be 0 for the same case,
> while the vCPU's PERFMON will stay the same (IMP_DEF).
> I guess you unintentionally corrected only the PMUVER value of the vCPU.

I think that with this patch both PMUVer and Perfmon values get set to
0 (pmuver_to_perfmon() returns 0 for both ID_AA64DFR0_PMUVER_IMP_DEF
and no PMU at all). Am I missing anything here?
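
To make sure we're reading it the same way, here is a quick userspace-only
sketch of that mapping (the constants and the trimmed-down helper below are
stand-ins for the kernel definitions, compiled outside the kernel purely
for illustration):

#include <stdio.h>

/* Stand-ins for the ID_AA64DFR0_EL1.PMUVer encodings used in the patch. */
#define PMUVER_8_0      0x1
#define PMUVER_8_4      0x5
#define PMUVER_IMP_DEF  0xf

/* ID_DFR0_EL1.PerfMon value for a PMUv3 (ARMv8.0) implementation. */
#define PERFMON_8_0     0x3

/* Same shape as pmuver_to_perfmon() in the patch, minus the vcpu plumbing. */
static unsigned int perfmon_for(int has_pmu, unsigned int pmuver)
{
        if (!has_pmu)
                return 0;

        switch (pmuver) {
        case PMUVER_8_0:
                return PERFMON_8_0;
        case PMUVER_IMP_DEF:
                return 0;
        default:
                /* Anything ARMv8.4+ has the same encoding, for now. */
                return pmuver;
        }
}

int main(void)
{
        /* Both the "no PMU" and the IMP_DEF cases end up as 0. */
        printf("no PMU:  perfmon=%u\n", perfmon_for(0, PMUVER_8_4));
        printf("IMP_DEF: perfmon=%u\n", perfmon_for(1, PMUVER_IMP_DEF));
        return 0;
}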

However, you definitely have a point that we should handle a guest
being restored with an IMPDEF PMU. Which means I need to revisit this
patch and the userspace accessors. Oh well...
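
For the sake of discussion, roughly the shape such a check could take on
the userspace write side (an illustrative sketch only, not against the
actual sysreg infrastructure; the helper name and its arguments are made
up):

#include <assert.h>
#include <stdbool.h>

#define PMUVER_8_4      0x5
#define PMUVER_8_5      0x6
#define PMUVER_IMP_DEF  0xf

/*
 * A userspace write to ID_AA64DFR0_EL1.PMUVer would have to accept either
 * a value no greater than the host-derived limit, or IMP_DEF, so that a
 * guest saved on a host with an IMPDEF PMU can still be restored.
 */
static bool pmuver_write_acceptable(unsigned int requested,
                                    unsigned int host_limit)
{
        if (requested == PMUVER_IMP_DEF)
                return true;

        return requested <= host_limit;
}

int main(void)
{
        /* IMP_DEF is allowed through; anything above the limit is not. */
        assert(pmuver_write_acceptable(PMUVER_IMP_DEF, PMUVER_8_4));
        assert(!pmuver_write_acceptable(PMUVER_8_5, PMUVER_8_4));
        return 0;
}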

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.