Re: [PATCH] KVM: arm64: Disabling disabled PMU counters wastes a lot of time

On Tue, 29 Jun 2021 14:16:55 +0100,
Alexandre Chartre <alexandre.chartre@xxxxxxxxxx> wrote:
> 
> 
> Hi Marc,
> 
> On 6/29/21 11:06 AM, Marc Zyngier wrote:
> > Hi Alexandre,

[...]

> > So the sysreg is the only thing we should consider, and I think we
> > should drop the useless masking. There is at least another instance of
> > this in the PMU code (kvm_pmu_overflow_status()), and apart from
> > kvm_pmu_vcpu_reset(), only the sysreg accessors should care about the
> > masking to sanitise accesses.
> > 
> > What do you think?
> > 
> 
> I think you are right. PMCNTENSET_EL0 is already masked with
> kvm_pmu_valid_counter_mask(), so there's effectively no need to mask
> it again when we use it. I will send an additional patch (on top of
> this one) to remove the useless masking. Basically, the changes would be:
> 
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index bab4b735a0cf..e0dfd7ce4ba0 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -373,7 +373,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
>                 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>                 reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>                 reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
> -               reg &= kvm_pmu_valid_counter_mask(vcpu);
>         }
> 
>         return reg;
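
This matches what I had in mind: once the write-side accessor has
sanitised what lands in the shadow register, re-masking on the consumer
side buys us nothing. From memory (so please double-check against the
tree), the write path of access_pmcnten() already does roughly:

        /* Guest-written value is masked before it reaches the shadow reg */
        mask = kvm_pmu_valid_counter_mask(vcpu);
        if (p->is_write) {
                val = p->regval & mask;
                if (r->Op2 & 0x1) {
                        /* accessing PMCNTENSET_EL0 */
                        __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
                        kvm_pmu_enable_counter_mask(vcpu, val);
                } else {
                        /* accessing PMCNTENCLR_EL0 */
                        __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
                        kvm_pmu_disable_counter_mask(vcpu, val);
                }
        }

so whatever we read back out of PMCNTENSET_EL0 can only ever contain
valid counter bits.
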
> @@ -564,21 +563,22 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>   */
>  void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  {
> -       unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
> +       unsigned long mask;
>         int i;
> 
>         if (val & ARMV8_PMU_PMCR_E) {
>                 kvm_pmu_enable_counter_mask(vcpu,
> -                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
> +                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>         } else {
>                 kvm_pmu_disable_counter_mask(vcpu,
> -                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
> +                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>         }
> 
>         if (val & ARMV8_PMU_PMCR_C)
>                 kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
> 
>         if (val & ARMV8_PMU_PMCR_P) {
> +               mask = kvm_pmu_valid_counter_mask(vcpu);

Careful here, this clashes with a fix from Alexandru that is currently
in -next (PMCR_EL0.P shouldn't reset the cycle counter) and aimed at
5.14. And whilst you're at it, consider moving the 'mask' declaration
here too.
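
Once rebased on top of that fix, I'd expect the end result to look
something like this (untested sketch; the exact shape obviously depends
on how Alexandru's patch lands):

        if (val & ARMV8_PMU_PMCR_P) {
                /* PMCR_EL0.P leaves the cycle counter alone */
                unsigned long mask = kvm_pmu_valid_counter_mask(vcpu) &
                                     ~BIT(ARMV8_PMU_CYCLE_IDX);

                for_each_set_bit(i, &mask, 32)
                        kvm_pmu_set_counter_value(vcpu, i, 0);
        }

which also keeps 'mask' next to its only remaining user.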

>                 for_each_set_bit(i, &mask, 32)
>                         kvm_pmu_set_counter_value(vcpu, i, 0);
>         }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1a7968ad078c..2e406905760e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -845,7 +845,7 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>                         kvm_pmu_disable_counter_mask(vcpu, val);
>                 }
>         } else {
> -               p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +               p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>         }
> 
>         return true;

If you are cleaning up the read-side of sysregs, access_pminten() and
access_pmovs() could have some of your attention too.
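
The change on their read paths should be the same one-liner, i.e.
something like this (again quoting from memory, so please check against
the actual code):

        /* access_pminten(): the shadow PMINTENSET_EL1 is already sanitised */
        p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);

        /* access_pmovs(): same story for the shadow PMOVSSET_EL0 */
        p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);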

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


