On 28/01/2019 14:28, Andrew Murray wrote:
On Tue, Jan 22, 2019 at 10:12:22PM +0000, Suzuki K Poulose wrote:
Hi Andrew,
On 01/22/2019 10:49 AM, Andrew Murray wrote:
To prevent re-creating perf events every time the counter registers
are changed, let's instead lazily create the event when the counter
is first enabled and destroy it when the configuration changes.
Signed-off-by: Andrew Murray <andrew.murray@xxxxxxx>
---
virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
1 file changed, 78 insertions(+), 36 deletions(-)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 4464899..1921ca9 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,8 +24,11 @@
#include <kvm/arm_pmu.h>
#include <kvm/arm_vgic.h>
-static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
-				      u64 select_idx);
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
I find the approach good. However, the function names are a bit odd and
they make the code a bit difficult to read.
Thanks - the odd naming probably came about as I started with a patch that
added chained PMU support and worked backward to split it into smaller patches
that each made individual sense. The _single suffix was the counterpart of
_pair.
I think we could:

1) Rename the existing

     kvm_pmu_{enable/disable}_counter => kvm_pmu_{enable/disable}_[mask or counters]

as they operate on a set of counters (as a mask) instead of a single
counter.

And then you may be able to drop "_single" from the
kvm_pmu_{enable/disable}_counter_single() functions below, which makes
better sense for what they do.
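i.e., roughly the following prototypes (just a naming sketch, the bodies
stay as they are; the exact names are only a suggestion):

/* per-counter helpers: drop the "_single" suffix */
static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx);
static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx);

/* mask-based variants, previously kvm_pmu_{enable,disable}_counter() */
void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);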
Thanks for this suggestion. I like this.
+static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
+						       u64 select_idx);
Could we simply keep kvm_pmu_counter_create_event() and add a comment above
the function explaining that the events are enabled as they are
created lazily?
OK.
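Something along these lines, perhaps (the comment wording is only a sketch):

/**
 * kvm_pmu_counter_create_event - create/start the perf event for a counter
 * @vcpu: The vcpu pointer
 * @select_idx: The counter index
 *
 * Events are created lazily, i.e. only once the counter is actually
 * enabled, so the event is started (enabled) at creation time.
 */
static void kvm_pmu_counter_create_event(struct kvm_vcpu *vcpu, u64 select_idx);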
+/**
+ * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+
+	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+		return;
+
+	if (set & BIT(select_idx))
+		kvm_pmu_enable_counter_single(vcpu, select_idx);
Could we not reuse kvm_pmu_enable_counter() here? i.e.,
static inline void kvm_pmu_reenable_counter(struct kvm_vcpu *vcpu,
					    u64 select_idx)
{
	kvm_pmu_enable_counter(vcpu, BIT(select_idx));
}
Not quite - when we call kvm_pmu_reenable_enabled_single(), the individual
counter may or may not be enabled. We only want to recreate the perf event
if it was previously enabled.
But we can do better, e.g.
static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
					    u64 select_idx)

nit: could we use the name kvm_pmu_reenable_counter()?

{
	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

	if (set & BIT(select_idx))
		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
}
Yep, that's correct. Alternatively, you could move the CNTENSET check
into kvm_pmu_enable_counter_mask().
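e.g. something like this (sketch only; it assumes PMCNTENSET_EL0 has already
been updated by the caller on the normal enable path, so the reenable case
falls out of the same mask):

void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
{
	int i;

	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
		return;

	/* only act on counters that are also set in PMCNTENSET_EL0 */
	val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
		if (val & BIT(i))
			kvm_pmu_enable_counter_single(vcpu, i);
	}
}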
Suzuki
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm