Re: [PATCH v3 04/17] perf: x86/ds: Handle guest PEBS overflow PMI and inject it to guest

On 2021/1/14 2:22, Peter Zijlstra wrote:
On Mon, Jan 04, 2021 at 09:15:29PM +0800, Like Xu wrote:

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index b47cc4226934..c499bdb58373 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1721,6 +1721,65 @@ intel_pmu_save_and_restart_reload(struct perf_event *event, int count)
  	return 0;
  }
+/*
+ * We may be running with guest PEBS events created by KVM, and the
+ * PEBS records are logged into the guest's DS and invisible to the host.
+ *
+ * In the case of a guest PEBS overflow, we only trigger a fake event
+ * to emulate the PEBS overflow PMI for the guest PEBS counters in KVM.
+ * The guest will then, on the next vm-entry, check its DS area to read
+ * the guest PEBS records.
+ *
+ * The guest PEBS overflow PMI may be dropped when both the guest and
+ * the host use PEBS. Therefore, KVM will not enable guest PEBS while
+ * host PEBS is enabled, since that may cause a confusing unknown NMI.
+ *
+ * The contents and other behavior of the guest event do not matter.
+ */
+static int intel_pmu_handle_guest_pebs(struct cpu_hw_events *cpuc,
+				       struct pt_regs *iregs,
+				       struct debug_store *ds)
+{
+	struct perf_sample_data data;
+	struct perf_event *event = NULL;
+	u64 guest_pebs_idxs = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
+	int bit;
+
+	/*
+	 * Ideally, we should check the guest DS to determine whether this is
+	 * a guest PEBS overflow PMI from a guest PEBS counter. However,
+	 * retrieving the guest DS in the host brings high overhead, so we
+	 * check the host DS instead for performance.
Again; for the virt illiterate people here (me); why is it expensive to
check guest DS?

We are not checking the guest DS here for two reasons:
- it would add extra KVM memory locking operations and guest page table walks,
   which are very expensive for guests with large memory (and even if we cached
   the mapped values, we would still need to check whether the cached ones are
   still valid);
- the current kvm_read_guest_*() interfaces might sleep and are not irq safe;
   a rough sketch of what such a check would involve is shown below.

If you still need me to try this guest DS check approach, please let me know
and I will provide more performance data.
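For context, a minimal hypothetical sketch of what such a check would involve (the helper name, its arguments and its caller are made up for illustration; only kvm_read_guest() and struct debug_store are the real interfaces):

/*
 * Hypothetical sketch only, not part of this patch. IA32_DS_AREA holds a
 * guest linear address, so a real implementation would first need a
 * gva->gpa translation (another guest page table walk) before the copy.
 */
static bool guest_pebs_threshold_reached(struct kvm_vcpu *vcpu, gpa_t ds_gpa)
{
	struct debug_store ds;

	/* Copies guest memory via the memslots; may fault in pages and sleep. */
	if (kvm_read_guest(vcpu->kvm, ds_gpa, &ds, sizeof(ds)))
		return false;

	return ds.pebs_index >= ds.pebs_interrupt_threshold;
}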


Why do we need to? Can't we simply always forward the PMI if the guest
has bits set in MSR_IA32_PEBS_ENABLE ? Surely we can access the guest
MSRs at a reasonable rate..

Sure, it'll send too many PMIs, but is that really a problem?

More vPMIs mean more guest irq handler calls and more PMI virtualization
overhead. In addition, the correctness of some workloads (RR?) depends on
getting the right number of PMIs at the right trigger times, and virt may
not want to break this assumption.


+	 *
+	 * If the host PEBS interrupt threshold is not exceeded in an NMI, there
+	 * must be a PEBS overflow PMI generated from the guest PEBS counters.
+	 * There is no ambiguity since the reported event in the PMI is guest
+	 * only. It gets handled correctly on a case-by-case basis for each event.
+	 *
+	 * Note: KVM disables the co-existence of guest PEBS and host PEBS.
Where; I need a code reference here.

How about:

Note: KVM will disable the co-existence of guest PEBS and host PEBS.
In intel_guest_get_msrs(), when any host PEBS ctrl bit(s) are enabled,
KVM will clear the guest PEBS ctrl enable bit(s) before vm-entry.
The guest PEBS users should be notified of this runtime restriction.
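For reference, a rough sketch of that clearing logic (simplified, not the literal hunk from this series):

static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	struct perf_guest_switch_msr *arr = cpuc->guest_switch_msrs;

	arr[0].msr = MSR_CORE_PERF_GLOBAL_CTRL;
	arr[0].host = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
	arr[0].guest = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_host_mask;

	arr[1].msr = MSR_IA32_PEBS_ENABLE;
	arr[1].host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask;
	arr[1].guest = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;

	/*
	 * If any host PEBS counter is active, clear the guest PEBS enable
	 * bits for this vm-entry so that a PEBS overflow PMI can always be
	 * attributed unambiguously to either the host or the guest.
	 */
	if (arr[1].host)
		arr[1].guest = 0;

	*nr = 2;
	return arr;
}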


+	 */
+	if (!guest_pebs_idxs || !in_nmi() ||
All the other code uses !iregs instead of !in_nmi(), also your
indentation is broken.

Sure, I'll use !iregs and fix the indentation in the next version.
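For illustration, the fixed check might look something like this (a sketch, not the actual v4 hunk):

	if (!iregs || !guest_pebs_idxs ||
	    ds->pebs_index >= ds->pebs_interrupt_threshold)
		return 0;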

---
thx, likexu
+		ds->pebs_index >= ds->pebs_interrupt_threshold)
+		return 0;
+
+	for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs,
+			INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed) {
+
+		event = cpuc->events[bit];
+		if (!event->attr.precise_ip)
+			continue;
+
+		perf_sample_data_init(&data, 0, event->hw.last_period);
+		if (perf_event_overflow(event, &data, iregs))
+			x86_pmu_stop(event, 0);
+
+		/* Injecting one fake event is enough. */
+		return 1;
+	}
+
+	return 0;
+}
+
  static __always_inline void
  __intel_pmu_pebs_event(struct perf_event *event,
  		       struct pt_regs *iregs,
@@ -1965,6 +2024,9 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
  	if (!x86_pmu.pebs_active)
  		return;
+	if (intel_pmu_handle_guest_pebs(cpuc, iregs, ds))
+		return;
+
  	base = (struct pebs_basic *)(unsigned long)ds->pebs_buffer_base;
  	top = (struct pebs_basic *)(unsigned long)ds->pebs_index;
--
2.29.2




