Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs

On 2020-06-09 08:48, Auger Eric wrote:
Hi Marc,

On 6/8/20 7:19 PM, Marc Zyngier wrote:
Hi Eric,

On 2020-06-08 17:58, Auger Eric wrote:
Hi Marc,

On 5/26/20 6:11 PM, Marc Zyngier wrote:
On a system that uses SPIs to implement MSIs (as would be the case
on a GICv2 system exposing a GICv2m to its guests), we deny the
possibility of injecting SPIs on the in-atomic fast path.

This results in a very large number of context switches (roughly
equivalent to twice the interrupt rate) on the host, and suboptimal
performance for the guest (as measured with a test workload involving
a virtio interface backed by vhost-net). Given that GICv2 systems are
usually on the low end of the spectrum performance-wise, they could
do without the aggravation.

We solved this for GICv3+ITS by having a translation cache. But SPIs
do not need any extra infrastructure, and can be immediately injected
into the virtual distributor, as the locking is already heavy enough
that we don't need to worry about anything.

This halves the number of context switches for the same workload.

Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
---
 arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
 arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
index d8cdfea5cc96..11a9f81115ab 100644
--- a/arch/arm64/kvm/vgic/vgic-irqfd.c
+++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
There is still a comment above saying
 * Currently only direct MSI injection is supported.

I believe this comment to be correct. There is no path other
than MSI injection that leads us here. Case in point, we only
ever inject a rising edge through this API, never a falling one.

Isn't this path entered whenever either of these combos is used?
KVM_SET_MSI_ROUTING + KVM_IRQFD
KVM_SET_GSI_ROUTING + KVM_IRQFD
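
(For reference, the irqchip-routing flavour of that combo boils down to
something like the sketch below. This is a minimal userspace example,
not QEMU's actual code; the helper name, vm_fd, the GSI number and the
pin are placeholders, error handling is omitted, and an in-kernel GIC is
assumed to already exist. On arm64, pin N of irqchip 0 maps to SPI N,
i.e. INTID 32 + N.)

#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Route GSI 0 to irqchip 0/pin 0 (SPI 0, i.e. INTID 32 on arm64),
 * then bind an eventfd to that GSI. */
static int route_spi_through_irqfd(int vm_fd)
{
	struct kvm_irq_routing *table;
	struct kvm_irqfd irqfd = { 0 };

	table = calloc(1, sizeof(*table) + sizeof(table->entries[0]));
	table->nr = 1;
	table->entries[0].gsi = 0;                     /* placeholder GSI */
	table->entries[0].type = KVM_IRQ_ROUTING_IRQCHIP;
	table->entries[0].u.irqchip.irqchip = 0;
	table->entries[0].u.irqchip.pin = 0;           /* placeholder SPI offset */
	ioctl(vm_fd, KVM_SET_GSI_ROUTING, table);
	free(table);

	/* Any write to irqfd.fd now injects the SPI through the irqfd path */
	irqfd.fd  = eventfd(0, EFD_CLOEXEC);
	irqfd.gsi = 0;
	return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}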

I've always considered these to be MSIs, but maybe I'm narrow-minded... ;-)

I had in mind that direct MSI injection was KVM_SIGNAL_MSI/
kvm_send_userspace_msi/kvm_set_msi.
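
(That path is a single VM ioctl carrying the full address/data pair.
A minimal sketch with made-up values, assuming vm_fd has an ITS so
that the devid is meaningful; the helper name is just for illustration:)

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int signal_msi_once(int vm_fd)
{
	struct kvm_msi msi = {
		.address_lo = 0x08020040,   /* doorbell address (made up) */
		.address_hi = 0,
		.data       = 42,           /* event ID / payload (made up) */
		.flags      = KVM_MSI_VALID_DEVID,
		.devid      = 0x1234,       /* requester ID (made up) */
	};

	/* One-shot injection; no routing table or irqfd involved */
	return ioctl(vm_fd, KVM_SIGNAL_MSI, &msi);
}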

Fair enough. Zenghui was also unhappy about this comment, so I'll
remove it. After all, we shouldn't care whether that's an MSI or not.


@@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
                   struct kvm *kvm, int irq_source_id, int level,
                   bool line_status)
 {
-    if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
+    if (!level)
+        return -EWOULDBLOCK;
+
+    switch (e->type) {
+    case KVM_IRQ_ROUTING_MSI: {
         struct kvm_msi msi;

+        if (!vgic_has_its(kvm))
+            return -EINVAL;
Shouldn't we return -EWOULDBLOCK by default?
QEMU does not use that path with GICv2m, but in kvm_set_routing_entry()
I don't see any check related to the ITS.

Fair enough. I really don't anticipate anyone using
KVM_IRQ_ROUTING_MSI with anything but the ITS, but who knows,
people are crazy! ;-)

+
         kvm_populate_msi(e, &msi);
-        if (!vgic_its_inject_cached_translation(kvm, &msi))
-            return 0;
+        return vgic_its_inject_cached_translation(kvm, &msi);

     }

-    return -EWOULDBLOCK;
+    case KVM_IRQ_ROUTING_IRQCHIP:
+        /* Injecting SPIs is always possible in atomic context */
+        return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
What about the mutex_lock(&kvm->lock) called from within
vgic_irqfd_set_irq/kvm_vgic_inject_irq/vgic_lazy_init?

Holy crap. The lazy GIC init strikes again :-(.
How about this on top of the existing patch:

diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
index 11a9f81115ab..6e5ca04d5589 100644
--- a/arch/arm64/kvm/vgic/vgic-irqfd.c
+++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
@@ -115,19 +115,23 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
         struct kvm_msi msi;

         if (!vgic_has_its(kvm))
-            return -EINVAL;
+            break;

         kvm_populate_msi(e, &msi);
         return vgic_its_inject_cached_translation(kvm, &msi);
     }

     case KVM_IRQ_ROUTING_IRQCHIP:
-        /* Injecting SPIs is always possible in atomic context */
+        /*
+         * Injecting SPIs is always possible in atomic context
+         * as long as the damn vgic is initialized.
+         */
+        if (unlikely(!vgic_initialized(kvm)))
+            break;
Yes, this should prevent the wait situation. But it may be worth
digging deeper into the call stack. Won't analyzers complain at some
point?

I have gone all the way down the call stack, to be sure. The init is
the only point where we take a mutex, and we're under a hard spinlock
just after that (the injection proper happens with interrupts disabled).

As for things like lockdep, they track the dynamic state, and not
whether certain branches are simply "possible". A code analyser
that wouldn't take control flow into account would be pretty broken! ;-)

Thanks,

        M.
--
Jazz is not dead. It just smells funny...


