Re: Event channels in KVM?

Kapadia, Vivek wrote:
I came across this thread looking for an efficient event channel mechanism between two guests (running on different cpu cores).

While I can use the available emulated I/O mechanism (guest1->host kernel driver->Qemu1->Qemu2) in conjunction with the interrupt mechanism (Qemu2->host kernel driver->guest2) in KVM, this involves several context switches. Xen handles notifications in the hypervisor via a hypercall and is hence likely more efficient.

Xen's event channels almost certainly aren't more efficient.

An event channel notification involves a hypercall to the hypervisor. When using VT, the performance difference between a vmcall exit and a pio exit is quite small (especially compared to the overhead of the exit itself). We're talking on the order of nanoseconds compared to microseconds.
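
To make that concrete, here's a rough guest-side sketch of both notification paths. The port number, hypercall number, and helper names are invented for illustration; this is not an actual KVM or QEMU interface:

/*
 * Illustrative only: two ways a guest can force an exit to notify
 * the host. Port 0x510 and the hypercall number are made-up examples.
 */
#include <stdint.h>

/* PIO doorbell: the OUT instruction causes a pio exit, which KVM
 * forwards synchronously to QEMU on the same vcpu thread. */
static inline void notify_pio(uint16_t port, uint8_t val)
{
    asm volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Hypercall doorbell: vmcall traps into the hypervisor itself. The
 * vmcall exit costs within nanoseconds of the pio exit; the dominant
 * cost (microseconds) is the exit/entry itself in both cases. */
static inline unsigned long notify_vmcall(unsigned long nr)
{
    unsigned long ret;
    asm volatile("vmcall" : "=a"(ret) : "a"(nr) : "memory");
    return ret;
}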

What makes KVM particularly different from Xen is that in KVM, the PIO operation results in a direct transition to QEMU. In Xen, an event channel notification typically results in a bit being set in a bitmap, which then results in an interrupt injection at the next opportunity the hypervisor has to schedule/run the receiving domain. This is not deterministic and can potentially take a very long time.

Event channels are inherently asynchronous whereas PIO notifications in KVM are synchronous. Since the scheduler isn't involved and control never leaves the CPU, the KVM PIO notifications are actually extremely efficient. IMHO, it's one of KVM's best design features.
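
For illustration, the QEMU side of such a PIO doorbell might look roughly like the sketch below, using the register_ioport_write() interface QEMU exposes to device models (signature reproduced from memory, so treat it as approximate; the port is made up). The point is that the handler runs directly on the path of the pio exit:

#include <stdint.h>
#include <stddef.h>

/* Approximately as declared in the QEMU of this era. */
typedef void (IOPortWriteFunc)(void *opaque, uint32_t address, uint32_t data);
int register_ioport_write(int start, int length, int size,
                          IOPortWriteFunc *func, void *opaque);

#define NOTIFY_PORT 0x510   /* made-up doorbell port */

/* Runs on the vcpu thread as soon as the pio exit reaches QEMU:
 * no scheduler involvement, no interrupt injection needed. */
static void notify_write(void *opaque, uint32_t addr, uint32_t val)
{
    /* react to the guest's doorbell here */
    (void)opaque; (void)addr; (void)val;
}

static void notify_init(void)
{
    register_ioport_write(NOTIFY_PORT, 1, 1, notify_write, NULL);
}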

It used to be in HVM that, since things like PIO operations are inherently synchronous and there's no point in a VM waiting around for an asynchronous event channel notification to result in qemu-dm invocation, there was a very special code path in the hypervisor to ensure that Domain-0 was scheduled immediately upon receiving an event channel notification from an HVM domain. This was an important optimization because event channel notification latency was otherwise prohibitively high.

Now in the context of stub domains, I'm not sure what changes they've made. In the earliest prototypes of the stub domain, the same short-cutting logic was maintained, but the stub domain was executed instead of Domain-0.

Is there a way I can perform direct notification (guest1->host kernel driver->guest2) in KVM?

Between guests, we don't have a notification framework today. You can use IPC between the two QEMU processes, and I'd expect that to perform pretty well. I'm not sure you can gain much advantage from doing things in the kernel, because you cannot avoid the heavyweight exit.
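
As a starting point, that IPC can be as simple as a UNIX domain datagram socket that each QEMU adds to its main loop. A minimal sketch, with the socket path and one-byte doorbell protocol invented for illustration (error handling omitted):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH "/tmp/qemu-doorbell"  /* hypothetical path */

/* Receiving QEMU: create the socket and hand the fd to the main
 * loop (poll/select); a readable fd means the peer rang the bell. */
static int doorbell_listen(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(SOCK_PATH);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    return fd;
}

/* Sending QEMU: a single datagram is the notification. */
static void doorbell_ring(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    char byte = 1;

    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    sendto(fd, &byte, 1, 0, (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
}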

Regards,

Anthony Liguori
