On 02/06/2014 15:06, Ming Lei wrote:
>> If you're running SMP under an emulator where exits are expensive, then
>> this wins. Under KVM it's marginal at best.
>
> Both my tests on arm64 and x86 are under KVM, and it looks like the
> patch can improve performance a lot. IMO, even under KVM, virtio-blk
> performance still depends on how well the hypervisor (QEMU, ...)
> emulates the device; generally speaking, it is expensive to switch
> from guest to host and let the host handle the notification.
The difference is that virtio-pci supports ioeventfd and virtio-mmio
doesn't.
With ioeventfd you can tell KVM "I don't care about the value that is
written to a memory location, only that it is accessed". Then when the
write happens, KVM doesn't do an expensive userspace exit; it just
writes 1 to an eventfd.
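
At the KVM API level, the registration looks roughly like the minimal
sketch below. This is not QEMU's actual code: vm_fd, notify_addr and
the 4-byte doorbell length are assumptions made for illustration,
while KVM_IOEVENTFD and struct kvm_ioeventfd are the real kernel
interface.

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Assumed setup: vm_fd is a KVM VM file descriptor, notify_addr is
     * the guest-physical address of the device's doorbell register. */
    static int register_doorbell(int vm_fd, uint64_t notify_addr)
    {
        int efd = eventfd(0, EFD_CLOEXEC);   /* counter starts at 0 */
        if (efd < 0)
            return -1;

        struct kvm_ioeventfd kick = {
            .addr  = notify_addr,   /* MMIO address to trap */
            .len   = 4,             /* match 4-byte writes */
            .fd    = efd,           /* signalled on every such write */
            .flags = 0,             /* no DATAMATCH: written value ignored */
        };
        if (ioctl(vm_fd, KVM_IOEVENTFD, &kick) < 0)
            return -1;
        return efd;                 /* hand this to the I/O thread */
    }
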
KVM then returns to the guest; userspace picks up the eventfd via its
poll() loop and services the device.
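
The I/O thread side is then an ordinary poll() loop, sketched below
under the same assumptions; process_virtqueue() is a hypothetical
placeholder for the device-model code that drains the ring.

    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>

    void process_virtqueue(void);   /* hypothetical vring handler */

    static void io_thread_loop(int efd)
    {
        struct pollfd pfd = { .fd = efd, .events = POLLIN };

        for (;;) {
            if (poll(&pfd, 1, -1) <= 0)
                continue;

            /* read() drains the eventfd counter; several guest kicks
             * may be coalesced into a single wakeup. */
            uint64_t kicks;
            if (read(efd, &kicks, sizeof(kicks)) == sizeof(kicks))
                process_virtqueue();
        }
    }
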
This is already useful for throughput on uniprocessor (UP) guests, and
the small latency cost (the overhead of the event loop in the I/O
thread, plus possibly the cost of waking up the thread) is usually
offset by the benefit.
But on SMP you get a double benefit. Obviously, the guest kernel
doesn't have to spin while userspace does its work. On top of this,
there is also a latency improvement from ioeventfd, because QEMU
processes virtqueue_notify under its "big QEMU lock". With ioeventfd,
the serialized virtqueue processing can still be a throughput
bottleneck, but it doesn't affect latency. Without ioeventfd, it adds
directly to the VCPUs' latency and negates a lot of the benefit of
Ming Lei's patch.
You can try disabling ioeventfd with "-global
virtio-blk-pci.ioeventfd=off" on the QEMU command line. Performance
will plummet. :)
Paolo