Re: [RFC PATCH] vhost-blk: In-kernel accelerator for virtio block device

Hi,
On 07/29/2011 12:48 PM, Stefan Hajnoczi wrote:
On Thu, Jul 28, 2011 at 4:44 PM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan <namei.unix@xxxxxxxxx> wrote:

Did you investigate userspace virtio-blk performance?  If so, what
issues did you find?

I have a hacked up world here that basically implements vhost-blk in userspace:
http://repo.or.cz/w/qemu/stefanha.git/blob/refs/heads/virtio-blk-data-plane:/hw/virtio-blk.c

  * A dedicated virtqueue thread sleeps on ioeventfd
  * Guest memory is pre-mapped and accessed directly (not using QEMU's
usual memory access functions)
  * Linux AIO is used, the QEMU block layer is bypassed
  * Completion interrupts are injected from the virtqueue thread using ioctl
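
For reference, a rough sketch of what such a dedicated virtqueue thread
could look like (struct vq_ctx, process_vring and the irqfd field are
made-up names for illustration, not taken from the tree above):

    /* Hypothetical sketch: a dedicated thread that sleeps on an
     * ioeventfd, services the vring, and signals completion itself. */
    #include <stdint.h>
    #include <unistd.h>

    struct vq_ctx {
        int ioeventfd;   /* signalled by KVM when the guest kicks the vq */
        int irqfd;       /* eventfd used here to inject the interrupt;
                            the tree above injects it with an ioctl */
    };

    static void process_vring(struct vq_ctx *ctx)
    {
        /* walk the vring in pre-mapped guest memory, build iocbs,
         * submit them with io_submit(), reap with io_getevents() */
    }

    static void *vq_thread(void *opaque)
    {
        struct vq_ctx *ctx = opaque;
        uint64_t val, one = 1;

        for (;;) {
            /* block until the guest notifies the queue */
            if (read(ctx->ioeventfd, &val, sizeof(val)) != sizeof(val))
                continue;
            process_vring(ctx);
            write(ctx->irqfd, &one, sizeof(one));
        }
        return NULL;
    }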

I will try to rebase onto qemu-kvm.git/master (this work is several
months old).  Then we can compare to see how much of the benefit can
be gotten in userspace.
Here is the rebased virtio-blk-data-plane tree:
http://repo.or.cz/w/qemu-kvm/stefanha.git/shortlog/refs/heads/virtio-blk-data-plane

When I run it on my laptop with an Intel X25-M G2 SSD I see a latency
reduction compared to mainline userspace virtio-blk.  I'm not posting
results because I did quick fio runs without ensuring a quiet
benchmarking environment.

There are a couple of things that could be modified:
  * I/O request merging is done to mimic bdrv_aio_multiwrite() - but
vhost-blk does not do this.  Try turning it off?

I noted that bdrv_aio_multiwrite() does the merging job, but I am not sure this trick is really needed, since we have an I/O scheduler down the path that is in a much better position to merge requests. I think the duplicate *premature* merging in bdrv_aio_multiwrite() exists because laio_submit() does not submit requests in batches. io_submit() in fs/aio.c shows that every call to laio_submit() pushes that single request into the driver's request queue, which is then run at blk_finish_plug(). IMHO, since you already bypass the QEMU block layer, you could simply batch requests into one io_submit() call instead of using this trick.
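
A rough sketch of that batching idea with libaio (struct submit_batch
and the helpers are made-up names for illustration):

    /* Hypothetical sketch: collect prepared iocbs and hand the whole
     * batch to the kernel with a single io_submit() call, so the block
     * layer is plugged/unplugged once and the I/O scheduler can merge. */
    #include <libaio.h>

    #define MAX_BATCH 128

    struct submit_batch {
        io_context_t ctx;               /* set up with io_setup() */
        struct iocb *iocbs[MAX_BATCH];
        int n;
    };

    /* queue one request prepared with io_prep_pread()/io_prep_pwrite() */
    static void batch_add(struct submit_batch *b, struct iocb *iocb)
    {
        b->iocbs[b->n++] = iocb;
    }

    /* one syscall for a whole virtqueue's worth of requests */
    static int batch_flush(struct submit_batch *b)
    {
        int ret = io_submit(b->ctx, b->n, b->iocbs);
        b->n = 0;
        return ret;
    }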

  * epoll(2) is used but perhaps select(2)/poll(2) have lower latency
for this use case.  Try another event mechanism.
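
For comparison, a minimal poll(2)-based wait on the single ioeventfd
might look like this (wait_for_kick is a made-up helper name):

    /* Hypothetical sketch: block on one ioeventfd with poll(2)
     * instead of maintaining an epoll set. */
    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>

    static int wait_for_kick(int ioeventfd)
    {
        struct pollfd pfd = { .fd = ioeventfd, .events = POLLIN };
        uint64_t val;

        if (poll(&pfd, 1, -1) <= 0)
            return -1;
        /* consume the eventfd counter so it can be signalled again */
        return read(ioeventfd, &val, sizeof(val)) == sizeof(val) ? 0 : -1;
    }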

Let's see how it compares to vhost-blk first.  I can tweak it if we
want to investigate further.

Yuan: Do you want to try the virtio-blk-data-plane tree?  You don't
need to change the qemu-kvm command-line options.

Stefan
Yes, please, it sounds interesting. BTW, I think userspace could achieve the same performance gain as the current vhost-blk implementation (which uses Linux AIO) if you bypassed the QEMU I/O layer all the way down to the system calls in the request-handling cycle. But hey, I would go further and optimise it with the block layer and other in-kernel resources in mind. ;) And it doesn't add complexity to the current QEMU I/O layer.

Yuan