Re: [RFC PATCH] vhost-blk: In-kernel accelerator for virtio block device

On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan <namei.unix@xxxxxxxxx> wrote:

Did you investigate userspace virtio-blk performance?  If so, what
issues did you find?

I have a hacked up world here that basically implements vhost-blk in userspace:
http://repo.or.cz/w/qemu/stefanha.git/blob/refs/heads/virtio-blk-data-plane:/hw/virtio-blk.c

 * A dedicated virtqueue thread sleeps on ioeventfd
 * Guest memory is pre-mapped and accessed directly (not using QEMU's
usual memory access functions)
 * Linux AIO is used, the QEMU block layer is bypassed
 * Completion interrupts are injected from the virtqueue thread using ioctl

I will try to rebase onto qemu-kvm.git/master (this work is several
months old).  Then we can compare to see how much of the benefit can
be obtained in userspace.

> [performance]
>
>        Currently, the fio benchmark numbers are rather promising. Sequential read throughput is improved by as much as 16% and completion latency is reduced by up to 14%. For sequential write, the gains are 13.5% and 13% respectively.
>
> sequential read:
> +-------------+-------------+---------------+---------------+
> | iodepth     | 1           |   2           |   3           |
> +-------------+-------------+---------------+---------------+
> | virtio-blk  | 4116(214)   |   7814(222)   |   8867(306)   |
> +-------------+-------------+---------------+---------------+
> | vhost-blk   | 4755(183)   |   8645(202)   |   10084(266)  |
> +-------------+-------------+---------------+---------------+
>
> 4116(214) means 4116 IOPS; the completion latency is 214 us.
>
> sequential write:
> +-------------+-------------+----------------+--------------+
> | iodepth     |  1          |    2           |  3           |
> +-------------+-------------+----------------+--------------+
> | virtio-blk  | 3848(228)   |   6505(275)    |  9335(291)   |
> +-------------+-------------+----------------+--------------+
> | vhost-blk   | 4370(198)   |   7009(249)    |  9938(264)   |
> +-------------+-------------+----------------+--------------+
>
> the fio command for sequential read:
>
> sudo fio -name iops -readonly -rw=read -runtime=120 -iodepth 1 -filename /dev/vda -ioengine libaio -direct=1 -bs=512
>
> and config file for sequential write is:
>
> dev@taobao:~$ cat rw.fio
> -------------------------
> [test]
>
> rw=rw
> size=200M
> directory=/home/dev/data
> ioengine=libaio
> iodepth=1
> direct=1
> bs=512
> -------------------------

512 byte blocksize is very small, given that you can expect a file
system to have 4 KB or so block sizes.  It would be interesting to
measure a wider range of block sizes: 4 KB, 64 KB, and 128 KB for
example.
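
As a sketch, that sweep could be expressed as a single fio job file with one job per block size (the filename, runtime, and job names here are placeholders, not from the original thread; stonewall serializes the jobs so they run one after another):

```ini
; hypothetical job file for the suggested block-size sweep
[global]
rw=read
readonly=1
runtime=120
iodepth=1
ioengine=libaio
direct=1
filename=/dev/vda

[bs-4k]
bs=4k

[bs-64k]
stonewall
bs=64k

[bs-128k]
stonewall
bs=128k
```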

Stefan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

