Re: [RFC] vhost-blk implementation

Avi Kivity wrote:
On 03/23/2010 03:00 AM, Badari Pulavarty wrote:
Forgot to CC: KVM list earlier

Attached message: [RFC] vhost-blk implementation.eml

Subject: [RFC] vhost-blk implementation
From: Badari Pulavarty <pbadari@xxxxxxxxxx>
Date: Mon, 22 Mar 2010 17:34:06 -0700
To: virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, qemu-devel@xxxxxxxxxx

Hi,

Inspired by the vhost-net implementation, I did an initial prototype
of vhost-blk to see if it provides any benefits over QEMU virtio-blk.
I haven't handled all the error cases, fixed naming conventions, etc.,
but the implementation is stable enough to play with. I tried not to
deviate from the vhost-net implementation where possible.

NOTE: The only change I had to make to the vhost core code is to
increase VHOST_NET_MAX_SG to 130 (128 + 2) in vhost.h.
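For reference, a minimal sketch of that change in drivers/vhost/vhost.h (the
breakdown in the comment is my reading of the 128 + 2: presumably 128 data
segments plus the virtio-blk request header and status descriptors):

#define VHOST_NET_MAX_SG	130	/* 128 data segments + 2 (request header, status) */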

Performance:
=============

I have done simple tests to see how it performs. I got very
encouraging results on sequential read tests, but on sequential
write tests I see a degradation compared to virtio-blk. I can't
figure out or explain why. Can someone shed light on what's happening here?

Read Results:
=============
The test reads an 84GB file from the host (through virtio). I unmount
and remount the filesystem on the host to make sure there is nothing
in the page cache.

+#define VHOST_BLK_VQ_MAX 1
+
+struct vhost_blk {
+    struct vhost_dev dev;
+    struct vhost_virtqueue vqs[VHOST_BLK_VQ_MAX];
+    struct vhost_poll poll[VHOST_BLK_VQ_MAX];
+};
+
+static int do_handle_io(struct file *file, uint32_t type, uint64_t sector,
+            struct iovec *iov, int in)
+{
+    /* virtio-blk sector numbers are in 512-byte units */
+    loff_t pos = sector << 9;
+    int ret = 0;
+
+    if (type & VIRTIO_BLK_T_FLUSH) {
+        ret = vfs_fsync(file, file->f_path.dentry, 1);
+    } else if (type & VIRTIO_BLK_T_OUT) {
+        ret = vfs_writev(file, iov, in, &pos);
+    } else {
+        ret = vfs_readv(file, iov, in, &pos);
+    }
+    return ret;
+}

This should be done asynchronously; that is likely the cause of the write performance degradation. For reads, readahead means that you're async anyway, but writes/syncs are still synchronous.
I am not sure what you mean by async here. Even if I use f_op->aio_write(), it's still synchronous (except for DIO). Since we are writing to the page cache and not waiting for the write()
to complete, this is the best we can do here.

Do you mean offloading write() handling to another thread?
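If that is the suggestion, a minimal sketch of deferring the I/O to a kernel
workqueue might look like the following (names such as blk_io_work and
queue_io are illustrative, not from the posted patch):

#include <linux/workqueue.h>
#include <linux/slab.h>

struct blk_io_work {
	struct work_struct work;
	struct file *file;
	uint32_t type;
	uint64_t sector;
	struct iovec *iov;
	int in;
};

static void blk_io_work_fn(struct work_struct *work)
{
	struct blk_io_work *w = container_of(work, struct blk_io_work, work);

	/*
	 * Runs on a worker thread, not the vhost thread. Since the iovecs
	 * reference guest memory, the worker would also need use_mm() on the
	 * vhost owner's mm (as vhost-net's poller does) before touching them.
	 */
	do_handle_io(w->file, w->type, w->sector, w->iov, w->in);
	/* The completion would be signalled back to the virtqueue here. */
	kfree(w);
}

static int queue_io(struct file *file, uint32_t type, uint64_t sector,
		    struct iovec *iov, int in)
{
	struct blk_io_work *w = kmalloc(sizeof(*w), GFP_KERNEL);

	if (!w)
		return -ENOMEM;

	w->file   = file;
	w->type   = type;
	w->sector = sector;
	w->iov    = iov;
	w->in     = in;
	INIT_WORK(&w->work, blk_io_work_fn);
	schedule_work(&w->work);	/* returns immediately; I/O happens later */
	return 0;
}

That would at least get the vfs_writev()/vfs_fsync() calls out of the vhost
handler's path, at the cost of an extra context switch per request.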

I also think it should be done at the bio layer.
I am not sure what you meant here. Do you want to do submit_bio() directly? It's not going to be that simple. Since the sector # is an offset within the file, one has to do getblocks() on it to find the real disk block numbers, plus we have to do get_user_pages() on those iovecs before submitting them as bios. All of this work is done by vfs_write()/vfs_read() anyway. I am not
sure what you are suggesting here.

File I/O is going to be slower; if we do vhost-blk we should concentrate on maximum performance. The block layer also exposes more functionality we can use (asynchronous barriers, for example).
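For what it's worth, a rough sketch of the bio-layer path being suggested,
assuming the backing store is a raw block device so the virtio sector maps
1:1 and no getblocks() translation is needed (the function and callback
names here are hypothetical):

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Completion callback: runs when the I/O finishes; a real implementation
 * would signal the virtqueue here via bi_private. */
static void vhost_blk_bio_done(struct bio *bio, int error)
{
	bio_put(bio);
}

static int submit_raw_bio(struct block_device *bdev, int rw, sector_t sector,
			  struct page *page, unsigned int len, unsigned int off)
{
	struct bio *bio = bio_alloc(GFP_KERNEL, 1);

	if (!bio)
		return -ENOMEM;

	bio->bi_sector = sector;	/* 512-byte units, as virtio-blk uses */
	bio->bi_bdev   = bdev;
	bio->bi_end_io = vhost_blk_bio_done;

	/* The page would come from get_user_pages() on the guest buffer. */
	if (bio_add_page(bio, page, len, off) != len) {
		bio_put(bio);
		return -EIO;
	}

	submit_bio(rw, bio);		/* returns immediately; I/O completes in bi_end_io */
	return 0;
}

submit_bio() never blocks and completion arrives in bi_end_io, which is what
makes this path naturally asynchronous; for a file-backed image, the block
mapping and page pinning mentioned above would still have to happen first.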


BTW, for fairness, CPU measurements should be done from the host side and include the vhost thread.

Will do.

Thanks,
Badari

