Re: [RFC] vhost-blk implementation

On Mon, 2010-03-29 at 23:37 +0300, Avi Kivity wrote:
> On 03/29/2010 09:20 PM, Chris Wright wrote:
> > * Badari Pulavarty (pbadari@xxxxxxxxxx) wrote:
> >    
> >> I modified my vhost-blk implementation to offload work to
> >> workqueues instead of doing it synchronously. In fact, I tried
> >> to spread the work across all the CPUs. But to my surprise,
> >> this did not improve performance compared to virtio-blk.
> >>
> >> I see vhost-blk taking more interrupts and context switches
> >> compared to virtio-blk. What is virtio-blk doing that I
> >> am not able to do from vhost-blk?
> >>      
> > Your I/O wait time is twice as long and your throughput is about half.
> > I think the qemu block submission does an extra attempt at merging
> > requests.  Does blktrace tell you anything interesting?
> >    

Yes. I see that in my test case (2M writes) QEMU is picking up 512K
requests from the virtio ring and merging them back into 2M before
submitting them.

Unfortunately, I can't do that quite as easily in vhost-blk. QEMU
re-creates the iovecs for the merged I/O. I will have to come up with
a scheme to do this :(

> It does.  I suggest using fio O_DIRECT random access patterns to avoid 
> such issues.

Well, I am not trying to come up with a test case where vhost-blk
performs better than virtio-blk. I am trying to understand where
and why vhost-blk performs worse than virtio-blk.


Thanks,
Badari


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
