Re: vhost-blk development

On 04/12/2012 12:52 AM, Michael Baysek wrote:

> In this particular case, I did intend to deploy these instances directly to 
> the ramdisk.  I want to squeeze every drop of performance out of these 
> instances for use cases with lots of concurrent accesses.   I thought it 
> would be possible to achieve improvements an order of magnitude or more 
> over SSD, but it seems not to be the case (so far).  


Last year I worked on virtio-blk over vhost (vhost-blk), whose original
goal was to move the virtio-blk backend into the kernel to reduce
system-call overhead and shorten the I/O code path.
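
For reference, a vhost backend is driven from userspace through a small
set of ioctls, and the guest's virtqueue kick is tied to an eventfd so
requests are serviced entirely in the kernel, without a detour through
QEMU. Below is a minimal sketch of that handshake. The /dev/vhost-blk
node is hypothetical (the backend was never merged); the ioctls
themselves are the generic vhost ones that merged backends such as
vhost-net use:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    int main(void)
    {
        /* Hypothetical device node; vhost-blk never went upstream,
         * so no such node exists in mainline kernels. */
        int vhost = open("/dev/vhost-blk", O_RDWR);
        if (vhost < 0) {
            perror("open");
            return 1;
        }

        /* Bind this vhost instance to the calling process's mm so
         * the kernel worker thread can reach guest memory directly. */
        if (ioctl(vhost, VHOST_SET_OWNER) < 0) {
            perror("VHOST_SET_OWNER");
            return 1;
        }

        /* Wire the guest's virtqueue kick to an eventfd: KVM signals
         * it on the guest's notify write, and the in-kernel backend
         * wakes and drains the ring with no userspace round trip. */
        int kick = eventfd(0, 0);
        struct vhost_vring_file file = { .index = 0, .fd = kick };
        if (ioctl(vhost, VHOST_SET_VRING_KICK, &file) < 0) {
            perror("VHOST_SET_VRING_KICK");
            return 1;
        }

        return 0;
    }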

I think your particular case (a ramdisk) is where vhost-blk would show
the biggest improvement, because the largest time consumer, the physical
I/O itself, is taken out of the path. I would expect results considerably
better than my earlier test numbers (+15% throughput, -10% latency),
which were measured against a local disk.

Unfortunately, vhost-blk was not considered useful enough at the time;
the QEMU folks thought it better to optimize the I/O stack inside QEMU
than to set up another code path alongside it.

As I recall, I developed vhost-blk against a Linux 3.0 base, so it
should not be hard to rebase it onto the latest kernel or to port it
back to RHEL 6's modified 2.6.32 kernel.

Thanks,
Yuan