Re: [RFC PATCH] vhost-blk: In-kernel accelerator for virtio block device

On 07/29/2011 10:45 PM, Liu Yuan wrote:
On 07/29/2011 08:50 PM, Stefan Hajnoczi wrote:
I hit a weirdness yesterday; I just want to mention it in case you notice it too.

When running vanilla qemu-kvm I forgot to use aio=native.  When I
compared the results against virtio-blk-data-plane (which *always*
uses Linux AIO) I was surprised to find that the vanilla run's average
4k read latency was lower, and its standard deviation was lower too.

So from now on I will run tests both with and without aio=native.
aio=native should be faster, and if I can reproduce the reverse I'll
try to figure out why.

Stefan
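
(For reference, the two drive configurations under comparison would look
roughly like this; the image path and device options here are assumptions,
not the actual test setup:

  # default: AIO emulated by the posix-aio-compat.c thread pool
  qemu-kvm -drive file=test.img,if=virtio,cache=none

  # Linux AIO; aio=native needs cache=none (O_DIRECT) to take effect
  qemu-kvm -drive file=test.img,if=virtio,cache=none,aio=native
)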
On my laptop I don't see this weirdness.  The emulated POSIX AIO is much worse than Linux AIO, as expected, and the deeper the iodepth goes, the wider the gap gets.
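
(A guest-side fio job along these lines would show that scaling; the fio
syntax is real, but the device name, depth, and runtime are assumptions:

  ; hypothetical 4k random-read job at a fixed queue depth;
  ; re-run with iodepth=1, 8, 16, 32 to see the gap widen
  [randread-4k]
  ioengine=libaio
  direct=1
  rw=randread
  bs=4k
  iodepth=16
  filename=/dev/vdb
  runtime=60
  time_based
)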

If aio=native is not set, QEMU uses the emulated POSIX AIO interface to do the IO.  I peeked at posix-aio-compat.c: it uses a thread pool and synchronous preadv/pwritev to emulate the AIO behaviour.  The synchronous IO interface can cause much poorer performance for random reads/writes, since the io-scheduler may never get a chance to merge the request stream (blk_finish_plug->queue_unplugged->__blk_run_queue).
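
(A minimal sketch of that emulation pattern, not QEMU's actual code; the
struct and function names here are made up:

  /* Thread-pool AIO emulation: each request is handed to a pool
   * thread, which performs a blocking preadv() and then signals
   * completion.  Because every preadv() is synchronous and issued
   * from a separate thread, requests hit the block layer one by one
   * and the io-scheduler gets little chance to sort them. */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sys/uio.h>

  struct emulated_aiocb {
      int fd;                 /* file descriptor of the image file */
      struct iovec *iov;      /* guest buffers */
      int iovcnt;
      off_t offset;
      ssize_t ret;            /* filled in on completion */
      void (*complete)(struct emulated_aiocb *acb);
  };

  /* Runs on a pool thread; one blocking syscall per request. */
  static void *aio_thread(void *opaque)
  {
      struct emulated_aiocb *acb = opaque;

      acb->ret = preadv(acb->fd, acb->iov, acb->iovcnt, acb->offset);
      acb->complete(acb);
      return NULL;
  }
)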

Yuan
Typo: not merge, I mean *sort* the requests.

