Re: [RFC] vhost-blk implementation

On Wed, Mar 24, 2010 at 01:22:37PM -0700, Badari Pulavarty wrote:
> Yes. This is with the default (writeback) cache model. As mentioned earlier,
> readahead is helping here, and in most cases the data would already be in
> the pagecache.

Ok.  cache=writeback performance is something I haven't bothered to look
at at all.  For cache=none, any streaming write or random workload with
large enough record sizes got basically the same performance as native
when using kernel aio; with the thread pool writes were the same but
reads were slightly degraded.  See my attached JLS presentation for some
numbers.

>>   
> iovecs and buffers are user-space pointers (from the host kernel's point
> of view). They are guest addresses, so I don't need to do any set_fs
> tricks.

Right now they're not declared as such, so sparse would complain.
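
For illustration, a minimal sketch of the annotation sparse wants (the names
below are hypothetical, not from the actual vhost-blk patch): pointers that
really hold guest/user addresses get declared __user and are then handed to
copy_from_user() and friends without casts:

/* Sketch only: hypothetical request descriptor.  Sparse flags plain
 * pointers passed to copy_from_user(); marking them __user documents
 * that they hold guest (user) addresses.
 */
#include <linux/uaccess.h>
#include <linux/uio.h>

struct vhostblk_req {
	struct iovec __user *uiov;	/* guest-provided iovec array */
	unsigned int nr_segs;
};

static int vhostblk_get_iov(struct vhostblk_req *req, struct iovec *kiov)
{
	/* copy_from_user() expects a __user source pointer; an
	 * unannotated pointer here is what makes sparse complain.
	 */
	if (copy_from_user(kiov, req->uiov,
			   req->nr_segs * sizeof(*kiov)))
		return -EFAULT;
	return 0;
}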

> Yes. QEMU virtio-blk is batching up all the writes and handing off the
> work to another thread.

Only when using the thread pool.  When using kernel aio it performs the
io_submit system call directly.
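
(For reference, the kernel aio path boils down to something like the
following libaio sketch; fd, buf, len and offset are placeholders, and real
code would submit batches and reap completions asynchronously rather than
blocking like this.)

#include <libaio.h>
#include <string.h>

/* Sketch: submit one write via Linux AIO (libaio) and wait for it. */
int submit_one_write(int fd, void *buf, size_t len, long long offset)
{
	io_context_t ctx;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	int ret = -1;

	memset(&ctx, 0, sizeof(ctx));
	if (io_setup(1, &ctx) < 0)		/* create the aio context */
		return -1;

	io_prep_pwrite(&cb, fd, buf, len, offset);
	if (io_submit(ctx, 1, cbs) != 1)	/* hand the request to the kernel */
		goto out;

	if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)	/* wait for completion */
		ret = 0;
out:
	io_destroy(ctx);
	return ret;
}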

> What should I do here? I can create a bunch of kernel threads to do the
> IO for me, or somehow fit in and reuse the AIO io_submit() mechanism.
> What's the best way here? I hate to duplicate all the code VFS is doing.

The only thing you can do currently is add a thread pool.  It might be
possible to use io_submit for the O_DIRECT case with some refactoring.
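
Something along these lines, very roughly (a sketch only, with made-up
names; it assumes the work item runs with the guest's mm attached the way
the vhost worker does, so the guest iovecs can be handed to vfs_writev()
directly, and it omits error handling and pushing the completion back into
the virtio ring):

#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/uio.h>
#include <linux/workqueue.h>
#include <linux/mmu_context.h>
#include <linux/slab.h>

/* Hypothetical per-request work item executed from a workqueue so the
 * vhost thread does not block on the VFS call.
 */
struct vhostblk_work {
	struct work_struct work;
	struct file *file;		/* backing image */
	struct mm_struct *mm;		/* guest (qemu) mm */
	struct iovec __user *iov;	/* guest buffers */
	unsigned long nr_segs;
	loff_t pos;
};

static void vhostblk_do_write(struct work_struct *w)
{
	struct vhostblk_work *vw = container_of(w, struct vhostblk_work, work);

	use_mm(vw->mm);			/* make guest pointers dereferenceable */
	vfs_writev(vw->file, vw->iov, vw->nr_segs, &vw->pos);
	unuse_mm(vw->mm);
	/* completion would be signalled to the guest here */
	kfree(vw);
}

static void vhostblk_queue_write(struct workqueue_struct *wq,
				 struct vhostblk_work *vw)
{
	INIT_WORK(&vw->work, vhostblk_do_write);
	queue_work(wq, &vw->work);
}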
