KVM/QEMU rbd read latency

>>We also need to support >1 librbd/librados-internal IO
>>thread for outbound/inbound paths.

That would be wonderful!
Multiple iothreads per disk are also coming to QEMU. (I have seen Paolo Bonzini sending a lot of patches this month.)



----- Original Message -----
From: "Jason Dillaman" <jdillama at redhat.com>
To: "aderumier" <aderumier at odiso.com>
Cc: "Phil Lacroute" <lacroute at skyportsystems.com>, "ceph-users" <ceph-users at lists.ceph.com>
Sent: Friday, 17 February 2017 15:16:39
Subject: Re: [ceph-users] KVM/QEMU rbd read latency

On Fri, Feb 17, 2017 at 2:14 AM, Alexandre DERUMIER <aderumier at odiso.com> wrote: 
> and I have good hope than this new feature 
> "RBD: Add support readv,writev for rbd" 
> http://marc.info/?l=ceph-devel&m=148726026914033&w=2 

It will definitely eliminate one unnecessary data copy -- but sadly it 
will still make a single copy within librbd immediately, since librados 
*might* touch the IO memory after it has ACKed the op. Once that issue 
is addressed, librbd can eliminate that copy when the librbd cache is 
disabled. We also need to support >1 librbd/librados-internal IO 
thread for the outbound/inbound paths. 

-- 
Jason 


