Re: rbd_cache, limiting read on high iops around 40k

>>At high queue-depths and high IOPS, I would suspect that the bottleneck is the single, coarse-grained mutex protecting the cache data structures. It's been a back burner item to refactor the current cache mutex into finer-grained locks.
>>
>>Jason

Thanks for the explanation, Jason.

Anyway, inside qemu, I'm around 35-40k IOPS with or without rbd_cache, so it doesn't make much of a difference currently
(maybe some other qemu bottleneck).
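
For anyone curious what "finer-grained locks" could look like in practice, here is a minimal C++ sketch of the general idea (sharding one coarse mutex into per-shard locks). This is purely illustrative and not the actual librbd ObjectCacher code; the ShardedCache class, NUM_SHARDS value, and object-number keying are all hypothetical:

// Hypothetical sketch, not librbd code: replace one coarse mutex over the
// whole cache with per-shard locks, so concurrent reads that hit different
// shards no longer serialize on a single global lock.
#include <array>
#include <cstdint>
#include <map>
#include <mutex>
#include <vector>

struct CacheEntry {
  std::vector<uint8_t> data;
};

class ShardedCache {
  static constexpr size_t NUM_SHARDS = 16;  // assumed shard count

  struct Shard {
    std::mutex lock;                        // protects only this shard
    std::map<uint64_t, CacheEntry> entries;
  };
  std::array<Shard, NUM_SHARDS> shards;

  Shard& shard_for(uint64_t object_no) {
    // Map each object to one shard; distinct shards contend independently.
    return shards[object_no % NUM_SHARDS];
  }

public:
  // A lookup takes only its shard's mutex instead of a global one, so up to
  // NUM_SHARDS lookups on distinct shards can proceed in parallel.
  bool lookup(uint64_t object_no, CacheEntry& out) {
    Shard& s = shard_for(object_no);
    std::lock_guard<std::mutex> guard(s.lock);
    auto it = s.entries.find(object_no);
    if (it == s.entries.end())
      return false;
    out = it->second;
    return true;
  }

  void insert(uint64_t object_no, CacheEntry entry) {
    Shard& s = shard_for(object_no);
    std::lock_guard<std::mutex> guard(s.lock);
    s.entries[object_no] = std::move(entry);
  }
};

The actual refactor in librbd may well take a different shape; the point is only that at 40k+ IOPS the limit becomes lock contention rather than the cache logic itself.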
 

----- Mail original -----
De: "Jason Dillaman" <dillaman@xxxxxxxxxx>
À: "Mark Nelson" <mnelson@xxxxxxxxxx>
Cc: "aderumier" <aderumier@xxxxxxxxx>, "pushpesh sharma" <pushpesh.eck@xxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Envoyé: Mardi 9 Juin 2015 15:39:50
Objet: Re:  rbd_cache, limiting read on high iops around 40k

> In the past we've hit some performance issues with RBD cache that we've 
> fixed, but we've never really tried pushing a single VM beyond 40+K read 
> IOPS in testing (or at least I never have). I suspect there's a couple 
> of possibilities as to why it might be slower, but perhaps joshd can 
> chime in as he's more familiar with what that code looks like. 
> 

At high queue-depths and high IOPS, I would suspect that the bottleneck is the single, coarse-grained mutex protecting the cache data structures. It's been a back burner item to refactor the current cache mutex into finer-grained locks. 

Jason 
