Re: [ceph-users] rbd_cache, limiting read on high iops around 40k


 



>>At high queue-depths and high IOPS, I would suspect that the bottleneck is the single, coarse-grained mutex protecting the cache data structures. It's been a back-burner item to refactor the current cache mutex into finer-grained locks. 
>>
>>Jason 

Thanks for the explanation, Jason.

Anyway, inside qemu I'm around 35-40k iops with or without rbd_cache, so it doesn't make much of a difference currently
(maybe some other qemu bottleneck).
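
For illustration only (this is not the actual librbd ObjectCacher code): a minimal C++ sketch of the kind of finer-grained, sharded locking Jason describes, where each cached object hashes to its own independently locked shard so concurrent readers of different objects no longer serialize on a single mutex. The shard count, types, and method names below are all hypothetical.

#include <array>
#include <cstdint>
#include <functional>
#include <map>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical sketch: instead of one coarse mutex guarding the whole
// cache, hash each object to one of N independently locked shards.
struct CacheShard {
  std::mutex lock;                                       // guards only this shard
  std::map<std::string, std::vector<uint8_t>> entries;   // object id -> cached data
};

class ShardedCache {
  static constexpr size_t kShards = 16;                  // assumed shard count
  std::array<CacheShard, kShards> shards_;

  CacheShard& shard_for(const std::string& oid) {
    return shards_[std::hash<std::string>{}(oid) % kShards];
  }

public:
  // Returns true and copies the data out on a cache hit.
  bool lookup(const std::string& oid, std::vector<uint8_t>* out) {
    CacheShard& s = shard_for(oid);
    std::lock_guard<std::mutex> g(s.lock);               // only this shard is held
    auto it = s.entries.find(oid);
    if (it == s.entries.end())
      return false;
    *out = it->second;
    return true;
  }

  void insert(const std::string& oid, std::vector<uint8_t> data) {
    CacheShard& s = shard_for(oid);
    std::lock_guard<std::mutex> g(s.lock);
    s.entries[oid] = std::move(data);
  }
};

With a single coarse mutex, every lookup from every I/O thread takes the same lock; with sharding, contention only occurs when two threads happen to hit the same shard, which is why this is a plausible way to push past the ~40k iops ceiling discussed here.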
 

----- Original Message -----
From: "Jason Dillaman" <dillaman@xxxxxxxxxx>
To: "Mark Nelson" <mnelson@xxxxxxxxxx>
Cc: "aderumier" <aderumier@xxxxxxxxx>, "pushpesh sharma" <pushpesh.eck@xxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, June 9, 2015 15:39:50
Subject: Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

> In the past we've hit some performance issues with RBD cache that we've 
> fixed, but we've never really tried pushing a single VM beyond 40+K read 
> IOPS in testing (or at least I never have). I suspect there's a couple 
> of possibilities as to why it might be slower, but perhaps joshd can 
> chime in as he's more familiar with what that code looks like. 
> 

At high queue-depths and high IOPS, I would suspect that the bottleneck is the single, coarse-grained mutex protecting the cache data structures. It's been a back burner item to refactor the current cache mutex into finer-grained locks. 

Jason 




