Re: optimizing recovery throughput

Hi,

On 07/21/13 09:05, Dan van der Ster wrote:
> This is with a 10Gb network -- and we can readily get 2-3GBytes/s in
> "normal" rados bench tests across many hosts in the cluster. I wasn't
> too concerned with the overall MBps throughput in my question, but
> rather the objects/s recovery rate --

They are necessarily linked, and that figure was too close to 1 Gb/s not to mention it. But then it means there is another cap somewhere...
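Just to make the link explicit (the numbers below are invented for illustration, not taken from your cluster): recovery MB/s is simply objects/s times the average object size, so with small objects the byte rate stays low even when the object rate is saturated. A quick back-of-the-envelope:

    # Hypothetical: 400 objects/s recovering at an average object size of 64 KB.
    # That is only ~25 MB/s on the wire, far below 10GbE line rate,
    # which would point at a per-object cap rather than a bandwidth cap.
    awk 'BEGIN { objs=400; kb=64; printf "%.1f MB/s\n", objs*kb/1024 }'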

> I assumed that since we have so
> many small objects in this test system that the per-object overhead is
> dominating here... or maybe I am way off??

I'm not sure (I haven't read Ceph's source code), but I think the overhead is mostly tied to the PG count in the cluster. Do you monitor your cluster with something like Munin? Once the network is ruled out, what remains is drive utilisation/latency (look at "iostat -mx 5" for instance) and CPU usage (using top, for instance).
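For reference, this is roughly what I would run to narrow it down (assuming a reasonably recent ceph CLI; <pool> is a placeholder for your pool name):

    # Watch the recovery rate the cluster itself reports (objects/s and MB/s):
    ceph -w
    # PG count of a pool (the overhead I suspect is tied to this):
    ceph osd pool get <pool> pg_num
    # Drive utilisation and latency, refreshed every 5 seconds:
    iostat -mx 5
    # CPU usage of the ceph-osd processes:
    top

If iostat shows the disks near 100% utilised, or top shows ceph-osd pinning cores, that would explain the objects/s ceiling better than the network would.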
