Fluctuating I/O speed degrading over time

Hi,

I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs, all on SSD drives, and I have noticed that I/O speed, especially write speed, is degrading over time. When we first started the cluster we could get 250-300 MB/s writes to the SSD cluster, but now we only reach about half that. The speed also fluctuates: sometimes I get slightly better results, but at other times performance is very poor.
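
If it helps to have a concrete baseline test: a plain rados bench sequential write run is one way to produce comparable numbers ("testpool" below is just a placeholder pool name):

    # 30-second sequential write benchmark (objects kept for the read test)
    rados -p testpool bench 30 write --no-cleanup
    # read the same objects back, then remove them
    rados -p testpool bench 30 seq
    rados -p testpool cleanup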

We started with 3 OSD servers and 12 OSDs and gradually added more servers. The Ceph clients are KVM hypervisors, and the connections between clients and servers, and between the servers themselves, go through a 10 Gbps switch with jumbo frames enabled on all interfaces.
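
One sanity check worth noting here: with jumbo frames, the full MTU has to pass end to end between every pair of hosts, or throughput can get erratic. Assuming a 9000-byte MTU, a quick test (the host address is a placeholder) is:

    # 8972 = 9000-byte MTU minus 28 bytes of IP + ICMP headers;
    # -M do sets the don't-fragment flag, so the ping fails if any
    # hop along the path has a smaller MTU
    ping -M do -s 8972 <osd-server-ip>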

Any advice on how I can start troubleshooting what might have caused the degradation in I/O speed? Does utilisation contribute to it (we have more users now than when we started)? Is there any optimisation we can do to improve I/O performance?
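
In case it helps to be concrete, these are the checks I assume would be a reasonable starting point (all stock Ceph and Linux tools):

    # overall health, plus any ongoing recovery/backfill that steals I/O
    ceph -s
    # per-OSD commit/apply latency, to spot one slow SSD dragging the rest
    ceph osd perf
    # per-OSD data distribution and fullness
    ceph osd df
    # on each OSD host: device-level utilisation and latency
    iostat -x 1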

Appreciate any advice, thank you.

Cheers.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
