High fs_apply_latency on one node

I've got a 3-node cluster where ceph osd perf reports reasonable fs_apply_latency on two of the three nodes (~30 ms), but on the third node latencies average 15,000+ ms across all of its OSDs.

Running Ceph 0.72.2 on Ubuntu 13.10. Each node has 30 HDDs with 6 SSDs for journals. iperf reports full bidirectional 10 Gbps between nodes. Running fio locally on any of the nodes across all OSDs gives ~4000 MB/s with completion latencies (clat) around 60 ms. At first I thought this was an issue with my RBD client, but rados bench writes are also very slow.
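In case it helps anyone reproduce the check, here is a minimal sketch of pulling the slow OSDs out of ceph osd perf programmatically. It assumes JSON output shaped like `ceph osd perf -f json` (the key names here are assumptions and may differ between Ceph releases); the sample data below is made up for illustration:

```python
import json

# Sample shaped like `ceph osd perf -f json` output.
# Key names ("osd_perf_infos", "apply_latency_ms") are assumptions and
# may differ between Ceph releases; the values are illustrative only.
SAMPLE = """
{"osd_perf_infos": [
  {"id": 0, "perf_stats": {"commit_latency_ms": 12, "apply_latency_ms": 28}},
  {"id": 1, "perf_stats": {"commit_latency_ms": 15, "apply_latency_ms": 31}},
  {"id": 2, "perf_stats": {"commit_latency_ms": 900, "apply_latency_ms": 15400}}
]}
"""

def slow_osds(perf_json, threshold_ms=1000):
    """Return ids of OSDs whose apply latency exceeds threshold_ms."""
    data = json.loads(perf_json)
    return [o["id"] for o in data["osd_perf_infos"]
            if o["perf_stats"]["apply_latency_ms"] > threshold_ms]

print(slow_osds(SAMPLE))  # prints [2]
```

On the bad node every OSD shows up in that list, which is what makes me suspect something node-wide rather than a single failing disk.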

ceph osd perf, rados bench, and fio output at: http://pastebin.com/Kze0AKnr
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



