Re: High fs_apply_latency on one node

The apply latency is how long it's taking for the backing filesystem to ack (not sync to disk) writes from the OSD. Either those OSDs are getting a lot more writes than the others (you can check by seeing how many PGs are mapped to each), or the local filesystem itself is slow; either way, apply standard local fs debugging techniques to that node.
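If it helps, here's a rough sketch of counting PGs per OSD from the JSON pg dump. The exact JSON layout varies between releases (older versions keep pg_stats at the top level, newer ones nest it under pg_map), so the script checks both forms:

ceph pg dump --format json 2>/dev/null | python -c '
import collections, json, sys

# pg dump JSON layout differs by release; try both shapes.
dump = json.load(sys.stdin)
stats = dump.get("pg_stats") or dump.get("pg_map", {}).get("pg_stats", [])

# Tally how many PGs list each OSD in their acting set.
counts = collections.Counter()
for pg in stats:
    for osd in pg["acting"]:
        counts[osd] += 1

for osd, n in sorted(counts.items()):
    print("osd.%d: %d PGs" % (osd, n))
'

If the counts come back even, I'd move on to watching the disks on the slow node with something like iostat -x 1, looking for drives with high await or pegged %util.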
-Greg

On Monday, March 3, 2014, Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx> wrote:
I've got a 3-node cluster where ceph osd perf reports reasonable fs_apply_latency (~30ms) on two of the three nodes, but on the third node latencies average 15000+ms for all OSDs.

Running Ceph 0.72.2 on Ubuntu 13.10. Each node has 30 HDDs with 6 SSDs for journals. iperf reports full bidirectional 10Gbps. Running fio locally across all OSDs on any of the nodes gives me ~4000MB/s with clats of ~60ms. At first I thought this was an issue with my client running rbd, but rados bench is also very slow for writes.
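For anyone wanting to reproduce, a rados bench write run looks roughly like this (the pool name "rbd" is just an example; substitute a test pool). This does 30 seconds of 4MB writes with 16 concurrent ops, and ceph osd perf can be re-run during the bench to watch the per-OSD latencies move:

rados -p rbd bench 30 write -t 16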

ceph osd perf, rados bench, and fio output at: http://pastebin.com/Kze0AKnr


--
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
