Re: High fs_apply_latency on one node

[ Re-adding the list. ]

On Mon, Mar 3, 2014 at 3:28 PM, Chris Kitzmiller
<ckitzmiller@xxxxxxxxxxxxx> wrote:
> On Mar 3, 2014, at 4:19 PM, Gregory Farnum wrote:
>> The apply latency is how long it's taking for the backing filesystem to ack (not sync to disk) writes from the OSD. Check whether it's getting a lot more writes than the other OSDs (you can see how many PGs are mapped to each); if not, just apply standard local fs debugging techniques on that node.
>> -Greg
>
> When I do:
>         ceph pg dump summary | grep ^3\. | awk '{print $14}' | tr -d '[]' | cut -d ',' -f 1 | sort -V | uniq -c
>
> I get a frequency count of how many PGs list each OSD as their first OSD in the ceph pg dump output (am I doing this right?). That shows a reasonably even distribution across all OSDs with no real variation between nodes. I've got 90 OSDs and 4096 PGs, and I'm seeing values between 32 and 62 with a good clumping around 45.
>
> Is there something else that might be wrong? If fio runs fine on the OSD drives for that node, what can I do to test the filesystem (ext4)?

Sounds like maybe you don't have the right sort of fio test to mimic
the OSD workload. Make sure it includes syncing to disk, and try using
4K write sizes or similar. *shrug*
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
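
For reference, a sync-heavy fio run along these lines should come much closer to the OSD's small, synced write pattern than a large-block streaming test (the path /var/lib/ceph/osd/ceph-0 is just an example; point --directory at one of the ext4 OSD filesystems on the slow node):

        fio --name=osd-4k-sync --directory=/var/lib/ceph/osd/ceph-0 \
            --rw=randwrite --bs=4k --size=1G --ioengine=sync \
            --fsync=1 --runtime=60 --time_based --group_reporting

With --fsync=1, fio issues an fsync after every write, so the reported latencies include the cost of getting data to stable storage rather than just into the page cache.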


>
>> On Monday, March 3, 2014, Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx> wrote:
>> I've got a 3 node cluster where ceph osd perf reports reasonable fs_apply_latency for 2 out of 3 of my nodes (~30ms). But on the third node I've got latencies averaging 15000+ms for all OSDs.
>>
>> Running Ceph 0.72.2 on Ubuntu 13.10. Each node has 30 HDDs with 6 SSDs for journals. iperf reports full bidirectional 10Gbps. fio run locally on any of the nodes across all OSDs gives me ~4000 MB/s with completion latencies (clat) around 60ms. At first I thought this was an issue with my client running RBD, but rados bench is also very slow for writes.
>>
>> ceph osd perf, rados bench, and fio output at: http://pastebin.com/Kze0AKnr
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



