Re: cephfs performance issue MDSs report slow requests and osd memory usage

Hi,

I'm experiencing the same issue with these settings in ceph.conf:
        osd op queue = wpq
        osd op queue cut off = high

Furthermore, I cannot read any of the old data in the pool that serves
CephFS. However, I can write new data and read that new data back.
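
In case it helps with debugging: this is how I check which values a
running OSD actually uses (osd.0 is just an example daemon name; run
this on the host that carries it, via the admin socket):

        ceph daemon osd.0 config get osd_op_queue
        ceph daemon osd.0 config get osd_op_queue_cut_off

As far as I know, both options only take effect after an OSD restart,
so changing ceph.conf alone is not enough.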

Regards
Thomas

On 24.09.2019 at 10:24, Yoann Moulin wrote:
> Hello,
>
>>> I have a Ceph Nautilus 14.2.1 cluster used for CephFS only, on 40x 1.8 TB SAS disks (no SSD) in 20 servers.
>>>
>>> I often get "MDSs report slow requests" and plenty of "[WRN] 3 slow requests, 0 included below; oldest blocked for > 60281.199503 secs"
>>>
>>> After some investigation, I saw that ALL ceph-osd processes eat a lot of memory, up to 130GB RSS each. Is this value normal? Could it be related to the
>>> slow requests? Does using disks only (no SSD) increase the probability of slow requests?
>> If you haven't set:
>>
>> osd op queue cut off = high
>>
>> in /etc/ceph/ceph.conf on your OSDs, I'd give that a try. It should
>> help quite a bit with pure HDD clusters.
> OK I'll try this, thanks.
>
> If I want to add this to my ceph-ansible playbook parameters, in which file should I add it, and what is the best way to do it?
>
> Should I add these three lines to all.yml or to osds.yml?
>
> ceph_conf_overrides:
>   global:
>     osd_op_queue_cut_off: high
>
> Is there another (better?) way to do that?
>
> Thanks for your help.
>
> Best regards,
>
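
Regarding the ceph-ansible question quoted above: here is a sketch of
how I would scope the override to the osd section instead of global
(untested on my side; it assumes the overrides live in
group_vars/all.yml, and since Ceph treats spaces and underscores in
option names the same, the underscored spelling above should work too):

        ceph_conf_overrides:
          osd:
            osd op queue: wpq
            osd op queue cut off: high

The OSDs still need a restart afterwards for the new queue settings to
be picked up.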

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



