Re: cephfs mds millions of caps

Hello Zheng,

Thanks for opening that issue on the bug tracker.

Also thanks for that tip. Caps dropped from 1.6M to 600k for that client.
Is it safe to run in a cronjob, say once or twice a day, in production?
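
For context, something along these lines is what I have in mind
(illustrative only; the file name and schedule below are made up):

   # /etc/cron.d/cephfs-drop-caches (hypothetical)
   # Drop pagecache, dentries and inodes (echo 3) twice a day on this
   # CephFS client so the kernel client releases most caps to the MDS.
   0 4,16 * * * root /bin/sh -c 'echo 3 > /proc/sys/vm/drop_caches'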

Thanks!


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ

On Thu, Dec 21, 2017 at 11:55 AM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
On Thu, Dec 21, 2017 at 7:33 PM, Webert de Souza Lima
<webert.boss@xxxxxxxxx> wrote:
> I have upgraded the kernel on a client node used for tests (one that has
> close-to-zero traffic).
>
>    {
>       "reconnecting" : false,
>       "id" : 1620266,
>       "num_leases" : 0,
>       "inst" : "client.1620266 10.0.0.111:0/3921220890",
>       "state" : "open",
>       "completed_requests" : 0,
>       "num_caps" : 1402490,
>       "client_metadata" : {
>          "kernel_version" : "4.4.0-104-generic",
>          "hostname" : "suppressed",
>          "entity_id" : "admin"
>       },
>       "replay_requests" : 0
>    },
>
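> For reference, output like the above can be obtained from the MDS admin
> socket with something like (where <name> is a placeholder for the MDS id):
>
>    ceph daemon mds.<name> session ls
>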
> Still 1.4M caps in use.
>
> Is upgrading the client kernel enough?
>

See http://tracker.ceph.com/issues/22446. We haven't implemented that
feature yet. Running "echo 3 > /proc/sys/vm/drop_caches" on the client
should drop most caps.
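
Roughly (the mds name and the grep are just for illustration):

   # on the MDS host: check how many caps the client currently holds
   ceph daemon mds.<name> session ls | grep num_caps

   # on the client: drop dentries/inodes, which releases most caps
   echo 3 > /proc/sys/vm/drop_caches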

>
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> Belo Horizonte - Brasil
> IRC NICK - WebertRLZ
>
> On Fri, Dec 15, 2017 at 11:16 AM, Webert de Souza Lima
> <webert.boss@xxxxxxxxx> wrote:
>>
>> So,
>>
>> On Fri, Dec 15, 2017 at 10:58 AM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>>
>>>
>>> 300k is already quite a lot; opening them takes a long time. Does your
>>> mail server really open that many files?
>>
>>
>> Yes, probably. It's a commercial solution: a few thousand domains, tens
>> of thousands of users, and God knows how many mailboxes.
>> From the daemonperf output you can see the write workload is high, so
>> yes, a lot of file opens (dovecot mdbox stores multiple e-mails per
>> file, split across many files).
>>
>>> I checked the 4.4 kernel; it includes the code that trims the cache
>>> when the MDS recovers.
>>
>>
>> OK, all nodes are running 4.4.0-75-generic. The fix might have been
>> included in a newer version; I'll upgrade ASAP.
>>
>>
>> Regards,
>>
>> Webert Lima
>> DevOps Engineer at MAV Tecnologia
>> Belo Horizonte - Brasil
>> IRC NICK - WebertRLZ
>
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
