Re: kernel cephfs - too many caps used by client

Hello Lei,

On Thu, Oct 17, 2019 at 8:43 PM Lei Liu <liul.stone@xxxxxxxxx> wrote:
>
> Hi cephers,
>
> We have some ceph clusters that use cephfs in production (mounted with the kernel cephfs client), but several clients often keep a large number of caps (millions) unreleased.
> I suspect this is because the clients are unable to complete the cache release; errors may have occurred, but nothing shows up in the logs.
>
> client kernel version is 3.10.0-957.21.3.el7.x86_64
> ceph version is mostly v12.2.8
>
> ceph status shows:
>
> x clients failing to respond to cache pressure
>
> client kernel debug shows:
>
> # cat /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
> total 23801585
> avail 1074
> used 23800511
> reserved 0
> min 1024
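To see how bad it is host-wide, the per-mount "used" counters can be totaled across every cephfs mount under the same debugfs tree. A minimal sketch (the `total_used_caps` helper is mine, not part of any ceph tooling; it assumes debugfs is mounted at /sys/kernel/debug):

```shell
# total_used_caps DIR - sum the "used" cap counters from every
# <client>/caps file under DIR. On a real host DIR would be
# /sys/kernel/debug/ceph (requires debugfs to be mounted and
# usually root privileges).
total_used_caps() {
    awk '$1 == "used" {sum += $2} END {print sum + 0}' "$1"/*/caps
}

# Example invocation on a client host:
#   total_used_caps /sys/kernel/debug/ceph
```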
>
> mds config:
> [mds]
> mds_max_caps_per_client = 10485760
> # 50G
> mds_cache_memory_limit = 53687091200
>
> Is there a ceph configuration that can solve this problem?

mds_max_caps_per_client was introduced in Luminous 12.2.12 [1]; it has
no effect on v12.2.8, so the setting in your [mds] section is being
ignored. You need to upgrade.

[1] https://tracker.ceph.com/issues/38130
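Once the MDS daemons are on 12.2.12 or later, the limit already in your
[mds] section takes effect after an MDS restart. As a sketch, it should
also be injectable at runtime without a restart (the value below simply
mirrors the one quoted above):

```shell
# After upgrading to >= 12.2.12, apply the cap limit at runtime;
# requires an admin keyring on the host running the command.
ceph tell mds.* injectargs '--mds_max_caps_per_client=10485760'
```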

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



