Re: kernel cephfs - too many caps used by client

It's not yet clear to me what the problem is. Please try
increasing the debugging on your MDS and share a snippet
(privately to me if you wish). Other information would also be
helpful, such as `ceph status` output and what kind of workloads
these clients are running.
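
For example, a rough sketch of what I mean (mds.a is just a
placeholder for your active MDS name, and 10 is an arbitrary but
usually sufficient debug level):

# on the MDS host, raise MDS debugging via the admin socket
ceph daemon mds.a config set debug_mds 10
# reproduce the problem, then collect the cluster state
ceph status
# restore the default debug level afterwards
ceph daemon mds.a config set debug_mds 1/5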

On Fri, Oct 18, 2019 at 7:22 PM Lei Liu <liul.stone@xxxxxxxxx> wrote:
>
> Only the OSDs are still on v12.2.8; all of the MDS and MON daemons are running v12.2.12.
>
> # ceph versions
> {
>     "mon": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 3
>     },
>     "mgr": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 4
>     },
>     "osd": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 24,
>         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 203
>     },
>     "mds": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 5
>     },
>     "rgw": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 1
>     },
>     "overall": {
>         "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 37,
>         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 203
>     }
> }
>
> On Sat, Oct 19, 2019 at 10:09 AM Lei Liu <liul.stone@xxxxxxxxx> wrote:
>>
>> Thanks for your reply.
>>
>> Yes, I have already set it:
>>
>>> [mds]
>>> mds_max_caps_per_client = 10485760     # default is 1048576
>>
>>
>> I think the current value is already large enough per client. Do I need to increase it further?
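>>
>> For reference, this is roughly how I check the per-client cap counts on the MDS side (assuming the admin socket is reachable; mds.a stands in for the real MDS name):
>>
>> # list client sessions with the number of caps each one holds
>> ceph daemon mds.a session ls | grep -E '"id"|"num_caps"'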
>>
>> Thanks.
>>
>> On Sat, Oct 19, 2019 at 6:30 AM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
>>>
>>> Hello Lei,
>>>
>>> On Thu, Oct 17, 2019 at 8:43 PM Lei Liu <liul.stone@xxxxxxxxx> wrote:
>>> >
>>> > Hi cephers,
>>> >
>>> > We have several Ceph clusters that use CephFS in production (mounted with the kernel client), but some clients often hold a large number of caps (millions) without releasing them.
>>> > I believe this is because the client cannot complete the cache release; it may have hit errors, but there are no logs.
>>> >
>>> > The client kernel version is 3.10.0-957.21.3.el7.x86_64.
>>> > The Ceph version is mostly v12.2.8.
>>> >
>>> > ceph status shows:
>>> >
>>> > x clients failing to respond to cache pressure
>>> >
>>> > client kernel debug shows:
>>> >
>>> > # cat /sys/kernel/debug/ceph/a00cc99c-f9f9-4dd9-9281-43cd12310e41.client11291811/caps
>>> > total 23801585
>>> > avail 1074
>>> > used 23800511
>>> > reserved 0
>>> > min 1024
>>> >
>>> > mds config:
>>> > [mds]
>>> > mds_max_caps_per_client = 10485760
>>> > # 50G
>>> > mds_cache_memory_limit = 53687091200
>>> >
>>> > I want to know whether any Ceph configuration options can solve this problem.
>>>
>>> mds_max_caps_per_client is new in Luminous 12.2.12. See [1]. You need
>>> to upgrade.
>>>
>>> [1] https://tracker.ceph.com/issues/38130
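>>>
>>> After the upgrade, you can confirm the MDS actually picked up the setting with something like (mds.a being a placeholder for your MDS name):
>>>
>>> ceph daemon mds.a config get mds_max_caps_per_client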
>>>
>>> --
>>> Patrick Donnelly, Ph.D.
>>> He / Him / His
>>> Senior Software Engineer
>>> Red Hat Sunnyvale, CA
>>> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>>>


-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



