Re: cephfs: massive drop in MDS requests per second with increasing number of caps

Hi Dietmar,

thanks for that. I reduced the value and, indeed, the number of caps clients were holding started going down.

A question about the particular value of 64K. Did you run several tests and find this one to be optimal, or was it just a lucky guess?

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Dietmar Rieder <dietmar.rieder@xxxxxxxxxxx>
Sent: 19 January 2021 13:24:15
To: Frank Schilder; ceph-users@xxxxxxx
Subject: Re:  Re: cephfs: massive drop in MDS requests per second with increasing number of caps

Hi Frank,

you don't need to remount the fs. The kernel driver should react to the
change on the MDS.
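
In case you want to double check that the change is active, something along
these lines should show the value the MDS is running with (just a sketch;
<mds name> is a placeholder for your daemon name):

# ceph config get mds mds_max_caps_per_client
# ceph daemon mds.<mds name> config get mds_max_caps_per_client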

Best
   Dietmar

On 1/19/21 9:07 AM, Frank Schilder wrote:
> Hi Dietmar,
>
> thanks for discovering this. I also observed in the past that clients can become unbearably slow for no apparent reason. I never managed to reproduce this and, therefore, didn't report it here.
>
> A question about setting these flags on an existing mount. Will a "mount -o remount /mnt/cephfs" update client settings from the cluster without interrupting I/O? I couldn't find anything regarding updating config settings in the manual pages.
>
> I would be most interested in further updates on this matter, and also in any other flags you find with a positive performance impact.
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: Dietmar Rieder <dietmar.rieder@xxxxxxxxxxx>
> Sent: 18 January 2021 21:01:50
> To: ceph-users@xxxxxxx
> Subject:  Re: cephfs: massive drop in MDS requests per second with increasing number of caps
>
> Hi Burkhard, hi list,
>
>
> I checked the 'mds_max_caps_per_client' setting and it turned out that
> it was set to the default value of 1 million. The
> 'mds_cache_memory_limit', however, I had previously set to 40 GB.
>
>
> Given this, I now started to play around with the max caps and set
> 'mds_max_caps_per_client' to 64k:
>
> # ceph config set mds mds_max_caps_per_client 65536
>
> This resulted in much better and stable performance of ~1.4k
> req/sec from one client and ~2.9k req/sec when running 2 clients in
> parallel. Remember, it was max ~660 req/sec before with the 1M default,
> and it gradually decreased to ~60 req/sec after some minutes, never
> getting higher again unless we manually dropped the dentries and inodes
> from the VM cache on the client. (I guess this is because only 5000 caps
> are recalled after reaching mds_max_caps_per_client.)
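>
> (If I read the docs right, that batch size comes from the
> 'mds_recall_max_caps' option. I haven't tuned it myself, so just as a
> sketch, checking and, if needed, raising it (30000 below is only an
> example value) would look like this:)
>
> # ceph config get mds mds_recall_max_caps
> # ceph config set mds mds_recall_max_caps 30000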
>
> I'll keep this for now and observe if it has any impact on other
> operations or situations.
>
> Still, I wonder why a higher number of caps (i.e. >64k) on the client
> completely destroys the performance.
>
> Thanks again
>     Dietmar
>
> On 1/18/21 6:20 PM, Dietmar Rieder wrote:
>> Hi Burkhard,
>>
>> thanks so much for the quick reply and the explanation and suggestions.
>> I'll check these settings, possibly change them, and report back.
>>
>> Best
>>     Dietmar
>>
>> On 1/18/21 6:00 PM, Burkhard Linke wrote:
>>> Hi,
>>>
>>> On 1/18/21 5:46 PM, Dietmar Rieder wrote:
>>>> Hi all,
>>>>
>>>> we noticed a massive drop in requests per second a cephfs client is
>>>> able to perform when we do a recursive chown over a directory with
>>>> millions of files. As soon as we see about 170k caps on the MDS, the
>>>> client performance drops from about 660 reqs/sec to 70 reqs/sec.
>>>>
>>>> When we then clear dentries and inodes using "sync; echo 2 >
>>>> /proc/sys/vm/drop_caches" on the client, the requests go up to ~660
>>>> again, just to drop again when reaching about 170k caps.
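>>>>
>>>> (If it helps: a way to watch the cap count from the client side,
>>>> assuming debugfs is mounted and a reasonably recent kernel client, is
>>>> the kernel driver's debug files:)
>>>>
>>>> # cat /sys/kernel/debug/ceph/*/caps | head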
>>>>
>>>> See the attached screenshots.
>>>>
>>>> When we stop the chown process for a while and restart it ~25 min
>>>> later, it still performs very slowly and the MDS reqs/sec remain
>>>> low (~60/sec). Clearing the cache (dentries and inodes) on the
>>>> client restores the performance again.
>>>>
>>>> When we run the same chown on another client in parallel, it starts
>>>> again with reasonably good performance (while the first client is
>>>> performing poorly), but eventually it gets slow again, just like the
>>>> first client.
>>>>
>>>> Can someone comment on this and explain it?
>>>> How can this be solved, so that the performance remains stable?
>>>
>>> The MDS has a (soft) limit on the number of caps per client. If a client
>>> starts to request more caps, the MDS will ask it to release caps.
>>> This adds an extra network round trip, thus increasing processing
>>> time. The setting is 'mds_max_caps_per_client'. The default value is 1
>>> million caps per client, but maybe this setting was changed in your
>>> configuration, or the overall cap limit for the MDS is restricting it.
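>>>
>>> (To see which value the MDS is actually running with, including
>>> defaults, something like this should do; <mds name> is a placeholder:)
>>>
>>> # ceph config show-with-defaults mds.<mds name> | grep mds_max_caps_per_client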
>>>
>>>
>>> Since each assigned cap increases the memory consumption of the MDS,
>>> setting an upper limit helps to control the overall amount of memory
>>> the MDS is using. So the memory target also affects the number of
>>> active caps an MDS can manage. You need to adjust both values to your
>>> use case.
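>>>
>>> (A quick way to see how close the MDS cache is to its memory target is
>>> the admin socket; treat this as a sketch and adjust the daemon name:)
>>>
>>> # ceph daemon mds.<mds name> cache status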
>>>
>>>
>>> I would also recommend monitoring the cap usage of the MDS, e.g. by
>>> running 'ceph daemonperf mds.<mds name>' in a shell on the MDS server.
>>> Other methods using the various monitoring interfaces provided by Ceph
>>> are also possible.
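>>>
>>> (For per-client numbers, the MDS session list also reports how many
>>> caps each client currently holds; as a rough one-liner:)
>>>
>>> # ceph tell mds.<mds name> session ls | grep num_caps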
>>>
>>>
>>> There are also settings that control how fast a client releases
>>> caps for files; tweaking these settings on the client side may
>>> also help in your case.
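>>>
>>> (One such client-side knob, if your kernel client is recent enough to
>>> support it, is the 'caps_max' mount option; purely as a sketch:)
>>>
>>> # mount -t ceph <mon host>:/ /mnt/cephfs -o name=admin,caps_max=65536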
>>>
>>>
>>> Regards,
>>>
>>> Burkhard
>>>
>>>
>>
>>
>>
>>
>
>


--
_________________________________________
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Institute of Bioinformatics
Innrain 80, 6020 Innsbruck
Phone: +43 512 9003 71402
Fax: +43 512 9003 73100
Email: dietmar.rieder@xxxxxxxxxxx
Web:   http://www.icbi.at

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



