Hi Burkhard, hi list,

I checked the 'mds_max_caps_per_client' setting and it turned out that it was set to the default value of 1 million. The 'mds_cache_memory_limit' setting, however, I had previously set to 40 GB.
Given this, I now started to play around with the cap limit and set 'mds_max_caps_per_client' to 64k:

# ceph config set mds mds_max_caps_per_client 65536

This resulted in much better and stable performance of ~1.4k req/sec from one client and ~2.9k req/sec when running 2 clients in parallel. Remember, it was max ~660 req/sec before with the 1M default, and it gradually decreased to ~60 req/sec after some minutes, never getting higher again unless I manually dropped the dentries and inodes from the VM cache on the client. (I guess this is because only 5000 caps are recalled after reaching mds_max_caps_per_client.)
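To verify the effect of the new limit, the per-client cap counts can be watched on the MDS. A sketch, assuming a recent Ceph release where the session listing includes a `num_caps` field and that `jq` is installed on the admin node:

```shell
# Show each client session with its current cap count
# (field names assumed from recent Ceph releases)
ceph tell mds.0 session ls | jq '.[] | {id: .id, caps: .num_caps}'

# Or watch caps/inodes live on the MDS host, as Burkhard suggested
ceph daemonperf mds.<mds name>
```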
I'll keep this for now and observe if it has any impact on other operations or situations.
Still, I wonder why a higher number of caps (i.e. >64k) on the client destroys the performance completely.
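If the guess about only 5000 caps being recalled per event is right, an alternative to lowering the cap limit would be to make cap recall itself more aggressive. A sketch, assuming the recall options available in Nautilus and later (names and defaults should be verified against your release):

```shell
# Recall more caps per recall event (default is 5000, assumed)
ceph config set mds mds_recall_max_caps 30000

# Lower the decay rate so the recall throttle relaxes faster,
# allowing a higher sustained recall rate (default 2.5, assumed)
ceph config set mds mds_recall_max_decay_rate 1.5
```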
Thanks again
Dietmar

On 1/18/21 6:20 PM, Dietmar Rieder wrote:
Hi Burkhard,

thanks so much for the quick reply and the explanation and suggestions. I'll check these settings and eventually change them and report back.

Best
Dietmar

On 1/18/21 6:00 PM, Burkhard Linke wrote:

Hi,

On 1/18/21 5:46 PM, Dietmar Rieder wrote:

Hi all,

we noticed a massive drop in requests per second a cephfs client is able to perform when we do a recursive chown over a directory with millions of files. As soon as we see about 170k caps on the MDS, the client performance drops from about 660 reqs/sec to 70 reqs/sec.

When we then clear dentries and inodes using "sync; echo 2 > /proc/sys/vm/drop_caches" on the client, the requests go up to ~660 again, just to drop again when reaching about 170k caps. See the attached screenshots.

When we stop the chown process for a while and restart it ~25 min later, it still performs very slowly and the MDS reqs/sec remain low (~60/sec). Clearing the cache (dentries and inodes) on the client restores the performance again.

When we run the same chown on another client in parallel, it starts again with reasonably good performance (while the first client is performing poorly), but eventually it gets slow again, just like the first client.

Can someone comment on this and explain it? How can this be solved, so that the performance remains stable?

The MDS has a (soft) limit on the number of caps per client. If a client starts to request more caps, the MDS will ask it to release caps. This adds an extra network round trip, thus increasing processing time. The setting is 'mds_max_caps_per_client'. The default value is 1 million caps per client, but maybe this setting was changed in your configuration, or the overall cap limit for the MDS is restricting it.

Since each assigned cap increases the memory consumption of the MDS, setting an upper limit helps to control the overall amount of memory the MDS is using. So the memory target also affects the number of active caps an MDS can manage.
You need to adjust both values to your use case.

I would also recommend monitoring the cap usage of the MDS, e.g. by running 'ceph daemonperf mds.<mds name>' in a shell on the MDS server. Other methods using the various monitoring interfaces provided by ceph are also possible.

There are also settings that control how fast a client releases caps for files; tweaking these settings on the client side may also help in your case.

Regards,
Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
--
_________________________________________
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Institute of Bioinformatics
Innrain 80, 6020 Innsbruck
Phone: +43 512 9003 71402
Fax: +43 512 9003 73100
Email: dietmar.rieder@xxxxxxxxxxx
Web: http://www.icbi.at