Re: num_caps

Ah, I understand it much better now. Thank you for explaining. I hope/assume the caps don't prevent other clients from accessing those files in some way, right?

+1, though, for the idea of being able to specify a timeout. We have an rsync backup job that runs over the whole filesystem every few hours to do an incremental backup. With just a single cron job like that, you consequently end up with caps on all files - permanently - up to the defined mds cache size, even though the backup is finished after a few minutes. That seems a bit overboard just for a backup, even though you say the actual performance impact of all those caps is not that bad... :-/
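For anyone who wants to watch this happen, here is a rough sketch of how one might poll the MDS for per-session cap counts while the rsync runs. It assumes jq is installed, that the MDS admin socket is reachable from where you run it, and that the session ls JSON on your version carries "id" and "num_caps" fields; "mds.<name>" is a placeholder for your MDS name:

    # Poll the MDS every 60 seconds and print how many caps each client
    # session currently holds (field names assumed from the JSON that
    # "session ls" prints on this Ceph release).
    watch -n 60 'ceph daemon mds.<name> session ls | jq ".[] | {id, num_caps}"'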



On 15.05.2017 at 14:49, John Spray wrote:
On Mon, May 15, 2017 at 1:36 PM, Henrik Korkuc <lists@xxxxxxxxx> wrote:
On 17-05-15 13:40, John Spray wrote:
On Mon, May 15, 2017 at 10:40 AM, Ranjan Ghosh <ghosh@xxxxxx> wrote:
Hi all,

When I run "ceph daemon mds.<name> session ls" I always get a fairly large number for num_caps (200,000). Is this normal? I thought caps were something like open/locked files, meaning a client holds a cap on a file and no other client can access it during that time.
Capabilities are much broader than that, they cover clients keeping
some fresh metadata in their cache, even if the client isn't doing
anything with the file at that moment.  It's common for a client to
accumulate a large number of capabilities in normal operation, as it
keeps the metadata for many files in cache.

You can adjust the "client cache size" setting on the fuse client to
encourage it to cache metadata on fewer files and thereby hold onto
fewer capabilities if you want.
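For reference, a minimal ceph.conf sketch of what that could look like on the client side; the value is purely illustrative, not a recommendation:

    [client]
        # Limit how many inodes/dentries the ceph-fuse client keeps in its
        # metadata cache; fewer cached entries generally means fewer
        # capabilities held against the MDS. Pick a value below your
        # current default if the goal is to hold fewer caps.
        client cache size = 8192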

John
Is there an option (or a planned option) for clients to release caps after some period of disuse?

In my testing I saw that clients tend to hold on to caps indefinitely.

Currently in production I have a use case where there are over 8 million caps and a little over 800k inodes_with_caps.
Both the MDS and client caches operate on an LRU, size-limited basis.
That means that if they aren't hitting their size thresholds, they
will tend to keep lots of stuff in cache indefinitely.
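As a sketch, the MDS side of that size limit is the "mds cache size" option mentioned earlier; again the value here is only an example:

    [mds]
        # Rough upper bound on the number of inodes the MDS keeps in its
        # cache, and therefore on roughly how many caps it will let clients
        # hold before asking them to release some. Example value only.
        mds cache size = 100000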

One could add a behaviour that also actively expires cached metadata
if it has not been used for a certain period of time, but it's not
clear what the right time threshold would be, and whether it would be
desirable for most users.  If we free up memory because the system is
quiet this minute/hour, then it potentially just creates an issue when
we get busy again and need that memory back.

With caching/resources generally, there's a conflict between the
desire to keep things in cache in case they're needed again, and the
desire to evict things from cache so that we have lots of free space
available for new entries.  Which one is better is entirely workload
dependent: there is clearly scope to add different behaviours as
options, but it's hard to know how much people would really use them --
the sanity of the defaults is the most important thing.  I do think
there's a reasonable argument that part of the sane defaults should
not be to keep something in cache if it hasn't been used for e.g. a
day or more.

BTW, clients do have an additional behaviour where they will drop
unneeded caps when an MDS restarts, to avoid making a newly started
MDS do a lot of unnecessary work to restore those caps, so the
overhead of all those extra caps isn't quite as much as one might
first imagine.

John



How can I debug this if it is a cause of concern? Is there any way to find out exactly which files the caps are held on?

Thank you,

Ranjan
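On the debugging question: a rough approach, assuming the "dump cache" admin socket command is present in this Ceph version (its output format is informal and may change between releases), is to dump the MDS metadata cache to a file on the MDS host and grep it for cap entries:

    # Dump the MDS metadata cache, including per-inode capability info,
    # to a file, then look for entries that mention caps. Output format
    # assumed, not guaranteed to be stable.
    ceph daemon mds.<name> dump cache /tmp/mds-cache.txt
    grep -i caps /tmp/mds-cache.txt | less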


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


