On Tue, Jul 7, 2015 at 4:02 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> Hi Greg,
>
> On Tue, Jul 7, 2015 at 4:25 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>>> 4. "mds cache size = 5000000" is going to use a lot of memory! We have
>>> an MDS with just 8GB of RAM and it goes OOM after delegating around 1
>>> million caps. (This is with mds cache size = 100000, btw.)
>>
>> Hmm. We do have some data for each client with a cap, but I think it's
>> pretty small in comparison to the size of each inode in memory. The
>> number of caps shouldn't impact memory usage very much, although the
>> number of inodes in cache definitely will.
>
> Do I understand this right, that having client caps exceeding the limit
> is merely an annoyance and shouldn't explode mds memory usage? (I say
> annoyance because I find it difficult to run an MDS without the
> HEALTH_WARN that clients aren't responding to cache pressure.) IOW, do
> you expect the size of the mds inode LRU to stay under mds_cache_size
> even if a client doesn't release its caps?
>
> check_memory_usage prints num_inodes_with_caps and inode_map.size().
> Is there a way to see the current LRU size on a running MDS?

Well, that's the thing: if a client has capabilities on an inode, the
MDS can't boot it out of cache. So the number of caps isn't that
interesting on its own, but it's likely to be limiting how much the MDS
can boot out of cache (and when it starts spitting out these warnings,
that's what it's worrying about).
-Greg
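
P.S. On the question of seeing the current LRU size on a running MDS:
the perf counters exposed through the admin socket should get you most
of the way there. A rough sketch (I'm going from memory here, so treat
the counter names as approximate and check what your release actually
exposes):

    # Run on the host where the MDS is running; replace <id> with your
    # MDS name (e.g. mds.0). The "mds" section of the output should
    # include counters along the lines of inodes, inodes_with_caps,
    # inodes_pinned, and caps, which together give a view of the cache.
    ceph daemon mds.<id> perf dump

    # Pull out just the cache-related counters:
    ceph daemon mds.<id> perf dump | python -mjson.tool | grep -E 'inodes|caps'

Alternatively, bumping debug mds up a couple of levels should get
check_memory_usage logging those numbers periodically, if you'd rather
watch the log.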