Re: Cache pressure fail

On Wed, Feb 11, 2015 at 5:30 AM, Dennis Kramer (DT) <dennis@xxxxxxxxx> wrote:
> After setting the debug level to 2, I can see:
> 2015-02-11 13:36:31.922262 7f0b38294700  2 mds.0.cache check_memory_usage
> total 58516068, rss 57508660, heap 32676, malloc 1227560 mmap 0, baseline
> 39848, buffers 0, max 67108864, 8656261 / 9999931 inodes have caps, 10367318
> caps, 1.03674 caps per inode
>
> It doesn't look like it has serious memory problems, unless my
> interpretation of the output is wrong.

The MDS currently requests trimming based on simple dentry counts
rather than the actual amount of memory in use. That limit is
configurable via mds_cache_size, which defaults to 100,000; from your
log it looks like you've already raised it to roughly 10 million.
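
For example, a quick way to check and bump it at runtime through the
MDS admin socket (a sketch; "mds.0" is a placeholder for your actual
daemon name):

    ceph daemon mds.0 config show | grep mds_cache_size
    ceph daemon mds.0 config set mds_cache_size 10000000

Note that config set only affects the running daemon; to make the
change persistent, set "mds cache size" in the [mds] section of
ceph.conf.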

You can go to the clients and run the "status" and "dump_cache"
commands on their admin sockets and see if the kernel is holding
references to their inodes, preventing cap releases.
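
A minimal sketch of that, assuming ceph-fuse clients with admin
sockets enabled (the socket filename embeds the client's pid, so
adjust the glob to match; kernel-mount clients don't expose an admin
socket):

    ceph --admin-daemon /var/run/ceph/ceph-client.admin.*.asok status
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.*.asok dump_cache

Inodes that stay pinned in the dump are the ones preventing the
client from releasing caps back to the MDS.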

> It looks like I have the same symptoms as:
> http://tracker.ceph.com/issues/10151
>
> I'm running 0.87 on all my nodes.

That bug is only about whether the health warnings for this condition
show up; it's fixed in v0.89, so seeing it on 0.87 is expected.
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com