Re: cephfs/ceph-fuse: mds0: Client XXX:XXX failing to respond to capability release

Hi Greg...

Just checked this for my case on 10.2.2.

I have mds_cache_size = 2000000.

The MDS is currently using about 9 GB of RAM.

9 GB / 2,000,000 comes out to roughly 4.5 KB per cached object, which is much closer to 4 KB than to 2 KB for the sum of CInode, CDir and CDentry.
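For completeness, this is roughly how I arrived at that number (mds.<id> is a placeholder and the perf counter name is from memory, so adjust for your setup):

# resident memory of the ceph-mds process, in KB
ps -o rss= -C ceph-mds
# inodes currently held in cache, from the "mds" section of the perf dump
ceph daemon mds.<id> perf dump | python -c 'import json,sys; print(json.load(sys.stdin)["mds"]["inodes"])'
# bytes per cached object ~= (RSS in KB * 1024) / inodes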

Maybe the numbers in

http://docs.ceph.com/docs/master/dev/mds_internals/data-structures/

are kind of outdated?

Cheers
G.

On 09/20/2016 04:22 AM, Gregory Farnum wrote:
On Thu, Sep 15, 2016 at 1:41 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
Op 14 september 2016 om 14:56 schreef "Dennis Kramer (DT)" <dennis@xxxxxxxxx>:


Hi Burkhard,

Thank you for your reply, see inline:

On Wed, 14 Sep 2016, Burkhard Linke wrote:

Hi,


On 09/14/2016 12:43 PM, Dennis Kramer (DT) wrote:
Hi Goncalo,

Thank you. Yes, I have seen that thread, but I have no near-full OSDs and
my mds cache size is pretty high.
You can use the daemon socket on the mds server to get an overview of the
current cache state:

ceph daemon mds.XXX perf dump
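For example, something along these lines pulls the cache-related counters out of the dump (the counter names here are from memory and may differ slightly between releases):

ceph daemon mds.XXX perf dump | python -c 'import json,sys; m=json.load(sys.stdin)["mds"]; print("inodes:", m["inodes"], "/", m["inode_max"], " caps:", m["caps"])'

If "inodes" is at or near "inode_max", the MDS is at its cache limit and will start asking clients to drop caps.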

The message itself indicates that the mds is in fact trying to convince
clients to release capabilities, probably because it is running out of cache.
My cache is set to mds_cache_size = 15000000, and you are right, it seems
the entire cache is in use, but that shouldn't be a real problem if the
clients release their caps in time. Correct me if I'm wrong, but that
cache_size is already pretty high compared to the default (100k). I will
raise mds_cache_size a bit and see if it helps.
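Probably something like this (syntax from memory, and 20000000 is just an example value):

# check what the running MDS currently has
ceph daemon mds.<id> config get mds_cache_size
# raise it at runtime; also put it in ceph.conf so it survives a restart
ceph tell mds.<id> injectargs '--mds_cache_size=20000000'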

The 100k is very, very conservative. Each cached inode will consume roughly 4k of memory.

15,000,000 * 4 KB ≈ 58 GB of memory
Is that based on empirical testing? Last time we counted (pretty
recently!) it was about half that for an inode+dentry:
http://docs.ceph.com/docs/master/dev/mds_internals/data-structures/ :)

-Greg

--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


