Re: MDS memory usage

Hi Greg,

Thanks very much. This is clear to me now.

As for an 'MDS cluster', I thought that was not recommended at this stage? I would very much like to run more than one MDS in my cluster, as that would probably help a lot with balancing the load, but I am wary of what everybody says about stability issues.

Is running more than one active MDS considered stable enough with Hammer?

Thanks and regards,

Mike

On 11/25/15 12:51 PM, Gregory Farnum wrote:
On Tue, Nov 24, 2015 at 10:26 PM, Mike Miller <millermike287@xxxxxxxxx> wrote:
Hi,

In my cluster with 16 OSD daemons and more than 20 million files on CephFS,
the memory usage of the MDS is around 16 GB. It seems that 'mds cache size'
has no real influence on the memory usage of the MDS.

Is there a formula that relates 'mds cache size' directly to memory
consumption on the MDS?

The dominant factor should be the number of inodes in cache, although
there are other things too. Depending on the version, I think it was ~2 KB
of memory for each inode+dentry at last count.
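As a rough back-of-the-envelope check (assuming the ~2 KB per inode+dentry
figure above and ignoring other overheads):

    100,000 cached inodes   x ~2 KB  ≈ 200 MB   (the default 'mds cache size')
    8,000,000 cached inodes x ~2 KB  ≈ 16 GB    (roughly the usage reported above)

So if the observed 16 GB were mostly cache, it would correspond to something
on the order of 8 million cached inodes.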

In the documentation (and other posts on the mailing list) it is said that
the MDS needs 1 GB per daemon. I am observing that the MDS uses almost
exactly 1 GB per OSD daemon (I have 16 OSDs and 16 GB of memory usage on the
MDS). Is this the correct formula?

Or is it 1 GB per MDS daemon?

It's got nothing to do with the number of OSDs. I'm not sure where 1 GB
per MDS came from, although you can certainly run a reasonably
low-intensity cluster on that.


In my case, the default 'mds cache size = 100000' makes the MDS crash and/or
makes CephFS unresponsive. Larger values for 'mds cache size' seem to work
really well.

Right. You need the total cache size of your MDS "cluster" (which is
really just one MDS here) to be larger than your working set size, or you'll
have trouble. Similarly, if any individual directory is a significant
portion of your total cache, it might cause issues.
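For illustration, a minimal sketch of how one might raise the limit (the
value below is only a placeholder and has to be sized to your actual working
set and available RAM):

    # in ceph.conf on the MDS host
    [mds]
        mds cache size = 4000000    # inodes to cache; at ~2 KB each, roughly 8 GB of RAM

    # or injected at runtime via the admin socket on the MDS node
    ceph daemon mds.<id> config set mds_cache_size 4000000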
-Greg



