Hi,
in my cluster with 16 OSD daemons and more than 20 million files on
CephFS, memory usage on the MDS is around 16 GB. The 'mds cache size'
setting seems to have no real influence on the MDS's memory usage.
Is there a formula that relates 'mds cache size' directly to memory
consumption on the MDS?
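For concreteness, the kind of formula I am after would look roughly like
the sketch below. The ~2 KiB per cached inode is my own assumption for
illustration, not an official Ceph number:

```python
# Back-of-envelope estimate of MDS cache memory from 'mds cache size'.
# ASSUMPTION: ~2 KiB of resident memory per cached inode; the real
# per-inode overhead is exactly what I am asking about.

def estimated_mds_memory_gib(mds_cache_size, bytes_per_inode=2048):
    """Estimate MDS cache memory in GiB for a given inode count."""
    return mds_cache_size * bytes_per_inode / 2**30

# The default 'mds cache size' of 100000 inodes would then only
# account for roughly 0.19 GiB -- nowhere near the 16 GB I observe.
print(estimated_mds_memory_gib(100000))
```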
The documentation (and other posts on this mailing list) says the MDS
needs 1 GB per daemon, but it is not clear which daemon is meant. I am
observing almost exactly 1 GB per OSD daemon (16 OSDs and 16 GB of
memory used by the MDS). Is that the correct reading, or is it 1 GB per
MDS daemon?
In my case, the default 'mds cache size = 100000' makes the MDS crash
and/or leaves CephFS unresponsive, while larger values seem to work
really well.
Versions: Ubuntu 14.04 (trusty) and Ceph Hammer.
Thanks and kind regards,
Mike
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com