Best value for "mds_cache_memory_limit" for a large (more than 10 PB) CephFS

Hello everyone,

I have a Ceph cluster currently serving CephFS.

The size of the Ceph filesystem is around 1 PB, with 1 active MDS and 1 standby-replay.
I do not have many CephFS clients for now (5), but that may increase to 20 or 30.

Here is some output:

Rank | State          | Daemon                | Activity     | Dentries | Inodes  | Dirs    | Caps
0    | active         | ceph-g-ssd-4-2.mxwjvd | Reqs: 130 /s | 10.2 M   | 10.1 M  | 356.8 k | 707.6 k
0-s  | standby-replay | ceph-g-ssd-4-1.ixqewp | Evts: 0 /s   | 156.5 k  | 127.7 k | 47.4 k  | 0

It is working really well.

I plan to increase this CephFS cluster up to 10 PB (for now), and even more later.

What would be a good value for "mds_cache_memory_limit"? I have set it to 80 GB
because I have enough RAM on my server to do so.
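For reference, a limit like this can be applied with the standard config command
(the value is in bytes, so 80 GiB is 85899345920; I am assuming the "ceph config set"
path here rather than an entry in ceph.conf):

    # set the MDS cache memory target for all MDS daemons (value in bytes)
    ceph config set mds mds_cache_memory_limit 85899345920

    # check the currently applied value
    ceph config get mds mds_cache_memory_limit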

Was it a good idea? Or is it counter-productive?

All the best

Arnaud
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


