Hi,
you can check how much cache your MDS is currently using:
ceph daemon mds.<MDS> cache status
Is it already approaching your limit? If it's difficult to determine how
much the MDS will actually use, I usually start with a lower value and
increase it if necessary.
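For example, a minimal sketch (the daemon name is taken from your status
output below; 16 GiB is only an illustrative value, and the limit is
specified in bytes):

  # on the MDS host: inspect current cache usage vs. the configured limit
  ceph daemon mds.ceph-g-ssd-4-2.mxwjvd cache status

  # adjust the limit cluster-wide for all MDS daemons (16 GiB = 17179869184 bytes)
  ceph config set mds mds_cache_memory_limit 17179869184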
Quoting Arnaud M <arnaud.meauzoone@xxxxxxxxx>:
Hello to everyone
I have a Ceph cluster currently serving CephFS.
The size of the filesystem is around 1 PB.
1 active MDS and 1 standby-replay.
I do not have a lot of CephFS clients for now (5), but this may increase to 20
or 30.
Here is some output
Rank | State          | Daemon                | Activity     | Dentries | Inodes  | Dirs    | Caps
0    | active         | ceph-g-ssd-4-2.mxwjvd | Reqs: 130 /s | 10.2 M   | 10.1 M  | 356.8 k | 707.6 k
0-s  | standby-replay | ceph-g-ssd-4-1.ixqewp | Evts: 0 /s   | 156.5 k  | 127.7 k | 47.4 k  | 0
It is working really well.
I plan to increase this CephFS cluster up to 10 PB (for now) and even
more.
What would be a good value for "mds_cache_memory_limit"? I have set it
to 80 GB because I have enough RAM on my server to do so.
Was that a good idea? Or is it counter-productive?
All the best
Arnaud
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx