Hi All
When I compare 'ceph daemon mds.<id> perf dump mds' on my active MDS with my standby-replay MDS, the inode count on the standby is a lot lower than on the active. I would expect to see a very similar number of inodes, or have I misunderstood this feature? My understanding was that the standby-replay daemon maintains the same cache as the active.
If I stop the MDS daemon on the active node, the standby-replay rejoins quickly; I'm just curious about the discrepancy in the inode count.
This is Jewel 10.2.2
On the active server I see:
"inode_max": 200000,
"inodes": 200015,
On the standby-replay:
"inode_max": 200000,
"inodes": 98000,
mds section from my ceph.conf (hostnames changed):
[mds]
mds data = /var/lib/ceph/mds/mds.$host
keyring = /var/lib/ceph/mds/mds.$host/mds.$host.keyring
mds standby replay = true
[mds.active]
host = active
standby for rank = 0
mds_cache_size = 200000
[mds.standbyreplay]
host = standbyreplay
standby for rank = 0
mds_cache_size = 200000
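For completeness, this is how I check which daemon is active and which is in standby-replay (a rough sketch using the Jewel-era CLI; the grep is just a convenience and the exact wording in the map output may differ):

    ceph mds stat
    ceph mds dump | grep -i standby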