On Mon, Aug 15, 2016 at 5:02 AM, David <dclistslinux@xxxxxxxxx> wrote:
> Hi All
>
> When I compare a 'ceph daemon mds.id perf dump mds' on my active MDS with
> my standby-replay MDS, the inode count on the standby is a lot lower than
> on the active. I would expect to see a very similar number of inodes, or
> have I misunderstood this feature? My understanding was that the replay
> daemon maintains the same cache as the active.
>
> If I stop the mds daemon on the active, the standby-replay rejoins
> quickly; I'm just curious about the discrepancy in the inode count.

My recollection is that the standby receives slightly less information
than the active MDS, and at different times, about which inodes/dentries
are in what kind of use, so it trims its cache much more aggressively,
dropping recently-but-not-currently-used entries in preference to leaking
them. You don't need to worry about it if you aren't seeing any issues.
-Greg

> This is Jewel 10.2.2.
>
> On the active server I see:
>
> "inode_max": 200000,
> "inodes": 200015,
>
> On the standby-replay:
>
> "inode_max": 200000,
> "inodes": 98000,
>
> mds section from my ceph.conf (hostnames changed):
>
> [mds]
> mds data = /var/lib/ceph/mds/mds.$host
> keyring = /var/lib/ceph/mds/mds.$host/mds.$host.keyring
> mds standby replay = true
>
> [mds.active]
> host = active
> standby for rank = 0
> mds_cache_size = 200000
>
> [mds.standbyreplay]
> host = standbyreplay
> standby for rank = 0
> mds_cache_size = 200000
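
For anyone wanting to script this comparison, here's a minimal sketch that
pulls the two counters from each daemon's admin socket. It assumes the
daemon names from the ceph.conf above ("active" and "standbyreplay"); note
that 'ceph daemon' talks to the local admin socket, so in practice you
would run it on each MDS host and compare the output.

    #!/usr/bin/env python
    # Minimal sketch: report cached inode counts for an MDS via
    # 'ceph daemon mds.<name> perf dump mds'. The daemon names below
    # come from the ceph.conf in this thread; adjust for your cluster.
    import json
    import subprocess

    def mds_inode_counts(name):
        """Return (inodes, inode_max) from the daemon's mds perf counters."""
        out = subprocess.check_output(
            ["ceph", "daemon", "mds.%s" % name, "perf", "dump", "mds"])
        counters = json.loads(out.decode("utf-8"))["mds"]
        return counters["inodes"], counters["inode_max"]

    # The admin socket is only reachable on the daemon's own host, so
    # run this where the MDS in question is running.
    for name in ("active", "standbyreplay"):
        inodes, inode_max = mds_inode_counts(name)
        print("mds.%s: %d inodes cached (inode_max %d)" % (name, inodes, inode_max))

On the active from the numbers above this would print something like
"mds.active: 200015 inodes cached (inode_max 200000)", and a lower count
on the standby-replay is expected per Greg's explanation.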