CephFS: cached inodes with active-standby

Hi All

When I compare the output of 'ceph daemon mds.<id> perf dump mds' on my active MDS with the same output on my standby-replay MDS, the inode count on the standby is much lower than on the active. I would expect to see a very similar number of inodes, or have I misunderstood this feature? My understanding was that the standby-replay daemon maintains the same cache as the active.

If I stop the MDS daemon on the active node, the standby-replay takes over quickly; I'm just curious about the discrepancy in the inode count.

This is Jewel 10.2.2

On the active server I see:

 "inode_max": 200000,
        "inodes": 200015,

On the standby-replay:

 "inode_max": 200000,
        "inodes": 98000,

The MDS sections from my ceph.conf (hostnames changed):

[mds]
  mds data = "">
  keyring = /var/lib/ceph/mds/mds.$host/mds.$host.keyring
  mds standby replay = true

[mds.active]
  host = active
  standby for rank = 0
  mds_cache_size = 200000

[mds.standbyreplay]
  host = standbyreplay
  standby for rank = 0
  mds_cache_size = 200000
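
For completeness, these are the commands I'm using to confirm the second daemon is actually following as standby-replay and has picked up the cache size setting (a quick sketch; the daemon name matches my ceph.conf sections, and the exact output wording may vary by release):

  # confirm one MDS is up:active and the other up:standby-replay
  ceph mds stat
  # confirm the standby has picked up the cache size from ceph.conf
  ceph daemon mds.standbyreplay config get mds_cache_size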
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
