Too many active mds servers

Hi,

I'm running Luminous 12.2.5 and I'm testing CephFS.

However, I seem to have too many active mds servers on my test cluster.

How do I set one of my mds servers to become standby?

I've run "ceph fs set cephfs max_mds 2", which lowered max_mds from 3 to 2, but this has had no effect on my running configuration.
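From what I can tell from the docs, on Luminous lowering max_mds does not stop ranks that are already active; it only prevents new ones from being created. If I understand correctly, the extra rank (the highest one) also has to be deactivated explicitly, something like:

```shell
# Already done: cap the number of active ranks at 2.
ceph fs set cephfs max_mds 2

# On Luminous, the surplus rank (rank 2 here) apparently must be
# deactivated by hand; the daemon should then return to standby.
ceph mds deactivate cephfs:2
```

(I haven't tried the deactivate step yet, so corrections welcome if that's not the right procedure.)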

$ ceph status
  cluster:
    id:     
    health: HEALTH_WARN
            insufficient standby MDS daemons available
 
  services:
    mon: 3 daemons, quorum mon1-c2-vm,mon2-c2-vm,mon3-c2-vm
    mgr: mon2-c2-vm(active), standbys: mon1-c2-vm
    mds: cephfs-3/3/2 up  {0=mon1-c2-vm=up:active,1=mon3-c2-vm=up:active,2=mon2-c2-vm=up:active}
    osd: 250 osds: 250 up, 250 in
    rgw: 2 daemons active
 
  data:
    pools:   4 pools, 8456 pgs
    objects: 13492 objects, 53703 MB
    usage:   427 GB used, 1750 TB / 1751 TB avail
    pgs:     8456 active+clean

$ ceph fs get cephfs
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 187
flags c
created 2018-05-03 10:25:21.733597
modified 2018-05-03 10:25:21.733597
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 1369
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
max_mds 2
in 0,1,2
up {0=43808,1=43955,2=27318}
failed
damaged
stopped
data_pools [1,11]
metadata_pool 2
inline_data disabled
balancer
standby_count_wanted 1
43808: xx.xx.xx.xx:6800/3009065437 'mon1-c2-vm' mds.0.171 up:active seq 45
43955: xx.xx.xx.xx:6800/2947700655 'mon2-c2-vm' mds.1.174 up:active seq 28
27318: xx.xx.xx.xx:6800/652878628 'mon3-c2-vm' mds.2.177 up:active seq 8

Thanks,
Tom
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
