ceph status not showing correct monitor services

Hi

Why is "ceph -s" showing only two monitors while three monitor services are running?

# ceph versions
{   "mon": {        "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 2     },
    "mgr": {         "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 2     },
    "osd": {         "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 36     },
    "mds": {         "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 1     },
    "rgw": {         "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 3     },
    "overall": {    "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 44     }     }

# ceph orch ls
NAME             PORTS                  RUNNING  REFRESHED  AGE  PLACEMENT
crash                                       3/3  8m ago     2y   label:ceph
ingress.nfs.nfs  10.45.128.8:2049,9049      4/4  8m ago     2y   count:2
mds.cephfs                                  3/3  8m ago     2y   count:3;label:mdss
mgr                                         3/3  8m ago     23M  a001s016;a001s017;a001s018;count:3
mon                                         3/3  8m ago     16h  a001s016;a001s017;a001s018;count:3   <== [ 3 monitor services running]
nfs.nfs          ?:12049                    3/3  8m ago     2y   a001s016;a001s017;a001s018;count:3
node-exporter    ?:9100                     3/3  8m ago     2y   *
osd.unmanaged                             36/36  8m ago     -    <unmanaged>
prometheus       ?:9095                     1/1  10s ago    23M  count:1
rgw.ceph         ?:8080                     3/3  8m ago     19h  count-per-host:1;label:rgws
root@a001s017:~# ceph -s
  cluster:
    id:     604d56db-2fab-45db-a9ea-c418f9a8cca8
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum a001s018,a001s017 (age 16h)  <== [ shows ONLY 2 monitors running]
    mgr: a001s017.bpygfm(active, since 13M), standbys: a001s016.ctmoay
    mds: 1/1 daemons up, 2 standby
    osd: 36 osds: 36 up (since 54s), 36 in (since 2y)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   43 pools, 1633 pgs
    objects: 51.81M objects, 77 TiB
    usage:   120 TiB used, 131 TiB / 252 TiB avail
    pgs:     1631 active+clean
             2    active+clean+scrubbing+deep

  io:
    client:   220 MiB/s rd, 448 MiB/s wr, 251 op/s rd, 497 op/s wr
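
To see which monitors are actually in quorum versus merely deployed, the quorum status can be dumped (a sketch; output omitted here). The "quorum_names" and "monmap.mons" fields should reveal whether the third mon is missing from the monmap itself:

# ceph quorum_status -f json-pretty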

# ceph orch ls --service_name=mon
NAME  PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
mon              3/3  8m ago     16h  a001s016;a001s017;a001s018;count:3  <== [ 3 monitors running ]

# ceph orch ps --daemon_type=mon
NAME          HOST      PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mon.a001s016  a001s016         running (19h)     9m ago  19h     706M    2048M  16.2.5   6e73176320aa  8484a912f96a
mon.a001s017  a001s017         running (16h)    66s ago  19h     949M    2048M  16.2.5   6e73176320aa  e5e5cb6c256c   <== [ 3  mon daemons running ]
mon.a001s018  a001s018         running (5w)      2m ago   2y    1155M    2048M  16.2.5   6e73176320aa  7d2bb6d41f54
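
A side-by-side check of the orchestrator's view against the monmap could look like this (a sketch, assuming jq is available on the admin node; any mon listed by the first command but not the second is deployed yet absent from the cluster map):

# ceph orch ps --daemon_type=mon --format json | jq -r '.[].daemon_id'
# ceph mon dump -f json | jq -r '.mons[].name'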

a001s016# systemctl --type=service | grep @mon
  ceph-604d56db-2fab-45db-a9ea-c418f9a8cca8@mon.a001s016.service                loaded active running Ceph mon.a001s016 for 604d56db-2fab-45db-a9ea-c418f9a8cca8
a001s017# systemctl --type=service | grep @mon
  ceph-604d56db-2fab-45db-a9ea-c418f9a8cca8@mon.a001s017.service                loaded active running Ceph mon.a001s017 for 604d56db-2fab-45db-a9ea-c418f9a8cca8
a001s018# systemctl --type=service | grep @mon
  ceph-604d56db-2fab-45db-a9ea-c418f9a8cca8@mon.a001s018.service                loaded active running Ceph mon.a001s018 for 604d56db-2fab-45db-a9ea-c418f9a8cca8
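
If a mon daemon is up but not in the monmap, its own view can be queried through its admin socket inside the container (a sketch, assuming a cephadm deployment; run on the host of the suspect mon). The "state" field it reports (e.g. probing) and its monmap epoch should show whether it ever joined:

a001s016# cephadm enter --name mon.a001s016
# ceph daemon mon.a001s016 mon_status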


Thank you,
Anantha