ceph orch ps shows unknown in version, container and image id columns

Hi,

Has anybody noticed this issue?

For all mgr, mon, and osd daemons, ceph orch ps shows the version, container ID, and image ID as <unknown>, even though cluster health is OK and all daemons are running fine. On the hosts themselves, cephadm ls shows the correct version, container ID, and image ID for the same daemons.

What could be the issue, and how can it be resolved?
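For cross-checking, orch ps can also emit JSON (ceph orch ps --format json), which is easier to diff against what cephadm ls reports on each host than the table view. A rough sketch using a made-up two-daemon sample in place of the real command output; the field names (daemon_type, daemon_id, version) are assumptions based on what a 16.2 cluster emits:

```shell
# Made-up sample in the shape of `ceph orch ps --format json`; on the
# cluster you would pipe the real command's output instead of this heredoc.
cat > orch_ps.json <<'EOF'
[
  {"daemon_type": "mgr", "daemon_id": "cr21meg16ba0101", "version": null},
  {"daemon_type": "mds", "daemon_id": "cephfs.a.yacxeu", "version": "16.2.5"}
]
EOF

# Print the daemons the orchestrator has no cached version for
missing=$(python3 -c "
import json
ds = json.load(open('orch_ps.json'))
print(' '.join('%s.%s' % (d['daemon_type'], d['daemon_id'])
               for d in ds if not d.get('version')))
")
echo "no cached version for: $missing"
```

Comparing that list against cephadm ls per host would confirm whether the gap is only in the mgr's cached inventory rather than on the hosts.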

ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
cephadm shell ceph -s
Inferring fsid a6f52598-e5cd-4a08-8422-7b6fdb1d5dbe
Using recent ceph image ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586
  cluster:
    id:     a6f52598-e5cd-4a08-8422-7b6fdb1d5dbe
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cr21meg16ba0101,cr21meg16ba0102,cr21meg16ba0103 (age 4d)
    mgr: cr21meg16ba0101(active, since 42h), standbys: cr21meg16ba0103, cr21meg16ba0102
    mds: 1/1 daemons up, 2 standby
    osd: 72 osds: 72 up (since 4d), 72 in (since 5w)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   16 pools, 577 pgs
    objects: 3.86M objects, 12 TiB
    usage:   18 TiB used, 108 TiB / 126 TiB avail
    pgs:     577 active+clean

  io:
    client:   255 B/s rd, 115 KiB/s wr, 0 op/s rd, 3 op/s wr

ceph orch ps (output truncated):
NAME                                                   HOST             PORTS        STATUS           REFRESHED  AGE  MEM USE  MEM LIM  VERSION         IMAGE ID      CONTAINER ID
mds.cephfs.cr21meg16ba0103.yacxeu                      cr21meg16ba0103               running (5w)     2m ago   5w    38.2M        -  16.2.5          6e73176320aa  79599f7ca3c8
mgr.cr21meg16ba0101                                    cr21meg16ba0101               running          2m ago   5w        -        -  <unknown>       <unknown>     <unknown>
mgr.cr21meg16ba0102                                    cr21meg16ba0102               running          2m ago   5w        -        -  <unknown>       <unknown>     <unknown>
mgr.cr21meg16ba0103                                    cr21meg16ba0103               running          2m ago   5w        -        -  <unknown>       <unknown>     <unknown>
mon.cr21meg16ba0101                                    cr21meg16ba0101               running          2m ago   5w        -    2048M  <unknown>       <unknown>     <unknown>
mon.cr21meg16ba0102                                    cr21meg16ba0102               running          2m ago   5w        -    2048M  <unknown>       <unknown>     <unknown>
mon.cr21meg16ba0103                                    cr21meg16ba0103               running          2m ago   5w        -    2048M  <unknown>       <unknown>     <unknown>
nfs.nfs-1.0.63.cr21meg16ba0102.kkxpfh                  cr21meg16ba0102  *:12049      running (5w)     2m ago   5w     132M        -  3.5             6e73176320aa  fd617b99d70a
nfs.nfs-1.1.0.cr21meg16ba0101.qhdmrq                   cr21meg16ba0101  *:12049      running (4d)     2m ago   5w     117M        -  3.5             6e73176320aa  15e21064041d
nfs.nfs-1.2.0.cr21meg16ba0103.wvmquo                   cr21meg16ba0103  *:12049      running (5w)     2m ago   5w     136M        -  3.5             6e73176320aa  edb9df63489d
node-exporter.cr21meg16ba0101                          cr21meg16ba0101  *:9100       running (4d)     2m ago   5w    79.3M        -  0.17.0          b3e7f67a1480  e94d14814946
node-exporter.cr21meg16ba0102                          cr21meg16ba0102  *:9100       running (5w)     2m ago   5w     102M        -  0.17.0          b3e7f67a1480  86165fa7b916
node-exporter.cr21meg16ba0103                          cr21meg16ba0103  *:9100       running (5w)     2m ago   5w    88.1M        -  0.17.0          b3e7f67a1480  898008baaa66
osd.0                                                  cr21meg16ba0101               running          2m ago   5w        -    22.0G  <unknown>       <unknown>     <unknown>
osd.1                                                  cr21meg16ba0103               running          2m ago   5w        -    22.0G  <unknown>       <unknown>     <unknown>
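In case it helps while debugging: since cephadm ls on each host has the right data, the gap is presumably in the cephadm mgr module's cached inventory. Two low-risk things to try (real commands, but whether they resolve it depends on the root cause):

```shell
# Force the orchestrator to re-poll daemon state on every host
# instead of serving its cached (possibly stale) inventory
ceph orch ps --refresh

# If the columns still read <unknown>, fail over the active mgr so the
# cephadm module rebuilds its in-memory state from scratch
ceph mgr fail cr21meg16ba0101
```

After a mgr failover, give the new active mgr a few minutes to re-scan all hosts before re-checking orch ps.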
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


