Ceph's mgr/prometheus module is not available

Hi,

I upgraded my cluster from 16.2.6 to 16.2.9, and now I get the following error in the dashboard (but not on the command line):

The mgr/prometheus module at opcpmfpsbpp0103.fst.20.10.in-addr.arpa:9283 is
unreachable. This could mean that the module has been disabled or the mgr
itself is down. Without the mgr/prometheus module metrics and alerts will
no longer function. Open a shell to ceph and use 'ceph -s' to determine
whether the mgr is active. If the mgr is not active, restart it, otherwise
you can check the mgr/prometheus module is loaded with 'ceph mgr module ls'
and if it's not listed as enabled, enable it with 'ceph mgr module enable
prometheus'
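
If it helps as extra context, the module's scrape endpoint can also be queried directly; a quick check against the host and port named in the alert (as far as I know, /metrics is the standard mgr/prometheus scrape path) would be something like:

# curl -s http://opcpmfpsbpp0103.fst.20.10.in-addr.arpa:9283/metrics | head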


# ceph orch ps

opcpmfpsbpp0101: Sun May 29 09:53:16 2022

NAME                               HOST             PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
alertmanager.opcpmfpsbpp0101       opcpmfpsbpp0101  *:9093,9094  running (27m)  2m ago     4d   24.9M    -                 ba2b418f427c  d8c8664b8d84
alertmanager.opcpmfpsbpp0103       opcpmfpsbpp0103  *:9093,9094  running (25m)  2m ago     3h   27.4M    -                 ba2b418f427c  bb6035a27201
alertmanager.opcpmfpsbpp0105       opcpmfpsbpp0105  *:9093,9094  running (25m)  2m ago     3h   23.1M    -                 ba2b418f427c  1911a6c14209
crash.opcpmfpsbpp0101              opcpmfpsbpp0101               running (27m)  2m ago     4d   7560k    -        16.2.9   3520ead5eb19  5f27bd36a62d
crash.opcpmfpsbpp0103              opcpmfpsbpp0103               running (25m)  2m ago     3d   8452k    -        16.2.9   3520ead5eb19  2bc971ac4826
crash.opcpmfpsbpp0105              opcpmfpsbpp0105               running (25m)  2m ago     3d   8431k    -        16.2.9   3520ead5eb19  3dcd3809beb3
grafana.opcpmfpsbpp0101            opcpmfpsbpp0101  *:3000       running (27m)  2m ago     4d   51.3M    -        8.3.5    dad864ee21e9  ad6608c1426b
grafana.opcpmfpsbpp0103            opcpmfpsbpp0103  *:3000       running (25m)  2m ago     3h   49.4M    -        8.3.5    dad864ee21e9  7b39e1ec7986
grafana.opcpmfpsbpp0105            opcpmfpsbpp0105  *:3000       running (25m)  2m ago     3h   53.1M    -        8.3.5    dad864ee21e9  0c178fc5e202
iscsi.ca-1.opcpmfpsbpp0101.jpixcv  opcpmfpsbpp0101               running (23m)  2m ago     23m  82.2M    -        3.5      3520ead5eb19  8724836ea2cd
iscsi.ca-1.opcpmfpsbpp0103.xgceen  opcpmfpsbpp0103               running (23m)  2m ago     23m  71.4M    -        3.5      3520ead5eb19  3b046ad06877
iscsi.ca-1.opcpmfpsbpp0105.uyskvc  opcpmfpsbpp0105               running (23m)  2m ago     23m  67.8M    -        3.5      3520ead5eb19  b7dbec1aabdf
mgr.opcpmfpsbpp0101.dbwmph         opcpmfpsbpp0101  *:8443,9283  running (27m)  2m ago     4d   442M     -        16.2.9   3520ead5eb19  3dddac975409
mgr.opcpmfpsbpp0103.sihfoj         opcpmfpsbpp0103  *:8443,9283  running (25m)  2m ago     3d   380M     -        16.2.9   3520ead5eb19  15d8e94f966e
mon.opcpmfpsbpp0101                opcpmfpsbpp0101               running (27m)  2m ago     4d   154M     2048M    16.2.9   3520ead5eb19  90571a4ff6fc
mon.opcpmfpsbpp0103                opcpmfpsbpp0103               running (25m)  2m ago     3d   103M     2048M    16.2.9   3520ead5eb19  4d4de3d69288
mon.opcpmfpsbpp0105                opcpmfpsbpp0105               running (25m)  2m ago     3d   103M     2048M    16.2.9   3520ead5eb19  db14ad0ef6b6
node-exporter.opcpmfpsbpp0101      opcpmfpsbpp0101  *:9100       running (27m)  2m ago     4d   24.5M    -                 1dbe0e931976  541eddfabb2c
node-exporter.opcpmfpsbpp0103      opcpmfpsbpp0103  *:9100       running (25m)  2m ago     3d   10.4M    -                 1dbe0e931976  c63b991e5cf7
node-exporter.opcpmfpsbpp0105      opcpmfpsbpp0105  *:9100       running (25m)  2m ago     3d   8328k    -                 1dbe0e931976  75404a20b7ab
osd.0                              opcpmfpsbpp0101               running (27m)  2m ago     4h   71.0M    4096M    16.2.9   3520ead5eb19  c413da18938c
osd.1                              opcpmfpsbpp0103               running (25m)  2m ago     4h   72.1M    4096M    16.2.9   3520ead5eb19  b259d4262430
osd.2                              opcpmfpsbpp0105               running (25m)  2m ago     4h   64.7M    4096M    16.2.9   3520ead5eb19  c4ed30712d15
prometheus.opcpmfpsbpp0101         opcpmfpsbpp0101  *:9095       running (27m)  2m ago     4d   82.2M    -                 514e6a882f6e  34c846d9946b
prometheus.opcpmfpsbpp0103         opcpmfpsbpp0103  *:9095       running (25m)  2m ago     3h   77.4M    -                 514e6a882f6e  cec307f490c4
prometheus.opcpmfpsbpp0105         opcpmfpsbpp0105  *:9095       running (25m)  2m ago     3h   74.1M    -                 514e6a882f6e  d13f02d1bb72
# ceph -s
  cluster:
    id:     c41ccd12-dc01-11ec-9e25-00505695f8a8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum opcpmfpsbpp0101,opcpmfpsbpp0103,opcpmfpsbpp0105 (age 25m)
    mgr: opcpmfpsbpp0101.dbwmph(active, since 27m), standbys: opcpmfpsbpp0103.sihfoj
    osd: 3 osds: 3 up (since 25m), 3 in (since 28h)

  data:
    pools:   2 pools, 33 pgs
    objects: 2 objects, 3.2 KiB
    usage:   70 MiB used, 60 GiB / 60 GiB avail
    pgs:     33 active+clean

  io:
    client:   2.5 KiB/s rd, 2 op/s rd, 0 op/s wr
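
Since ceph -s shows the mgr as active, I assume the next check is which service endpoints the active mgr actually publishes; as far as I know, ceph mgr services lists the URLs (dashboard, prometheus, ...) registered by the active mgr, so prometheus should show up there if the module is serving:

# ceph mgr services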



# ceph mgr module enable prometheus
module 'prometheus' is already enabled
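
One thing I have not tried yet is failing over or restarting the mgr in case the module just needs to rebind; if I understand the standard mgr/cephadm commands correctly, that would be something like:

# ceph mgr fail opcpmfpsbpp0101.dbwmph
# ceph orch daemon restart mgr.opcpmfpsbpp0103.sihfoj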

Why is this happening, and how can I solve it?



[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux