Error CephMgrPrometheusModuleInactive

I have an error in the Ceph dashboard:
------
CephMgrPrometheusModuleInactive
description
The mgr/prometheus module at opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283 is
unreachable. This could mean that the module has been disabled or the mgr
itself is down. Without the mgr/prometheus module metrics and alerts will
no longer function. Open a shell to ceph and use 'ceph -s' to determine
whether the mgr is active. If the mgr is not active, restart it, otherwise
you can check the mgr/prometheus module is loaded with 'ceph mgr module ls'
and if it's not listed as enabled, enable it with 'ceph mgr module enable
prometheus'
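
For reference, these are the checks the alert description refers to, run from
a shell with the ceph CLI (on a containerized cephadm deployment like this
one I assume that means 'cephadm shell' on a mon/mgr host):

# ceph -s                              # is an mgr active?
# ceph mgr module ls                   # is 'prometheus' listed as enabled?
# ceph mgr module enable prometheus    # enable it if it is not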

and in the mgr container log I have this error:
---------
debug 2022-06-01T07:47:13.929+0000 7f21d6525700  0 log_channel(cluster) log
[DBG] : pgmap v386352: 1 pgs: 1 active+clean; 0 B data, 16 MiB used, 60 GiB
/ 60 GiB avail
debug 2022-06-01T07:47:14.039+0000 7f21c7b08700  0 [progress INFO root]
Processing OSDMap change 29..29
debug 2022-06-01T07:47:15.128+0000 7f21a7b36700  0 [dashboard INFO request]
[10.60.161.64:63651] [GET] [200] [0.011s] [admin] [933.0B] /api/summary
debug 2022-06-01T07:47:15.866+0000 7f21bdfe2700  0 [prometheus INFO
cherrypy.access.139783044050056] 10.56.0.223 - - [01/Jun/2022:07:47:15]
"GET /metrics HTTP/1.1" 200 101826 "" "Prometheus/2.33.4"
10.56.0.223 - - [01/Jun/2022:07:47:15] "GET /metrics HTTP/1.1" 200 101826
"" "Prometheus/2.33.4"
debug 2022-06-01T07:47:15.928+0000 7f21d6525700  0 log_channel(cluster) log
[DBG] : pgmap v386353: 1 pgs: 1 active+clean; 0 B data, 16 MiB used, 60 GiB
/ 60 GiB avail
debug 2022-06-01T07:47:16.126+0000 7f21a6333700  0 [dashboard INFO request]
[10.60.161.64:63651] [GET] [200] [0.003s] [admin] [69.0B]
/api/feature_toggles
debug 2022-06-01T07:47:17.129+0000 7f21cd313700  0 [progress WARNING root]
complete: ev f9e995f4-d172-465f-a91a-de6e35319717 does not exist
debug 2022-06-01T07:47:17.129+0000 7f21cd313700  0 [progress WARNING root]
complete: ev 1bb8e9ee-7403-42ad-96e4-4324ae6d8c15 does not exist
debug 2022-06-01T07:47:17.130+0000 7f21cd313700  0 [progress WARNING root]
complete: ev 6b9a0cd9-b185-4c08-ad99-e7fc2f976590 does not exist
debug 2022-06-01T07:47:17.130+0000 7f21cd313700  0 [progress WARNING root]
complete: ev d9bffc48-d463-43bf-a25b-7853b2f334a0 does not exist
debug 2022-06-01T07:47:17.130+0000 7f21cd313700  0 [progress WARNING root]
complete: ev c5bf893d-2eac-4bb6-994f-cbcf3822c30c does not exist
debug 2022-06-01T07:47:17.131+0000 7f21cd313700  0 [progress WARNING root]
complete: ev 43511d64-6636-455e-8df5-bed1aa853f3e does not exist
debug 2022-06-01T07:47:17.131+0000 7f21cd313700  0 [progress WARNING root]
complete: ev 857aabc5-e61b-4a76-90b2-62631bfeba00 does not exist


10.56.0.221 - - [01/Jun/2022:07:47:00] "GET /metrics HTTP/1.1" 200 101830
"" "Prometheus/2.33.4"
debug 2022-06-01T07:47:01.632+0000 7f21a7b36700  0 [dashboard ERROR
exception] Internal Server Error
Traceback (most recent call last):
  File "/lib/python3.6/site-packages/cherrypy/lib/static.py", line 58, in
serve_file
    st = os.stat(path)
FileNotFoundError: [Errno 2] No such file or directory:
'/usr/share/ceph/mgr/dashboard/frontend/dist/en-US/prometheus_receiver'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 47, in
dashboard_exception_handler
    return handler(*args, **kwargs)
  File "/lib/python3.6/site-packages/cherrypy/_cpdispatch.py", line 54, in
__call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/home.py", line 135, in
__call__
    return serve_file(full_path)
  File "/lib/python3.6/site-packages/cherrypy/lib/static.py", line 65, in
serve_file
    raise cherrypy.NotFound()

but my cluster shows everything is OK:

# ceph -s
  cluster:
    id:     868c3ad2-da76-11ec-b977-005056aa7589
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum opcpmfpskup0105,opcpmfpskup0101,opcpmfpskup0103
(age 38m)
    mgr: opcpmfpskup0105.mureyk(active, since 8d), standbys:
opcpmfpskup0101.uvkngk
    osd: 3 osds: 3 up (since 38m), 3 in (since 84m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean
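
Since the cluster itself reports HEALTH_OK, I guess the next thing to check is
whether the metrics endpoint the alert points at actually answers (I notice the
alert names opcpmfpskup0101 while 'ceph -s' shows opcpmfpskup0105 as the active
mgr). A sketch of what I have in mind; the host and port are the ones from the
alert, and 'ceph mgr services' plus curl on that node are my assumptions:

# ceph mgr services                    # URLs published by the active mgr
# curl -s http://opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283/metrics | head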

Can anyone explain this?