Re: Host Info Missing from Dashboard, Differences in /etc/ceph

Hi Dave,

Please take into account that Nautilus has reached End of Life, so the first
recommendation is to upgrade to a supported release as soon as possible.

That said, the Grafana panels are fed from two different sources: Node
Exporter (procfs data: CPU, RAM, ...) and Ceph Exporter (Ceph-intrinsic data:
OSDs, pools, ...).
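
You can see both sources as separate scrape jobs in Prometheus. Below is a
minimal sketch of listing them over the Prometheus HTTP API; the Prometheus
address here is a placeholder, so adjust it to your setup:

    # List active Prometheus targets and their health, grouped by scrape job.
    # PROM_URL is a placeholder; point it at your Prometheus server.
    import json
    import urllib.request

    PROM_URL = "http://prometheus.example.com:9090"

    with urllib.request.urlopen(f"{PROM_URL}/api/v1/targets", timeout=10) as resp:
        targets = json.load(resp)["data"]["activeTargets"]

    for t in targets:
        job = t["labels"].get("job", "?")        # e.g. node-exporter vs ceph job
        instance = t["labels"].get("instance", "?")
        print(f"{job:20} {instance:30} {t['health']}")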

Based on your comments (missing CPU, RAM, etc.), it might be:

   - Node Exporter container not running on the new nodes (see the check
   sketched after this list),
   - Prometheus configuration not updated to scrape those new nodes,
   - Some issue with the Prometheus query.
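
To rule out the first cause, you can quickly check whether Node Exporter
answers on the new hosts. Here is a minimal sketch, assuming the default
Node Exporter port (9100); the host names are placeholders:

    # Probe the node_exporter metrics endpoint on each new OSD host.
    # Assumes the default port 9100; replace the host names with yours.
    import urllib.request

    NEW_HOSTS = ["new-osd-1", "new-osd-2", "new-osd-3"]  # placeholders

    for host in NEW_HOSTS:
        url = f"http://{host}:9100/metrics"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(f"{host}: node_exporter is up (HTTP {resp.status})")
        except OSError as exc:
            print(f"{host}: node_exporter unreachable ({exc})")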

You can visit the Prometheus Web UI to check whether the new targets are
being scraped properly, and then run the following PromQL query (a scripted
version is sketched after this list):

   - node_memory_MemTotal_bytes: you should get the latest memory value for
   each host.
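
If you prefer to script it, here is a minimal sketch of running that query
against the Prometheus HTTP API; PROM_URL is again a placeholder:

    # Run node_memory_MemTotal_bytes via the Prometheus HTTP API and print
    # one line per scraped host; the new OSD hosts should show up here too.
    import json
    import urllib.parse
    import urllib.request

    PROM_URL = "http://prometheus.example.com:9090"  # placeholder address
    params = urllib.parse.urlencode({"query": "node_memory_MemTotal_bytes"})

    with urllib.request.urlopen(f"{PROM_URL}/api/v1/query?{params}", timeout=10) as resp:
        result = json.load(resp)["data"]["result"]

    for sample in result:
        instance = sample["metric"].get("instance", "?")
        mem_gib = float(sample["value"][1]) / 2**30
        print(f"{instance}: {mem_gib:.1f} GiB total RAM")

If a host is missing from that output, Prometheus is not scraping its Node
Exporter.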

That should help you debug the issue.

Kind Regards,
Ernesto


On Fri, Nov 5, 2021 at 7:17 PM Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:

> Hello,
>
> This is Nautilus 14.2.21 with 9 OSD hosts and 3 MGR/MON hosts.  I've just
> added 3 new OSD hosts, deploying the OSDs with Ceph-Ansible, and adjusting
> the backfill process with pgremapper as recently described.
>
> I noticed today that for the 3 new OSD hosts the Dashboard doesn't have any
> data in the CPU/RAM/Network or other graph panes, although it does report
> the number of OSDs and the total raw capacity.
>
> In investigating this I also noticed a variation in the keyrings in
> /etc/ceph across the 9 OSD hosts, although /etc/ceph/ceph.conf is
> consistent across all.
>
> Please advise on how to correct this.  Should I copy the missing keyrings?
> And which ones?  On the older OSD hosts I see ceph.client.admin,
> ceph.client.crash, and ceph.mgr.xyz for the 3 MGR nodes, but not all are on
> every OSD host.  It's also not clear whether this will fix the graphs
> missing from the Dashboard, or whether I need to do something else.
>
> Also, is there a Ceph-Ansible trick that I missed that could have caused
> this?
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdhall@xxxxxxxxxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


