OK, so I tried the new Ceph dashboard by running "set-prometheus-api-host"
(note "host", not "url") and it returns the wrong data. We have 4
Ceph clusters feeding into the same Prometheus instance. How does the
dashboard know which cluster's data to pull? Do I need to pass a PromQL query?
The capacity widget at the top right (not using Prometheus) shows 35%
of 51 TiB used (test cluster data)... This is correct. The chart shows
used capacity as 1.7 PiB, which is coming from the production cluster
(incorrect).
Ideas?
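For what it's worth, one way to check whether the scrapes from the four
clusters are distinguishable at all is to ask Prometheus directly which
label values it holds on the ceph metrics. This is only a sketch: the
"cluster" label name, the Prometheus address, and the metric name below
are assumptions; substitute whatever your setup actually uses.

```shell
# Assumed Prometheus address; replace with your instance.
PROM=http://prometheus.example.com:9090

# List the values Prometheus holds for an assumed "cluster" label.
# If no label distinguishes the four clusters, the dashboard has no
# way to tell them apart either.
curl -s "${PROM}/api/v1/label/cluster/values"

# Query one capacity metric and inspect which labels each returned
# series carries (metric name is an assumption).
curl -s "${PROM}/api/v1/query?query=ceph_cluster_total_bytes"
```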
On 2023-10-30 11:30, Nizamudeen A wrote:
Ah yeah, probably that's why the utilization charts are empty:
they rely on the Prometheus info.
And I raised a PR to disable the new dashboard in quincy.
https://github.com/ceph/ceph/pull/54250
Regards,
Nizam
On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin <bugs@xxxxxxxxxx> wrote:
Hello,
We're not using Prometheus within Ceph (the Ceph dashboards show in our
Grafana, which is hosted elsewhere). The old dashboard showed the
metrics fine, so not sure why in a patch release we would need to make
configuration changes to get the same metrics... Agree it should be
off by default.
"ceph dashboard feature disable dashboard" works to put the old
dashboard back. Thanks.
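In case it helps others on 17.2.7, the toggle sequence as I understand
it is below. This is a sketch: the `feature status` subcommand is how I
recall the dashboard feature-toggle docs; verify what your release
actually supports with `ceph dashboard -h`.

```shell
# Show the current feature toggles (verify this subcommand exists on
# your release with `ceph dashboard -h`).
ceph dashboard feature status

# Switch back to the old landing page, then reload the browser tab.
ceph dashboard feature disable dashboard

# To try the new page again later:
ceph dashboard feature enable dashboard
```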
On 2023-10-30 00:09, Nizamudeen A wrote:
> Hi Matthew,
>
> Is the prometheus configured in the cluster? And also, is the
> PROMETHEUS_API_URL set? You can set it manually with `ceph dashboard
> set-prometheus-api-url <url-of-prom>`.
>
> You can switch to the old dashboard by switching the feature toggle
> in the dashboard: `ceph dashboard feature disable dashboard`, then
> reload the page. Probably this should have been disabled by default.
>
> Regards,
> Nizam
>
> On Sun, Oct 29, 2023, 23:04 Matthew Darwin <bugs@xxxxxxxxxx> wrote:
>
>> Hi all,
>>
>> I see 17.2.7 quincy is published as debian-bullseye packages, so I
>> tried it on a test cluster.
>>
>> I must say I was not expecting the big dashboard change in a patch
>> release. Also, all the "cluster utilization" numbers are blank now
>> (any way to fix it?), so the dashboard is much less usable now.
>>
>> Thoughts?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx