Re: cephadm grafana per host certificate

Seems like the per-host config was actually introduced in 16.2.11:
https://github.com/ceph/ceph/pull/48103

So I'm going to have to wait for 16.2.13. Sorry for the noise.

Quoting Eugen Block <eblock@xxxxxx>:

I looked a bit deeper and compared with a similar customer cluster (16.2.11) where I had to reconfigure grafana after an upgrade anyway. There it seems to work as expected with the per-host certificates: I only added the host-specific certs and keys and see the graphs in the dashboard, while on our 16.2.10 cluster this doesn't work that way. So I assume there must be a difference between .10 and .11 regarding grafana. Could anyone confirm this?
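
Something like this should show which grafana cert/key entries a
cluster actually has (just a quick check):

ceph config-key ls | grep grafana
ceph config-key get mgr/cephadm/grafana_crt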

Quoting Eugen Block <eblock@xxxxxx>:

Hi,

thanks for the suggestion, I'm aware of the wildcard certificate option (which brings its own issues for other services). But since the ceph config seems to support these per-host certificates, I would like to get this working.

Thanks,
Eugen

Quoting Reto Gysi <rlgysi@xxxxxxxxx>:

Hi Eugen,

I've created a certificate with subject alternative names, so the
certificate is valid on each node of the cluster.
[image: image.png]
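
A certificate like that could be created for example with openssl
(hostnames are just placeholders, -addext needs OpenSSL 1.1.1+):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=grafana" \
  -addext "subjectAltName=DNS:ceph01,DNS:ceph02,DNS:ceph03" \
  -keyout grafana.key -out grafana.crt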

Cheers

Reto

On Thu, 20 Apr 2023 at 11:42, Eugen Block <eblock@xxxxxx> wrote:

Hi *,

I've set up grafana, prometheus and node-exporter on an adopted
cluster (currently running 16.2.10) and was trying to enable ssl for
grafana. As stated in the docs [1] there's a way to configure
individual certs and keys per host:

ceph config-key set mgr/cephadm/{hostname}/grafana_key -i $PWD/key.pem
ceph config-key set mgr/cephadm/{hostname}/grafana_crt -i $PWD/certificate.pem
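
In my case that means setting the pair once per host, roughly like
this (hostnames and file names are just placeholders):

for host in ceph01 ceph02 ceph03; do
  ceph config-key set mgr/cephadm/$host/grafana_key -i $PWD/$host-key.pem
  ceph config-key set mgr/cephadm/$host/grafana_crt -i $PWD/$host-cert.pem
done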

So I did that, then ran 'ceph orch reconfig grafana' but I still get a
bad cert error message:

Apr 20 10:21:19 ceph01 conmon[3772491]: server.go:3160: http: TLS handshake error from <IP>:46084: remote error: tls: bad certificate

It seems like the cephadm-generated cert/key pair
(mgr/cephadm/grafana_key, mgr/cephadm/grafana_crt) supersedes the
per-host certs, and even after removing the generated cert/key (and
then reconfiguring) cephadm regenerates them and leaves me with the
same problem. Is this a known issue and what would be the fix? I
didn't find anything in the tracker, but I might have missed it.
To confirm that my custom certs actually work I replaced the general
cert with my custom cert; the error disappears and I can see the
grafana graphs in the dashboard. I could leave it like this, but if
grafana were to fail over to another host it would stop working, of course.
Any hints are greatly appreciated.
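
For reference, what I tried boils down to something like this (paths
are just placeholders):

# remove the cephadm-generated pair, then reconfigure
# (cephadm just recreates them):
ceph config-key rm mgr/cephadm/grafana_crt
ceph config-key rm mgr/cephadm/grafana_key
ceph orch reconfig grafana

# workaround: overwrite the general cert/key with the custom one:
ceph config-key set mgr/cephadm/grafana_crt -i $PWD/certificate.pem
ceph config-key set mgr/cephadm/grafana_key -i $PWD/key.pem
ceph orch reconfig grafana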

Thanks,
Eugen

[1]

https://docs.ceph.com/en/latest/cephadm/services/monitoring/#configuring-ssl-tls-for-grafana



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


