Re: Help with "27 osd(s) are not reachable" when also "27 osds: 27 up.. 27 in"

Thanks for the notion!  I did that; the result was no change to the problem, but with an added ceph -s complaint, "Public/cluster network defined, but can not be found on any host", while cluster operations remain otherwise totally normal.  Go figure.  How can ceph -s be so wrong, and the dashboard report critical problems, when there are none?  Makes me really wonder whether any actual IPv6 testing is done before releases are marked 'stable'.
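
For anyone hitting the same thing, a minimal sketch of confirming what the cluster actually has recorded for those networks; the IPv6 prefixes below are placeholders, substitute your own:

    # show what the cluster currently has configured
    ceph config get mon public_network
    ceph config get mon cluster_network
    ceph config get mon ms_bind_ipv6

    # example only: set them to the real IPv6 prefixes in use (placeholder CIDRs)
    ceph config set global public_network fd00:1::/64
    ceph config set global cluster_network fd00:2::/64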

HC


On 10/14/24 21:04, Anthony D'Atri wrote:
Try failing over to a standby mgr
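
In plain CLI terms that amounts to something like:

    # fail the currently active mgr so a standby takes over
    ceph mgr fail
    # confirm which mgr is active afterward
    ceph mgr stat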

On Oct 14, 2024, at 9:33 PM, Harry G Coin <hgcoin@xxxxxxxxx> wrote:

I need help removing a useless HEALTH_ERR in 19.2.0 on a fully dual-stack Docker setup, with Ceph using IPv6 and separate public and private networks, across a few servers.  After upgrading from an error-free v18 release I can't get rid of the HEALTH_ERR, owing to a report that all OSDs are unreachable.  Meanwhile ceph -s reports all OSDs up and in, and the cluster otherwise operates normally.  I don't care whether it's 'a real fix'; I just need to remove the false error report.  Any ideas?
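
One blunt option is to mute the offending check, assuming the check code that ceph health detail prints is OSD_UNREACHABLE (a sketch, not a fix):

    # see exactly which health check is firing and under what code
    ceph health detail
    # mute that check for a week
    ceph health mute OSD_UNREACHABLE 1w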

Thanks

Harry Coin

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



