Re: Help with "27 osd(s) are not reachable" when also "27 osds: 27 up.. 27 in"


Try failing over to a standby mgr
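Something along these lines, sketched under the assumption that the stale check is the OSD_UNREACHABLE code (verify the exact code with `ceph health detail` first):

```shell
# Fail the active mgr so a standby takes over; with no argument,
# "ceph mgr fail" fails over the currently active mgr daemon.
# The new active mgr re-evaluates health check state from scratch,
# which often clears stale/false alerts left over from an upgrade.
ceph mgr fail

# If the false alarm survives the failover, the specific health check
# can be silenced instead. OSD_UNREACHABLE is an assumption here --
# substitute whatever check code "ceph health detail" actually reports.
ceph health mute OSD_UNREACHABLE
```

Note that `ceph health mute` only hides the report; if the check fires again after the mute expires, the underlying dual-stack reachability issue would still need a real fix.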

> On Oct 14, 2024, at 9:33 PM, Harry G Coin <hgcoin@xxxxxxxxx> wrote:
> 
> I need help removing a spurious "HEALTH_ERR" in 19.2.0 on a fully dual-stack Docker setup, with Ceph using IPv6, public and private networks separated, across a few servers. After upgrading from an error-free v18 release, I can't get rid of the HEALTH_ERR caused by the report that all OSDs are unreachable. Meanwhile, ceph -s reports all OSDs up and in, and the cluster otherwise operates normally. I don't care whether it's 'a real fix'; I just need to remove the false error report. Any ideas?
> 
> Thanks
> 
> Harry Coin
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



