help with failed OSDs after reboot

I had 5 of 10 OSDs fail on one of my nodes, and after a reboot the other 5 OSDs failed to start as well.

I have tried running ceph-disk activate-all, and it comes back with an error message about the cluster fsid not matching the one in /etc/ceph/ceph.conf.
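If it helps, this is roughly how I have been comparing the fsids (a minimal sketch; it assumes the default cluster name "ceph" and uses osd.0 as a placeholder id, so adjust the paths for your OSDs):

    # fsid the config file advertises
    grep fsid /etc/ceph/ceph.conf

    # fsid recorded on the OSD data partition when it was created
    cat /var/lib/ceph/osd/ceph-0/ceph_fsid

    # how ceph-disk sees the partitions and their metadata
    ceph-disk list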

Has anyone experienced an issue such as this?





