Ceph node failure


 



Hi all,




I have a 3-node Ceph cluster, each node with 4 OSDs and 1 monitor, and replication across hosts defined in the CRUSH map. There are 2 pools (one with size=3, min_size=2; one with size=2, min_size=2).

One of the nodes failed (hardware failure), so I removed it from the CRUSH map and changed the replication failure domain from host to osd.
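For reference, the steps above can be sketched with the Ceph CLI. The host bucket name, OSD IDs, and pool names below are hypothetical; adjust them to the actual cluster (this is a sketch, not a verified procedure):

```shell
# Remove the failed node's OSDs from the cluster
# (hypothetical IDs 8-11 on hypothetical host "ceph-node3"):
for id in 8 9 10 11; do
  ceph osd out osd.$id
  ceph osd crush remove osd.$id
  ceph auth del osd.$id
  ceph osd rm osd.$id
done

# Remove the now-empty host bucket from the CRUSH map:
ceph osd crush remove ceph-node3

# Create a rule that replicates across OSDs instead of hosts,
# then point the pools (hypothetical names) at it:
ceph osd crush rule create-replicated replicated_osd default osd
ceph osd pool set pool1 crush_rule replicated_osd
ceph osd pool set pool2 crush_rule replicated_osd
```

With the failure domain set to osd, replicas of a PG may land on OSDs of the same host, which is what allows the cluster to stay clean with fewer hosts than the replica count.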

After recovery, the cluster health is OK now (with a warning for the monitor that is down).


While I work on restoring the 3rd node, is there a way to survive a 2nd node failure, leaving just 1 node in the cluster? (The disk space used does not exceed one node's capacity.)

I assume I will have to set up an external monitor, but what else?
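As a sketch of what surviving on one node would involve, prepared while the cluster still has quorum (monitor names and pool names below are hypothetical assumptions, not taken from the cluster):

```shell
# Monitors: with 1 surviving mon out of the original 3 there is no quorum.
# Either add an external monitor on a separate machine, or shrink the
# monmap ahead of time to the mon on the node that will remain
# (hypothetical mon names "b" and "c" are the ones being removed):
ceph mon remove b
ceph mon remove c

# Pools: a size=2/min_size=2 pool blocks writes as soon as one replica
# is lost, so min_size would need lowering (at the cost of redundancy)
# for the cluster to keep serving I/O from a single node:
ceph osd pool set pool2 min_size 1
```

Note that `ceph mon remove` only works while the cluster is up; if quorum is already lost, the monmap has to be edited offline with `monmaptool` instead. And running with min_size=1 means a single disk failure can lose data, so it should only be a temporary state.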


Olivier



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
