Re: Ceph node failure

Hello, Olivier!

  Take a look at the recent ML thread titled "Changing replica size of a running pool". David Turner described ways to run a two-node cluster with some kind of redundancy.
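
If I remember the thread correctly, the pool-level part comes down to two settings; a minimal sketch, assuming a pool named "rbd" (substitute your own pool names):

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

With min_size=1 the pool keeps serving I/O from a single surviving replica, at the cost of running without redundancy until recovery completes.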

Best regards,
Vladimir

2017-05-06 16:55 GMT+05:00 Olivier Roch <olivierrochvilato@xxxxxxxxx>:
Hi all,

I have a 3-node Ceph cluster, each node with 4 OSDs and 1 monitor, and replication across hosts defined in the CRUSH map. There are two pools (one with size=3, min_size=2; the other with size=2, min_size=2).

One of the nodes failed (hardware failure), so I removed it from the CRUSH map and changed the replication failure domain from host to osd.
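
For the record, the steps looked roughly like this (osd.8 and the rule name are just examples, and on pre-Luminous releases the pool property is crush_ruleset rather than crush_rule):

    # remove each of the failed node's OSDs from the CRUSH map and the cluster
    ceph osd crush remove osd.8
    ceph auth del osd.8
    ceph osd rm 8

    # create a replicated rule with osd as the failure domain, then assign it
    ceph osd crush rule create-simple replicate-by-osd default osd
    ceph osd crush rule dump                  # note the new rule's ruleset id
    ceph osd pool set <pool> crush_ruleset <id>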

After the rebuild, cluster health is now OK (with a warning for the down monitor).


While I work on restoring the 3rd node, is there a way to survive a 2nd node failure, leaving just 1 node in the cluster? (The disk space used does not exceed 1 node's capacity.)

I assume I will have to set up an external monitor, but what else?
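
My reasoning on the monitor side: with 3 monitors on the 3 nodes, quorum only survives one failure, so a majority of monitors would have to sit outside the nodes that may fail. I guess adding one would look something like this with ceph-deploy (mon-ext is a placeholder for the external host):

    ceph-deploy mon add mon-ext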


Olivier

--

Best regards,
Vladimir Drobyshevskiy
"АйТи Город" company
+7 343 2222192

IT consulting
Turnkey project delivery
IT services outsourcing
IT infrastructure outsourcing
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
