Re: questions about monitor data and ceph recovery

Hi!

> 2. One node (with 8 OSDs) goes offline. Will Ceph automatically replicate all objects onto the remaining node to maintain the replica count of 2?
> No, because it can no longer satisfy your CRUSH rule. Your CRUSH rule states one copy per node, and it will keep it that way. The cluster will go into a degraded state until you can bring up another node (i.e. all your data is now very vulnerable). It is often suggested to run with 3x replication if possible, or at the very least nr_nodes = replicas + 1. If you had to make it replicate on the remaining node, you would have to change your CRUSH rule to replicate across OSDs rather than nodes. But then you will most likely have problems when a node dies, because both copies of an object could easily end up on two OSDs of that failed node.
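
(Just to check that I understand the above correctly: the difference is only the failure-domain type chosen in the rule. A decompiled CRUSH map would contain something along these lines; the rule names and the root "default" below are just illustrative for my setup, not taken from my actual map:)

    # one copy per host (the behaviour described above)
    rule replicated_per_host {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

    # copies spread over OSDs only, so two copies may land on the same host
    rule replicated_per_osd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
    }

If I read the docs correctly, a pool is then pointed at one of these with "ceph osd pool set <pool> crush_ruleset <ruleset-id>" (newer releases use "crush_rule <name>" and the keyword "id" instead of "ruleset").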

Is it possible to define a "fallback" CRUSH rule that would take effect if the main rule cannot obtain the needed number of replicas?

Pavel.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



