Re: How much space?


 



Replication is set on a per pool basis. You can set some, or all, pools to replica size of 2 instead of 3.

Thank you very much. I see this can be set in the global configuration ("osd pool default size").
So it's up to me to configure Ceph to be redundant and fault tolerant?
If I set "osd pool default size" to 2, can I be sure my data will be safe if a cluster node goes down?
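For reference, a minimal sketch of the relevant ceph.conf settings (the option names are standard Ceph options; the values shown are just this two-copy example):

```
# ceph.conf -- defaults applied to newly created pools
[global]
osd pool default size = 2        ; keep two copies of every object
osd pool default min size = 1    ; still serve I/O when only one copy remains
```

Note these defaults only affect pools created afterwards; an existing pool can be changed with `ceph osd pool set <poolname> size 2`.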
 
Ceph uses replication, not erasure coding (unlike parity RAID), so data is stored as complete copies on multiple OSDs. Erasure coding is scheduled for the Firefly release, according to the roadmap.

As I said, I expect something similar to RAID 5: if one hard drive per cluster node fails, my data will be safe; if an entire cluster node fails, my data will be safe. Could you help me understand the correct configuration for this situation?
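What makes a whole-node failure survivable is not the replica count alone but the CRUSH rule's failure domain: the default replicated rule spreads copies across hosts, so with size >= 2 a surviving copy always lives on another node. A sketch of what that default rule looks like in a decompiled CRUSH map (the rule name may differ on your cluster; dump yours with `ceph osd getcrushmap -o map && crushtool -d map`):

```
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host   # place each replica on a different host
    step emit
}
```

If the rule instead said `type osd`, two replicas could land on two disks of the same node, and losing that node would lose both copies.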

Thank you very much for your help!
Bye.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
