Re: 2x replication: A BIG warning

Hi Wido,

> As a Ceph consultant I get numerous calls throughout the year to help people
> with getting their broken Ceph clusters back online.
> 
> The causes of downtime vary vastly, but one of the biggest causes is that
> people use replication 2x. size = 2, min_size = 1.

We are building a Ceph cluster for our OpenStack environment, and for data integrity reasons we have chosen to set size=3. However, we want to keep accessing data even if 2 of our 3 OSD servers are down, so we decided to set min_size=1.
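
In case it helps to see the exact settings, this is roughly what we apply on our pools (the pool name 'volumes' is just an example here):

    # set the replica count (size) and the minimum number of replicas
    # that must be available for the pool to serve I/O (min_size)
    ceph osd pool set volumes size 3
    ceph osd pool set volumes min_size 1

    # verify the resulting values
    ceph osd pool get volumes size
    ceph osd pool get volumes min_size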

Is it a (very) bad idea?

Regards / Cordialement,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devulder@xxxxxxxx)
Senior Linux System Engineer / Linux HPC Specialist
DF/DDCE/ISTA/DSEP/ULES - Linux Team
BESSONCOURT / EXTENSION RIVE DROITE / B19
Internal postal address: SX.BES.15
Phone Incident - Level 3: 22 94 39
Phone Incident - Level 4: 22 92 40
Office: +33 (0)9 66 66 69 06 (27 69 06)
Mobile: +33 (0)6 87 72 47 31
___________________________________________________________________

This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.