Re: 2x replication: A BIG warning


 



> On 7 December 2016 at 15:54, LOIC DEVULDER <loic.devulder@xxxxxxxx> wrote:
> 
> 
> Hi Wido,
> 
> > As a Ceph consultant I get numerous calls throughout the year to help people
> > with getting their broken Ceph clusters back online.
> > 
> > The causes of downtime vary vastly, but one of the biggest causes is that
> > people use replication 2x. size = 2, min_size = 1.
> 
> We are building a Ceph cluster for our OpenStack, and for data integrity reasons we have chosen to set size=3. But we want to keep access to our data even if 2 of our 3 OSD servers are dead, so we decided to set min_size=1.
> 
> Is it a (very) bad idea?
> 

I would say so. Yes, downtime is annoying on your cloud, but data loss is even worse, much worse.

I would always run with min_size = 2 and manually switch to min_size = 1 if the situation really requires it at that moment.

Losing two disks at the same time doesn't happen often, but if it does happen, you don't want to modify any data on the only copy you have left.

Setting min_size to 1 should be a manual action, imho, when size = 3 and you lose two copies. In that case YOU decide at that moment if it is the right course of action.
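That workflow can be expressed with the standard `ceph` CLI (the pool name "volumes" below is just an example, substitute your own):

```shell
# Normal operation: keep 3 copies, and require at least 2
# up-to-date copies before the pool accepts I/O.
ceph osd pool set volumes size 3
ceph osd pool set volumes min_size 2

# Emergency only: after losing two copies, deliberately allow
# I/O on the single remaining copy...
ceph osd pool set volumes min_size 1

# ...and revert as soon as recovery has restored redundancy.
ceph osd pool set volumes min_size 2
```

With min_size = 2 the pool simply blocks I/O when only one copy is left, instead of silently writing to it, so the decision to run on a single copy stays with you.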

Wido

> Regards / Cordialement,
> ___________________________________________________________________
> PSA Groupe
> Loïc Devulder (loic.devulder@xxxxxxxx)
> Senior Linux System Engineer / Linux HPC Specialist
> DF/DDCE/ISTA/DSEP/ULES - Linux Team
> BESSONCOURT / EXTENSION RIVE DROITE / B19
> Internal postal address: SX.BES.15
> Phone Incident - Level 3: 22 94 39
> Phone Incident - Level 4: 22 92 40
> Office: +33 (0)9 66 66 69 06 (27 69 06)
> Mobile: +33 (0)6 87 72 47 31
> ___________________________________________________________________
> 
> This message may contain confidential information. If you are not the intended recipient, please advise the sender immediately and delete this message. For further information on confidentiality and the risks inherent in electronic communication see http://disclaimer.psa-peugeot-citroen.com.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



