Re: ceph's replicas question


Hi,

> On 27. Aug 2019, at 14:43, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> 
> 100% agree, this happens *all the time* with min_size 1.
> 
> If you really care about your data then 2/1 just doesn't cut it.

Just to make this more specific and less hypothetical: a very easy way to trigger this is to shut down your whole cluster and start it up again, including your network equipment. Cluster activity is normally quite flaky during such a period, and on clusters with min_size 1 this has caused multiple instances of data loss for us.
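For reference, moving a pool from 2/1 to 3/2 is a one-liner per setting. A minimal sketch (the pool name "rbd" is just an example; substitute your own pools, and note that raising size triggers backfill traffic):

    # check the current replication settings of a pool
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

    # keep 3 replicas, and require at least 2 to be up before serving I/O
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2

    # verify the settings across all pools
    ceph osd dump | grep 'replicated size'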

Cheers,
Christian

--
Christian Theune · ct@xxxxxxxxxxxxxxx · +49 345 219401 0
Flying Circus Internet Operations GmbH · http://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
