Re: Ceph cluster with 2 replicas


 



From my experience running Ceph in production over the last 8+ years, it is only a matter of time before pools with a replication size of 2, or erasure coding with m=1, lead to service outages and/or data loss and cause problems in day-2 operations.
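If you want to audit what your existing pools are set to before deciding, something along these lines (read-only commands, <pool> is just a placeholder) shows the replicated size and min_size per pool:

    ceph osd pool ls detail
    ceph osd pool get <pool> size
    ceph osd pool get <pool> min_size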

___________________________________
Clyso GmbH - Ceph Foundation Member
support@xxxxxxxxx
https://www.clyso.com

On 17.08.2021 at 16:54, Anthony D'Atri wrote:
There are certain sequences of events that can result in Ceph not knowing which copy of a PG (if any) has the current data.  That’s one way you can effectively lose data.

I ran into it myself last year on a legacy R2 cluster.

If you *must* have a 2:1 raw:usable ratio, you’re better off with 2+2 EC, assuming you have at least 4 failure domains.
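As a rough sketch (the profile and pool names here are only placeholders, adjust to your CRUSH layout), a 2+2 EC pool with host as the failure domain could be created along these lines:

    ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host
    ceph osd pool create mypool-ec erasure ec-2-2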

There are only two ways that size=2 can go:
A) You set min_size=1 and risk data loss
B) You set min_size=2 and your cluster stops every time you lose a
drive or reboot a machine

Neither of these is a good option for most use cases, but there's
always an edge case. You should stay with size=3, min_size=2 unless
you have an unusual use case.
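For illustration, keeping (or restoring) those recommended values on a replicated pool is just (<pool> is a placeholder):

    ceph osd pool set <pool> size 3
    ceph osd pool set <pool> min_size 2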

On Tue, Aug 17, 2021 at 10:33 AM Michel Niyoyita <micou12@xxxxxxxxx> wrote:
Hi all ,

I am going to deploy a Ceph cluster in production with a replica size of 2. Are
there any drawbacks on the service side? I am going to change the
default (3) to 2.

Please advise.

Regards.

Michel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



