Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time


All good points (also replying to Frank Schilder)

On 11/16/20 2:36 PM, Janne Johansson wrote:
> Not trying to say you don't understand this, but rather that people who run small ceph clusters tend to start out with R=2 or K+1 EC because the larger faults are easier to imagine.
TBH, I think I did somewhat underestimate this with EC. The implications were clear-cut to me in the case of 2x replication.

My wrong rationale was: I never had downtime with my 2+1 EC pool at min_size=2, even when doing maintenance, so I must be doing this right! But min_size should of course be 3 (k+1) for data integrity, for the reasons that you and Frank have described so well in this thread.
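For anyone else cleaning up a similar setup, checking and correcting this is a pair of one-liners; "ecpool" below is just a placeholder for your actual EC pool name:

    # show the current min_size of the pool
    ceph osd pool get ecpool min_size
    # raise it to k+1 (3 for a 2+1 profile); PGs then go inactive
    # rather than accepting writes with only k shards available
    ceph osd pool set ecpool min_size 3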

Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


