Re: about replica size

This keeps coming up, which is not surprising, considering it is a core question.

Here's how I look at it:
The Ceph team has chosen to default to N+2 redundancy.  This is analogous to RAID 6 (NOT RAID 1).

The basic reasoning for N+2 in storage is as follows:
If you experience downtime (either routine or non-routine), then you can survive an additional failure during recovery.

The scenario goes like this:
Update software on OSD host
Reboot
All hosted OSDs are marked down
Host comes back online, hosted OSDs come online
Recovery begins
Drive in another host is found to have an unreadable sector

If you only have single redundancy (N+1, i.e. R2, m=X n=1, RAID1, RAID5), then you have now lost data.

If you have double redundancy (N+2, R3, m=X n=2, RAID6), then there is still a third copy of the good data available, and both redundancy layers can be rebuilt.
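To put rough numbers on the comparison, here is an illustrative sketch.  The 1% per-replica fault probability is an assumption picked for the example, not a measured value; the point is how the two settings scale, not the absolute figures:

```python
# Illustrative only: assumed probability that a latent fault (e.g. an
# unreadable sector) surfaces on one replica during the recovery window.
p_fault_during_recovery = 0.01

# size=2: one replica is down/recovering, so a single fault on the one
# remaining copy loses data.
p_loss_size2 = p_fault_during_recovery

# size=3: one replica is down/recovering, so BOTH remaining copies must
# fault during the same window to lose data.
p_loss_size3 = p_fault_during_recovery ** 2

print(f"size=2 loss probability: {p_loss_size2:.4%}")  # 1.0000%
print(f"size=3 loss probability: {p_loss_size3:.4%}")  # 0.0100%
```

With these assumed numbers, double redundancy is a hundred times less likely to lose data in that window, because two independent faults have to line up instead of one.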

RAID6 came into being because the longer you spend recovering, the more likely you are to run into an undetected failure.

Ceph takes the same view; it is built for MASSIVE storage, with LONG recovery times.  My pools are built on 10 TB drives; others on this list use 12 TB and 14 TB drives.
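A back-of-the-envelope calculation shows why big drives make this worse.  The unrecoverable-read-error (URE) rate of 1e-15 per bit below is an assumed, spec-sheet-style figure for illustration, not a number for any particular drive:

```python
import math

# Assumed URE rate, per bit read (illustrative spec-sheet-style figure).
p_ure_per_bit = 1e-15

# Reading an entire 12 TB drive during recovery, in bits.
drive_bits = 12e12 * 8

# P(no error) = (1 - p)^bits, which for tiny p is well approximated
# by exp(-p * bits).
p_at_least_one_ure = 1 - math.exp(-p_ure_per_bit * drive_bits)

print(f"P(at least one URE during full read): {p_at_least_one_ure:.1%}")  # 9.2%
```

Even under these optimistic assumptions, the chance of tripping over a latent bad sector while rereading a full 12 TB drive is far from negligible, and it grows with drive size and recovery time.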

It all comes down to this: are you absolutely certain that you will never encounter a latent failure while executing routine maintenance?

If you look at it from the standpoint of risk management, using Dr. James Reason's Swiss Cheese model, then each redundancy layer is a barrier against failure.

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



-----Original Message-----
From: Zhenshi Zhou [mailto:deaderzzs@xxxxxxxxx] 
Sent: Thursday, July 9, 2020 7:11 PM
To: ceph-users
Subject:  about replica size

Hi,

As we all know, the default replica setting of 'size' is 3, which means
there are 3 copies of an object. What are the disadvantages if I set it
to 2, other than getting fewer copies?

Thanks
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


