Re: why was osd pool default size changed from 2 to 3.

> On 23.10.2015 at 20:53, Gregory Farnum wrote:
>> On Fri, Oct 23, 2015 at 8:17 AM, Stefan Eriksson <stefan@xxxxxxxxxxx> wrote:
>>
>> Nothing changed to make two copies less secure. 3 copies is just so
>> much more secure and is the number that all the companies providing
>> support recommend, so we changed the default.
>> (If you're using it for data you care about, you should really use 3 copies!)
>> -Greg
>
> I assume that number really depends on the number of OSDs you have in your crush rule for that pool. A replication of
> 2 might be OK for a pool spread over 10 OSDs, but not for one spread over 100 OSDs....
>
> Corin
>

I'm also interested in this: what changes when you add 100+ OSDs that warrants 3 replicas instead of 2, and what is the reasoning behind "the companies providing support recommend 3"?
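
For reference, this is my understanding of the setting being discussed ("mypool" is just a placeholder pool name, and the values are only an illustration):

  # cluster-wide defaults applied to newly created pools, in ceph.conf:
  [global]
  osd pool default size = 3
  osd pool default min_size = 2

  # or changed on an existing pool:
  ceph osd pool set mypool size 3
  ceph osd pool set mypool min_size 2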
Theoretically, two replicas seem secure.
If you have 100+ OSDs, I can see that maintenance will take much longer, and if you use "set noout" then only a single copy of each affected PG will remain active while the other replica is down for maintenance.
But if you "crush reweight to 0" before the maintenance, this would not be an issue.
Is this the main reason? 
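
To make sure I'm comparing the right things, these are the two maintenance approaches I mean (osd.5 is an arbitrary example):

  # Approach 1: keep the OSD "in" and stop the cluster from marking it out
  ceph osd set noout
  # ... take the OSD/host down for maintenance, bring it back up ...
  ceph osd unset noout

  # Approach 2: drain the OSD first so its PGs are re-replicated elsewhere
  ceph osd crush reweight osd.5 0
  # wait for backfill to finish (HEALTH_OK), then do the maintenance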

From what I can gather, even if you add new OSDs to the cluster and rebalancing kicks in, a pool still maintains its two replicas.
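That is, after adding OSDs I would expect something like the following to still show two replicas, with all PGs returning to active+clean once backfill completes ("mypool" again as a placeholder):

  ceph osd pool get mypool size   # -> size: 2
  ceph -s                         # PGs should end up active+clean after backfill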

thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
