Re: ceph's replicas question

If you have decent CPU and RAM on the OSD nodes, you can try erasure coding. Even just 4:2 should keep the cost per TB lower than 2x replication (4+2 is roughly a 1.5x raw-to-usable ratio) and be much safer (it tolerates two failures, the same as 3x replication). We use that on our biggest production SSD pool.
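
As a rough sketch, setting up a 4+2 EC pool looks something like this (the profile and pool names are placeholders; pick pg_num and crush-failure-domain to suit your cluster):

    # define a k=4, m=2 erasure code profile with hosts as the failure domain
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    # create a pool that uses the profile
    ceph osd pool create ecpool 128 128 erasure ec-4-2
    # only needed if RBD or CephFS will write to the EC pool
    ceph osd pool set ecpool allow_ec_overwrites true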

From: Wesley Peng <weslepeng@xxxxxxxxx>
Sent: Sunday, 25 August 2019 9:11 PM
To: Wido den Hollander <wido@xxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: [ceph-users] Re: ceph's replicas question
 
Ok thanks.

Wido den Hollander <wido@xxxxxxxx> wrote on Sunday, 25 August 2019 at 4:47 AM:


> On 24 Aug 2019 at 16:36, Darren Soothill <darren.soothill@xxxxxxxx> wrote the following:
>
> So, can you do it?
>
> Yes you can.
>
> Should you do it is the bigger question.
>
> So my first question would be: what type of drives are you using? Enterprise-class drives with a low failure rate?
>

Doesn’t matter. From my experience: with 2x replication you will lose data at some point.

As a consultant I have just seen too many cases of data loss with 2x.

Please, don’t do it.
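
If in doubt, stick with the defaults, which for a replicated pool roughly means (the pool name is a placeholder):

    # keep three copies and refuse I/O when fewer than two are available
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2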

> Then you have to ask yourself: are you feeling lucky?
>
> If you do a scrub and one drive returns one value and another drive returns a different value, which one is correct?
>
> What happens should you have a drive failure and you have any other error? A node failure? Another disk failure? A disk read error? All of these could mean data loss.
>
> How important is the data you are storing, and do you have a backup of it? You will need that backup at some point.
>
> Darren
>
> Sent from my iPhone
>
>> On 24 Aug 2019, at 14:01, Wesley Peng <weslepeng@xxxxxxxxx> wrote:
>>
>> 
>> Hi,
>>
>> We have all SSD disks as ceph's backend storage.
>> Considering the cost factor, can we set up the cluster to keep only two replicas of each object?
>>
>> thanks & regards
>> Wesley

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
