Re: EC Profiles & DR

Hi Patrick,

If your hardware is new, you are confident in its support and you can plan
for future expansion, you could start with k=3 and m=2.
It is true that we generally prefer k (the number of data chunks) to be a
power of two, but k=3 does the job.

Be careful: it is difficult/painful to change EC profiles later (it requires
data migration).
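
For example, such a profile could be created roughly as follows (a minimal
sketch; "ec-k3m2", "mypool" and the pg_num value of 32 are placeholders, and
crush-failure-domain should match your topology):

    # create the EC profile (failure domain = host)
    ceph osd erasure-code-profile set ec-k3m2 k=3 m=2 crush-failure-domain=host
    # verify the resulting profile
    ceph osd erasure-code-profile get ec-k3m2
    # create an EC pool using that profile (pg_num/pgp_num = 32 as an example)
    ceph osd pool create mypool 32 32 erasure ec-k3m2

With k=3,m=2 the usable fraction of raw capacity is k/(k+m) = 3/5 = 60%,
versus about 33% for 3x replication.
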
________________________________________________________

Regards,

*David CASIER*

________________________________________________________



On Tue, Dec 5, 2023 at 12:35, Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx> wrote:

> Ok, so I had misunderstood the meaning of failure domain. If there is no
> way to request using 2 OSDs per node with node as the failure domain, then
> with 5 nodes k=3+m=1 is not secure enough and I will have to use k=2+m=2,
> much like a RAID 1 setup. A little better than replication from the point
> of view of global storage capacity.
>
> Patrick
>
> On 05/12/2023 at 12:19, David C. wrote:
>
> Hi,
>
> To return to my comparison with SANs: on a SAN you have spare disks to
> rebuild a failed disk.
>
> On Ceph, you therefore need at least one more host (k+m+1).
>
> If you take into consideration the formalities/delivery times for a new
> server, k+m+2 is not a luxury (depending on the growth of your volume).
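>
> For instance, to check what a pool is working with (just a hedged sketch;
> "ec-k2m2" is a placeholder profile name):
>
>     ceph osd erasure-code-profile get ec-k2m2   # show k, m and failure domain
>     ceph osd crush rule dump                    # check the CRUSH rules in use
>     ceph osd tree                               # count the hosts available for chunks
>
> With k=2,m=2 and failure domain host, each PG needs 4 distinct hosts; a 5th
> host (k+m+1) gives Ceph somewhere to rebuild after a host failure, and a 6th
> (k+m+2) keeps that margin while a replacement server is on order.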
>
> ________________________________________________________
>
> Regards,
>
> *David CASIER*
>
> ________________________________________________________
>
>
>
> On Tue, Dec 5, 2023 at 11:17, Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
>> Hi Robert,
>>
>> On 05/12/2023 at 10:05, Robert Sander wrote:
>> > On 12/5/23 10:01, duluxoz wrote:
>> >> Thanks David, I knew I had something wrong  :-)
>> >>
>> >> Just for my own edification: why is k=2, m=1 not recommended for
>> >> production? Is it considered too "fragile", or something else?
>> >
>> > It is the same as a replicated pool with size=2. Only one host can go
>> > down. After that you risk losing data.
>> >
>> > Erasure coding is possible with a cluster size of 10 nodes or more.
>> > With smaller clusters you have to go with replicated pools.
>> >
>> Could you explain why 10 nodes are required for EC?
>>
>> On my side, I'm working on building my first (small) Ceph cluster using
>> EC and I was thinking about 5 nodes with k=4, m=2. With a failure domain
>> of host and several OSDs per node, in my mind this setup could run
>> degraded on 3 nodes using 2 distinct OSDs per node, with the ultimate
>> possibility of losing an additional node without losing data. Of course,
>> with sufficient free storage available.
>>
>> Am I totally wrong in my first Ceph approach?
>>
>> Patrick
>>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



