Re: Can I create 8+2 Erasure coding pool on 5 node?

Here's a CRUSH rule for 8+2 that will choose 2 OSDs per host across 5 hosts:


rule cephfs_data_82 {
        id 4
        type erasure
        min_size 3
        max_size 10
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        # restrict placement to hdd osds under the default root
        step take default class hdd
        # pick 5 distinct hosts ...
        step choose indep 5 type host
        # ... then 2 osds within each host, 10 shards total
        step choose indep 2 type osd
        step emit
}
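
In case it's useful, here's roughly how you'd install and sanity-check a
rule like that (just a sketch; the file names are placeholders and the
rule id has to match whatever is free in your map):

# dump and decompile the current crush map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# paste the rule above into crush.txt, then recompile
crushtool -c crush.txt -o crush.new
# check the mappings before injecting: rule id 4, 10 shards
crushtool -i crush.new --test --rule 4 --num-rep 10 --show-mappings
# inject the edited map
ceph osd setcrushmap -i crush.new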



This is kind of useful because if you set the pool's min_size to 8, you
could even lose an entire host (only 2 of the 10 shards) and stay online.
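
For example (the pool name here is just a placeholder):

ceph osd pool set cephfs_data_82 min_size 8

Keep in mind that running at min_size == k means zero remaining
redundancy while the host is down, so you'd want it back quickly.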

Cheers, dan





On Thu, Mar 25, 2021 at 7:02 PM morphin <morphinwithyou@xxxxxxxxx> wrote:

> Hello.
>
> I have a 5-node cluster in datacenter A, and the same 5 nodes in
> datacenter B.
> Together they're going to be a 10-node 8+2 EC cluster for backup, but I
> need to add the second 5 nodes later.
> I have to sync my S3 data with multisite to the 5-node cluster in
> datacenter A, move it to datacenter B, and then add the other 5 nodes
> to the same cluster.
>
> The question is: can I create an 8+2 EC pool on a 5-node cluster and
> add the other 5 nodes later? How can I rebalance the data after that?
> Or is there a better solution in my case? What should I do?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


