Re: Sanity check

Hi,

Your crush rule distributes each chunk to a different host, so your failure domain is host. The crush-failure-domain=osd in the EC profile is most likely left over from the initial creation (perhaps it was meant to be OSD during early tests), but the crush rule is what actually matters here.
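If you want to double-check without decompiling the crush map, the rule dump shows it directly (just a quick pointer; the output is JSON and its exact layout can vary a bit between releases):

ceph osd crush rule dump ecpool

Look for the chooseleaf step; its "type": "host" is what determines the effective failure domain, regardless of the crush-failure-domain value stored in the EC profile.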

We thought we were testing this by turning off 2 hosts; we have had one host offline recently and the cluster was still serving clients - did we get lucky?

No, you didn't get lucky. By default, an EC pool's min_size is k + 1, which is 7 in your case. You have 8 chunks in total (k=6 + m=2) distributed across different hosts; turning off one host leaves 7 available chunks, which still meets min_size, so the pool keeps serving clients. If you shut down one more host, only 6 chunks remain, which is below min_size, and the PGs will become inactive (the data is still there, but I/O stops until enough hosts are back).
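You can confirm the min_size on the pool itself (same <pool> placeholder as in your own commands):

ceph osd pool get <pool> min_size

With k=6 and m=2 this should report min_size: 7, i.e. k + 1, matching the 8 chunks spread over your hosts.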

Regards,
Eugen

Quoting Adam Witwicki <Adam.Witwicki@xxxxxxxxxxxx>:

Hello,

Can someone please let me know what failure domain my erasure code pool is using, osd or host? We thought we were testing this by turning off 2 hosts; we have had one host offline recently and the cluster was still serving clients - did we get lucky?

ceph osd pool get <pool> crush_rule
crush_rule: ecpool

ceph osd pool get <pool> erasure_code_profile
erasure_code_profile: 6-2

rule ecpool {
        id 3
        type erasure
        min_size 3
        max_size 10
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}


ceph osd erasure-code-profile get 6-2
crush-device-class=hdd
crush-failure-domain=osd
crush-root=default
jerasure-per-chunk-alignment=false
k=6
m=2
plugin=jerasure
technique=reed_sol_van
w=8



Octopus

Regards


Adam



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


