Re: Relation between crushmap and erasure code profile

Hi Greg,

thank you for your answer.

>> how is crush-failure-domain from the EC profile related to a rule in
>> the crushmap?

> The failure domain specified in the EC profile is used to specify the
> type of CRUSH bucket the chooseleaf command operates on. If the
> failure domain is set to host, CRUSH will select independent hosts for
> each OSD in a PG. As you've reconfigured it, CRUSH will just select
> the requisite number of OSDs without worrying about whether they're on
> independent machines, which is a terrible idea, as you have no
> resiliency against hardware failures!
> -Greg

I have the same assumption as you (about resiliency). I was trying to
'fit in' k=6 and m=6 on 3 machines with 4 OSDs per machine, without
warnings from Ceph and without 'active+undersized' PGs, so I changed the
rule from 'type host' to 'type osd'.
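
If I understand Greg's explanation correctly, getting host-level
separation back would mean an EC profile created roughly like this (the
profile name here is just an example, not what I actually used):

  ceph osd erasure-code-profile set ec-k6-m6-host k=6 m=6 crush-failure-domain=host

which, as far as I understand, would generate a rule ending in
'step chooseleaf indep 0 type host' for a pool created from it.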

Simply put: do I need to add more disks to the existing hosts, or add
another host with the proper number of OSDs, to fulfill the requirement
of 18 OSDs?
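
For reference, I can dump the profile that is in use with something like
the following (I am omitting the output here, <profile-name> is a
placeholder):

  ceph osd erasure-code-profile ls
  ceph osd erasure-code-profile get <profile-name>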

Now it looks like this:

pool 36 'default.rgw.buckets.data' erasure size 12 min_size 7 crush_rule 10 object_hash rjenkins pg_num 256 pgp_num 256 last_change 509 lfor 296/505 flags hashpspool tiers 7 read_tier 7 write_tier 7 stripe_width 24576 application rgw

rule default.rgw.buckets.data {
        id 10
        type erasure
        min_size 3
        max_size 24
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type osd
        step emit
}
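
For comparison, this is roughly what the rule looked like before my
change, the chooseleaf step being the only difference (if I remember
correctly):

rule default.rgw.buckets.data {
        id 10
        type erasure
        min_size 3
        max_size 24
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}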
-- 
Regards,
 Luk



