Re: Relation between crushmap and erasure code profile

On Mon, Aug 20, 2018 at 1:28 PM, Luk <skidoo@xxxxxxx> wrote:
> Hello,
>
> I have following definition in crushmap:
>
> rule default.rgw.buckets.data {
>         id 10
>         type erasure
>         min_size 3
>         max_size 24
>         step set_chooseleaf_tries 5
>         step set_choose_tries 100
>         step take default
>         step chooseleaf indep 0 type osd
>         step emit
> }
>
> 'step chooseleaf indep 0 type osd', which I changed from:
> 'step chooseleaf indep 0 type host'
>
> but I used following profile to create pool:
>
> crush-device-class=
> crush-failure-domain=host
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=6
> m=6
> plugin=jerasure
> technique=reed_sol_van
> w=8
>
> how is crush-failure-domain from the EC profile related to the rule
> from the crushmap?

The failure domain specified in the EC profile determines the type of
CRUSH bucket the chooseleaf step operates on. If the failure domain is
set to host, CRUSH will select an independent host for each OSD in a
PG. With the rule as you've reconfigured it, CRUSH will just select the
requisite number of OSDs without checking whether they are on
independent machines. That is a terrible idea, as it leaves you with no
resilience against hardware failures!
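
As a rough sketch (simply reusing the rule you posted with the original
failure domain, not something regenerated from your cluster), the
host-based version of that rule would look like:

rule default.rgw.buckets.data {
        id 10
        type erasure
        min_size 3
        max_size 24
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        # pick k+m = 12 OSDs, each on a different host
        step chooseleaf indep 0 type host
        step emit
}

You can check what a pool is actually using with
'ceph osd crush rule dump default.rgw.buckets.data' and
'ceph osd erasure-code-profile get <profile-name>'.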
-Greg

>
>
> --
> Regards,
>  Luk
>
