Re: EC pool creation results in incorrect M value?

Thanks for the info regarding min_size in the crush rule - does this seem like a bug to you then? Is anyone else able to reproduce this?

-----Original Message-----
From: Paul Emmerich <paul.emmerich@xxxxxxxx> 
Sent: Monday, January 27, 2020 11:15 AM
To: Smith, Eric <Eric.Smith@xxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: EC pool creation results in incorrect M value?

min_size in the crush rule and min_size in the pool are completely different things that happen to share the same name.

Ignore min_size in the crush rule; it has virtually no meaning in almost all cases (including this one).
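The min_size that matters is the pool's, which your "ceph osd pool ls detail" output shows as 4 (k+1). A PG that loses 2 of its 5 shards is left with only 3, which is below that value, so it goes inactive; that is why the pool does not survive two host failures. If you need I/O to keep flowing in that situation you could lower it, roughly like this (substitute your pool name; note that min_size = k leaves no redundancy margin for writes while degraded):

ceph osd pool get <pool> min_size     # the value the pool actually enforces
ceph osd pool set <pool> min_size 3   # allow I/O with only k shards available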


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Jan 27, 2020 at 3:41 PM Smith, Eric <Eric.Smith@xxxxxxxx> wrote:
>
> I have a Ceph Luminous (12.2.12) cluster with 6 nodes. I’m attempting to create an EC3+2 pool with the following commands:
>
> Create the EC profile:
>
> ceph osd erasure-code-profile set es32 k=3 m=2 plugin=jerasure w=8 technique=reed_sol_van crush-failure-domain=host crush-root=sgshared
>
> Verify profile creation:
>
> [root@mon-1 ~]# ceph osd erasure-code-profile get es32
> crush-device-class=
> crush-failure-domain=host
> crush-root=sgshared
> jerasure-per-chunk-alignment=false
> k=3
> m=2
> plugin=jerasure
> technique=reed_sol_van
> w=8
>
> Create a pool using this profile:
>
> ceph osd pool create ec32pool 1024 1024 erasure es32
>
> List pool detail:
>
> pool 31 'es32' erasure size 5 min_size 4 crush_rule 11 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 1568 flags hashpspool stripe_width 12288 application ES
>
> Here’s the crush rule that’s created:
>     {
>         "rule_id": 11,
>         "rule_name": "es32",
>         "ruleset": 11,
>         "type": 3,
>         "min_size": 3,
>         "max_size": 5,
>         "steps": [
>             {
>                 "op": "set_chooseleaf_tries",
>                 "num": 5
>             },
>             {
>                 "op": "set_choose_tries",
>                 "num": 100
>             },
>             {
>                 "op": "take",
>                 "item": -2,
>                 "item_name": "sgshared"
>             },
>             {
>                 "op": "chooseleaf_indep",
>                 "num": 0,
>                 "type": "host"
>             },
>             {
>                 "op": "emit"
>             }
>         ]
>     },
>
> From the output of “ceph osd pool ls detail” you can see min_size=4 and the crush rule says min_size=3, yet the pool does NOT survive 2 hosts failing.
>
>
>
> Am I missing something?
>
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



