I have a Ceph Luminous (12.2.12) cluster with 6 nodes. I’m attempting to create an EC3+2 pool with the following commands:
[root@mon-1 ~]# ceph osd erasure-code-profile get es32
crush-device-class=
crush-failure-domain=host
crush-root=sgshared
jerasure-per-chunk-alignment=false
k=3
m=2
plugin=jerasure
technique=reed_sol_van
w=8
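(For completeness, the profile and the pool were created roughly along these lines; "ecpool" and the PG counts below are placeholders, not my exact values:)

ceph osd erasure-code-profile set es32 k=3 m=2 plugin=jerasure technique=reed_sol_van crush-failure-domain=host crush-root=sgshared
ceph osd pool create ecpool 128 128 erasure es32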
"rule_id": 11, "rule_name": "es32", "ruleset": 11, "type": 3, "min_size": 3, "max_size": 5, "steps": [ { "op": "set_chooseleaf_tries", "num": 5 }, { "op": "set_choose_tries", "num": 100 }, { "op": "take", "item": -2, "item_name": "sgshared" }, { "op": "chooseleaf_indep", "num": 0, "type": "host" }, { "op": "emit" } ] }, From the output of “ceph osd pool ls detail” you can see min_size=4, the crush rule says min_size=3 however the pool does NOT survive 2 hosts failing. Am I missing something? |