Changing the failure-domain of an erasure coded pool

Hi ceph enthusiasts,

We have a Ceph cluster with CephFS and two pools: a replicated pool for metadata on SSDs and an EC (4+2) pool on HDDs. Recently we expanded from 4 to 7 nodes and now want to change the failure domain of the erasure-coded pool from 'osd' to 'host'.

What we did was create a new CRUSH rule and assign it to our EC pool; the pool still uses the old erasure-code profile. Details can be found below.
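For reference, creating such a rule and switching the pool over looks roughly like this (the helper profile name 'ec42_host' is illustrative and only used to generate the rule):

    # temporary profile, only needed so that "crush rule create-erasure"
    # can derive a host-based EC rule from it
    ceph osd erasure-code-profile set ec42_host k=4 m=2 \
        crush-failure-domain=host crush-device-class=hdd crush-root=default

    # create the rule and point the existing pool at it
    ceph osd crush rule create-erasure ec42_host_hdd ec42_host
    ceph osd pool set ec42 crush_rule ec42_host_hdd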

Now there are a couple of questions:

1) Is this equivalent to changing the profile? Below you can see that the profile still says 'crush-failure-domain=osd', while the new crush rule has '"op": "chooseleaf_indep", "type": "host"'. (A spot-check of the actual chunk placement is sketched right after these questions.)

2) If we need to change the failure-domain in the profile, can this be done without creating a new pool, which seems troublesome?

3) Finally, if we really do need to create a new pool to do this... what is the best way? For the record: our cluster is now (after the expansion) ~40% full (400 TB of 1 PB) with 173 OSDs.
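In case it is useful as context for question 1: one way to spot-check whether a PG's chunks now land on distinct hosts seems to be mapping its up set back to hosts (assumes jq is available; '1.0' is just one PG of pool 1, per the lspools output below):

    # list a few PGs of the EC pool, then map one PG's OSDs to their hosts
    ceph pg ls-by-pool ec42 | head -5
    for osd in $(ceph pg map 1.0 -f json | jq -r '.up[]'); do
        ceph osd find "$osd" | jq -r '.crush_location.host'
    done | sort | uniq -c

    # with failure domain 'host', no host should appear more than once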


Cheers,
Max



Some more details:

[root@ceph-node-a ~]# ceph osd lspools
1 ec42
2 cephfs_metadata

[root@ceph-node-a ~]# ceph osd pool get ec42 erasure_code_profile
erasure_code_profile: ec42

[root@ceph-node-a ~]# ceph osd pool get ec42 crush_rule
crush_rule: ec42_host_hdd

[root@ceph-node-a ~]# ceph osd erasure-code-profile get ec42
crush-device-class=
crush-failure-domain=osd
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8

[root@ceph-node-a ~]# ceph osd crush rule dump ec42_host_hdd
{
    "rule_id": 6,
    "rule_name": "ec42_host_hdd",
    "ruleset": 6,
    "type": 3,
    "min_size": 3,
    "max_size": 6,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "num": 100
        },
        {
            "op": "take",
            "item": -2,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_indep",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
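
In case it helps, the rule itself can also be dry-run against the current crush map with crushtool (rule id 6 as in the dump above, num-rep 6 for k+m); --show-bad-mappings should print nothing if the rule can always pick 6 OSDs on 6 distinct hosts:

    # export the crush map and test the EC rule offline
    ceph osd getcrushmap -o /tmp/crushmap.bin
    crushtool -i /tmp/crushmap.bin --test --rule 6 --num-rep 6 --show-mappings | head
    crushtool -i /tmp/crushmap.bin --test --rule 6 --num-rep 6 --show-bad-mappings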

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


