Re: Changing the failure-domain of an erasure coded pool

This is good news! Thanks for the fast reply.

We will now wait for Ceph to place all objects correctly and then check if we are happy with the setup.
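In the meantime, the data movement can be watched with the usual status commands (a rough sketch; ec42 is the EC pool shown in the details below):

ceph -s                          # overall health, misplaced/degraded object counts
ceph osd pool stats ec42         # recovery/backfill rate for the EC pool
ceph pg ls-by-pool ec42 | head   # spot-check PG states and acting sets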

Cheers
Max
________________________________________
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Sent: Thursday, February 13, 2020 2:54 PM
To: Neukum, Max (ETP)
Cc: ceph-users@xxxxxxx
Subject:  Re: Changing the failure-domain of an erasure coded pool

The CRUSH-related information from the EC profile is only used for the
initial creation of the pool's CRUSH rule. You can just change the
CRUSH rule and everything else will happen automatically.
Or you can create a new CRUSH rule and assign it to the pool like you
did; that's also fine.
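For reference, the "new rule + assign" route looks roughly like this (just a sketch; the profile name ec42_host and rule name ec42_host_rule are made up here, and k/m must match the existing 4+2 layout):

# new profile is only used to generate the rule; the pool keeps its old profile
ceph osd erasure-code-profile set ec42_host k=4 m=2 crush-failure-domain=host crush-device-class=hdd
ceph osd crush rule create-erasure ec42_host_rule ec42_host
# pointing the pool at the new rule triggers the data movement
ceph osd pool set ec42 crush_rule ec42_host_rule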

Unrelated: it's usually not recommended to run the default CephFS data pool
on an EC pool, but I guess it's fine if it's the only data pool.
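The usual recommendation for a new CephFS is a small replicated default data pool with the EC pool attached as an additional data pool (a sketch; the pool and fs names are illustrative, and an existing filesystem cannot swap out its default data pool):

ceph osd pool create cephfs_data_rep 128
ceph fs new cephfs cephfs_metadata cephfs_data_rep
ceph osd pool set ec42 allow_ec_overwrites true   # required for CephFS data on an EC pool (BlueStore)
ceph fs add_data_pool cephfs ec42
# direct file data below a directory to the EC pool
setfattr -n ceph.dir.layout.pool -v ec42 /mnt/cephfs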

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Thu, Feb 13, 2020 at 2:47 PM Neukum, Max (ETP) <max.neukum@xxxxxxx> wrote:
>
> Hi ceph enthusiasts,
>
> We have a Ceph cluster with CephFS and two pools: a replicated one for metadata on SSD and an erasure-coded (4+2) one on HDD. Recently, we expanded from 4 to 7 nodes and now want to change the failure domain of the erasure-coded pool from 'osd' to 'host'.
>
> What we did was create a new CRUSH rule and change the rule assigned to our EC pool; the pool still uses the old EC profile. Details can be found below.
>
> Now there are a couple of questions:
>
> 1) Is this equivalent to changing the profile? Below you can see 'crush-failure-domain=osd' in the profile, but '"op": "chooseleaf_indep", "type": "host"' in the CRUSH rule.
>
> 2) If we need to change the failure-domain in the profile, can this be done without creating a new pool, which seems troublesome?
>
> 3) Finally, if we really need to create a new pool to do this... what is the best way? For the record: our cluster is now (after the expansion) ~40% full (400 TB / 1 PB) with 173 OSDs.
>
>
> Cheers,
> Max
>
>
>
> some more details:
>
> [root@ceph-node-a ~]# ceph osd lspools
> 1 ec42
> 2 cephfs_metadata
>
> [root@ceph-node-a ~]# ceph osd pool get ec42 erasure_code_profile
> erasure_code_profile: ec42
>
> [root@ceph-node-a ~]# ceph osd pool get ec42 crush_rule
> crush_rule: ec42_host_hdd
>
> [root@ceph-node-a ~]# ceph osd erasure-code-profile get ec42
> crush-device-class=
> crush-failure-domain=osd
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=4
> m=2
> plugin=jerasure
> technique=reed_sol_van
> w=8
>
> [root@ceph-node-a ~]# ceph osd crush rule dump ec42_host_hdd
> {
>     "rule_id": 6,
>     "rule_name": "ec42_host_hdd",
>     "ruleset": 6,
>     "type": 3,
>     "min_size": 3,
>     "max_size": 6,
>     "steps": [
>         {
>             "op": "set_chooseleaf_tries",
>             "num": 5
>         },
>         {
>             "op": "set_choose_tries",
>             "num": 100
>         },
>         {
>             "op": "take",
>             "item": -2,
>             "item_name": "default~hdd"
>         },
>         {
>             "op": "chooseleaf_indep",
>             "num": 0,
>             "type": "host"
>         },
>         {
>             "op": "emit"
>         }
>     ]
> }
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



