Re: Changing failure domain

Thanks for your advice.
I thus created a new replicated CRUSH rule:
{
     "rule_id": 2,
     "rule_name": "replicated3over2rooms",
     "ruleset": 2,
     "type": 1,
     "min_size": 3,
     "max_size": 4,
     "steps": [
         {
             "op": "take",
             "item": -1,
             "item_name": "default"
         },
         {
             "op": "choose_firstn",
             "num": 0,
             "type": "room"
         },
         {
             "op": "chooseleaf_firstn",
             "num": 2,
             "type": "host"
         },
# and that's it
         {
             "op": "emit"
         }
     ]
}
It works well.
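For anyone who wants to reproduce it: as far as I know, a two-level rule like this cannot be expressed with ceph osd crush rule create-replicated, so it has to be injected by editing the decompiled CRUSH map, roughly as follows (file names are arbitrary, the pool name is just an example):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# add the rule to crushmap.txt in text form, i.e.
#   rule replicated3over2rooms {
#       id 2
#       type replicated
#       min_size 3
#       max_size 4
#       step take default
#       step choose firstn 0 type room
#       step chooseleaf firstn 2 type host
#       step emit
#   }
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
# then point the pool at the new rule (on Luminous and later the
# pool property is crush_rule, not crush_ruleset)
ceph osd pool set cephfs_metadata crush_rule replicated3over2rooms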

Now I am concerned about the pool in erasure coding. The point is that it is the data pool for CephFS (the metadata pool is in replica 3 and is now replicated over our two rooms).
For now, the CephFS data pool is in erasure coding with k=3, m=2 (at the creation of the cluster we had only 5 OSD servers). As noticed before by Paul Emmerich, this cannot be redundantly split over 2 rooms (as 3 chunks are required to reconstruct the data).
Now we have 6 OSD servers, and soon it will be 7, so I was thinking of creating a new pool (e.g. k=4, m=2 or k=3, m=3) and a rule to split the chunks over our 2 rooms (see the sketch after the quoted warning below), and then using this new pool as a cache tier to migrate all the data smoothly from the old pool to the new one. But according to https://documentation.suse.com/ses/6/html/ses-all/ceph-pools.html#pool-migrate-cache-tier :
"You can use the cache tier method to migrate from a replicated pool to either an erasure coded or another replicated pool. Migrating from an erasure coded pool is not supported."
Warning: You Cannot Migrate RBD Images and CephFS Exports to an EC Pool
You cannot migrate RBD images and CephFS exports from a replicated pool to an EC pool. EC pools can store data but not metadata. The header object of the RBD will fail to be flushed. The same applies for CephFS.
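To make the plan concrete, here is the kind of profile and rule I have in mind (all names, ids and pg numbers are just placeholders). I would lean towards k=3, m=3: with 3 chunks in each room the data stays readable if a whole room is down (3 surviving chunks = k), whereas with k=4, m=2 a room failure loses 3 chunks, which is more than m.

# erasure-code profile (the failure domain is overridden by the custom rule below)
ceph osd erasure-code-profile set ec33 k=3 m=3 crush-failure-domain=host

# custom EC rule, added via the decompiled CRUSH map, placing 3 chunks in each room
rule ec33over2rooms {
        id 3
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 2 type room
        step chooseleaf indep 3 type host
        step emit
}

# new data pool using that profile and rule; EC overwrites are needed for CephFS data
ceph osd pool create cephfs_data_ec33 512 512 erasure ec33 ec33over2rooms
ceph osd pool set cephfs_data_ec33 allow_ec_overwrites true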


Thus my question is: how can I migrate the EC data pool of a CephFS to another EC pool?
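The only workaround I can think of so far would be to skip the cache tier entirely, add the new EC pool as an additional data pool of the filesystem, and copy the data into directories with a new file layout, something like this (filesystem name, pool name and paths are just examples), with the caveat that, as far as I understand, the original default data pool can never be removed from the filesystem afterwards:

ceph fs add_data_pool cephfs cephfs_data_ec33
mkdir /mnt/cephfs/migrated
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec33 /mnt/cephfs/migrated
# new files created below /mnt/cephfs/migrated land in the new pool;
# existing files keep their old layout, so they have to be copied
rsync -a /mnt/cephfs/olddata/ /mnt/cephfs/migrated/

Is there a better or supported way to do this?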
Thanks for your advice.
F.


On 03/12/2019 at 04:07, Konstantin Shalygin wrote:
On 12/2/19 5:56 PM, Francois Legrand wrote:
For replica, what is the best way to change the crush profile? Is it to create a new replica profile and set this profile as the crush rule for the pool (something like ceph osd pool set {pool-name} crush_ruleset my_new_rule)?

Indeed. Then you can delete or do what you want with the old crush rule.



k



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
