Thanks for your advice. I thus created a new replicated rule:

{
    "rule_id": 2,
    "rule_name": "replicated3over2rooms",
    "ruleset": 2,
    "type": 1,
    "min_size": 3,
    "max_size": 4,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "choose_firstn",
            "num": 0,
            "type": "room"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 2,
            "type": "host"
        },
        # and that's it
        {
            "op": "emit"
        }
    ]
}

It works well.

Now I am concerned about the erasure-coded pool. The point is that it is the data pool for CephFS (the metadata pool is replica 3 and is now replicated over our two rooms). For now, the CephFS data pool uses erasure coding with k=3, m=2 (at the creation of the cluster we had only 5 OSD servers). As Paul Emmerich noted before, this cannot be split redundantly over 2 rooms (since 3 chunks are required to reconstruct the data).

Now we have 6 OSD servers, and soon we will have 7, so I was thinking of creating a new pool (e.g. k=4, m=2 or k=3, m=3) with a rule to split the chunks over our 2 rooms, and of using this new pool as a cache tier to migrate all the data smoothly from the old pool to the new one.
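To be concrete, here is the kind of setup I have in mind for the new pool, assuming the k=3, m=3 variant (a rough, untested sketch; the names ecprofile_k3m3, ec6over2rooms and cephfs_data_ec6, the rule id and the pg numbers are only placeholders):

# erasure-code profile with 6 chunks
ceph osd erasure-code-profile set ecprofile_k3m3 k=3 m=3 crush-failure-domain=host

# CRUSH rule placing 3 chunks in each of our 2 rooms
# (added to the decompiled crushmap with crushtool, then recompiled and injected)
rule ec6over2rooms {
    id 3
    type erasure
    min_size 3
    max_size 6
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    step choose indep 2 type room
    step chooseleaf indep 3 type host
    step emit
}

# new CephFS data pool using the profile and the rule
ceph osd pool create cephfs_data_ec6 128 128 erasure ecprofile_k3m3
ceph osd pool set cephfs_data_ec6 crush_rule ec6over2rooms
ceph osd pool set cephfs_data_ec6 allow_ec_overwrites true

(With 3 chunks per room, only the k=3, m=3 variant would still have enough chunks to reconstruct the data if a whole room is lost, if I am not mistaken; with k=4, m=2 a room failure would drop 3 chunks while only 2 can be missing.)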
But according to https://documentation.suse.com/ses/6/html/ses-all/ceph-pools.html#pool-migrate-cache-tier:

"You can use the cache tier method to migrate from a replicated pool to either an erasure coded or another replicated pool. Migrating from an erasure coded pool is not supported."

"Warning: You Cannot Migrate RBD Images and CephFS Exports to an EC Pool. You cannot migrate RBD images and CephFS exports from a replicated pool to an EC pool. EC pools can store data but not metadata. The header object of the RBD will fail to be flushed. The same applies for CephFS."

So my question is: how can I migrate the EC data pool of a CephFS to another EC pool?

Thanks for your advice.

F.

On 03/12/2019 at 04:07, Konstantin Shalygin wrote:
On 12/2/19 5:56 PM, Francois Legrand wrote: