Hi Dev,

----- Le 5 Fév 25, à 19:38, Devender Singh <devender@xxxxxxxxxx> a écrit :

> Hello all
> Thanks for your reply.
> I already tried a few things..
> 1. Tried deleting the old ec profile and thought to create it with same name but
> with osd failure domain, it did not allow deleting from pool.

That's expected, as the profile is in use by a pool.

> 2. Changed crush rule with osd too. It reverted automatically to host.

What reverted it? If you change the crush rule's chooseleaf type from 'host' to 'osd' directly in the crush map, as Anthony suggested, nothing will change it back to 'host' afterwards.

Whatever you set for crush-failure-domain when you created the ec profile, you're free to change the chooseleaf type (failure domain) of the crush rule afterwards. You can also create a new crush rule and set it on the pool. The only thing one cannot edit once set on a pool is the ec profile.

Actually... there are --force and --yes-i-really-mean-it options one could use to edit ec profiles, but I cannot recommend doing that. And it would still not change the existing crush rule's chooseleaf type anyway.

What you want is to edit the crush rule directly in the crush map and set its chooseleaf type to whatever you want. You can edit the crush map with the text editor of your choice using the commands below:

$ ceph osd getcrushmap -o cc ; crushtool -d cc -o dc
$ vim dc
$ crushtool -c dc -o cc ; ceph osd setcrushmap -i cc

and set the chooseleaf type to the 'bucket' you want: region, datacenter, rack, etc.

> 3. Then created new ec profile and created a new rule with it and set it to
> pools, shows attached too.
> But pool still shows old EC profile attached which is with host failure domain,

That's expected. You cannot change the ec profile set on a pool, as that may imply rewriting existing data to match the new profile.

> 4. Tried pausing osd read/write and then tried to delete the profile but no
> luck.

That's expected.
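For illustration, here is roughly what an ec rule looks like in the decompiled map (the 'dc' file from the commands above). The rule name and ids are hypothetical; the line to edit is the 'step chooseleaf' one:

```
rule ecpool-rule {
	id 2
	type erasure
	step set_chooseleaf_tries 5
	step set_choose_tries 100
	step take default
	# change 'host' to 'osd', 'rack', 'datacenter', etc. as needed
	step chooseleaf indep 0 type host
	step emit
}
```

Recompile and inject the map afterwards, and CRUSH will start placing chunks according to the new failure domain.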
If you could do that, you wouldn't be able to access the data anymore, even after un-pausing the cluster.

> 5. Replicated pool is easy to change in crush map directly which is great.
> But seems issue with EC pool profile.

Yes, that's because of the data placement scheme. It's not like asking for one more replica on a replicated pool (easy to do); changing the ec profile would probably mean rewriting all the data existing in that pool, since every object is split into data and parity chunks matching k+m.

> Now the last option which I see is to migrate the data by creating a new pool with
> new osd failure profile which seems a long, time-consuming process (may need down
> time), not looking for it.
> Do we see any other way?

Yes, as suggested by Anthony: edit the crush map (with the commands I wrote above) and change the ec crush rule's chooseleaf type to whatever you need.

Regards,
Frédéric.

> Regards
> Dev

>> On Feb 5, 2025, at 1:27 AM, Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
>> wrote:
>> Hi Jane,
>> I totally agree with you and Eugen about not using 'osd' as a failure domain,
>> but the initial question was about the profile. ;-)
>> Cheers,
>> Frédéric.

>> ----- Le 5 Fév 25, à 10:16, Janne Johansson <icepic.dz@xxxxxxxxx> a écrit :

>>>> Hi Dev,
>>>> You can't. There's no 'ceph osd erasure-code-profile modify' command and the
>>>> 'ceph osd erasure-code-profile set' will fail with the output below when run
>>>> on an existing profile. See below:

>>> I think you are answering the wrong question.
>>> You are right that one cannot change the EC profile, but you can
>>> change the crush rules, so that the failure domain changes from "host"
>>> to "osd", which is what I think was asked for, not changing the EC
>>> profile.
>>> I agree with Eugen that it is a bad idea in the long run, but it can be done.
>>> --
>>> May the most significant bit of your life be positive.
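PS: the other path I mentioned, creating a new crush rule and setting it on the pool without touching the profile, can be sketched with standard ceph commands. The profile, rule, and pool names below are hypothetical:

```shell
# create a new ec profile with the failure domain you want
# (used only to derive the new rule, not applied to the pool)
$ ceph osd erasure-code-profile set myprofile-osd k=4 m=2 crush-failure-domain=osd

# create a crush rule from that profile and point the pool at it
$ ceph osd crush rule create-erasure ecrule-osd myprofile-osd
$ ceph osd pool set mypool crush_rule ecrule-osd
```

The pool keeps reporting its original ec profile (that cannot change), but placement then follows the new rule.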
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx