Thanks for the feedback, Alex! If you have any issues or ideas for improvements, please do submit them on the GH repo: https://github.com/digitalocean/pgremapper/

Last Thursday I gave a Ceph at DO tech talk in which I talked about how we use pgremapper to do augments on HDD clusters. The recording is not available yet, but the gist is:

- set nobackfill/norebalance
- create all your OSDs -> PGs are in a backfill state, but no data moves
- cancel all backfills with pgremapper -> PGs are back to active+clean
- unset nobackfill/norebalance -> nothing happens
- turn on the Ceph balancer, or use pgremapper undo-upmaps, to do your augment in a controlled way

Our main motivation for doing it this way is that on HDD clusters flapping is a fact of life, and it creates recovery PGs that are blocked by the backfill reservations from the augment. As more flapping occurs, the number of degraded objects increases, which is always uncomfortable. Doing an augment this way allows us to have N backfills at a time: wait for completion -> let recovery happen -> undo N more upmaps -> etc. This dramatically lowers the amount of time the cluster is degraded.

On Sat, Sep 25, 2021 at 10:15 AM Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx> wrote:
>
> Hi Ceph community,
>
> I think this is so important operationally that it bears repeating (
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/GJ35EL73A4LV6NPA74M6H6IN7BXMMHYA/
> )
>
> Digital Ocean has released the pgremapper tool, with which one can cancel
> pending backfills (in case bad decisions were made by the balancer, or other
> tools) - in my case this was a necessity to reweight many OSDs back to 1.
> This tool saved many days of waiting for an unneeded rebalance.
>
> I found the tool at https://golangrepo.com/repo/digitalocean-pgremapper
> --
> Alex Gorbachev
> https://alextelescope.blogspot.com
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
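P.S. The augment flow described at the top of this message can be sketched as a shell script. This is a minimal dry-run sketch, not our actual tooling: each command is echoed rather than executed, the `ceph osd set/unset` flags and `pgremapper cancel-backfill --yes` are real commands, but check the pgremapper README for the exact `undo-upmaps` invocation on your version before using it.

```shell
#!/bin/sh
# Dry-run sketch of the HDD-cluster augment flow: each command is printed
# instead of executed, so the plan can be reviewed before touching a cluster.
set -eu
run() { echo "+ $*"; }   # replace 'echo "+ $*"' with "$@" to actually execute

# 1. Stop any data movement before adding capacity.
run ceph osd set nobackfill
run ceph osd set norebalance

# 2. Create all the new OSDs here (ceph-volume, your orchestrator, etc.).
#    PGs go into a backfill state, but no data moves yet.

# 3. Cancel the pending backfills; PGs return to active+clean.
run pgremapper cancel-backfill --yes

# 4. With the backfills cancelled, unsetting the flags moves no data.
run ceph osd unset nobackfill
run ceph osd unset norebalance

# 5. Let the balancer (or pgremapper undo-upmaps, in batches of N) drive
#    the augment at a controlled pace.
run ceph balancer on
```

The value of batching (step 5) is that between each batch of N backfills you can wait for completion and let recovery from any flapping finish, instead of having recovery PGs queued behind a wall of augment backfill reservations.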