Hello everyone,

I would like to know how autoscaling (or manual scaling) actually works, so I can keep my cluster from running out of disk space. Say I want to scale a pool from 8 PGs of ~400 GB each to 32 PGs:

1) Does each placement group get split into 4 pieces in place, all at the same time?
2) Does autoscaling pick one of the existing placement groups (say X.Y), create new empty placement groups, migrate data onto them, and then move on to the next big PG, with or without deleting the original PG?
3) Something else?

I am mostly concerned about the period when the pre-existing PGs and the newly created ones coexist in the cluster, to avoid full OSDs. In my case each PG holds many small files, and deleting stray PGs takes a long time.

Would it be better to run something like

    ceph osd pool set default.rgw.buckets.data pg_num 32

and then increase pgp_num in increments of 8, assuming only one of the original PGs is affected at a time? My assumption may well be wrong; I could not find anything relevant in the documentation.

Thank you
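
P.S. To make the incremental idea concrete, this is roughly what I have in mind (just a sketch; the HEALTH_OK wait between steps is my own guess at how to pace things, not something I found in the docs):

    POOL=default.rgw.buckets.data

    # Raise pg_num to the target in one go
    ceph osd pool set "$POOL" pg_num 32

    # Walk pgp_num up in steps of 8 so only part of the data moves at a time
    for n in 16 24 32; do
        ceph osd pool set "$POOL" pgp_num "$n"
        # Assumption: wait for backfill from this step to finish before the next one
        until ceph health | grep -q HEALTH_OK; do
            sleep 60
        done
        # Sanity-check OSD fullness before continuing
        ceph osd df
    done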