TL;DR: add the OSDs and then split the PGs.

They are different commands for different situations... changing the weight is what actually puts data onto the bigger number of nodes/devices. Depending on the size of the cluster, the size of the devices, how busy it is, and by how much you're growing it, the impact will vary. Usually people add the devices and then slowly increase the OSDs' weight, so the usage and data on them grow gradually. There are some ways to improve performance and/or reduce the impact of that operation, like lowering the number of allowed concurrent backfills and the op/backfill priority settings.

The other one (increasing pg_num/pgp_num) will take *all* of the objects in the existing PGs and redistribute them into a new set of PGs. The amount of work doesn't change with the number of OSDs, so the more OSDs you have to do the splitting the better - the work is the same, so the more workers there are, the less each one has to do.

If impact/IO is a concern - for example if the cluster is busy - then you can additionally set the noscrub/nodeep-scrub flags... A rough sketch of the commands is at the bottom of this mail.

On Tue, May 2, 2017 at 7:16 AM, M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:
> Hello,
> I have added 5 new Ceph OSD nodes to my ceph cluster. Here, I wanted
> to increase the PG/PGP numbers of the pools based on the new OSD count.
> At the same time I need to increase the newly added OSDs' weight from 0 -> 1.
>
> My question is:
> Do I need to increase the PG/PGP num and then reweight the OSDs?
> Or
> Reweight the OSDs first and then increase the PG/PGP num of the pool(s)?
>
> Both will cause a rebalance... but I wanted to understand which one
> is preferable to do on a running cluster.
>
> Thanks
> Swami
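
For the weight ramp-up and the backfill/priority settings mentioned above, it would look roughly like this (untested sketch - osd.10 and the weights are placeholders; the final crush weight is normally the device size in TB):

    # raise the new OSD's crush weight in steps instead of all at once
    ceph osd crush reweight osd.10 0.5
    ceph osd crush reweight osd.10 1.5
    ceph osd crush reweight osd.10 3.64

    # throttle backfill/recovery so client IO suffers less
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'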
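
For the PG split itself, something like this (pool name "mypool" and the target of 2048 are placeholders; set pg_num first, then pgp_num, and preferably raise them in steps rather than one big jump):

    ceph osd pool set mypool pg_num 2048
    ceph osd pool set mypool pgp_num 2048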
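
And the scrub flags, if you want to reduce the load further while the data moves around:

    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # re-enable them once the cluster is healthy again
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub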