"143 active+clean
17 activating"
Wait until all of the PGs finish activating and you should be good. Let's revisit your 160 PGs, though. If you had 128 PGs and 8TB of data in your pool, then each PG would be about 62.5GB in size. Because you set it to 160 instead of a power of 2, Ceph splits some of your PGs in half, but not all of them. What you would end up with in a cluster of 160 PGs and 8TB of data is 96 PGs of 62.5GB each and 64 PGs of 31.25GB each. Power-of-2 PG counts make balancing a cluster much easier (64, 128, 256, 512, etc.).
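(For reference, and since the pool is named rbd as in the status output quoted below, the current values can be checked with something like:

# ceph osd pool get rbd pg_num
pg_num: 160
# ceph osd pool get rbd pgp_num
pgp_num: 64

The output shown here is only what those numbers would presumably look like for this cluster, given the health warning below.)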
On Wed, Jun 14, 2017 at 11:00 AM David Turner <drakonstein@xxxxxxxxx> wrote:
You increased your pg_num and it finished creating them ("160 active+clean"). Now you need to increase your pgp_num to match the 160 and you should be good to go.

On Wed, Jun 14, 2017 at 10:57 AM Stéphane Klein <contact@xxxxxxxxxxxxxxxxxxx> wrote:

2017-06-14 16:40 GMT+02:00 David Turner <drakonstein@xxxxxxxxx>:

Once those PG's have finished creating and the cluster is back to normal
How can I see the cluster migration progress?
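(A minimal way to follow this, assuming a standard Ceph CLI, is to watch the cluster state until every PG reports active+clean, e.g.:

# ceph -w

or to re-run "ceph status" and check the pgmap line, as below.)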
Now I have:

# ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_WARN
            pool rbd pg_num 160 > pgp_num 64
     monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
            election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
     osdmap e19: 6 osds: 6 up, 6 in
            flags sortbitwise,require_jewel_osds
      pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
            30923 MB used, 22194 GB / 22225 GB avail
                 160 active+clean
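(Per David's advice above, clearing that "pg_num 160 > pgp_num 64" warning would presumably be a matter of:

# ceph osd pool set rbd pgp_num 160

after which Ceph starts placing data into the new PG mappings.)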