PG Scaling

Hi Everyone,

I am deploying OpenStack with Fuel 4.1 on a 20-node Ceph cluster of C6220s, with 3 OSDs and 1 journal disk per node. On first deployment each storage pool is configured with the correct size and min_size attributes, but Fuel doesn't seem to apply the correct number of PGs to the pools based on the number of OSDs we actually have.
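For reference, this is how I check what Fuel actually set on the pools (a quick sketch; the pool names are the ones Fuel creates: volumes, images, compute):

# show size, min_size, pg_num and pgp_num for every pool
ceph osd dump | grep '^pool'

# or query a single pool
ceph osd pool get volumes pg_num
ceph osd pool get volumes size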

I make the adjustments using the following:

(20 nodes * 3 OSDs) * 100 / 3 replicas = 2000

ceph osd pool set volumes size 3
ceph osd pool set volumes min_size 3
ceph osd pool set volumes pg_num 2000
ceph osd pool set volumes pgp_num 2000

ceph osd pool set images size 3
ceph osd pool set images min_size 3
ceph osd pool set images pg_num 2000
ceph osd pool set images pgp_num 2000

ceph osd pool set compute size 3
ceph osd pool set compute min_size 3
ceph osd pool set compute pg_num 2000
ceph osd pool set compute pgp_num 2000
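
After changing the values I confirm they were accepted and watch the new PGs being created (a rough sketch, using the standard status commands):

# confirm the new pg_num / pgp_num
ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num

# watch the PGs move from creating/peering to active+clean
ceph -s
ceph -w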

Here are the questions I am left with concerning these changes:
  1. How long does it take for Ceph to apply the changes and recalculate the PGs?
  2. When is it safe to do this type of operation? Only before any data is written to the pools, or is it acceptable while the pools are in use?
  3. Is it possible to scale down the number of PGs?
Thank you for your input.

Karol Kozubal
