Correction: sorry, min_size is at 1 everywhere.
Thank you. Karol Kozubal
From: Karol Kozubal <karol.kozubal@xxxxxxxxx>
Date: Wednesday, March 12, 2014 at 12:06 PM
To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: PG Scaling

Hi Everyone,
I am deploying OpenStack with Fuel 4.1 and have a 20-node Ceph deployment of C6220s with 3 OSDs and 1 journaling disk per node. When first deployed, each storage pool is configured with the correct size and min_size attributes; however, Fuel doesn't seem to apply the correct number of PGs to the pools based on the number of OSDs that we actually have.
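For reference, the values Fuel actually applied can be checked from any monitor node; a quick way (assuming the pool names used below) is something like:

ceph osd dump | grep '^pool'    # one line per pool, showing size, min_size, pg_num and pgp_num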
I make the adjustments using the following:
(20 nodes * 3 OSDs)*100 / 3 replicas = 2000
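This follows the usual guideline of roughly 100 PGs per OSD divided by the replica count; as a quick sanity check (and noting that, as I understand the Ceph docs, they suggest rounding up to the nearest power of two, which would be 2048 here):

echo $(( 20 * 3 * 100 / 3 ))    # = 2000, the PG count from the formula above
echo $(( 2 ** 11 ))             # = 2048, the nearest power of two above 2000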
ceph osd pool set volumes size 3
ceph osd pool set volumes min_size 3
ceph osd pool set volumes pg_num 2000
ceph osd pool set volumes pgp_num 2000
ceph osd pool set images size 3
ceph osd pool set images min_size 3
ceph osd pool set images pg_num 2000
ceph osd pool set images pgp_num 2000
ceph osd pool set compute size 3
ceph osd pool set compute min_size 3
ceph osd pool set compute pg_num 2000
ceph osd pool set compute pgp_num 2000
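After that, I verify the settings and watch the new PGs being created with plain status commands, for example:

ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num
ceph -s    # PGs go through 'creating' and then settle at 'active+clean'
ceph -w    # or watch cluster events live while the PGs are created

(As far as I know, pg_num can only be increased, never decreased, and pgp_num has to be raised after pg_num, which is the order used above.)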
Here are the questions I am left with concerning these changes:
Thank you for your input.
Karol Kozubal