https://access.redhat.com/solutions/2457321
It says increasing the PG count is a very intensive process and can affect cluster performance.
Our version is Luminous 12.2.2.
We are using an erasure coding profile for a pool 'ecpool' with k=5 and m=3.
The current PG number is 256, and the pool holds about 20 TB of data.
Should I increase it gradually, or set pg_num to 512 in one step?
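
For illustration, a stepwise increase could look roughly like the sketch below; the intermediate value of 384 and the health checks between steps are assumptions for the example, and only the pool name 'ecpool' and the 256 -> 512 target come from the details above:

    # Sketch only: raise pg_num (and pgp_num, see Hans' note below) in steps,
    # waiting for the cluster to settle between each change.
    ceph osd pool set ecpool pg_num 384
    ceph osd pool set ecpool pgp_num 384
    ceph -s    # wait for backfill/remapping to finish before the next step
    ceph osd pool set ecpool pg_num 512
    ceph osd pool set ecpool pgp_num 512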
Karun Josy
On Tue, Jan 2, 2018 at 9:26 PM, Hans van den Bogert <hansbogert@xxxxxxxxx> wrote:
Please refer to standard documentation as much as possible; Han's is also incomplete, since you also need to change the 'pgp_num' as well.

Regards,
Hans

On Jan 2, 2018, at 4:41 PM, Vladimir Prokofev <v@xxxxxxxxxxx> wrote:

Increased number of PGs in multiple pools in a production cluster on 12.2.2 recently - zero issues. CEPH claims that increasing pg_num and pgp_num are safe operations, which are essential for its ability to scale, and this sounds pretty reasonable to me. [1]

2018-01-02 18:21 GMT+03:00 Karun Josy <karunjosy1@xxxxxxxxx>:

Hi,

Initial PG count was not properly planned while setting up the cluster, so now there are less than 50 PGs per OSD.
What are the best practices to increase the PG number of a pool?
We have replicated pools as well as EC pools.
Or is it better to create a new pool with higher PG numbers?

Karun
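
As a quick way to check a pool's current values and the per-OSD PG counts mentioned above, something like this should work (a sketch; 'ecpool' is the pool name from the first mail):

    # Show the pool's current pg_num and pgp_num
    ceph osd pool get ecpool pg_num
    ceph osd pool get ecpool pgp_num
    # The PGS column shows how many PGs each OSD currently holds
    ceph osd df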
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com