Re: Adding new OSDs, need to increase PGs?

Yes, I would recommend increasing PGs in your case. 

The pg_num and pgp_num recommendations are intentionally broad, since they have to cover the wide range of hardware that Ceph users run. The goal is simply to choose a number that gives good data granularity across all of your OSDs. Setting pg_num and pgp_num to, say, 1024 would A) increase data granularity, B) likely add no noticeable resource consumption, and C) leave some room for future OSDs while still staying within an acceptable range of PG counts. You could probably safely double even that number if you plan to expand rapidly and want to avoid splitting PGs every time a node is added.
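As a minimal sketch of the change itself, assuming your data lives in the default "rbd" pool (substitute your actual pool name), it is just:

    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024

Note that increasing pg_num creates the new PGs, but it is pgp_num that actually triggers rebalancing of data into them, so raise pgp_num to match once the pg_num change has gone through.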

In general, you can conservatively err on the larger side when it comes to pg_num/pgp_num. Any excess resource utilization will be negligible (up to a point). If you have a comfortable amount of RAM available, you could experiment with increasing the multiplier in the equation you are using and see how it affects your final number.
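To put rough numbers on that: with your 16 OSDs and 2 replicas, the standard multiplier works out to (100 x 16) / 2 = 800, while a 200 multiplier gives 1600; rounding up to the next power of two (1024 or 2048 respectively) keeps PG sizes even.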

The pg_num and pgp_num parameters can safely be changed before or after your new nodes are integrated.
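If you want to sanity-check the values and watch the cluster settle afterwards, something like this works (again assuming the "rbd" pool name):

    ceph osd pool get rbd pg_num
    ceph osd pool get rbd pgp_num
    ceph -s

ceph -s will show PGs peering and backfilling for a while; once everything is back to active+clean, the split is done.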

~Brian

On Sat, Nov 30, 2013 at 11:35 PM, Indra Pramana <indra@xxxxxxxx> wrote:

Dear all,

Greetings to all! I am new to this list, so please pardon my newbie question. :)

I am running a Ceph cluster with 3 servers and 4 drives (OSDs) per server, so there are currently 12 OSDs in the cluster. I set the PG (Placement Group) count to 600 based on the recommended formula: number of PGs = (number of OSDs * 100) / number of replicas, with 2 replicas.

Now I am going to add one more OSD node with 4 drives (OSDs), bringing the total to 16 OSDs.

My question: do I need to increase the PG count to 800, or leave it at 600? And if I do need to increase it, at which step during the insertion of the new node should I change the values (pg_num and pgp_num)?

Any advice is greatly appreciated, thank you.

Cheers.



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
