Re: Total number PGs using multiple pools

Although the documentation is not great, and open to interpretation, there is a PG calculator here: http://ceph.com/pgcalc/.
With it you should be able to simulate your use case and generate numbers based on your scenario.
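
For context, the heuristic that calculator applies looks roughly like the Python sketch below. The ~100 PGs-per-OSD target, the even split across pools, and the function name are assumptions on my part, not the tool's exact code:

    def suggested_pg_num(num_osds, pool_count, replica_size, target_pgs_per_osd=100):
        """Rough sketch of the pgcalc-style heuristic (assumed, not the tool's code):
        aim for ~target_pgs_per_osd PGs on each OSD, divide by the replica count,
        split evenly across pools, then round up to the next power of two."""
        raw = (num_osds * target_pgs_per_osd) / (replica_size * pool_count)
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    # Example: 10 OSDs, 3 pools, size = 3 -> raw ~111, rounded up to 128
    print(suggested_pg_num(10, 3, 3))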

On Mon, Jan 26, 2015 at 8:00 PM, Italo Santos <okdokk@xxxxxxxxx> wrote:
Thanks for your answer.

But what I’d like to understand is whether these numbers are per pool or per cluster. If this number is per cluster, then at deploy time I’ll plan how many pools I’d like to have on that cluster and what their replica counts will be.

Regards.

Italo Santos

On Saturday, January 17, 2015 at 07:04, lidchen@xxxxxxxxxx wrote:

Here are a few values commonly used:

  • Less than 5 OSDs set pg_num to 128
  • Between 5 and 10 OSDs set pg_num to 512
  • Between 10 and 50 OSDs set pg_num to 4096
  • If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pg_num value by yourself
But I think 10 OSDs is too small for a RADOS cluster.
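
Those preselected values amount to a simple lookup. A minimal sketch in Python (the function name and the handling of the boundary values are my own; only the thresholds come from the list above):

    def preselected_pg_num(num_osds):
        """Commonly used pg_num for small clusters, per the thresholds quoted above.
        Larger clusters need a proper per-workload calculation."""
        if num_osds < 5:
            return 128
        if num_osds <= 10:
            return 512
        if num_osds <= 50:
            return 4096
        raise ValueError("more than 50 OSDs: calculate pg_num for your own workload")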

 
Date: 2015-01-17 05:00
Subject: [ceph-users] Total number PGs using multiple pools
Hello,

In the placement groups documentation we have the message below:

When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD so that you arrive at a reasonable total number of placement groups that provides reasonably low variance per OSD without taxing system resources or making the peering process too slow.

Does this mean that, if I have a cluster with 10 OSDs and 3 pools with size = 3, each pool can have only ~111 PGs?

Ex.: (100 * 10 OSDs) / 3 replicas = 333 PGs; 333 PGs / 3 pools = 111 PGs per pool
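
A quick numeric check of that arithmetic (assuming the ~100 PGs-per-OSD target from the documentation and an even split across pools, which is my assumption):

    # Check the example above: 10 OSDs, size 3, 3 pools
    target_pgs_per_osd = 100
    osds = 10
    replicas = 3
    pools = 3

    total_pgs = target_pgs_per_osd * osds / replicas   # ~333 PGs for the whole cluster
    pgs_per_pool = total_pgs / pools                   # ~111 PGs per pool
    print(round(total_pgs), round(pgs_per_pool))       # prints: 333 111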

I don't know if my reasoning is right… I’d be glad for any help.

Regards.

Italo Santos



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


