Placement groups on a 216 OSD cluster with multiple pools

Hello,

We’ll be going into production with our Ceph cluster shortly and I’m looking for some advice on the number of PGs per pool we should be using.

We have 216 OSDs totalling 588TB of storage. We intend to have several pools, and not all of them will share the same replica count: some pools will have no replicas at all, while others will have one or two replicas depending on need. Due to our use case, it isn't currently possible to reliably estimate which pools will see the most traffic.
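For illustration, something like the following is roughly what I have in mind. It's only a sketch: the pool names, PG counts and sizes are placeholders, and it just shells out to the standard ceph CLI from Python.

import subprocess

# Placeholder pools; 'size' is the total number of copies Ceph keeps,
# so size=1 means a single copy with no extra replicas.
POOLS = {
    "scratch":   {"pg_num": 2048, "size": 1},
    "general":   {"pg_num": 2048, "size": 2},
    "important": {"pg_num": 2048, "size": 3},
}

def ceph_pool(*args):
    # Thin wrapper around 'ceph osd pool ...'
    subprocess.check_call(["ceph", "osd", "pool"] + [str(a) for a in args])

for name, opts in POOLS.items():
    ceph_pool("create", name, opts["pg_num"])
    ceph_pool("set", name, "size", opts["size"])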

Looking through the CERN slides from the London Ceph Day [1], it seems they're using 1-4k PGs per pool across 11 pools (~19,500 PGs in total). It's not entirely clear how many OSDs they have, but if they're running one OSD per disk it will be over a thousand. That PG count seems a little low to me, but there are smarter people than me at CERN, so I'm willing to accept it!

For our system, using the formula from the documentation ((OSDs * 100) / replica count), a single pool spread over 216 OSDs would need 10,800 PGs at a ‘size’ of 2. Does 1-4k PGs per pool sound like a more reasonable number for 200+ OSDs with around 6 pools? And would a pool with a lower replica count need more PGs (e.g. 4-6k PGs for a ‘size’ of 0 [2]), or does it not matter?
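For reference, this is the back-of-the-envelope calculation I'm working from. It's a rough sketch only: I've assumed a target of 100 PGs per OSD and rounded up to the next power of two, which is what I understand the docs to suggest.

OSDS = 216

def raw_pg_count(osds, size, pgs_per_osd=100):
    # Formula from the docs: (OSDs * 100) / replica count
    return osds * pgs_per_osd // size

for size in (1, 2, 3):
    raw = raw_pg_count(OSDS, size)
    rounded = 1 << (raw - 1).bit_length()  # round up to the next power of two
    print("size=%d: raw=%d, rounded=%d" % (size, raw, rounded))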

Thanks

Dane

[2] - I realise the dangers/stupidity of a replica size of 0, but some of the data we wish to store just isn’t /that/ important.
