Re: amount of PGs/pools/OSDs for your openstack / Ceph

Our use case is not OpenStack, but we have a cluster of similar size to what you are looking at. Our cluster currently has 540 OSDs with 4 PB of raw storage spread across 9 nodes.

2 pools
   - 512 PGs - 3 way redundancy
   - 32768 PGs - RS(6,3) erasure coding (99.9% of data in this pool)
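
For what it's worth, the ~550 PGs/OSD figure mentioned below can be sanity-checked from these numbers alone; a quick back-of-the-envelope in Python (variable names are just for illustration):

    # PG copies per OSD = sum over pools of (pg_num * replicas-or-shards) / number of OSDs
    replicated_pgs = 512 * 3      # 3-way replicated pool
    ec_pgs = 32768 * 9            # RS(6,3) erasure-coded pool -> 6 data + 3 coding shards
    osds = 540

    print((replicated_pgs + ec_pgs) / osds)   # ~549 PG copies per OSD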

The reason we chose to go with ~550 PGs/OSD currently is to reduce the amount of data movement that will happen when OSDs are added to the cluster and the number of PGs needs to be expanded. We have enough memory on the nodes to handle the high PG count: 512 GB per node for 60 OSDs per node. For testing, about 2.5 TB of data was written to the EC pool using "rados bench" at 2-3 GB/s of sustained throughput. The cluster is used with librados, and objects are stored directly in the pools. We did not hit any major issues with simulated scenarios like drive replacement and recovery.
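
Since librados came up, here is a minimal sketch of direct object I/O using the python-rados bindings; the pool name, object name, and ceph.conf path are placeholders, not details from this cluster:

    import rados

    # Connect using the local cluster config and default client keyring (paths are assumptions).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('ecpool')          # placeholder pool name
        try:
            ioctx.write_full('example-object', b'hello ceph')   # store an object directly in the pool
            print(ioctx.read('example-object'))                  # read it back
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()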

We also tested with double the number of PGs in each pool - 1024 and 65536 - and the cluster started showing instability at that point. Whenever an OSD went down, cascading failures occurred during recovery, i.e. more OSDs would fail during the peering process when a failed OSD tried to rejoin the cluster.

Keeping OSD usage balanced becomes very important as the cluster fills up. A few OSDs with much higher usage than the rest can stop all writes into the cluster, and it is very hard to recover from that when usage is already close to the capacity thresholds.
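
One way to catch this before hitting the nearfull/full ratios is to poll per-OSD utilisation periodically. The rough sketch below shells out to "ceph osd df --format json"; the JSON field names and the 85% default nearfull ratio are assumptions from memory, so verify them against your own cluster:

    import json
    import subprocess

    NEARFULL = 0.85   # assumed default mon_osd_nearfull_ratio; check your cluster's actual setting

    # Requires the ceph CLI and a client keyring on the host running this.
    raw = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    report = json.loads(raw)

    for osd in report.get("nodes", []):
        util = osd["utilization"] / 100.0   # field name assumed from memory of the JSON output
        if util >= NEARFULL:
            print(f"osd.{osd['id']} is at {util:.1%} -- rebalance before it blocks writes")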

Subhachandra


On Sat, Apr 7, 2018 at 7:01 PM, Christian Wuerdig <christian.wuerdig@xxxxxxxxx> wrote:
The general recommendation is to target around 100 PG/OSD. Have you tried the https://ceph.com/pgcalc/ tool?
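
For reference, the heuristic behind that ~100 PGs/OSD target is roughly (OSDs x target PGs per OSD) / pool size, rounded to a power of two. A rough sketch below rounds up to the next power of two; the actual pgcalc tool applies slightly more nuanced rounding rules:

    def suggested_pg_num(num_osds, pool_size, target_per_osd=100):
        """Rough pgcalc-style estimate: (OSDs * target) / replicas-or-shards, rounded up to a power of two."""
        raw = num_osds * target_per_osd / pool_size
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    print(suggested_pg_num(540, 3))   # 3-way replicated pool on 540 OSDs -> 32768
    print(suggested_pg_num(540, 9))   # EC 6+3 pool (9 shards) on 540 OSDs -> 8192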

On Wed, 4 Apr 2018 at 21:38, Osama Hasebou <osama.hasebou@xxxxxx> wrote:
Hi Everyone,

I would like to know what kind of setup the Ceph community has been using for their OpenStack Ceph configuration when it comes to the number of pools and OSDs and their PGs.

The Ceph documentation only briefly covers this for small cluster sizes, so I would like to know from your experience how many PGs you have actually created for your OpenStack pools on a Ceph cluster in the 1-2 PB / 400-600 OSD range that performs well without issues.

Hope to hear from you!

Thanks.

Regards,
Ossi



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
