Re: amount of PGs/pools/OSDs for your openstack / Ceph

The general recommendation is to target around 100 PGs per OSD. Have you tried the https://ceph.com/pgcalc/ tool?
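
For reference, a minimal sketch of the per-pool calculation that pgcalc automates, assuming the standard formula from the Ceph placement-group documentation (PGs-per-OSD target x OSD count x the pool's share of data, divided by replica size, rounded up to a power of two). The function name suggested_pg_count and the example numbers are illustrative, not from pgcalc itself:

    def suggested_pg_count(num_osds, data_percent, replica_size=3, target_per_osd=100):
        """Suggest a pg_num for one pool.

        num_osds       -- total OSDs in the cluster
        data_percent   -- expected fraction of cluster data in this pool (0.0-1.0)
        replica_size   -- the pool's replication size
        target_per_osd -- target PGs per OSD, summed across all pools (~100)
        """
        raw = target_per_osd * num_osds * data_percent / replica_size
        # Round up to the next power of two, the convention pgcalc follows.
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    # Example: a 500-OSD cluster where the main RBD pool holds ~80% of
    # the data with 3x replication.
    print(suggested_pg_count(500, 0.80))   # -> 16384

Note that the data_percent values across all pools should sum to roughly 1.0, so the per-pool PG counts together land near the ~100 PGs/OSD target rather than each pool independently hitting it.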

On Wed, 4 Apr 2018 at 21:38, Osama Hasebou <osama.hasebou@xxxxxx> wrote:
Hi Everyone,

I would like to know what kind of setup the Ceph community has been using for its OpenStack Ceph configurations when it comes to the number of pools and OSDs and their PGs.

The Ceph documentation only briefly covers this for small clusters, so I would like to hear from your experience: how many PGs have you actually created for your OpenStack pools on a Ceph cluster in the range of 1-2 PB of capacity, or 400-600 OSDs, that performs well without issues?

Hope to hear from you!

Thanks.

Regards,
Ossi

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
