Number of PGs/pools/OSDs for your OpenStack / Ceph

Hi Everyone,

I would like to know what kind of setup the Ceph community has been using for their OpenStack Ceph configuration when it comes to the number of pools and OSDs, and their PGs.

The Ceph documentation only briefly covers this for small clusters, so I would like to know from your experience how many PGs you have actually created for your OpenStack pools, for a Ceph cluster in the range of 1-2 PB capacity or 400-600 OSDs, that performs well without issues.
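
For reference, a minimal sketch of the rule of thumb from the Ceph placement-group docs: total PGs roughly equal to (OSDs * 100) / replica size, rounded to the nearest power of two, then split across pools by each pool's expected share of the data. The pool names and data shares below (vms/volumes/images) are illustrative assumptions for a typical OpenStack layout, not a recommendation for any particular cluster.

    import math

    def nearest_pow2(x: float) -> int:
        """Round a positive value to the nearest power of two."""
        return 2 ** round(math.log2(x)) if x > 0 else 1

    def total_pg_target(num_osds: int, replica_size: int, pgs_per_osd: int = 100) -> int:
        """Cluster-wide PG budget for the common ~100 PGs per OSD target."""
        return nearest_pow2(num_osds * pgs_per_osd / replica_size)

    def per_pool_pgs(total_pgs: int, data_shares: dict[str, float]) -> dict[str, int]:
        """Split the PG budget across pools by expected data share."""
        return {pool: nearest_pow2(total_pgs * share) for pool, share in data_shares.items()}

    if __name__ == "__main__":
        total = total_pg_target(num_osds=500, replica_size=3)      # -> 16384
        shares = {"vms": 0.40, "volumes": 0.45, "images": 0.15}    # assumed data split
        print(total, per_pool_pgs(total, shares))                  # per-pool pg_num hints

Note that per-pool rounding to a power of two can push the sum slightly above the cluster-wide target, so the resulting PGs-per-OSD ratio is still worth checking.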

Hope to hear from you!

Thanks.

Regards,
Ossi

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
