What is the upper limit of the number of PGs in a Ceph cluster?


 



Dear Cephers,

My client has 10 volumes; each volume was assigned 8192 PGs, 81920 PGs in total. The cluster runs Luminous with BlueStore. During a power outage the cluster restarted, and we observed that OSD peering consumed a lot of CPU and memory, even leading to some OSD flapping.

My questions are thus: 1) how can OSD peering be sped up, and OSD flapping avoided, when there are a lot of PGs in the cluster, and 2) is there a practical limit on the number of PGs for a single cluster?
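For context, here is the back-of-the-envelope arithmetic behind the numbers above. Only the pool and PG counts come from the cluster described; the replication factor and OSD count are assumptions for illustration. (For comparison, Luminous introduced the mon_max_pg_per_osd setting with a default of 200 PGs per OSD.)

```python
# Rough PG-per-OSD estimate for the cluster described above.
# Assumptions (not stated in the post): replica count = 3, OSD count = 100.
pools = 10
pgs_per_pool = 8192
replicas = 3          # assumed replication factor
osds = 100            # assumed number of OSDs

total_pgs = pools * pgs_per_pool        # logical PGs across all pools
pg_copies = total_pgs * replicas        # PG instances that must peer on restart
per_osd = pg_copies / osds              # average PG copies hosted per OSD

print(total_pgs, pg_copies, round(per_osd))
```

Under these assumptions each OSD would host well over two thousand PG copies, more than an order of magnitude above the 100-200 per OSD commonly recommended, which would explain heavy CPU and memory use during peering.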

Best regards,

Samuel 



huxiaoyu@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


